Please use this identifier to cite or link to this item:
http://archives.univ-biskra.dz/handle/123456789/4279
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Lakhmissi Cherroun | - |
dc.contributor.author | Mohamed Boumehraz | - |
dc.date.accessioned | 2014-11-25T08:57:08Z | - |
dc.date.available | 2014-11-25T08:57:08Z | - |
dc.date.issued | 2014-11-25 | - |
dc.identifier.uri | http://archives.univ-biskra.dz/handle/123456789/4279 | - |
dc.description.abstract | One of the standing challenges in mobile robotics is the ability to navigate autonomously. It is a difficult task that usually requires a complete model of the environment. This paper presents an intelligent navigation method for an autonomous mobile robot that requires only a scalar feedback signal indicating the quality of the applied action. Instead of programming the robot, we let it learn its own strategy. The Q-learning algorithm of reinforcement learning is used for mobile robot navigation by discretizing the state and action spaces. To improve the robot's performance, an optimization of fuzzy controllers for navigation is then discussed, based on prior knowledge introduced by a fuzzy inference system so that the initial behavior is acceptable. The effectiveness of this optimization method is verified by simulation. | en_US |
dc.language.iso | en | en_US |
dc.subject | mobile robot; Q-learning; Fuzzy Q-learning | en_US |
dc.title | Using Q-Learning and Fuzzy Q-Learning Algorithms for Mobile Robot Navigation in Unknown Environment | en_US |
dc.type | Article | en_US |
Appears in Collections: | Publications Internationales |
Files in This Item:
File | Description | Size | Format | |
---|---|---|---|---|
Using Q-Learning and Fuzzy Q-Learning Algorithms for Mobile Robot Navigation in Unknown Environment”,.pdf | | 443.13 kB | Adobe PDF | View/Open |
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.
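For readers of the abstract above, the following is a minimal, hypothetical sketch of the tabular Q-learning update that the paper builds on, applied to a toy discretized grid navigation task. The grid size, reward values, and parameters (alpha, gamma, epsilon) are illustrative assumptions, not taken from the paper, which additionally uses a fuzzy inference system to provide an acceptable initial behavior.

```python
import random

# Illustrative sketch only: tabular Q-learning on a toy discretized grid.
# Grid layout, rewards, and hyperparameters are assumptions, not the paper's.

GRID = 5                                  # 5x5 grid of discrete states
ACTIONS = ["up", "down", "left", "right"]
GOAL = (4, 4)
alpha, gamma, epsilon = 0.1, 0.9, 0.1     # learning rate, discount, exploration

# Q-table: one value per (state, action) pair
Q = {((x, y), a): 0.0 for x in range(GRID) for y in range(GRID) for a in ACTIONS}

def step(state, action):
    """Move in the grid; reward +1 at the goal, small penalty otherwise."""
    x, y = state
    dx, dy = {"up": (0, 1), "down": (0, -1), "left": (-1, 0), "right": (1, 0)}[action]
    nxt = (min(max(x + dx, 0), GRID - 1), min(max(y + dy, 0), GRID - 1))
    reward = 1.0 if nxt == GOAL else -0.01
    return nxt, reward, nxt == GOAL

for episode in range(500):
    state, done = (0, 0), False
    while not done:
        # epsilon-greedy action selection
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        nxt, reward, done = step(state, action)
        # Q-learning update: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
        best_next = max(Q[(nxt, a)] for a in ACTIONS)
        Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
        state = nxt
```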