Please use this identifier to cite or link to this item:
http://archives.univ-biskra.dz/handle/123456789/4258
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Lakhmissi CHERROUN | - |
dc.contributor.author | Mohamed BOUMEHRAZ | - |
dc.date.accessioned | 2014-11-25T07:43:49Z | - |
dc.date.available | 2014-11-25T07:43:49Z | - |
dc.date.issued | 2014-11-25 | - |
dc.identifier.uri | http://archives.univ-biskra.dz/handle/123456789/4258 | - |
dc.description.abstract | One of the standing challenges in mobile robotics is the ability to navigate autonomously. It is a difficult task that usually requires a complete model of the environment. This paper presents an intelligent navigation method for an autonomous mobile robot that requires only a scalar feedback signal indicating the quality of the applied action. Instead of programming the robot, we let it learn its own strategy. The Q-learning algorithm of reinforcement learning is applied to mobile robot navigation by discretizing the state and action spaces. To improve the mobile robot's performance, an optimization of fuzzy controllers for robot navigation is then discussed, based on prior knowledge introduced by a fuzzy inference system so that the initial behavior is acceptable. The effectiveness of this optimization method is verified by simulation. | en_US |
dc.language.iso | en | en_US |
dc.subject | mobile robot, intelligent navigation, fuzzy controller, Q-learning, fuzzy Q-learning. | en_US |
dc.title | Tuning Fuzzy Controllers By Q-Learning For Mobile Robot Navigation | en_US |
dc.type | Article | en_US |
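The abstract describes tabular Q-learning over discretized state and action spaces. As a rough illustration only (not the paper's implementation; the environment, reward, and all hyperparameters below are assumptions), a minimal sketch of the Q-learning update with an epsilon-greedy policy on a toy one-dimensional navigation task might look like:

```python
import random

def q_learning(n_states, n_actions, step, episodes=500,
               alpha=0.1, gamma=0.9, epsilon=0.1, seed=0):
    """Tabular Q-learning with an epsilon-greedy policy.

    `step(state, action) -> (next_state, reward, done)` is an assumed
    environment interface; all defaults are illustrative, not from the paper.
    """
    rng = random.Random(seed)
    Q = [[0.0] * n_actions for _ in range(n_states)]
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            # Epsilon-greedy: explore with probability epsilon, else exploit.
            if rng.random() < epsilon:
                a = rng.randrange(n_actions)
            else:
                a = max(range(n_actions), key=lambda x: Q[s][x])
            s2, r, done = step(s, a)
            # Q-learning update: Q(s,a) += alpha*(r + gamma*max_a' Q(s',a') - Q(s,a))
            Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
            s = s2
    return Q

# Toy corridor environment: states 0..4, goal at state 4.
# Actions: 0 = move left, 1 = move right; reward 1 on reaching the goal.
def corridor_step(s, a):
    s2 = min(4, s + 1) if a == 1 else max(0, s - 1)
    return s2, (1.0 if s2 == 4 else 0.0), s2 == 4

Q = q_learning(5, 2, corridor_step)
```

After training, the learned Q-values should prefer the "right" action in every non-goal state, which is the scalar-feedback-only learning the abstract refers to; the paper's contribution is using such values to tune the consequents of a fuzzy controller rather than acting on a raw discretized table.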
Appears in Collections: | Communications Internationales |
Files in This Item:
File | Description | Size | Format | |
---|---|---|---|---|
Tuning Fuzzy Controllers By Q-Learning For Mobile Robot Navigation.pdf | | 176,9 kB | Adobe PDF | View/Open |
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.