Please use this identifier to cite or link to this item: http://archives.univ-biskra.dz/handle/123456789/4258
Full metadata record
dc.contributor.author: Lakhmissi CHERROUN
dc.contributor.author: Mohamed BOUMEHRAZ
dc.date.accessioned: 2014-11-25T07:43:49Z
dc.date.available: 2014-11-25T07:43:49Z
dc.date.issued: 2014-11-25
dc.identifier.uri: http://archives.univ-biskra.dz/handle/123456789/4258
dc.description.abstract: One of the standing challenges in mobile robotics is the ability to navigate autonomously. It is a difficult task that requires a complete model of the environment. This paper presents an intelligent navigation method for an autonomous mobile robot that requires only a scalar feedback signal indicating the quality of the applied action. Instead of programming the robot, we let it learn its own strategy. The Q-learning algorithm from reinforcement learning is applied to mobile robot navigation by discretizing the state and action spaces. To improve the robot's performance, an optimization of fuzzy controllers for robot navigation is then discussed, based on prior knowledge introduced by a fuzzy inference system so that the initial behavior is acceptable. The effectiveness of this optimization method is verified by simulation. (en_US)
dc.language.iso: en (en_US)
dc.subject: mobile robot, intelligent navigation, fuzzy controller, Q-learning, fuzzy Q-learning (en_US)
dc.title: Tuning Fuzzy Controllers By Q-Learning For Mobile Robot Navigation (en_US)
dc.type: Article (en_US)
Appears in Collections: Communications Internationales

Files in This Item:
File: Tuning Fuzzy Controllers By Q-Learning For Mobile Robot Navigation.pdf (176,9 kB, Adobe PDF)


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.
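For readers unfamiliar with the technique named in the abstract, a minimal sketch of tabular Q-learning over discretized state and action spaces is given below. The grid size, action set, reward handling, and hyperparameters are illustrative assumptions only; this is not the authors' implementation and does not include the fuzzy controller tuning described in the paper.

```python
# Minimal tabular Q-learning sketch for grid-based robot navigation.
# All details (grid size, action set, hyperparameters) are illustrative
# assumptions, not the method implemented in the cited paper.
import numpy as np

n_states = 100           # e.g. a 10x10 discretized workspace (assumed)
n_actions = 4            # e.g. move north/south/east/west (assumed)
alpha, gamma, epsilon = 0.1, 0.95, 0.1  # learning rate, discount, exploration

Q = np.zeros((n_states, n_actions))

def choose_action(state):
    """Epsilon-greedy selection over the discrete action set."""
    if np.random.rand() < epsilon:
        return np.random.randint(n_actions)
    return int(np.argmax(Q[state]))

def q_update(state, action, reward, next_state):
    """Standard Q-learning update:
    Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    td_target = reward + gamma * np.max(Q[next_state])
    Q[state, action] += alpha * (td_target - Q[state, action])
```

The paper's contribution goes further, using such scalar reward feedback to tune the parameters of a fuzzy inference system (fuzzy Q-learning) rather than a plain lookup table; the sketch above only illustrates the underlying update rule.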