
Computer Engineering

• Artificial Intelligence and Recognition Technology •


Mobile Robot Path Planning Based on RDC-Q Learning Algorithm

WANG Zi-qiang, WU Ji-gang   

  1. (College of Computer Science and Software, Tianjin Polytechnic University, Tianjin 300387, China)
  • Received: 2013-05-10  Online: 2014-06-15  Published: 2014-06-13
  • About the authors: WANG Zi-qiang (b. 1989), male, M.S. candidate; research interests: artificial intelligence and reinforcement learning. WU Ji-gang, professor.


Abstract: The reward function in the traditional Q-learning algorithm is defined only coarsely, which leads to low learning efficiency for the robot. To solve this problem, a Reward Detailed Classification Q (RDC-Q) learning algorithm is presented. It synthesizes the readings of all of the robot's sensors and, according to the robot's distance from the obstacles, divides the robot's states into 20 reward states and 15 penalty states. The reward the robot receives at each time step is classified by the security level of its state, which drives the robot toward states with higher security levels, so that the robot learns faster and better. Simulation experiments in an environment with dense obstacles show that the convergence speed of the proposed algorithm is clearly higher than that of the traditional-reward Q-learning algorithm.
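To make the idea concrete, below is a minimal sketch (Python, not from the paper) of tabular Q-learning with a graded, distance-based reward classification. All names, thresholds, level counts, and reward values are illustrative assumptions; the paper's exact partition into 20 reward and 15 penalty states is not reproduced here.

# A minimal sketch (not the authors' code) of Q-learning with a
# "detailed reward classification": rewards are graded by the robot's
# distance to the nearest obstacle instead of using one flat penalty.
# All thresholds and reward values below are illustrative assumptions.
import random

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1   # learning rate, discount, exploration
ACTIONS = ["forward", "left", "right"]

def classify_reward(min_obstacle_dist, reached_goal, collided):
    """Map the sensed obstacle distance to a graded reward (assumed values)."""
    if reached_goal:
        return 100.0
    if collided:
        return -100.0
    if min_obstacle_dist > 1.0:      # safe zone: small graded bonus
        return 1.0
    if min_obstacle_dist > 0.5:      # caution zone: mild graded penalty
        return -1.0
    return -5.0                      # danger zone: strong graded penalty

def choose_action(Q, state):
    """Epsilon-greedy action selection over the tabular Q-function."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q.get((state, a), 0.0))

def q_update(Q, state, action, reward, next_state):
    """Standard one-step Q-learning update."""
    best_next = max(Q.get((next_state, a), 0.0) for a in ACTIONS)
    old = Q.get((state, action), 0.0)
    Q[(state, action)] = old + ALPHA * (reward + GAMMA * best_next - old)

# Example: one update after sensing an obstacle 0.7 units away
# (hypothetical grid-cell state names).
Q = {}
a = choose_action(Q, "cell_3_4")
r = classify_reward(0.7, reached_goal=False, collided=False)   # -> -1.0
q_update(Q, "cell_3_4", a, r, "cell_3_5")

Compared with a single flat penalty for being near an obstacle, the graded scheme lets the temporal-difference updates distinguish nearby states by safety level, which is the mechanism the abstract credits for faster convergence.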

Key words: path planning, mobile robot, reinforcement learning, Q-learning algorithm, reward function, learning efficiency

CLC Number: