[1] ZHU K, ZHANG T. Deep reinforcement learning based mobile robot navigation: a review. Tsinghua Science and Technology, 2021, 26(5): 674-691. doi: 10.26599/TST.2021.9010012
[2] KARUR K, SHARMA N, DHARMATTI C, et al. A survey of path planning algorithms for mobile robots. Vehicles, 2021, 3(3): 448-468. doi: 10.3390/vehicles3030027
[3] VAN DEN BERG J, LIN M, MANOCHA D. Reciprocal velocity obstacles for real-time multi-agent navigation[C]//Proceedings of 2008 IEEE International Conference on Robotics and Automation. Washington D. C., USA: IEEE Press, 2008: 1928-1935.
[4] ZANLUNGO F, IKEDA T, KANDA T. Social force model with explicit collision prediction. EPL (Europhysics Letters), 2011, 93(6): 68005. doi: 10.1209/0295-5075/93/68005
[5] OGREN P, LEONARD N E. A convergent dynamic window approach to obstacle avoidance. IEEE Transactions on Robotics, 2005, 21(2): 188-195. doi: 10.1109/TRO.2004.838008
[6] ARULKUMARAN K, DEISENROTH M P, BRUNDAGE M, et al. Deep reinforcement learning: a brief survey. IEEE Signal Processing Magazine, 2017, 34(6): 26-38. doi: 10.1109/MSP.2017.2743240
[7] SUN S G, LAN X G, ZHANG H B, et al. Model-based reinforcement learning in robotics: a survey. Pattern Recognition and Artificial Intelligence, 2022, 35(1): 1-16. doi: 10.16451/j.cnki.issn1003-6059.202201001
[8] SUN H H, ZHANG W J, YU R X, et al. Motion planning for mobile robots—focusing on deep reinforcement learning: a systematic review. IEEE Access, 2021, 9: 69061-69081. doi: 10.1109/ACCESS.2021.3076530
[9] ZHANG H, XIE M Y, ZHANG M, et al. Path planning of mobile robot with fusion DDPG algorithm. Control Engineering of China, 2021, 28(11): 2136-2142.
[10] SATHYAMOORTHY A J, LIANG J, PATEL U, et al. DenseCAvoid: real-time navigation in dense crowds using anticipatory behaviors[C]//Proceedings of 2020 IEEE International Conference on Robotics and Automation. Washington D. C., USA: IEEE Press, 2020: 11345-11352.
[11] CHEN Y F, LIU M, EVERETT M, et al. Decentralized non-communicating multiagent collision avoidance with deep reinforcement learning[C]//Proceedings of 2017 IEEE International Conference on Robotics and Automation. Washington D. C., USA: IEEE Press, 2017: 285-292.
[12] EVERETT M, CHEN Y F, HOW J P. Motion planning among dynamic, decision-making agents with deep reinforcement learning[C]//Proceedings of 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems. Washington D. C., USA: IEEE Press, 2018: 3052-3059.
[13] LIU G M, LI C H, LI Y D, et al. Local path planning of robot based on improved PPO algorithm. Computer Engineering, 2023, 49(2): 119-126, 135.
[14] WU Z H, PAN S R, CHEN F W, et al. A comprehensive survey on graph neural networks. IEEE Transactions on Neural Networks and Learning Systems, 2021, 32(1): 4-24. doi: 10.1109/TNNLS.2020.2978386
[15] WANG J Z, KONG L W, HUANG Z C, et al. Survey of graph neural network. Computer Engineering, 2021, 47(4): 1-12.
[16] CHEN C G, HU S, NIKDEL P, et al. Relational graph learning for crowd navigation[C]//Proceedings of 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems. Washington D. C., USA: IEEE Press, 2020: 10007-10013.
[17] LIU S J, CHANG P X, LIANG W H, et al. Decentralized Structural-RNN for robot crowd navigation with deep reinforcement learning[C]//Proceedings of 2021 IEEE International Conference on Robotics and Automation. Washington D. C., USA: IEEE Press, 2021: 3517-3524.
[18] CHEN Y Y, LIU C C, SHI B E, et al. Robot navigation in crowds by graph convolutional networks with attention learned from human gaze. IEEE Robotics and Automation Letters, 2020, 5(2): 2754-2761. doi: 10.1109/LRA.2020.2972868
[19] DEY R, SALEM F M. Gate-variants of Gated Recurrent Unit (GRU) neural networks[C]//Proceedings of 2017 IEEE International Midwest Symposium on Circuits and Systems. Washington D. C., USA: IEEE Press, 2017: 1597-1600.
[20]
[21]
[22] ZHANG X Y, XI W, GUO X, et al. Relational navigation learning in continuous action space among crowds[C]//Proceedings of 2021 IEEE International Conference on Robotics and Automation. Washington D. C., USA: IEEE Press, 2021: 3175-3181.
[23] SUN L X, SUN X X, LIU C J, et al. Obstacle avoidance algorithm for mobile robot based on deep reinforcement learning in crowd environment. Information and Control, 2022, 51(1): 107-118.
[24] ZHOU Z Q, ZHU P M, ZENG Z W, et al. Robot navigation in a crowd by integrating deep reinforcement learning and online planning. Applied Intelligence, 2022, 52(13): 15600-15616. doi: 10.1007/s10489-022-03191-2
[25] HU Q, ZHAO Y T, XIA F P, et al. Robot local path planning algorithm based on Soft-Actor-Critic algorithm. Journal of Wuhan University of Technology, 2021, 43(9): 79-84.
[26]