[1] TENG S Y, HU X M, DENG P, et al. Motion planning for autonomous driving: the state of the art and future perspectives. IEEE Transactions on Intelligent Vehicles, 2023, 8(6): 3692-3711. doi: 10.1109/TIV.2023.3274536
[2] 章军辉, 陈大鹏, 李庆. 自动驾驶技术研究现状及发展趋势. 科学技术与工程, 2020, 20(9): 3394-3403. doi: 10.3969/j.issn.1671-1815.2020.09.005
ZHANG J H, CHEN D P, LI Q. Research status and development trend of technologies for autonomous vehicles. Science Technology and Engineering, 2020, 20(9): 3394-3403.
[3] CHEN Y, CHEN S Z, REN H B, et al. Path tracking and handling stability control strategy with collision avoidance for the autonomous vehicle under extreme conditions. IEEE Transactions on Vehicular Technology, 2020, 69(12): 14602-14617. doi: 10.1109/TVT.2020.3031661
[4] SILVER D, HUANG A, MADDISON C J, et al. Mastering the game of Go with deep neural networks and tree search. Nature, 2016, 529: 484-489. doi: 10.1038/nature16961
[5] GU S X, HOLLY E, LILLICRAP T, et al. Deep reinforcement learning for robotic manipulation with asynchronous off-policy updates[C]//Proceedings of IEEE International Conference on Robotics and Automation. Washington D. C., USA: IEEE Press, 2017: 3389-3396.
[6]
[7] BADUE C, GUIDOLINI R, CARNEIRO R V, et al. Self-driving cars: a survey. Expert Systems with Applications, 2021, 165: 113816. doi: 10.1016/j.eswa.2020.113816
[8] 茅智慧, 朱佳利, 吴鑫, 等. 基于YOLO的自动驾驶目标检测研究综述. 计算机工程与应用, 2022, 58(15): 68-77.
MAO Z H, ZHU J L, WU X, et al. Review of YOLO based target detection for autonomous driving. Computer Engineering and Applications, 2022, 58(15): 68-77.
[9] LI G F, QIU Y F, YANG Y F, et al. Lane change strategies for autonomous vehicles: a deep reinforcement learning approach based on transformer[EB/OL]. [2023-09-30]. https://arxiv.org/pdf/2304.13732.
[10] HE Y, LIU Y, YANG L, et al. Deep adaptive control: deep reinforcement learning-based adaptive vehicle trajectory control algorithms for different risk levels[EB/OL]. [2023-09-30]. https://arxiv.org/pdf/2023.03408.
[11] 钱玉宝, 余米森, 郭旭涛, 等. 无人驾驶车辆智能控制技术发展. 科学技术与工程, 2022, 22(10): 3846-3858. doi: 10.3969/j.issn.1671-1815.2022.10.002
QIAN Y B, YU M S, GUO X T, et al. Development of intelligent control technology for unmanned vehicle. Science Technology and Engineering, 2022, 22(10): 3846-3858.
[12] WANG W R, ZHU M C, WANG X M, et al. An improved artificial potential field method of trajectory planning and obstacle avoidance for redundant manipulators. International Journal of Advanced Robotic Systems, 2018, 15(5): 172988. doi: 10.1177/1729881418799562
[13] FENG S, QIAN Y B, WANG Y. Collision avoidance method of autonomous vehicle based on improved artificial potential field algorithm. Journal of Automobile Engineering, 2021, 235(14): 3416-3430. doi: 10.1177/09544070211014319
[14] 周慧子, 胡学敏, 陈龙, 等. 面向自动驾驶的动态路径规划避障算法. 计算机应用, 2017, 37(3): 883-888.
ZHOU H Z, HU X M, CHEN L, et al. Dynamic path planning for autonomous driving with avoidance of obstacles. Journal of Computer Applications, 2017, 37(3): 883-888.
[15] FERACO S, LUCIANI S, BONFITTO A, et al. A local trajectory planning and control method for autonomous vehicles based on the RRT algorithm[C]//Proceedings of AEIT International Conference of Electrical and Electronic Technologies for Automotive. Washington D. C., USA: IEEE Press, 2020: 1-6.
[16] HNEWA M, RADHA H. Object detection under rainy conditions for autonomous vehicles: a review of state-of-the-art and emerging techniques. IEEE Signal Processing Magazine, 2021, 38(1): 53-67. doi: 10.1109/MSP.2020.2984801
[17] BEN ELALLID B, BENAMAR N, MRANI N, et al. DQN-based reinforcement learning for vehicle control of autonomous vehicles interacting with pedestrians[C]//Proceedings of International Conference on Innovation and Intelligence for Informatics, Computing, and Technologies. Washington D. C., USA: IEEE Press, 2022: 489-493.
[18] BIN ISSA R, DAS M, RAHMAN M S, et al. Double deep Q-learning and faster R-CNN-based autonomous vehicle navigation and obstacle avoidance in dynamic environment. Sensors, 2021, 21(4): 1468. doi: 10.3390/s21041468
[19] SAXENA D M, BAE S, NAKHAEI A, et al. Driving in dense traffic with model-free reinforcement learning[C]//Proceedings of IEEE International Conference on Robotics and Automation. Washington D. C., USA: IEEE Press, 2020: 5385-5392.
[20] CODEVILLA F, MULLER M, LOPEZ A, et al. End-to-end driving via conditional imitation learning[C]//Proceedings of IEEE International Conference on Robotics and Automation. Washington D. C., USA: IEEE Press, 2018: 4693-4700.
[21] SADAT A, CASAS S, REN M Y, et al. Perceive, predict, and plan: safe motion planning through interpretable semantic representations[C]//Proceedings of European Conference on Computer Vision. Berlin, Germany: Springer, 2020: 414-430.
[22] WANG Z, YAN Z H, NAKANO K. Comfort-oriented haptic guidance steering via deep reinforcement learning for individualized lane keeping assist[C]//Proceedings of IEEE International Conference on Systems, Man and Cybernetics. Washington D. C., USA: IEEE Press, 2019: 4283-4289.
[23] ZHOU W, CHEN D, YAN J, et al. Multi-agent reinforcement learning for cooperative lane changing of connected and autonomous vehicles in mixed traffic. Autonomous Intelligent Systems, 2022, 2(1): 5. doi: 10.1007/s43684-022-00023-5
[24] LI X X, QIU X Y, WANG J, et al. A deep reinforcement learning based approach for autonomous overtaking[C]//Proceedings of IEEE International Conference on Communications. Washington D. C., USA: IEEE Press, 2020: 1-5.
[25] MNIH V, KAVUKCUOGLU K, SILVER D, et al. Human-level control through deep reinforcement learning. Nature, 2015, 518: 529-533. doi: 10.1038/nature14236
[26] MOUSAVI S S, SCHUKAT M, HOWLEY E. Deep reinforcement learning: an overview. Berlin, Germany: Springer, 2017.
[27] KIRAN B R, SOBH I, TALPAERT V, et al. Deep reinforcement learning for autonomous driving: a survey. IEEE Transactions on Intelligent Transportation Systems, 2022, 23(6): 4909-4926.
[28] HOEL C J, WOLFF K, LAINE L. Automated speed and lane change decision making using deep reinforcement learning[C]//Proceedings of the 21st International Conference on Intelligent Transportation Systems. Washington D. C., USA: IEEE Press, 2018: 2148-2155.
[29] WANG J J, ZHANG Q C, ZHAO D B, et al. Lane change decision-making through deep reinforcement learning with rule-based constraints[C]//Proceedings of International Joint Conference on Neural Networks. Washington D. C., USA: IEEE Press, 2019: 1-6.
[30] CYBENKO G. Approximation by superpositions of a sigmoidal function. Mathematics of Control, Signals and Systems, 1989, 2(4): 303-314.
[31]
[32] WANG Z Y, SCHAUL T, HESSEL M, et al. Dueling network architectures for deep reinforcement learning[C]//Proceedings of the 33rd International Conference on Machine Learning. Washington D. C., USA: IEEE Press, 2016: 2939-2947.
[33] WANG G, HU J M, LI Z H, et al. Harmonious lane changing via deep reinforcement learning. IEEE Transactions on Intelligent Transportation Systems, 2022, 23(5): 4642-4650.
[34] CHEN L, HU X M, TANG B, et al. Conditional DQN-based motion planning with fuzzy logic for autonomous driving. IEEE Transactions on Intelligent Transportation Systems, 2022, 23(4): 2966-2977.