[1] 王若萱, 吴建平, 徐辉. 自动驾驶汽车感知系统仿真的研究及应用综述[J]. 系统仿真学报, 2022, 34(12): 2507-2521. WANG R X, WU J P, XU H. Overview of research and application on autonomous vehicle oriented perception system simulation[J]. Journal of System Simulation, 2022, 34(12): 2507-2521. (in Chinese) [2] AYALA R, MOHD T K. Sensors in autonomous vehicles: a survey[J]. Journal of Autonomous Vehicles and Systems, 2021, 1(3): 031003. [3] TAN R T. Visibility in bad weather from a single image[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Washington D.C.,USA:IEEE Press,2008: 1-8. [4] 陈熙源, 戈明明, 姚志婷, 等. 雨雪天气下的激光雷达滤波算法研究[J]. 仪器仪表学报, 2023, 44(7): 172-181. CHEN X Y, GE M M, YAO Z T, et al. Research on lidar filtering algorithm for rainy and snowy weather[J]. Chinese Journal of Scientific Instrument, 2023, 44(7): 172-181. (in Chinese) [5] SUN R, SUZUKI K, OWADA Y, et al. A millimeter-wave automotive radar with high angular resolution for identification of closely spaced on-road obstacles[J]. Scientific Reports, 2023, 13: 3233. [6] 任珈民, 宫宁生, 韩镇阳. 基于YOLOv3与卡尔曼滤波的多目标跟踪算法[J]. 计算机应用与软件, 2020, 37(5): 169-176. REN J M, GONG N S, HAN Z Y. Multi-target tracking algorithm based on YOLOv3 and Kalman filter[J]. Computer Applications and Software, 2020, 37(5): 169-176. (in Chinese) [7] BAI J, LI S, HUANG L B, et al. Robust detection and tracking method for moving object based on radar and camera data fusion[J]. IEEE Sensors Journal, 2021, 21(9): 10761-10774. [8] BANSAL K, RUNGTA K, BHARADIA D. RadSegNet: a reliable approach to radar camera fusion[EB/OL].[2024-05-05].https://arxiv.org/abs/2208.03849. [9] NABATI R, QI H R. CenterFusion: center-based radar and camera fusion for 3D object detection[C]//Proceedings of the IEEE Winter Conference on Applications of Computer Vision (WACV). Washington D.C.,USA:IEEE Press,2021: 1526-1535. [10] TAN B, MA Z X, ZHU X C, et al. 3-D object detection for multiframe 4-D automotive millimeter-wave radar point cloud[J]. IEEE Sensors Journal, 2023, 23(11): 11125-11138. [11] BAI J, LI S, TAN B, et al. Traffic participants classification based on 3D radio detection and ranging point clouds[J]. IET Radar, Sonar & Navigation, 2022, 16(2): 278-290. [12] MEYER M, KUSCHK G. Automotive radar dataset for deep learning based 3D object detection[C]//Proceedings of the 16th European Radar Conference (EuRAD). Washington D.C.,USA:IEEE Press,2019: 129-132. [13] PALFFY A, POOL E, BARATAM S, et al. Multi-class road user detection with 31D radar in the view-of-delft dataset[J]. IEEE Robotics and Automation Letters, 2022, 7(2): 4961-4968. [14] PAEK D H, KONG S H, WIJAYA K T. K-Radar: 4D radar object detection for autonomous driving in various weather conditions[EB/OL].[2024-05-05].https://arxiv.org/abs/2206.08171. [15] ZHENG L Q, MA Z X, ZHU X C, et al. TJ4DRadSet: a 4D radar dataset for autonomous driving[C]//Proceedings of the 25th IEEE International Conference on Intelligent Transportation Systems (ITSC). Washington D.C.,USA:IEEE Press,2022: 493-498. [16] LIU Z J, TANG H T, AMINI A, et al. BEVFusion: multi-task multi-sensor fusion with unified bird’s-eye view representation[C]//Proceedings of the IEEE International Conference on Robotics and Automation (ICRA). Washington D.C.,USA:IEEE Press,2023: 2774-2781. [17] SIMONYAN K, ZISSERMAN A. Very deep convolutional networks for large-scale image recognition[EB/OL].[2024-05-05].https://arxiv.org/abs/1409.1556. [18] HE K M, ZHANG X Y, REN S Q, et al. Deep residual learning for image recognition[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Washington D.C.,USA:IEEE Press,2016: 770-778. [19] REDMON J, FARHADI A. YOLOv3: an incremental improvement[EB/OL].[2024-05-05].https://arxiv.org/abs/1804.02767. [20] CHADWICK S, MADDERN W, NEWMAN P. Distant vehicle detection using radar and vision[C]//Proceedings of the International Conference on Robotics and Automation (ICRA). Washington D.C.,USA:IEEE Press, 2019: 8311-8317. [21] LIU W, ANGUELOV D, ERHAN D, et al. SSD: single shot MultiBox detector[EB/OL].[2024-05-05]. https://arxiv.org/abs/1512.02325. [22] CHANG S, ZHANG Y F, ZHANG F, et al. Spatial attention fusion for obstacle detection using MmWave radar and vision sensor[J]. Sensors, 2020, 20(4): 956. [23] KIM Y, CHOI J W, KUM D. GRIF Net: gated region of interest fusion network for robust 3D object detection from radar point cloud and monocular image[C]//Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). Washington D.C.,USA:IEEE Press,2021: 10857-10864. [24] LIN T Y, DOLLÁR P, GIRSHICK R, et al. Feature pyramid networks for object detection[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Washington D.C.,USA:IEEE Press,2017: 936-944. [25] REN M Y, POKROVSKY A, YANG B, et al. SBNet: sparse blocks network for fast inference[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. Washington D.C.,USA:IEEE Press,2018: 8711-8720. [26] KIM Y, KIM S, CHOI J W, et al. CRAFT: camera-radar 3D object detection with spatio-contextual fusion transformer[J]. Proceedings of the AAAI Conference on Artificial Intelligence, 2023, 37(1): 1160-1168. [27] WU Z Z, CHEN G L, GAN Y Z, et al. MVFusion: multi-view 3D object detection with semantic-aligned radar and camera fusion[C]//Proceedings of the IEEE International Conference on Robotics and Automation (ICRA). Washington D.C.,USA:IEEE Press,2023: 2766-2773. [28] ZHENG L Q, LI S, TAN B, et al. RCFusion: fusing 4-D radar and camera with bird’s-eye view features for 3-D object detection[J]. IEEE Transactions on Instrumentation and Measurement, 2023, 72: 8503814. [29] RODDICK T, KENDALL A, CIPOLLA R. Orthographic feature transform for monocular 3D object detection[EB/OL].[2024-05-05].https://arxiv.org/abs/1811.08188. [30] LANG A H, VORA S, CAESAR H, et al. PointPillars: fast encoders for object detection from point clouds[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Washington D.C.,USA:IEEE Press,2020: 12689-12697. [31] YAN H, XIONG S L, WANG L, et al. ATFusion: an alternate cross-attention transformer network for infrared and visible image fusion[EB/OL].[2024-05-05].https://arxiv.org/abs/2401.11675. [32] YAN Y, MAO Y X, LI B. SECOND: sparsely embedded convolutional detection[J]. Sensors, 2018, 18(10): 3337. |