[1] 中国汽车工程学会. 节能与新能源汽车技术路线图2.0[M]. 2版. 北京: 机械工业出版社: 539.
China Society of Automotive Engineers. Technology Roadmap for Energy-Saving and New Energy Vehicles 2.0 [M]. 2nd ed. Beijing: Mechanical Industry Press, p. 539.
[2] 郭延永, 刘佩, 袁泉, 等. 网联自动驾驶车辆道路交通安全研究综述[J]. 交通运输工程学报, 2023, 23(5): 19-38.
Guo Yanyong, Liu Pei, Yuan Quan, et al. A Review of Road Traffic Safety Research for Connected and Automated Vehicles [J]. Journal of Traffic and Transportation Engineering, 2023, 23(5): 19-38.
[3] 张阳婷, 黄德启, 王东伟, 等. 基于深度学习的目标检测算法研究与应用综述[J]. 计算机工程与应用, 2023, 59(18).
Zhang Yangting, Huang Deqi, Wang Dongwei, et al. Review of Research and Applications of Deep Learning-Based Object Detection Algorithms [J]. Computer Engineering and Applications, 2023, 59(18).
[4] Wang Q, Wu B, Zhu P, et al. ECA-Net: Efficient channel attention for deep convolutional neural networks[C], Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (CVPR). 2020: 11534-11542.
[5] Qin Z, Zhang P, Wu F, et al. FcaNet: Frequency channel attention networks[C], Proceedings of the IEEE/CVF international conference on computer vision (ICCV). 2021: 783-792.
[6] Zheng Z, Wang P, Liu W, et al. Distance-IoU loss: Faster and better learning for bounding box regression[C], Proceedings of the AAAI conference on artificial intelligence (AAAI). 2020: 12993-13000.
[7] Yu J, Jiang Y, Wang Z, et al. UnitBox: An advanced object detection network[C], Proceedings of the 24th ACM international conference on Multimedia (MM). 2016: 516-520.
[8] Zhang Y-F, Ren W, Zhang Z, et al. Focal and efficient IOU loss for accurate bounding box regression[J]. Neurocomputing, 2022, 506: 146-157.
[9] Girshick R, Donahue J, Darrell T, et al. Rich feature hierarchies for accurate object detection and semantic segmentation[C], Proceedings of the IEEE conference on computer vision and pattern recognition (CVPR). 2014: 580-587.
[10] Girshick R. Fast R-CNN[C], Proceedings of the IEEE international conference on computer vision (ICCV). 2015: 1440-1448.
[11] Ren S, He K, Girshick R, et al. Faster R-CNN: Towards real-time object detection with region proposal networks[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2017, 39(6): 1137-1149.
[12] He K, Gkioxari G, Dollár P, et al. Mask R-CNN[C], Proceedings of the IEEE international conference on computer vision (ICCV). 2017: 2961-2969.
[13] Redmon J, Divvala S, Girshick R, et al. You Only Look Once: Unified, real-time object detection[C], Proceedings of the IEEE conference on computer vision and pattern recognition (CVPR). 2016: 779-788.
[14] Liu W, Anguelov D, Erhan D, et al. SSD: Single shot multibox detector[C], Computer Vision–ECCV 2016: 14th European Conference, Amsterdam, The Netherlands, October 11–14, 2016, Proceedings, Part I. Springer, 2016: 21-37.
[15] Lin T-Y, Goyal P, Girshick R, et al. Focal loss for dense object detection[C], Proceedings of the IEEE international conference on computer vision (ICCV). 2017: 2980-2988.
[16] Duan K, Bai S, Xie L, et al. CenterNet: Keypoint triplets for object detection[C], Proceedings of the IEEE/CVF international conference on computer vision (ICCV). 2019: 6569-6578.
[17] Bochkovskiy A, Wang C-Y, Liao H-Y M. YOLOv4: Optimal speed and accuracy of object detection[EB/OL]. https://arxiv.org/abs/2004.10934, 2020-04-23.
[18] Wang A, Chen H, Liu L, et al. YOLOv10: Real-time end-to-end object detection[EB/OL]. https://arxiv.org/abs/2405.14458, 2024-10-30.
[19] Khanam R, Hussain M. YOLO11: An overview of the key architectural enhancements[EB/OL]. https://arxiv.org/abs/2410.17725, 2024-10-23.
[20] Carion N, Massa F, Synnaeve G, et al. End-to-end object detection with transformers[C], European conference on computer vision (ECCV). Springer, 2020: 213-229.
[21] Zhu X, Su W, Lu L, et al. Deformable DETR: Deformable transformers for end-to-end object detection[EB/OL]. https://arxiv.org/abs/2010.04159, 2021-03-18.
[22] Meng D, Chen X, Fan Z, et al. Conditional DETR for fast training convergence[C], Proceedings of the IEEE/CVF international conference on computer vision (ICCV). 2021: 3651-3660.
[23] Liu S, Li F, Zhang H, et al. DAB-DETR: Dynamic anchor boxes are better queries for DETR[EB/OL]. https://arxiv.org/abs/2201.12329, 2022-03-30.
[24] Zhao Y, Lv W, Xu S, et al. DETRs beat YOLOs on real-time object detection[C], Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (CVPR). 2024: 16965-16974.
[25] Tan M, Pang R, Le Q V. EfficientDet: Scalable and efficient object detection[C], Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (CVPR). 2020: 10781-10790.
[26] Qin D, Leichner C, Delakis M, et al. MobileNetV4: Universal models for the mobile ecosystem[C], European conference on computer vision (ECCV). Springer, 2024: 78-96.
[27] Yu F, Koltun V. Multi-scale context aggregation by dilated convolutions[EB/OL]. https://arxiv.org/abs/1511.07122, 2016-04-30.
[28] Chollet F. Xception: Deep learning with depthwise separable convolutions[C], Proceedings of the IEEE conference on computer vision and pattern recognition (CVPR). 2017: 1251-1258.
[29] Ma N, Zhang X, Zheng H-T, et al. ShuffleNet V2: Practical guidelines for efficient CNN architecture design[C], Proceedings of the European conference on computer vision (ECCV). 2018: 116-131.