[1] NISTÉR D, NARODITSKY O, BERGEN J. Visual odometry for ground vehicle applications[J]. Journal of Field Robotics, 2006, 23(1): 3-20.
[2] HOWARD A. Real-time stereo visual odometry for autonomous ground vehicles[C]//Proceedings of 2008 IEEE/RSJ International Conference on Intelligent Robots and Systems. Washington D.C., USA: IEEE Press, 2008: 3946-3952.
[3] JOO S-H, MANZOOR S, ROCHA Y G, et al. A realtime autonomous robot navigation framework for human like high-level interaction and task planning in global dynamic environment[EB/OL]. [2020-06-01]. https://arxiv.org/ftp/arxiv/papers/1905/1905.12942.pdf.
[4] KLEIN G, MURRAY D. Parallel tracking and mapping for small AR workspaces[C]//Proceedings of the 6th IEEE and ACM International Symposium on Mixed and Augmented Reality. Washington D.C., USA: IEEE Press, 2007: 1-10.
[5] 赵越, 李晶皎, 王爱侠, 等. 基于IEKF-SLAM的未知场景增强现实跟踪注册算法[J]. 计算机工程, 2016, 42(1): 272-277.
ZHAO Y, LI J J, WANG A X, et al. Tracking and registration algorithm of augmented reality on unknown scene based on IEKF-SLAM[J]. Computer Engineering, 2016, 42(1): 272-277. (in Chinese)
[6] MUR-ARTAL R, TARDÓS J D. ORB-SLAM2: an open-source SLAM system for monocular, stereo, and RGB-D cameras[J]. IEEE Transactions on Robotics, 2017, 33(5): 1255-1262.
[7] ENGEL J, KOLTUN V, CREMERS D. Direct sparse odometry[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2017, 40(3): 611-625.
[8] GOMEZ-OJEDA R, MORENO F-A, ZUÑIGA-NOËL D, et al. PL-SLAM: a stereo SLAM system through the combination of points and line segments[J]. IEEE Transactions on Robotics, 2019, 35(3): 734-746.
[9] SHI J, TOMASI C. Good features to track[C]//Proceedings of 1994 IEEE Conference on Computer Vision and Pattern Recognition. Washington D.C., USA: IEEE Press, 1994: 593-600.
[10] ROSTEN E, DRUMMOND T. Machine learning for high-speed corner detection[C]//Proceedings of European Conference on Computer Vision. Berlin, Germany: Springer, 2006: 430-443.
[11] RUBLEE E, RABAUD V, KONOLIGE K, et al. ORB: an efficient alternative to SIFT or SURF[C]//Proceedings of the 2011 International Conference on Computer Vision. Washington D.C., USA: IEEE Press, 2011: 2564-2571.
[12] RAGURAM R, FRAHM J-M, POLLEFEYS M. A comparative analysis of RANSAC techniques leading to adaptive real-time random sample consensus[C]//Proceedings of European Conference on Computer Vision. Berlin, Germany: Springer, 2008: 500-513.
[13] FANG Y, DAI B. An improved moving target detecting and tracking based on optical flow technique and Kalman filter[C]//Proceedings of 2009 International Conference on Computer Science & Education. Washington D.C., USA: IEEE Press, 2009: 1197-1202.
[14] WANG Y, HUANG S. Motion segmentation based robust RGB-D SLAM[C]//Proceedings of the 11th World Congress on Intelligent Control and Automation. Washington D.C., USA: IEEE Press, 2014: 3122-3127.
[15] ALCANTARILLA P F, YEBES J J, ALMAZÁN J, et al. On combining visual SLAM and dense scene flow to increase the robustness of localization and mapping in dynamic environments[C]//Proceedings of 2012 IEEE International Conference on Robotics and Automation. Washington D.C., USA: IEEE Press, 2012: 1290-1297.
[16] 杨雪, 范勇, 高琳, 等. 基于纹理基元块识别与合并的图像语义分割[J]. 计算机工程, 2015, 41(3): 253-257.
YANG X, FAN Y, GAO L, et al. Image semantic segmentation based on texture element block recognition and merging[J]. Computer Engineering, 2015, 41(3): 253-257. (in Chinese)
[17] 夏胡云, 叶学义, 罗宵晗, 等. 多尺度空间金字塔池化PCANet的行人检测[J]. 计算机工程, 2019, 45(2): 270-277.
XIA H Y, YE X Y, LUO X H, et al. Pedestrian detection using multi-scale principal component analysis network of spatial pyramid pooling[J]. Computer Engineering, 2019, 45(2): 270-277. (in Chinese)
[18] YU C, LIU Z, LIU X J, et al. DS-SLAM: a semantic visual SLAM towards dynamic environments[C]//Proceedings of 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems. Washington D.C., USA: IEEE Press, 2018: 1168-1174.
[19] BADRINARAYANAN V, KENDALL A, CIPOLLA R. SegNet: a deep convolutional encoder-decoder architecture for image segmentation[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2017, 39(12): 2481-2495.
[20] BESCOS B, FÁCIL J M, CIVERA J, et al. DynaSLAM: tracking, mapping, and inpainting in dynamic scenes[J]. IEEE Robotics and Automation Letters, 2018, 3(4): 4076-4083.
[21] HE K, GKIOXARI G, DOLLÁR P, et al. Mask R-CNN[C]//Proceedings of the IEEE International Conference on Computer Vision. Washington D.C., USA: IEEE Press, 2017: 2961-2969.
[22] BÂRSAN I A, LIU P, POLLEFEYS M, et al. Robust dense mapping for large-scale dynamic environments[C]//Proceedings of 2018 IEEE International Conference on Robotics and Automation. Washington D.C., USA: IEEE Press, 2018: 7510-7517.
[23] CALONDER M, LEPETIT V, STRECHA C, et al. BRIEF: binary robust independent elementary features[C]//Proceedings of European Conference on Computer Vision. Berlin, Germany: Springer, 2010: 778-792.
[24] BIAN J, LIN W-Y, MATSUSHITA Y, et al. GMS: grid-based motion statistics for fast, ultra-robust feature correspondence[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Washington D.C., USA: IEEE Press, 2017: 4181-4190.
[25] LIN T-Y, MAIRE M, BELONGIE S, et al. Microsoft COCO: common objects in context[C]//Proceedings of European Conference on Computer Vision. Berlin, Germany: Springer, 2014: 740-755.
[26] REN S, HE K, GIRSHICK R, et al. Faster R-CNN: towards real-time object detection with region proposal networks[C]//Proceedings of Advances in Neural Information Processing Systems. Red Hook, USA: Curran Associates, 2015: 91-99.
[27] STURM J, ENGELHARD N, ENDRES F, et al. A benchmark for the evaluation of RGB-D SLAM systems[C]//Proceedings of 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems. Washington D.C., USA: IEEE Press, 2012: 573-580.