[1] Nistér D, Naroditsky O, Bergen J. Visual odometry for
ground vehicle applications[J]. Journal of Field Robotics, 2006,
23(1): 3-20.
[2] Howard A. Real-time stereo visual odometry for
autonomous ground vehicles[C]. 2008 IEEE/RSJ International
Conference on Intelligent Robots and Systems, 2008:
3946-3952.
[3] Joo S-H, Manzoor S, Rocha Y G, et al. A Realtime
Autonomous Robot Navigation Framework for Human like
High-level Interaction and Task Planning in Global Dynamic
Environment[J]. arXiv preprint arXiv:1905.12942, 2019.
[4] Klein G, Murray D. Parallel tracking and mapping for small
AR workspaces[C]. Proceedings of the 2007 6th IEEE and
ACM International Symposium on Mixed and Augmented
Reality, 2007: 1-10.
[5] Zhao Y, Li J, Wang A, et al. Tracking and Registration
Algorithm of Augmented Reality on Unknown Scene Based on
IEKF-SLAM[J]. Computer Engineering, 2016, 42(1): 272-277.
(in Chinese)
[6] Mur-Artal R, Tardós J D. ORB-SLAM2: An open-source
SLAM system for monocular, stereo, and RGB-D cameras[J].
IEEE Transactions on Robotics, 2017, 33(5): 1255-1262.
[7] Engel J, Koltun V, Cremers D. Direct sparse odometry[J].
IEEE Transactions on Pattern Analysis and Machine Intelligence,
2017, 40(3): 611-625.
[8] Gomez-Ojeda R, Moreno F-A, Zuñiga-Noël D, et al.
PL-SLAM: A stereo SLAM system through the combination of
points and line segments[J]. IEEE Transactions on Robotics,
2019, 35(3): 734-746.
[9] Shi J, Tomasi C. Good features to track[C]. Proceedings of
IEEE Conference on Computer Vision and Pattern Recognition,
1994: 593-600.
[10] Rosten E, Drummond T. Machine learning for high-speed
corner detection[C]. European conference on computer vision,
2006: 430-443.
[11] Rublee E, Rabaud V, Konolige K, et al. ORB: An efficient
alternative to SIFT or SURF[C]. 2011 International Conference
on Computer Vision, 2011: 2564-2571.
[12] Raguram R, Frahm J-M, Pollefeys M. A comparative
analysis of RANSAC techniques leading to adaptive real-time
random sample consensus[C]. European Conference on
Computer Vision, 2008: 500-513.
[13] Fang Y, Dai B. An improved moving target detecting and
tracking based on optical flow technique and Kalman filter[C].
2009 4th International Conference on Computer Science &
Education, 2009: 1197-1202.
[14] Wang Y, Huang S. Motion segmentation based robust
RGB-D SLAM[C]. Proceedings of the 11th World Congress on
Intelligent Control and Automation, 2014: 3122-3127.
[15] Alcantarilla P F, Yebes J J, Almazán J, et al. On combining
visual SLAM and dense scene flow to increase the robustness of
localization and mapping in dynamic environments[C]. 2012
IEEE International Conference on Robotics and Automation,
2012: 1290-1297.
[16] Xi Z, Han S, Wang H. Simultaneous localization and
semantic mapping of indoor dynamic scene based on semantic
segmentation[J]. Journal of Computer Applications, 2019,
39(10): 2847-2851. (in Chinese)
[17] Xia H, Ye X, Luo X, et al. Pedestrian Detection Using
Multi-scale Principal Component Analysis Network of Spatial
Pyramid Pooling[J]. Computer Engineering, 2019, 45(2):
270-277. (in Chinese)
[18] Yu C, Liu Z, Liu X-J, et al. DS-SLAM: A semantic visual
SLAM towards dynamic environments[C]. 2018 IEEE/RSJ
International Conference on Intelligent Robots and Systems
(IROS), 2018: 1168-1174.
[19] Badrinarayanan V, Kendall A, Cipolla R. SegNet: A deep
convolutional encoder-decoder architecture for image
segmentation[J]. IEEE Transactions on Pattern Analysis and
Machine Intelligence, 2017, 39(12): 2481-2495.
[20] Bescos B, Fácil J M, Civera J, et al. DynaSLAM: Tracking,
mapping, and inpainting in dynamic scenes[J]. IEEE Robotics
and Automation Letters, 2018, 3(4): 4076-4083.
[21] He K, Gkioxari G, Dollár P, et al. Mask R-CNN[C].
Proceedings of the IEEE International Conference on Computer
Vision, 2017: 2961-2969.
[22] Bârsan I A, Liu P, Pollefeys M, et al. Robust dense
mapping for large-scale dynamic environments[C]. 2018 IEEE
International Conference on Robotics and Automation (ICRA),
2018: 7510-7517.
[23] Calonder M, Lepetit V, Strecha C, et al. BRIEF: Binary
robust independent elementary features[C]. European
Conference on Computer Vision, 2010: 778-792.
[24] Bian J, Lin W-Y, Matsushita Y, et al. GMS: Grid-based
motion statistics for fast, ultra-robust feature correspondence[C].
Proceedings of the IEEE Conference on Computer Vision and
Pattern Recognition, 2017: 4181-4190.
[25] Lin T-Y, Maire M, Belongie S, et al. Microsoft COCO:
Common objects in context[C]. European Conference on
Computer Vision, 2014: 740-755.
[26] Ren S, He K, Girshick R, et al. Faster R-CNN: Towards
real-time object detection with region proposal networks[C].
Advances in Neural Information Processing Systems, 2015:
91-99.
[27] Sturm J, Engelhard N, Endres F, et al. A benchmark for the
evaluation of RGB-D SLAM systems[C]. 2012 IEEE/RSJ
International Conference on Intelligent Robots and Systems,
2012: 573-580.