[1] NISTÉR D,NARODITSKY O,BERGEN J,et al.Visual odometry[C]//Proceedings of IEEE Conference on Computer Vision and Pattern Recognition.Washington D.C.,USA:IEEE Press,2004:652-659.
[2] MATTHIES L,SHAFER S.Error modeling in stereo navigation[J].IEEE Journal on Robotics and Automation,1987,3(3):239-248.
[3] DURRANT-WHYTE H,BAILEY T.Simultaneous localization and mapping:part I[J].IEEE Robotics & Automation Magazine,2006,13(2):99-110.
[4] BAILEY T,DURRANT-WHYTE H.Simultaneous localization and mapping:part II[J].IEEE Robotics & Automation Magazine,2006,13(3):108-117.
[5] KARLSSON N,DI BERNARDO E,OSTROWSKI J,et al.The vSLAM algorithm for robust localization and mapping[C]//Proceedings of IEEE International Conference on Robotics and Automation.Washington D.C.,USA:IEEE Press,2005:24-29.
[6] KLEIN G,MURRAY D.Parallel tracking and mapping for small AR workspaces[C]//Proceedings of the 6th IEEE and ACM International Symposium on Mixed and Augmented Reality.Washington D.C.,USA:IEEE Press,2007:225-234.
[7] FORSTER C,PIZZOLI M,SCARAMUZZA D.SVO:fast semi-direct monocular visual odometry[C]//Proceedings of IEEE International Conference on Robotics and Automation.Washington D.C.,USA:IEEE Press,2014:15-22.
[8] ENGEL J,KOLTUN V,CREMERS D.Direct sparse odometry[J].IEEE Transactions on Pattern Analysis and Machine Intelligence,2018,40(3):611-625.
[9] MUR-ARTAL R,MONTIEL J M M,TARDÓS J D.ORB-SLAM:a versatile and accurate monocular SLAM system[J].IEEE Transactions on Robotics,2015,31(5):1147-1163.
[10] LABBE M,MICHAUD F.Online global loop closure detection for large-scale multi-session graph-based SLAM[C]//Proceedings of IEEE/RSJ International Conference on Intelligent Robots and Systems.Washington D.C.,USA:IEEE Press,2014:2661-2666.
[11] KERL C,STURM J,CREMERS D.Dense visual SLAM for RGB-D cameras[C]//Proceedings of IEEE/RSJ International Conference on Intelligent Robots and Systems.Washington D.C.,USA:IEEE Press,2013:2100-2106.
[12] ENDRES F,HESS J,STURM J,et al.3-D mapping with an RGB-D camera[J].IEEE Transactions on Robotics,2014,30(1):177-187.
[13] LOWE D G.Distinctive image features from scale-invariant keypoints[J].International Journal of Computer Vision,2004,60(2):91-110.
[14] BAY H,TUYTELAARS T,VAN GOOL L.SURF:speeded up robust features[C]//Proceedings of European Conference on Computer Vision.Berlin,Germany:Springer,2006:404-417.
[15] RUBLEE E,RABAUD V,KONOLIGE K,et al.ORB:an efficient alternative to SIFT or SURF[C]//Proceedings of International Conference on Computer Vision.Washington D.C.,USA:IEEE Press,2011:2564-2571.
[16] DAVISON A J,REID I D,MOLTON N D,et al.MonoSLAM:real-time single camera SLAM[J].IEEE Transactions on Pattern Analysis and Machine Intelligence,2007,29(6):1052-1067.
[17] YANG D D,ZHANG X L,LI J M.Binocular visual odometry algorithm based on local and global optimization[J].Computer Engineering,2018,44(1):1-8.(in Chinese)
[18] NEWCOMBE R A,LOVEGROVE S J,DAVISON A J.DTAM:dense tracking and mapping in real-time[C]//Proceedings of International Conference on Computer Vision.Washington D.C.,USA:IEEE Press,2011:2320-2327.
[19] LEUTENEGGER S,LYNEN S,BOSSE M,et al.Keyframe-based visual-inertial odometry using nonlinear optimization[J].The International Journal of Robotics Research,2015,34(3):314-334.
[20] QIN T,LI P,SHEN S.VINS-Mono:a robust and versatile monocular visual-inertial state estimator[J].IEEE Transactions on Robotics,2018,34(4):1004-1020.
[21] MUR-ARTAL R,TARDÓS J D.ORB-SLAM2:an open-source SLAM system for monocular,stereo,and RGB-D cameras[J].IEEE Transactions on Robotics,2017,33(5):1255-1262.
[22] CAMPOS C,ELVIRA R,RODRÍGUEZ J J G,et al.ORB-SLAM3:an accurate open-source library for visual,visual-inertial and multi-map SLAM[EB/OL].[2021-01-01].https://arxiv.org/pdf/2007.11898.pdf.
[23] IZADI S,KIM D,HILLIGES O,et al.KinectFusion:real-time 3D reconstruction and interaction using a moving depth camera[C]//Proceedings of the 24th Annual ACM Symposium on User Interface Software and Technology.New York,USA:ACM Press,2011:559-568.
[24] MORAVEC H.Obstacle avoidance and navigation in the real world by a seeing robot rover[EB/OL].[2021-01-01].https://www.ri.cmu.edu/pub_files/pub4/moravec_hans_1980_1/moravec_hans_1980_1.pdf.
[25] HARRIS C G,STEPHENS M.A combined corner and edge detector[EB/OL].[2021-01-01].https://home.cis.rit.edu/~cnspci/references/dip/feature_extraction/harris1988.pdf.
[26] SHI J,TOMASI C.Good features to track[C]//Proceedings of IEEE Conference on Computer Vision and Pattern Recognition.Washington D.C.,USA:IEEE Press,1994:593-600.
[27] ROSTEN E,DRUMMOND T.Machine learning for high-speed corner detection[C]//Proceedings of European Conference on Computer Vision.Berlin,Germany:Springer,2006:430-443.
[28] CALONDER M,LEPETIT V,STRECHA C,et al.BRIEF:binary robust independent elementary features[C]//Proceedings of European Conference on Computer Vision.Berlin,Germany:Springer,2010:778-792.
[29] MUJA M,LOWE D G.Scalable nearest neighbor algorithms for high dimensional data[J].IEEE Transactions on Pattern Analysis and Machine Intelligence,2014,36(11):2227-2240.
[30] FISCHLER M A,BOLLES R C.Random sample consensus:a paradigm for model fitting with applications to image analysis and automated cartography[J].Communications of the ACM,1981,24(6):381-395.
[31] ZHAO J,MA J,TIAN J,et al.A robust method for vector field learning with application to mismatch removing[C]//Proceedings of IEEE Conference on Computer Vision and Pattern Recognition.Washington D.C.,USA:IEEE Press,2011:2977-2984.
[32] LI H,HARTLEY R.Five-point motion estimation made easy[C]//Proceedings of the 18th International Conference on Pattern Recognition.Washington D.C.,USA:IEEE Press,2006:630-633.
[33] NISTÉR D.An efficient solution to the five-point relative pose problem[J].IEEE Transactions on Pattern Analysis and Machine Intelligence,2004,26(6):756-770.
[34] PIZARRO O,EUSTICE R M,SINGH H.Relative pose estimation for instrumented,calibrated imaging platforms[C]//Proceedings of DICTA'03.Washington D.C.,USA:IEEE Press,2003:601-612.
[35] HARTLEY R I.In defense of the eight-point algorithm[J].IEEE Transactions on Pattern Analysis and Machine Intelligence,1997,19(6):580-593.
[36] LONGUET-HIGGINS H C.A computer algorithm for reconstructing a scene from two projections[J].Nature,1981,293(5828):133-135.
[37] FRAUNDORFER F,TANSKANEN P,POLLEFEYS M.A minimal case solution to the calibrated relative pose problem for the case of two known orientation angles[C]//Proceedings of European Conference on Computer Vision.Berlin,Germany:Springer,2010:269-282.
[38] BESL P J,MCKAY N D.A method for registration of 3D shapes[J].IEEE Transactions on Pattern Analysis and Machine Intelligence,1992,14(2):239-256.
[39] GAO X S,HOU X R,TANG J,et al.Complete solution classification for the perspective-three-point problem[J].IEEE Transactions on Pattern Analysis and Machine Intelligence,2003,25(8):930-943.
[40] ENGEL J,SCHÖPS T,CREMERS D.LSD-SLAM:large-scale direct monocular SLAM[C]//Proceedings of European Conference on Computer Vision.Berlin,Germany:Springer,2014:834-849.
[41] WANG S,CLARK R,WEN H,et al.DeepVO:towards end-to-end visual odometry with deep recurrent convolutional neural networks[C]//Proceedings of IEEE International Conference on Robotics and Automation.Washington D.C.,USA:IEEE Press,2017:2043-2050.
[42] YIN Z,SHI J.GeoNet:unsupervised learning of dense depth,optical flow and camera pose[C]//Proceedings of IEEE Conference on Computer Vision and Pattern Recognition.Washington D.C.,USA:IEEE Press,2018:1983-1992.
[43] TATENO K,TOMBARI F,LAINA I,et al.CNN-SLAM:real-time dense monocular SLAM with learned depth prediction[C]//Proceedings of IEEE Conference on Computer Vision and Pattern Recognition.Washington D.C.,USA:IEEE Press,2017:6243-6252.
[44] ROBERTS R,NGUYEN H,KRISHNAMURTHI N,et al.Memory-based learning for visual odometry[C]//Proceedings of IEEE International Conference on Robotics and Automation.Washington D.C.,USA:IEEE Press,2008:47-52.
[45] KONDA K R,MEMISEVIC R.Learning visual odometry with a convolutional network[C]//Proceedings of International Conference on Computer Vision Theory and Applications.Washington D.C.,USA:IEEE Press,2015:486-490.
[46] COSTANTE G,MANCINI M,VALIGI P,et al.Exploring representation learning with CNNs for frame-to-frame ego-motion estimation[J].IEEE Robotics and Automation Letters,2016,1(1):18-25.
[47] MULLER P,SAVAKIS A.Flowdometry:an optical flow and deep learning based approach to visual odometry[C]//Proceedings of IEEE Winter Conference on Applications of Computer Vision.Washington D.C.,USA:IEEE Press,2017:624-631.
[48] LIN Y,LIU Z,HUANG J,et al.Deep global-relative networks for end-to-end 6-DOF visual localization and odometry[C]//Proceedings of Pacific Rim International Conference on Artificial Intelligence.Berlin,Germany:Springer,2019:454-467.
[49] JIAO J,JIAO J,MO Y,et al.MagicVO:end-to-end monocular visual odometry through deep bi-directional recurrent convolutional neural network[EB/OL].[2021-01-01].https://arxiv.org/ftp/arxiv/papers/1811/1811.10964.pdf.
[50] YU C,LIU Z,LIU X J,et al.DS-SLAM:a semantic visual SLAM towards dynamic environments[C]//Proceedings of IEEE/RSJ International Conference on Intelligent Robots and Systems.Washington D.C.,USA:IEEE Press,2018:1168-1174.
[51] ALMALIOGLU Y,SAPUTRA M R U,DE GUSMAO P P,et al.GANVO:unsupervised deep monocular visual odometry and depth estimation with generative adversarial networks[C]//Proceedings of International Conference on Robotics and Automation.Washington D.C.,USA:IEEE Press,2019:5474-5480.
[52] PANG S,MORRIS D,RADHA H.CLOCs:camera-LiDAR object candidates fusion for 3D object detection[EB/OL].[2021-01-01].https://arxiv.org/pdf/2009.00784.pdf.
[53] YANG N,STUMBERG L V,WANG R,et al.D3VO:deep depth,deep pose and deep uncertainty for monocular visual odometry[C]//Proceedings of IEEE/CVF Conference on Computer Vision and Pattern Recognition.Washington D.C.,USA:IEEE Press,2020:1281-1292.
[54] LOO S Y,AMIRI A J,MASHOHOR S,et al.CNN-SVO:improving the mapping in semi-direct visual odometry using single-image depth prediction[C]//Proceedings of International Conference on Robotics and Automation.Washington D.C.,USA:IEEE Press,2019:5218-5223.
[55] LI R,WANG S,LONG Z,et al.UnDeepVO:monocular visual odometry through unsupervised deep learning[C]//Proceedings of IEEE International Conference on Robotics and Automation.Washington D.C.,USA:IEEE Press,2018:7286-7291.
[56] COSTANTE G,MANCINI M.Uncertainty estimation for data-driven visual odometry[J].IEEE Transactions on Robotics,2020,36(6):1738-1757.
[57] ZHAN H,WEERASEKERA C S,BIAN J W,et al.Visual odometry revisited:what should be learnt?[C]//Proceedings of IEEE International Conference on Robotics and Automation.Washington D.C.,USA:IEEE Press,2020:4203-4210.
[58] GEIGER A,LENZ P,URTASUN R.Are we ready for autonomous driving? The KITTI vision benchmark suite[C]//Proceedings of IEEE Conference on Computer Vision and Pattern Recognition.Washington D.C.,USA:IEEE Press,2012:3354-3361.
[59] STURM J,ENGELHARD N,ENDRES F,et al.A benchmark for the evaluation of RGB-D SLAM systems[C]//Proceedings of IEEE/RSJ International Conference on Intelligent Robots and Systems.Washington D.C.,USA:IEEE Press,2012:573-580.
[60] BURRI M,NIKOLIC J,GOHL P,et al.The EuRoC micro aerial vehicle datasets[J].The International Journal of Robotics Research,2016,35(10):1157-1163.
[61] MADDERN W,PASCOE G,LINEGAR C,et al.1 year,1000 km:the Oxford RobotCar dataset[J].The International Journal of Robotics Research,2017,36(1):3-15.
[62] HODAN T,HALUZA P,OBDRŽÁLEK Š,et al.T-LESS:an RGB-D dataset for 6D pose estimation of texture-less objects[C]//Proceedings of IEEE Winter Conference on Applications of Computer Vision.Washington D.C.,USA:IEEE Press,2017:880-888.
[63] GASPAR A R,NUNES A,PINTO A M,et al.Urban@CRAS dataset:benchmarking of visual odometry and SLAM techniques[J].Robotics and Autonomous Systems,2018,109:59-67.
[64] WENZEL P,WANG R,YANG N,et al.4Seasons:a cross-season dataset for multi-weather SLAM in autonomous driving[EB/OL].[2021-01-01].https://vision.in.tum.de/_media/spezial/bib/wenzel2020fourseasons.pdf.
[65] WANG W,ZHU D,WANG X,et al.TartanAir:a dataset to push the limits of visual SLAM[EB/OL].[2021-01-01].https://arxiv.org/pdf/2003.14338.pdf.
[66] ZUÑIGA-NOËL D,JAENAL A,GOMEZ-OJEDA R,et al.The UMA-VI dataset:visual-inertial odometry in low-textured and dynamic illumination environments[J].The International Journal of Robotics Research,2020,39(9):1052-1060.
[67] PIRE T,FISCHER T,CIVERA J,et al.Stereo parallel tracking and mapping for robot localization[C]//Proceedings of IEEE/RSJ International Conference on Intelligent Robots and Systems.Washington D.C.,USA:IEEE Press,2015:1373-1378.