[1] CHEN C, WANG B, LU C X, et al. Deep learning for visual localization and mapping: a survey[J]. IEEE Transactions on Neural Networks and Learning Systems, 2024, 35(12): 17000-17020.
[2] JIAO S, LI Y, SHAN Z. DFS-SLAM: a visual SLAM algorithm for deep fusion of semantic information[J]. IEEE Robotics and Automation Letters, 2024, 9(12): 11794-11801.
[3] MUR-ARTAL R, MONTIEL J M M, TARDÓS J D. ORB-SLAM: a versatile and accurate monocular SLAM system[J]. IEEE Transactions on Robotics, 2015, 31(5): 1147-1163.
[4] MUR-ARTAL R, TARDÓS J D. ORB-SLAM2: an open-source SLAM system for monocular, stereo, and RGB-D cameras[J]. IEEE Transactions on Robotics, 2017, 33(5): 1255-1262.
[5] CAMPOS C, ELVIRA R, RODRÍGUEZ J J G, et al. ORB-SLAM3: an accurate open-source library for visual, visual-inertial, and multimap SLAM[J]. IEEE Transactions on Robotics, 2021, 37(6): 1874-1890.
[6] ZHAO Z, ZHANG J, MA S, et al. DeepORB: a deep learning-based ORB-SLAM framework for monocular visual SLAM[J]. IEEE Transactions on Robotics, 2021, 37(6): 1805-1821.
[7] ZHAO C, TANG Y, SUN Q, et al. Deep direct visual odometry[J]. IEEE Transactions on Intelligent Transportation Systems, 2022, 23(7): 7733-7742.
[8] ZHANG X, DONG H, ZHANG H, et al. A real-time, robust, and versatile visual-SLAM framework based on deep learning networks[J]. IEEE Transactions on Instrumentation and Measurement, 2025, 74: 1-13.
[9] ZHANG H, HUO J, HUANG Y, et al. MD-SLAM: a robust deep-learning feature-based VSLAM system in dynamic environments combined hierarchical multidimensional clustering algorithm[J]. IEEE Transactions on Instrumentation and Measurement, 2025, 74: 1-22.
[10] CHEN K, XIAO J, LIU J, et al. Semantic visual simultaneous localization and mapping: a survey[J]. IEEE Transactions on Intelligent Transportation Systems, 2025, 26(6): 7426-7449.
[11] HOANG M L. Unlocking robotic perception: comparison of deep learning methods for simultaneous localization and mapping and visual simultaneous localization and mapping in robot[J]. International Journal of Intelligent Robotics and Applications, 2025, 9: 1011-1043.
[12] SERVIÈRES M, RENAUDIN V, DUPUIS A, et al. Visual and visual-inertial SLAM: state of the art, classification, and experimental benchmarking[J]. Journal of Sensors, 2021, 2021: 2054828.
[13] CHEN K, XIAO J, LIU J, et al. Semantic visual simultaneous localization and mapping: a survey[J]. IEEE Transactions on Intelligent Transportation Systems, 2025, 26(6): 7426-7449.
[14] PU H, LUO J, WANG G, et al. Visual SLAM integration with semantic segmentation and deep learning: a review[J]. IEEE Sensors Journal, 2023, 23(19): 22119-22138.
[15] LAGA H, JOSPIN L V, BOUSSAID F, et al. A survey on deep learning techniques for stereo-based depth estimation[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2022, 44(4): 1738-1764.
[16] ZHAO C, SUN Q, ZHANG C, et al. Monocular depth estimation based on deep learning: an overview[J]. Science China Technological Sciences, 2020, 63(9): 1612-1627.
[17] DUAN C, JUNGINGER S, HUANG J, et al. Deep learning for visual SLAM in transportation robotics: a review[J]. Transportation Safety and Environment, 2019, 1(3): 177-184.
[18] MOKSSIT S, LICEA D B, GUERMAH B, et al. Deep learning techniques for visual SLAM: a survey[J]. IEEE Access, 2023, 11: 20026-20050.
[19] FAVORSKAYA M N. Deep learning for visual SLAM: the state-of-the-art and future trends[J]. Electronics, 2023, 12(9): 2006.
[20] WANG Y, TIAN Y, CHEN J, et al. A survey of visual SLAM in dynamic environment: the evolution from geometric to semantic approaches[J]. IEEE Transactions on Instrumentation and Measurement, 2024, 73: 1-21.
[21] SAHILI A R, HASSAN S, SAKHRIEH S M, et al. A survey of visual SLAM methods[J]. IEEE Access, 2023, 11: 139643-139677.
[22] KAZEROUNI I A, FITZGERALD L, DOOLY G, et al. A survey of state-of-the-art on visual SLAM[J]. Expert Systems with Applications, 2022, 205: 117734.
[23] HUR J, ROTH S. Optical flow estimation in the deep learning age[M]//Modelling Human Motion. Cham: Springer, 2020.
[24] ZENG Q, LUO Y, SUN K, et al. A review of the development of visual and visual-inertial fusion SLAM technology[J]. Journal of Nanjing University of Aeronautics & Astronautics, 2022, 54(6): 1007-1020. (in Chinese)
[25] XU K, HAO Y, YUAN S, et al. AirVO: an illumination-robust point-line visual odometry[C]//Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). 2023.
[26] BUCHANAN R, AGRAWAL V, CAMURRI M, et al. Deep IMU bias inference for robust visual-inertial odometry with factor graphs[J]. IEEE Robotics and Automation Letters, 2023, 8(1): 41-48.
[27] YU N, CHENG Q, YAN J, et al. Indoor SLAM method based on Atlanta world and semantic information[J]. Chinese Journal of Engineering, 2025, 47(10): 2079-2089. DOI: 10.13374/j.issn2095-9389.2024.11.26.002. (in Chinese)
[28] ZHOU Y, LI X, LI S, et al. DBA-Fusion: tightly integrating deep dense visual bundle adjustment with multiple sensors for large-scale localization and mapping[J]. IEEE Robotics and Automation Letters, 2024, 9(7): 6138-6145.
[29] GURUMURTHY S, RAM K, CHEN B, et al. From variance to veracity: unbundling and mitigating gradient variance in differentiable bundle adjustment layers[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). 2024: 27497-27506.
[30] ZHANG W, WANG S, DONG X, et al. BAMF-SLAM: bundle adjusted multi-fisheye visual-inertial SLAM using recurrent field transforms[C]//Proceedings of the IEEE International Conference on Robotics and Automation (ICRA). 2023: 6232-6238.
[31] SHI P, ZHU Z, SUN S, et al. Covariance estimation for pose graph optimization in visual-inertial navigation systems[J]. IEEE Transactions on Intelligent Vehicles, 2023, 8(6): 3657-3667.
[32] JIANG J, CHEN X, DAI W, et al. Thermal-inertial SLAM for the environments with challenging illumination[J]. IEEE Robotics and Automation Letters, 2022, 7(4): 8767-8774.
[33] CHEN S, MAI K. Towards specialized hardware for learning-based visual odometry on the edge[C]//Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). 2022: 10603-10610.
[34] LISONDR A M, KIM J, SHIMODA G T, et al. TCB-VIO: tightly-coupled focal-plane binary-enhanced visual inertial odometry[J]. IEEE Robotics and Automation Letters, 2025.
[35] ZHANG R, WANG Y, LI Z, et al. Online adaptive keypoint extraction for visual odometry across different scenes[J]. IEEE Robotics and Automation Letters, 2025, 10(7): 7539-7546.
[36] KANG W, GAI S, DA F, et al. Occlusion-aware monocular visual odometry for robust trajectory tracking[J]. IEEE Robotics and Automation Letters, 2025, 10(10): 9924-9931.
[37] QU H, ZHANG L, HU X, et al. SelfOdom: self-supervised ego-motion and depth learning via bi-directional coarse-to-fine scale recovery[J]. IEEE Transactions on Intelligent Vehicles, 2024, 9(5): 4962-4978.
[38] WEI Y, LU S, LU W, et al. BEV-DWPVO: BEV-based differentiable weighted procrustes for low scale-drift monocular visual odometry on ground[J]. IEEE Robotics and Automation Letters, 2025, 10(5): 4244-4251.
[39] SCHMIDT F, DAUBERMANN J, MITSCHKE M, et al. Rover: a multiseason dataset for visual SLAM[J]. IEEE Transactions on Robotics, 2025, 41: 4005-4022.
[40] PAPADIMITRIOU A, KLEITSIOTIS I, KOSTAVELIS I, et al. Loop closure detection and SLAM in vineyards with deep semantic cues[C]//Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). 2022: 2251-2258.
[41] SHAO C, ZHANG L, PAN W. Faster R-CNN learning-based semantic filter for geometry estimation and its application in VSLAM systems[J]. IEEE Transactions on Intelligent Transportation Systems, 2022, 23(6): 5257-5266.
[42] LEE J, BACK M, HWANG S S, et al. Improved real-time monocular SLAM using semantic segmentation on selective frames[J]. IEEE Transactions on Intelligent Transportation Systems, 2023, 24(3): 2800-2813.
[43] TROMBLEY C M, DAS S K, POPA D O. Dynamic-GAN: learning spatial-temporal attention for dynamic object removal in feature dense environments[C]//Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). 2022: 12189-12195.
[44] ZHUANG Y, JIA P, LIU Z. AMOS-SLAM: an anti-dynamics two-stage RGB-D SLAM approach[J]. IEEE Transactions on Instrumentation and Measurement, 2024, 73: 1-10.
[45] LIU H, WANG L, LUO H, et al. SDD-SLAM: semantic-driven dynamic SLAM with Gaussian splatting[J]. IEEE Robotics and Automation Letters, 2025, 10(6): 5721-5728.
[46] ZHANG Z, SONG Y, PANG B, et al. SSF-SLAM: real-time RGB-D visual SLAM for complex dynamic environments based on semantic and scene flow geometric information[J]. IEEE Transactions on Instrumentation and Measurement, 2025, 74: 1-12.
[47] GONZALEZ M, MARCHAND E, KACETE A, et al. TwistSLAM: constrained SLAM in dynamic environment[J]. IEEE Robotics and Automation Letters, 2022, 7(3): 6846-6853.
[48] WANG Y, XU K, TIAN Y, et al. DRG-SLAM: a semantic RGB-D SLAM using geometric features for indoor dynamic scene[C]//Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). 2022: 1352-1359.
[49] ZHANG H, UCHIYAMA H, ONO S, et al. MOTSLAM: MOT-assisted monocular dynamic SLAM using single-view depth estimation[EB/OL]. arXiv preprint, 2022.
[50] YUAN C, XU Y, ZHOU Q. PLDS-SLAM: point and line features SLAM in dynamic environment[J]. Remote Sensing, 2023, 15(7): 1-21.
[51] XU Z, WEI H, TANG F, et al. PLPL-VIO: a novel probabilistic line measurement model for point-line-based visual-inertial odometry[C]//Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). 2023: 5211-5218.
[52] SONG S, LIM H, LEE A J, et al. DynaVINS: a visual-inertial SLAM for dynamic environments[J]. IEEE Robotics and Automation Letters, 2022, 7(4): 11523-11530.
[53] WEN S, LI X, LIU X, et al. Dynamic SLAM: a visual SLAM in outdoor dynamic scenes[J]. IEEE Transactions on Instrumentation and Measurement, 2023, 72: 1-11.
[54] JI T, WANG C, XIE L. Towards real-time semantic RGB-D SLAM in dynamic environments[C]//Proceedings of the IEEE International Conference on Robotics and Automation (ICRA). 2021: 11175-11181.
[55] MOHANTY V, AGRAWAL S, DATTA S, et al. DeepVO: a deep learning approach for monocular visual odometry[EB/OL]. arXiv preprint arXiv:1611.06069, 2016.
[56] LI R, WANG S, LONG Z, et al. UnDeepVO: monocular visual odometry through unsupervised deep learning[C]//Proceedings of the IEEE International Conference on Robotics and Automation (ICRA). 2018: 7286-7291.
[57] CLARK R, WANG S, WEN H, et al. VINet: visual-inertial odometry as a sequence-to-sequence learning problem[C]//Proceedings of the AAAI Conference on Artificial Intelligence. 2017, 31(1).
[58] VALADA A, RADWAN N, BURGARD W. Deep auxiliary learning for visual localization and odometry[EB/OL]. arXiv preprint arXiv:1803.02391, 2018.
[59] XU D, VEDALDI A, HENRIQUES J F. Moving SLAM: fully unsupervised deep learning in non-rigid scenes[C]//Proceedings of the IEEE International Conference on Robotics and Automation (ICRA). 2021: 4611-4617.
[60] TEED Z, DENG J. DROID-SLAM: deep visual SLAM for monocular, stereo, and RGB-D cameras[C]//Advances in Neural Information Processing Systems. 2021, 34: 16558-16569.
[61] MING Y, YE W, CALWAY A. IDF-SLAM: end-to-end RGB-D SLAM with neural implicit mapping and deep feature tracking[EB/OL]. arXiv preprint arXiv:2203.08511, 2022.
[62] SHAMWELL E J, LINDGREN K, LEUNG S, et al. Unsupervised deep visual-inertial odometry with online error correction for RGB-D imagery[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2020, 42(10): 2478-2493.
[63] HAN L, LIN Y, DU G, et al. DeepVIO: self-supervised deep learning of monocular visual-inertial odometry using 3D geometric constraints[EB/OL]. arXiv preprint arXiv:1906.04500, 2019.
[64] WANG Y, TIAN Y, CHEN J, et al. MSSD-SLAM: multifeature semantic RGB-D inertial SLAM with structural regularity for dynamic environments[J]. IEEE Transactions on Instrumentation and Measurement, 2025, 74: 1-17.
[65] PENG X, LIU Z, LI W, et al. DVI-SLAM: a dual visual-inertial SLAM network[J]. IEEE Robotics and Automation Letters, 2024, 9(12): 12020-12026.
[66] WANG Z, ZHANG Y, XU X, et al. CMIF-VIO: a novel cross-modal interaction framework for visual-inertial odometry[J]. IEEE Robotics and Automation Letters, 2025, 10(2): 875-882.
[67] KONG D, ZHANG Y, DAI W. Direct near-infrared-depth visual SLAM with active lighting[J]. IEEE Robotics and Automation Letters, 2021, 6(4): 7057-7064.
[68] WANG Y, WAN R, YANG W, et al. Low-light image enhancement with normalizing flow[EB/OL]. arXiv preprint arXiv:2111.11745, 2021.
[69] WANG X, FAN X, LIU Y, et al. EUM-SLAM: an enhancing underwater monocular visual SLAM with deep-learning-based optical flow estimation[J]. IEEE Transactions on Instrumentation and Measurement, 2025, 74: 1-11.
[70] CAI Y, BIAN H, LIN J, et al. Retinexformer: one-stage retinex-based transformer for low-light image enhancement[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV). 2023: 12504-12513.
[71] CUI Z, LI K, GU L, et al. You only need 90k parameters to adapt light: a lightweight transformer for image enhancement and exposure correction[C]//Proceedings of the British Machine Vision Conference (BMVC). 2022.
[72] LEE K, SHIN U, LEE B-U. Learning to control camera exposure via reinforcement learning[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). 2024: 2975-2983.
[73] ZHANG S, HE J, ZHU Y, et al. Efficient camera exposure control for visual odometry via deep reinforcement learning[EB/OL]. arXiv preprint arXiv:2401.03685, 2024.
[74] LI J, ZHEN R, STEVENSON R. Automatic exposure strategy network for robust visual odometry in environments with high dynamic range[J]. Machine Vision and Applications, 2025, 36: 14.
[75] SARLIN P-E, DETONE D, MALISIEWICZ T, et al. SuperGlue: learning feature matching with graph neural networks[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). 2020: 4938-4947.
[76] SUN J, SHEN Z, WANG Y, et al. LoFTR: detector-free local feature matching with transformers[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). 2021: 8922-8931.
[77] LINDENBERGER P, SARLIN P-E, POLLEFEYS M. LightGlue: local feature matching at light speed[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV). 2023: 17627-17638.
[78] AZHARI M B, SHIM D H. DINO-VO: a feature-based visual odometry leveraging a visual foundation model[J]. IEEE Robotics and Automation Letters, 2025, 10(9): 9152-9159.
[79] ZHAO Z, WU C, KONG X, et al. Light-SLAM: a robust deep-learning visual SLAM system based on LightGlue under challenging lighting conditions[J]. IEEE Transactions on Intelligent Transportation Systems, 2025, 26(7): 9918-9931.
[80] LIU Z, MALIS E, MARTINET P. Adaptive learning for hybrid visual odometry[J]. IEEE Robotics and Automation Letters, 2024, 9(8): 7341-7348.
[81] YU L, YANG E, YANG B, et al. A robust learned feature-based visual odometry system for UAV pose estimation in challenging indoor environments[J]. IEEE Transactions on Instrumentation and Measurement, 2023, 72.
[82] YANG J, GONG M, NAIR G, et al. Knowledge distillation for feature extraction in underwater VSLAM[C]//Proceedings of the IEEE International Conference on Robotics and Automation (ICRA). 2023: 5163-5169.
[83] ZHAN H, WEERASEKERA C S, BIAN J-W, et al. Visual odometry revisited: what should be learnt?[C]//Proceedings of the IEEE International Conference on Robotics and Automation (ICRA). 2020: 4203-4210.
[84] TANG J, FOLKESSON J, JENSFELT P. Sparse2Dense: from direct sparse odometry to dense 3-D reconstruction[J]. IEEE Robotics and Automation Letters, 2019, 4(2): 530-537.
[85] JEON J, LIM H, SEO D-U, et al. Struct-MDC: mesh-refined unsupervised depth completion leveraging structural regularities from visual SLAM[J]. IEEE Robotics and Automation Letters, 2022, 7: 6391-6398.
[86] ROSINOL A, LEONARD J J, CARLONE L. NeRF-SLAM: real-time dense monocular SLAM with neural radiance fields[C]//Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). 2023: 3437-3444.
[87] ZHU Z, PENG S, LARSSON V, et al. NICE-SLAM: neural implicit scalable encoding for SLAM[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). 2022: 12786-12796.
[88] YANG X, LI H, ZHAI H, et al. Vox-Fusion: dense tracking and mapping with voxel-based neural implicit representation[C]//Proceedings of the IEEE International Symposium on Mixed and Augmented Reality (ISMAR). 2022: 499-507.
[89] STUTTS A C, ERRICOLO D, TULABANDHULA T, et al. Lightweight, uncertainty-aware conformalized visual odometry[C]//Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). 2023: 7742-7749.
[90] HAN J, DONG R, KAN J. BASL-AD SLAM: a robust deep-learning feature-based visual SLAM system with adaptive motion model[J]. IEEE Transactions on Intelligent Transportation Systems, 2024, 25(9): 11794-11804.
[91] YAO H, HAO N, XIE C, et al. EdgePoint: efficient point detection and compact description via distillation[J]. IEEE Transactions on Robotics, 2024, 10: 766-772.
[92] QIU J, JIANG C, ZHANG P, et al. EVSMAP: an efficient volumetric-semantic mapping approach for embedded systems[C]//Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). 2024: 9839-9846.
[93] XIAO Z, CHEN C, YANG S, et al. EffLoc: lightweight vision transformer for efficient 6-DOF camera relocalization[J]. IEEE Robotics and Automation Letters, 2024, 10(4): 3915-3921.
[94] DONG Y, LI P, ZHANG L, et al. KINND: a keyframe insertion framework via neural network decision-making for VSLAM[J]. IEEE Robotics and Automation Letters, 2025, 10(4): 3908-3915.
[95] DETONE D, MALISIEWICZ T, RABINOVICH A. SuperPoint: self-supervised interest point detection and description[EB/OL]. arXiv preprint arXiv:1712.07629, 2018.
[96] CAI X, WANG X, LI W. A visual SLAM algorithm for indoor weak-texture environments[J]. Robot, 2024, 46(3): 284-293, 304. DOI: 10.13973/j.cnki.robot.230253. (in Chinese)
[97] BAVLE H, SANCHEZ-LOPEZ J L, SHAHEER M, et al. Situational graphs for robot navigation in structured indoor environments[J]. IEEE Robotics and Automation Letters, 2022, 7(4): 9107-9114.
[98] WANG W, HU Y, SCHERER S. TartanVO: a generalizable learning-based VO[C]//Proceedings of the Conference on Robot Learning (CoRL). 2021: 1761-1772.
[99] MAO X, LIU Y, SHEN W L, et al. Deep residual Fourier transformation for single image deblurring[EB/OL]. arXiv preprint, 2021.
[100] GODARD C, MAC AODHA O, FIRMAN M, et al. Digging into self-supervised monocular depth estimation[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV). 2019: 3828-3838.
[101] BARNES D, POSNER I. Under the radar: learning to predict robust keypoints for odometry estimation and metric localisation in radar[EB/OL]. arXiv preprint arXiv:2007.02034, 2020.
[102] ZAFFAR M, GARG S, MILFORD M, et al. VPR-Bench: an open-source visual place recognition evaluation framework with quantifiable viewpoint and appearance change[J]. International Journal of Computer Vision, 2021, 129: 2136-2174.
[103] FONTAN A, CIVERA J, FISCHER T, et al. Look Ma, no ground truth! ground-truth-free tuning of structure from motion and visual SLAM[J]. IEEE Transactions on Robotics, 2024, 42: 1345-1352.
[104] GEIGER A, LENZ P, et al. The KITTI Vision Benchmark Suite[EB/OL]. https://www.cvlibs.net/datasets/kitti/
[105] STURM J. RGB-D SLAM Dataset[EB/OL]. https://cvg.cit.tum.de/data/datasets/rgbd-dataset
[106] SCHUBERT D. Visual-Inertial Dataset[EB/OL]. https://cvg.cit.tum.de/data/datasets/visual-inertial-dataset
[107] BURRI M, NIKOLIC J, GOHL P. kmavvisualinertialdatasets - ASL Datasets[EB/OL]. (2016-01-01)[2025-10-27]. https://ethz-asl.github.io/datasets/.
[108] PATEL A, SMITH J, et al. TartanGround Dataset — TartanAir documentation[EB/OL]. (2025-05-20)[2025-10-27]. https://tartanair.org/tartanground
[109] ARTISENSE AI. 4Seasons: A Cross-Season Dataset for Multi-Weather SLAM in Autonomous Driving[EB/OL]. (2023-05-20)[2025-10-27]. https://cvg.cit.tum.de/data/datasets/4seasons-dataset
[110] StachnissLab. Bonn RGB-D Dynamic Dataset[EB/OL]. https://www.ipb.uni-bonn.de/data/rgbd-dynamic-dataset/
[111] Monado - Developer Site[EB/OL]. https://monado.freedesktop.org/
[112] JIAO J, WEI H, et al. visualSLAMbench[EB/OL]. https://ram-lab.com/vslam_dataset
[113] Robotics and Perception Group, University of Zürich. Hilti SLAM Challenge 2023 - Dataset[EB/OL]. https://hilti-challenge.com/dataset-2023.html
[114] SHI X, LI D, et al. OpenLORIS-Scene | Lifelong Robotic Vision[EB/OL]. (2020-01-01)[2025-10-27]. https://lifelong-robotic-vision.github.io/dataset/scene
[115] LI W. InteriorNet: Mega-scale Multi-sensor Photo-realistic Indoor Scenes Dataset[EB/OL]. https://interiornet.org/
[116] HANDA A. ICL-NUIM RGB-D Benchmark Dataset[EB/OL]. (2023-01-01)[2025-10-27]. https://www.doc.ic.ac.uk/~ahanda/VaFRIC/iclnuim.html
[117] Cityscapes Dataset – Semantic Understanding of Urban Street Scenes[EB/OL]. (2020-10-17)[2025-10-27]. https://www.cityscapes-dataset.com/
[118] MADDERN W, PASCOE G, LINEGAR C, et al. 1 Year, 1000km: The Oxford RobotCar Dataset[EB/OL]. (2014-05-06)[2025-10-27]. https://robotcar-dataset.robots.ox.ac.uk/datasets/
[119] larocs. QueensCAMP dataset tools[EB/OL]. https://github.com/larocs/queenscamp-dataset.
[120] QU D, YAN C, WANG D, et al. Implicit event-RGBD neural SLAM[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). 2024: 19584-19594.
[121] MING Y H, MA D, DAI W C, et al. SLC²-SLAM: semantic-guided loop closure using shared latent code for NeRF SLAM[J]. IEEE Robotics and Automation Letters, 2025, 10(5): 4978-4985.
[122] ZHANG Z, et al. R4-SLAM: toward real-time, robust, and resource-restricted visual SLAM in dynamic environments[J]. IEEE Transactions on Instrumentation and Measurement, 2025, 74: 1-12.