[1] MUR-ARTAL R, MONTIEL J M M, TARDÓS J D. ORB-SLAM: a versatile and accurate monocular SLAM system[J]. IEEE Transactions on Robotics, 2015, 31(5): 1147-1163.
[2] QIN T, LI P, SHEN S. VINS-Mono: a robust and versatile monocular visual-inertial state estimator[J]. IEEE Transactions on Robotics, 2018, 34(4): 1004-1020.
[3] SHAN T, ENGLOT B. LeGO-LOAM: lightweight and ground-optimized lidar odometry and mapping on variable terrain[C]. 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2018: 4758-4765.
[4] MUR-ARTAL R, TARDÓS J D. ORB-SLAM2: an open-source SLAM system for monocular, stereo, and RGB-D cameras[J]. IEEE Transactions on Robotics, 2017, 33(5): 1255-1262.
[5] QIN T, CAO S, PAN J, et al. A general optimization-based framework for global pose estimation with multiple sensors[J]. arXiv preprint arXiv:, 2019.
[6] DAI W, ZHANG Y, LI P, et al. RGB-D SLAM in dynamic environments using point correlations[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2020, 44(1): 373-389.
[7] CANOVAS B, ROMBAUT M, NÈGRE A, et al. Speed and memory efficient dense RGB-D SLAM in dynamic scenes[C]. 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2020: 4996-5001.
[8] XIAO L H, WANG J G, QIU X S, et al. Dynamic-SLAM: semantic monocular visual localization and mapping based on deep learning in dynamic environment[J]. Robotics and Autonomous Systems, 2019, 117: 1-16.
[9] BESCOS B, FÁCIL J M, CIVERA J, et al. DynaSLAM: tracking, mapping, and inpainting in dynamic scenes[J]. IEEE Robotics and Automation Letters, 2018, 3(4): 4076-4083.
[10] YANG S, SCHERER S. CubeSLAM: monocular 3-D object SLAM[J]. IEEE Transactions on Robotics, 2019, 35(4): 925-938.
[11] ZHANG J, HENEIN M, MAHONY R, et al. VDO-SLAM: a visual dynamic object-aware SLAM system[J]. arXiv preprint arXiv:, 2020.
[12] BESCOS B, CAMPOS C, TARDÓS J D, et al. DynaSLAM II: tightly-coupled multi-object tracking and SLAM[J]. IEEE Robotics and Automation Letters, 2021, 6(3): 5191-5198.
[13] PARK S Y, LEE J. TDO-SLAM: traffic sign and dynamic object based visual SLAM[J]. IEEE Access, 2024, 12: 24569-24582.
[14] PENG Z, CHENG S, LI X, et al. Dynamic visual SLAM integrated with IMU for unmanned scenarios[C]. 2022 IEEE 25th International Conference on Intelligent Transportation Systems (ITSC). IEEE, 2022: 4247-4253.
[15] SONG S, LIM H, LEE A J, et al. DynaVINS: a visual-inertial SLAM for dynamic environments[J]. IEEE Robotics and Automation Letters, 2022, 7(4): 11523-11530.
[16] ZHENG Z, LIN S, YANG C. RLD-SLAM: a robust lightweight VI-SLAM for dynamic environments leveraging semantics and motion information[J]. IEEE Transactions on Industrial Electronics, 2024.
[17] YIN H, LI S, TAO Y, et al. Dynam-SLAM: an accurate, robust stereo visual-inertial SLAM method in dynamic environments[J]. IEEE Transactions on Robotics, 2022, 39(1): 289-308.
[18] CHANG J R, CHEN Y S. Pyramid stereo matching network[C]. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018: 5410-5418.
[19] WANG A, CHEN H, LIU L, et al. YOLOv10: real-time end-to-end object detection[J]. Advances in Neural Information Processing Systems, 2025, 37: 107984-108011.
[20] ZHANG C, HAN D, QIAO Y, et al. Faster segment anything: towards lightweight SAM for mobile applications[J]. arXiv preprint arXiv:, 2023.
[21] WONG X I, MAJJI M. Uncertainty quantification of Lucas-Kanade feature track and application to visual odometry[C]. 2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW). IEEE, 2017: 950-958.
[22] WEDEL A, CREMERS D. Stereo scene flow for 3D motion analysis[M]. Springer Science & Business Media, 2011.
[23] CHEN Jianhua. Research on positioning technology of stereo visual odometry for autonomous land vehicles[D]. Changchun: Jilin University, 2019.
[24] MINODA K, SCHILLING F, WÜEST V, et al. VIODE: a simulated dataset to address the challenges of visual-inertial odometry in dynamic environments[J]. IEEE Robotics and Automation Letters, 2021, 6(2): 1343-1350.
[25] GEIGER A, LENZ P, STILLER C, et al. Vision meets robotics: the KITTI dataset[J]. The International Journal of Robotics Research, 2013, 32(11): 1231-1237.