[1] CAMPOS C, ELVIRA R, RODRÍGUEZ J J G, et al. ORB-SLAM3: an accurate open-source library for visual, visual-inertial, and multimap SLAM[J]. IEEE Transactions on Robotics, 2021, 37(6): 1874-1890.
[2] ENGEL J, KOLTUN V, CREMERS D. Direct sparse odometry[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2017, 40(3): 611-625.
[3] FORSTER C, PIZZOLI M, SCARAMUZZA D. SVO: fast semi-direct monocular visual odometry[C]. 2014 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2014: 15-22.
[4] YANG M, JIANG K, WEN T, et al. Review on status and challenges of crowdsourced updating of highly automated driving maps[J]. China Journal of Highway and Transport, 2023, 36(5): 244-259.
[5] YAN M, WANG J, LI J, et al. Loose coupling visual-lidar odometry by combining VISO2 and LOAM[C]. 2017 36th Chinese Control Conference (CCC). IEEE, 2017: 6841-6846.
[6] XU Y, OU Y, XU T. SLAM of robot based on the fusion of vision and LIDAR[C]. 2018 IEEE International Conference on Cyborg and Bionic Systems (CBS). IEEE, 2018: 121-126.
[7] ZHANG J, SINGH S. Visual-lidar odometry and mapping: low-drift, robust, and fast[C]. 2015 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2015: 2174-2181.
[8] ZUO X, GENEVA P, LEE W, et al. LIC-Fusion: LiDAR-inertial-camera odometry[C]. 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2019: 5848-5854.
[9] SUN K, MOHTA K, PFROMMER B, et al. Robust stereo visual inertial odometry for fast autonomous flight[J]. IEEE Robotics and Automation Letters, 2018, 3(2): 965-972.
[10] ZUO X, YANG Y, GENEVA P, et al. LIC-Fusion 2.0: LiDAR-inertial-camera odometry with sliding-window plane-feature tracking[C]. 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2020: 5112-5119.
[11] WISTH D, CAMURRI M, DAS S, et al. Unified multi-modal landmark tracking for tightly coupled lidar-visual-inertial odometry[J]. IEEE Robotics and Automation Letters, 2021, 6(2): 1004-1011.
[12] LIN J, ZHANG F. R3LIVE: a robust, real-time, RGB-colored, LiDAR-inertial-visual tightly-coupled state estimation and mapping package[C]. 2022 International Conference on Robotics and Automation (ICRA). IEEE, 2022: 10672-10678.
[13] LIN J, ZHANG F. R3LIVE++: a robust, real-time, radiance reconstruction package with a tightly-coupled LiDAR-inertial-visual state estimator[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2024.
[14] ZOU D, TAN P. CoSLAM: collaborative visual SLAM in dynamic environments[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2012, 35(2): 354-366.
[15] KUNDU A, KRISHNA K M, SIVASWAMY J. Moving object detection by multi-view geometric techniques from a single camera mounted robot[C]. 2009 IEEE/RSJ International Conference on Intelligent Robots and Systems. IEEE, 2009: 4306-4312.
[16] YANG X, YUAN Z, ZHU D, et al. Robust and efficient RGB-D SLAM in dynamic environments[J]. IEEE Transactions on Multimedia, 2020, 23: 4208-4219.
[17] KLAPPSTEIN J, VAUDREY T, RABE C, et al. Moving object segmentation using optical flow and depth information[C]. Advances in Image and Video Technology: Third Pacific Rim Symposium, PSIVT 2009, Tokyo, Japan, January 13-16, 2009. Proceedings 3. Springer Berlin Heidelberg, 2009: 611-623.
[18] YU C, LIU Z, LIU X J, et al. DS-SLAM: a semantic visual SLAM towards dynamic environments[C]. 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2018: 1168-1174.
[19] CHEN J Q, CHE Y X, TIAN X Q, et al. Research on real-time visual SLAM method based on 3D multi-object tracking in dynamic scenes[J]. Automotive Engineering, 2024, 46(5): 776-783.
[20] LIU Y S, HE L, YUAN L, et al. Semantic RGB-D SLAM in dynamic scenes based on optical flow[J]. Chinese Journal of Scientific Instrument, 2022, 43(12): 139-148.
[21] YU C, GAO C, WANG J, et al. BiSeNet V2: bilateral network with guided aggregation for real-time semantic segmentation[J]. International Journal of Computer Vision, 2021, 129: 3051-3068.
[22] LANCASTER P, RODMAN L. Algebraic Riccati Equations[M]. Oxford: Oxford Science Publications/The Clarendon Press, Oxford University Press, 1995.
[23] CORDTS M, OMRAN M, RAMOS S, et al. The Cityscapes dataset for semantic urban scene understanding[C]. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016: 3213-3223.
[24] ZHOU B, ZHAO H, PUIG X, et al. Semantic understanding of scenes through the ADE20K dataset[J]. International Journal of Computer Vision, 2019, 127: 302-321.
[25] CAESAR H, BANKITI V, LANG A H, et al. nuScenes: a multimodal dataset for autonomous driving[C]. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2020: 11621-11631.
[26] QIN X H, ZHOU H, LIAO Y F, et al. Robust laser SLAM system based on temporal sliding window in dynamic scenes[J]. Journal of Hunan University (Natural Sciences), 2023, 12: 49-58.