[1] XU Z, RONG Z, WU Y. A survey: which features are required for dynamic visual simultaneous localization and mapping?[J]. Visual Computing for Industry, Biomedicine, and Art, 2021, 4(1): 20.
[2] BESCOS B, CAMPOS C, TARDÓS J D, et al. DynaSLAM II: tightly-coupled multi-object tracking and SLAM[J]. IEEE Robotics and Automation Letters, 2021, 6(3): 5191-5198.
[3] GONZALEZ M, MARCHAND E, KACET A, et al. TwistSLAM: constrained SLAM in dynamic environment[J]. IEEE Robotics and Automation Letters, 2022, 7(3): 6846-6853.
[4] BESCOS B, FÁCIL J M, CIVERA J, et al. DynaSLAM: tracking, mapping, and inpainting in dynamic scenes[J]. IEEE Robotics and Automation Letters, 2018, 3(4): 4076-4083.
[5] YU C, LIU Z, LIU X J, et al. DS-SLAM: a semantic visual SLAM towards dynamic environments[C]. IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2018: 1168-1174.
[6] RÜNZ M, AGAPITO L. Co-fusion: real-time segmentation, tracking and fusion of multiple objects[C]. IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2017: 4471-4478.
[7] ZHANG J, HENEIN M, MAHONY R, et al. VDO-SLAM: a visual dynamic object-aware SLAM system[J/OL]. arXiv preprint, 2020. DOI: 10.48550/arXiv.2005.11052.
[8] YANG S, SCHERER S. CubeSLAM: monocular 3-D object SLAM[J]. IEEE Transactions on Robotics, 2019, 35(4): 925-938.
[9] YE T, ZHAO G. RT-SLAM: real-time visual dynamic object tracking SLAM[C]. IEEE 6th Information Technology, Networking, Electronic and Automation Control Conference (ITNEC). IEEE, 2023: 677-682.
[10] HENEIN M, KENNEDY G, MAHONY R, et al. Exploiting rigid body motion for SLAM in dynamic environments[C]. IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2018.
[11] 陈建华. 面向自主地面车辆的立体视觉里程计定位技术研究[D]. 长春: 吉林大学, 2019.
CHEN Jianhua. Research on positioning technology of stereo visual odometry for autonomous land vehicles[D]. Changchun: Jilin University, 2019.
[12] MUR-ARTAL R, TARDÓS J D. ORB-SLAM2: an open-source SLAM system for monocular, stereo, and RGB-D cameras[J]. IEEE Transactions on Robotics, 2017, 33(5): 1255-1262.
[13] CAI Z, FAN Q, FERIS R, et al. A unified multi-scale deep convolutional neural network for fast object detection[C]. Computer Vision - ECCV 2016. Springer, 2016, 9908.
[14] GERLACH N L, MEIJER G J, KROON D J, et al. Evaluation of the potential of automatic segmentation of the mandibular canal[J]. British Journal of Oral and Maxillofacial Surgery, 2014, 52(9): 838-844.
[15] WOJKE N, BEWLEY A, PAULUS D. Simple online and realtime tracking with a deep association metric[C]. IEEE International Conference on Image Processing (ICIP). IEEE, 2017: 3645-3649.
[16] GEIGER A, LENZ P, STILLER C, et al. Vision meets robotics: the KITTI dataset[J]. International Journal of Robotics Research (IJRR), 2013, 32(11): 1231-1237.
[17] STURM J, ENGELHARD N, ENDRES F, et al. A benchmark for the evaluation of RGB-D SLAM systems[C]. IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2012.
[18] ZHANG Z, SCARAMUZZA D. A tutorial on quantitative trajectory evaluation for visual(-inertial) odometry[C]. IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2018: 7244-7251.
[19] 李继文. 面向城市环境智能车辆视觉位姿估计方法研究[D]. 广州: 华南理工大学, 2023.
LI Jiwen. Research on the visual pose estimation method of intelligent vehicles in urban environments[D]. Guangzhou: South China University of Technology, 2023.