Administered by China Association for Science and Technology
Sponsored by China Society of Automotive Engineers
Published by AUTO FAN Magazine Co. Ltd.

Automotive Engineering ›› 2025, Vol. 47 ›› Issue (6): 1155-1168. doi: 10.19562/j.chinasae.qcgc.2025.06.014


Lane-Level LiDAR-Visual Fusion SLAM in Autonomous Driving Environment

Qinglu Ma1, Qiuwei Jian2, Meiqiang Li1, Zheng Zou3

  1. School of Traffic and Transportation, Chongqing Jiaotong University, Chongqing 400074
    2. School of Mechatronics and Vehicle Engineering, Chongqing Jiaotong University, Chongqing 400041
    3. The Key Laboratory of Road and Traffic Engineering, Ministry of Education, Tongji University, Shanghai 201804
  • Received: 2024-08-07  Revised: 2024-12-14  Online: 2025-06-25  Published: 2025-06-20
  • Contact: Qinglu Ma  E-mail: qlm@cqjtu.edu.cn

Abstract:

To enhance the road environment perception capability of autonomous vehicles during multi-lane driving and operations, an LLV-SLAM (lane-level LiDAR-visual fusion SLAM) method for the autonomous driving environment is proposed, and a real-time simultaneous localization and mapping (SLAM) algorithm suitable for LiDAR-visual fusion is developed. Firstly, histogram equalization is introduced into visual feature point extraction, and depth information for the feature points is obtained from the LiDAR. Visual feature tracking is employed to improve the robustness of the SLAM system. Secondly, visual keyframe information is used to correct the motion distortion of LiDAR point clouds, and LeGO-LOAM (lightweight and ground-optimized lidar odometry and mapping on variable terrain) is integrated with ORB-SLAM2 (oriented FAST and rotated BRIEF SLAM2) to enhance loop closure detection and correction performance, thereby reducing cumulative errors in the system. Finally, the pose obtained from the visual images is transformed into the LiDAR coordinate system and used as the initial pose for the LiDAR odometry, assisting the LiDAR SLAM in 3D scene reconstruction. The experimental results show that the fused LLV-SLAM method outperforms traditional SLAM algorithms in several key aspects. It reduces the average localization latency by 41.61%. Furthermore, the average localization errors in the x, y, and z directions are reduced by 34.63%, 38.16%, and 24.09%, respectively. Rotational errors in the roll, pitch, and yaw angles are also reduced by 40.8%, 37.52%, and 39.5%, respectively. Additionally, the LLV-SLAM method effectively mitigates scale drift in the LeGO-LOAM algorithm and improves real-time performance and robustness, meeting the perception requirements of multi-lane road environments.
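As a rough illustration of the first step summarized above (histogram equalization before visual feature extraction, with feature-point depth taken from the LiDAR), the following Python sketch shows one plausible way to implement it using OpenCV and NumPy. This is not the paper's implementation: the function name, the 1000-feature budget, the 5-pixel association gate, and the intrinsic/extrinsic parameter handling are all illustrative assumptions.

```python
import cv2
import numpy as np

def extract_features_with_lidar_depth(image_bgr, lidar_points, K, T_cam_lidar):
    """Illustrative sketch (not the paper's implementation):
    1) histogram-equalize the image to stabilize feature extraction,
    2) extract ORB features (as used by ORB-SLAM2),
    3) assign each feature the depth of the nearest projected LiDAR point.
    lidar_points: (N, 3) array in the LiDAR frame
    K: 3x3 camera intrinsics; T_cam_lidar: 4x4 extrinsic (LiDAR -> camera)."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    gray_eq = cv2.equalizeHist(gray)                      # histogram equalization

    orb = cv2.ORB_create(nfeatures=1000)                  # assumed feature budget
    keypoints, descriptors = orb.detectAndCompute(gray_eq, None)

    # Transform LiDAR points into the camera frame and project onto the image plane
    pts_h = np.hstack([lidar_points, np.ones((lidar_points.shape[0], 1))])
    pts_cam = (T_cam_lidar @ pts_h.T).T[:, :3]
    pts_cam = pts_cam[pts_cam[:, 2] > 0.1]                # keep points in front of camera
    uv = (K @ pts_cam.T).T
    uv = uv[:, :2] / uv[:, 2:3]

    # For each keypoint, take the depth of the closest projected LiDAR point,
    # accepted only if it falls within an assumed 5-pixel gate
    features = []
    for kp in keypoints:
        if pts_cam.shape[0] == 0:
            features.append((kp.pt, None))
            continue
        d2 = np.sum((uv - np.array(kp.pt)) ** 2, axis=1)
        j = int(np.argmin(d2))
        depth = float(pts_cam[j, 2]) if d2[j] < 5.0 ** 2 else None
        features.append((kp.pt, depth))
    return keypoints, descriptors, features
```

A nearest-projected-point lookup is only one of several ways to associate LiDAR depth with image features; the paper's actual association and the subsequent pose transfer to the LiDAR odometry may differ in detail.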

Key words: autonomous driving, SLAM, LiDAR-visual fusion, lane-level positioning