Automotive Engineering ›› 2025, Vol. 47 ›› Issue (6): 1155-1168. DOI: 10.19562/j.chinasae.qcgc.2025.06.014

Lane-Level LiDAR-Visual Fusion SLAM in Autonomous Driving Environment

Qinglu Ma1, Qiuwei Jian2, Meiqiang Li1, Zheng Zou3

  1. School of Traffic and Transportation, Chongqing Jiaotong University, Chongqing 400074
  2. School of Mechatronics and Vehicle Engineering, Chongqing Jiaotong University, Chongqing 400041
  3. Key Laboratory of Road and Traffic Engineering, Ministry of Education, Tongji University, Shanghai 201804
  • Received: 2024-08-07 Revised: 2024-12-14 Online: 2025-06-25 Published: 2025-06-20
  • Contact: Qinglu Ma E-mail: qlm@cqjtu.edu.cn
  • Supported by: Paper of the 27th Annual Meeting of the China Association for Science and Technology; National Natural Science Foundation of China (52075054); General Program of the Natural Science Foundation of Chongqing (CSTB2023NSCQ-MSX0551); Science and Technology Demonstration Project of the Ministry of Transport on Safe and Intelligent Construction of the Fengjian Expressway in the Three Gorges Reservoir Area (Z29210003); Graduate Research Innovation Project of Chongqing Jiaotong University (2024S0078)


Abstract:

To enhance the road environment perception capability of autonomous vehicles during multi-lane driving and operations, an LLV-SLAM (lane-level LiDAR-visual fusion SLAM) method for the autonomous driving environment is proposed, and a simultaneous localization and mapping (SLAM) algorithm suited to LiDAR-visual fusion is developed. Firstly, histogram equalization is introduced into the visual feature point extraction stage, and the depth of the feature points is obtained from LiDAR; visual feature tracking is then employed to improve the robustness of the SLAM system. Secondly, visual keyframe information is used to correct the motion distortion of the LiDAR point clouds, and LeGO-LOAM (lightweight and ground-optimized lidar odometry and mapping on variable terrain) is integrated with ORB-SLAM2 (oriented FAST and rotated BRIEF SLAM2) to enhance loop closure detection and correction, thereby reducing the accumulated error of the system. Finally, the pose obtained from the visual images is transformed into the LiDAR coordinate frame and used as the initial pose of the LiDAR odometry, assisting the LiDAR SLAM in 3D scene reconstruction. The experimental results show that the fused LLV-SLAM method outperforms traditional SLAM algorithms in several key aspects. It reduces the average localization latency by 41.61%. Furthermore, the average localization errors in the x, y, and z directions are reduced by 34.63%, 38.16%, and 24.09%, respectively, and the average rotational errors in the roll, pitch, and yaw angles are reduced by 40.8%, 37.52%, and 39.5%, respectively. Additionally, the LLV-SLAM method effectively mitigates the scale drift of the LeGO-LOAM algorithm and significantly improves real-time performance and robustness, meeting the perception requirements of autonomous vehicles in multi-lane road environments.
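To make the fusion front end described above concrete, the following minimal Python sketch (OpenCV and NumPy) illustrates three of the steps in the abstract: histogram equalization before ORB feature extraction, projection of LiDAR points into the image to attach depth to visual features, and coordinate transformation of a visual pose into a LiDAR-frame initial pose. This is an illustrative sketch, not the authors' implementation; the intrinsics K, the extrinsic T_cam_lidar, and all function names are hypothetical placeholders for a calibrated camera-LiDAR pair.

import numpy as np
import cv2

# Hypothetical calibration values -- placeholders, not taken from the paper.
K = np.array([[718.9, 0.0, 607.2],    # camera intrinsic matrix
              [0.0, 718.9, 185.2],
              [0.0, 0.0, 1.0]])
T_cam_lidar = np.eye(4)               # camera-from-LiDAR extrinsic (4x4)

def extract_features(image_bgr, n_features=2000):
    # Histogram equalization before ORB extraction, mirroring the paper's
    # use of equalization to stabilize features under uneven illumination.
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    gray = cv2.equalizeHist(gray)
    orb = cv2.ORB_create(nfeatures=n_features)
    return orb.detectAndCompute(gray, None)

def assign_lidar_depth(keypoints, lidar_xyz, max_px_dist=3.0):
    # Project LiDAR points into the image plane and attach the depth of the
    # nearest projection (within max_px_dist pixels) to each ORB keypoint.
    depths = np.full(len(keypoints), np.nan)
    pts_h = np.hstack([lidar_xyz, np.ones((len(lidar_xyz), 1))])
    pts_cam = (T_cam_lidar @ pts_h.T)[:3]          # 3xN in the camera frame
    pts_cam = pts_cam[:, pts_cam[2] > 0.1]         # keep points in front
    if pts_cam.shape[1] == 0:
        return depths
    uv = (K @ pts_cam)[:2] / pts_cam[2]            # pixel coordinates
    for i, kp in enumerate(keypoints):
        d2 = (uv[0] - kp.pt[0]) ** 2 + (uv[1] - kp.pt[1]) ** 2
        j = int(np.argmin(d2))
        if d2[j] < max_px_dist ** 2:
            depths[i] = pts_cam[2, j]
    return depths

def visual_pose_to_lidar_init(T_world_cam):
    # Coordinate transformation of the visual pose: world-from-camera times
    # camera-from-LiDAR gives a world-from-LiDAR pose that seeds the
    # LiDAR odometry.
    return T_world_cam @ T_cam_lidar

In the full method, a LeGO-LOAM-style LiDAR odometry would refine the pose returned by visual_pose_to_lidar_init by scan matching, with ORB-SLAM2's loop closure detection correcting accumulated drift; a sketch of the motion distortion correction step follows the keywords below.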

Key words: autonomous driving, SLAM, LiDAR-visual fusion, lane-level positioning
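The motion distortion correction step of the abstract can be sketched in the same spirit. Assuming each LiDAR point carries a timestamp normalized to [0, 1] across the sweep, and that world-frame poses at the sweep boundaries are interpolated from the visual keyframes, the sketch below (SciPy's Slerp for rotation interpolation; all names are illustrative assumptions, not the paper's interface) re-expresses every point in the end-of-sweep frame.

import numpy as np
from scipy.spatial.transform import Rotation, Slerp

def deskew_scan(points, times, T_w_start, T_w_end):
    # Motion distortion correction: interpolate the sensor pose across the
    # sweep (slerp for rotation, lerp for translation) and map every point
    # into the end-of-sweep frame. `times` are normalized to [0, 1].
    slerp = Slerp([0.0, 1.0],
                  Rotation.from_matrix([T_w_start[:3, :3], T_w_end[:3, :3]]))
    T_end_inv = np.linalg.inv(T_w_end)
    out = np.empty_like(points)
    for i, (p, t) in enumerate(zip(points, times)):
        R_t = slerp(t).as_matrix()
        p_t = (1.0 - t) * T_w_start[:3, 3] + t * T_w_end[:3, 3]
        p_world = R_t @ p + p_t                    # point in the world frame
        out[i] = T_end_inv[:3, :3] @ p_world + T_end_inv[:3, 3]
    return out

In LLV-SLAM the bracketing poses T_w_start and T_w_end would come from the visual keyframe information, as stated in the abstract; per-point timestamps and the choice of the end-of-sweep reference frame are assumptions of this sketch.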