Automotive Engineering ›› 2023, Vol. 45 ›› Issue (9): 1543-1552. doi: 10.19562/j.chinasae.qcgc.2023.09.004

Special Topic: Intelligent and Connected Vehicle Technology - Perception & HMI & Evaluation, 2023



Tag-Based Vehicle Visual SLAM in Sparse Feature Scenes

Hongmao Qin1,2, Guoli Shen1, Yunshui Zhou1, Shengjie Huang1, Xiaohui Qin1,2, Guotao Xie1,2, Rongjun Ding1,2

  1. College of Mechanical and Vehicle Engineering, Hunan University, Changsha 410082
    2. Wuxi Intelligent Control Research Institute of Hunan University, Wuxi 214072
  • Received: 2023-01-08  Revised: 2023-02-24  Online: 2023-09-25  Published: 2023-09-23
  • Contact: Yunshui Zhou  E-mail: zhouys@hnu.edu.cn
  • Funding: National Key R&D Program of China (2021YFF0501102); National Natural Science Foundation of China (52272415); Natural Science Foundation of Changsha Municipality (kq2202162)


Abstract:

In simultaneous localization and mapping (SLAM) for intelligent vehicles, visual feature-point methods estimate the vehicle pose by extracting and matching feature points. However, when the environment lacks texture or changes dynamically, scene features become sparse and unstable, and localization based on natural features may lose accuracy or even fail. Adding visual tags to the environment can effectively alleviate feature sparsity, but existing tag-based localization methods rely heavily on manual calibration, and the estimated tag poses often jitter with viewpoint changes, degrading localization accuracy. Therefore, this paper proposes a tag-based vehicle visual SLAM method that makes full use of tag information, introduces internal and external corner constraints to reduce tag pose jitter, and builds a low-drift, globally consistent tag map with the aid of visual odometry. During localization, the vehicle pose is estimated from the tags, and the tag map and vehicle pose are jointly optimized, yielding a low-cost, highly robust visual SLAM system. Experimental results show that the internal and external corner constraints effectively reduce tag pose jitter, improving tag mapping accuracy by more than 60% and localization accuracy by more than 30% on average, which significantly enhances the accuracy and robustness of tag-based localization and benefits the safe operation of intelligent vehicles.
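The corner-constraint idea in the abstract can be illustrated with a small sketch: instead of estimating a tag's pose from its four outer border corners only, the inner corners of the tag pattern are fed into the same perspective-n-point (PnP) solve, which over-constrains the pose and damps viewpoint-dependent jitter. The code below is only an illustration under assumed inputs (tag size, inner-corner layout, detector output and camera intrinsics are all placeholders), not the authors' implementation; it uses OpenCV's solvePnP.

# Minimal sketch (not the paper's implementation): estimate a square tag's pose
# from its outer border corners plus additional inner-pattern corners, so the
# PnP problem is over-constrained and the recovered pose jitters less.
# TAG_SIZE, the inner-corner layout and the camera intrinsics are assumptions.
import numpy as np
import cv2

TAG_SIZE = 0.16  # outer side length of the tag in metres (assumed)

def tag_object_points(inner_offsets):
    """3D corner coordinates in the tag frame (z = 0 on the tag plane)."""
    s = TAG_SIZE / 2.0
    outer = np.array([[-s, -s, 0], [s, -s, 0], [s, s, 0], [-s, s, 0]],
                     dtype=np.float64)
    inner = np.array([[x, y, 0.0] for x, y in inner_offsets], dtype=np.float64)
    return np.vstack([outer, inner])

def estimate_tag_pose(outer_px, inner_px, inner_offsets, K, dist):
    """Single PnP solve over all detected corners of one tag.

    outer_px, inner_px : (4, 2) and (N, 2) pixel coordinates from a tag detector
    inner_offsets      : known (x, y) positions of the inner corners on the tag
    K, dist            : camera intrinsic matrix and distortion coefficients
    """
    obj_pts = tag_object_points(inner_offsets)
    img_pts = np.vstack([outer_px, inner_px]).astype(np.float64)
    ok, rvec, tvec = cv2.solvePnP(obj_pts, img_pts, K, dist,
                                  flags=cv2.SOLVEPNP_ITERATIVE)
    if not ok:
        return None
    R, _ = cv2.Rodrigues(rvec)   # tag-to-camera rotation
    return R, tvec               # tag pose in the camera frame

In a full system such as the one described in the abstract, each tag pose observed this way would then enter a joint optimization together with visual-odometry constraints, keeping the tag map low-drift and globally consistent.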

Key words: intelligent driving, simultaneous localization and mapping, image feature, visual tag, map consistency