Automotive Engineering ›› 2024, Vol. 46 ›› Issue (12): 2279-2289. doi: 10.19562/j.chinasae.qcgc.2024.12.014


  • Funding:
    National Key R&D Program of China (2022YFB4300400); Science Research Project of Beijing Municipal Education Commission (KM202210009013)

Collaborative Perception Based on Point Cloud Spatio-Temporal Feature Compensation Network for Intelligent Connected Vehicles

Mingfang Zhang, Ying Liu, Jian Ma, Ye He, Li Wang

  1. Beijing Key Lab of Urban Intelligent Traffic Control Technology, North China University of Technology, Beijing 100144, China
  • Received: 2024-05-11  Revised: 2024-06-19  Online: 2024-12-25  Published: 2024-12-20
  • Contact: Mingfang Zhang  E-mail: mingfang@ncut.edu.cn


Abstract:

To overcome the impact of network latency on collaborative perception accuracy while improving the expressiveness of point cloud features, a collaborative perception method for intelligent connected vehicles based on a point cloud spatio-temporal feature compensation network is proposed. First, a point-pillar feature extraction method processes the raw point cloud data, concatenating the local neighborhood features of the scanned points with the pillar feature maps. Second, a temporal delay compensation module based on the PredRNN algorithm is designed to predict the current point cloud features from the historical frames received from surrounding connected vehicles, synchronizing the point cloud features of the two vehicles. Third, a spatial feature fusion compensation module aggregates the cross-vehicle point cloud features, and a bidirectional multi-scale feature pyramid network fuses the multi-resolution features, outputting the target vehicles' geometric dimensions, heading angles, and other attributes. Finally, test results on the V2V4Real dataset and a self-collected dataset demonstrate that the proposed method outperforms classical collaborative perception algorithms in detection accuracy, adapts well to different network latency levels, and meets real-time requirements in inference time.
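The pillar feature extraction step above can be sketched as follows. This is a minimal illustrative stand-in, not the paper's implementation: the grid size, the spatial extent, and the use of per-pillar mean and standard-deviation statistics as the "local neighborhood features" being concatenated are all assumptions for the example.

```python
def pillarize(points, grid=(4, 4), extent=((0.0, 8.0), (0.0, 8.0))):
    """Scatter (x, y, z, intensity) points onto a 2D bird's-eye-view grid of
    pillars, and build each pillar's feature vector by concatenating the
    pillar's mean point with a spread statistic over its local neighborhood
    (illustrative stand-ins for the paper's learned point/pillar features)."""
    (x0, x1), (y0, y1) = extent
    gx, gy = grid
    cw, ch = (x1 - x0) / gx, (y1 - y0) / gy
    # Bucket points by their pillar index on the BEV grid.
    buckets = {}
    for p in points:
        i = min(max(int((p[0] - x0) / cw), 0), gx - 1)
        j = min(max(int((p[1] - y0) / ch), 0), gy - 1)
        buckets.setdefault((i, j), []).append(p)
    feat_map = {}
    for key, pts in buckets.items():
        n = len(pts)
        mean = [sum(p[d] for p in pts) / n for d in range(4)]
        spread = [(sum((p[d] - mean[d]) ** 2 for p in pts) / n) ** 0.5
                  for d in range(4)]
        # Concatenate pillar-level and neighborhood-level statistics.
        feat_map[key] = mean + spread
    return feat_map
```

Only occupied pillars get a feature vector here; a real pipeline would scatter these back into a dense feature map before convolution.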

Key words: intelligent connected vehicles, collaborative perception, spatio-temporal feature compensation, point cloud, feature fusion
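The temporal delay compensation idea, predicting the ego-time features from the historical frames received over the network, can be conveyed with a deliberately simple linear extrapolator. The paper uses a PredRNN-based predictor; this stand-in only illustrates the interface, and the flattened feature representation and the `delay_frames` parameter are assumptions for the example.

```python
def compensate_latency(history, delay_frames=1):
    """Predict a feature map `delay_frames` ahead by linearly extrapolating
    the two most recent received frames (flattened to 1D lists) -- a simple
    stand-in for a learned recurrent predictor such as PredRNN."""
    f_prev, f_last = history[-2], history[-1]
    return [last + delay_frames * (last - prev)
            for prev, last in zip(f_prev, f_last)]
```

Under this interface, the fusion stage always consumes features aligned to the ego vehicle's current timestamp, regardless of how many frames the network delayed the transmission.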