Automotive Engineering ›› 2024, Vol. 46 ›› Issue (3): 407-417. doi: 10.19562/j.chinasae.qcgc.2024.03.004


  • Fund Program: National Natural Science Foundation of China (62173078); Natural Science Foundation of Liaoning Province (2022-MIS-268)

Research on Adversarial Attacks and Robustness in Vehicle Trajectory Prediction

Haifeng Sang, Zishan Zhao, Jinyu Wang, Wangxing Chen

  1. School of Information Science and Engineering, Shenyang University of Technology, Shenyang 110870, China
  • Received: 2023-08-03  Revised: 2023-09-19  Online: 2024-03-25  Published: 2024-03-18
  • Contact: Zishan Zhao  E-mail: zhao_zishan@smail.sut.edu.cn


Abstract:

Considering the lack of extreme traffic scenarios in conventional vehicle trajectory prediction datasets, a novel adversarial attack framework to simulate such scenarios is proposed in this paper. Firstly, a threshold determination method is proposed to judge the effectiveness of adversarial attacks in different scenarios. Then, two adversarial trajectory generation algorithms are designed for different attack objectives, which generate more adversarial trajectory samples while respecting physical and concealment constraints. In addition, three new evaluation metrics are proposed to comprehensively assess the attack effect. Finally, different defense strategies are explored to mitigate the impact of adversarial attacks. Experimental results show that the attack algorithm based on perturbation threshold for fast attack (PTFA) and the attack algorithm based on dynamic learning rate adjustment (DLRA) achieve shorter attack time and stronger perturbation effects than existing algorithms on the NGSIM dataset, discovering model vulnerabilities more efficiently. By simulating extreme cases, this research enriches trajectory samples, evaluates model robustness in depth, and lays a foundation for further optimization.

Key words: vehicle trajectory prediction, adversarial attacks, intelligent driving vehicles, robustness
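As a rough illustration of the thresholded attack idea summarized in the abstract, the sketch below perturbs an observed trajectory within a fixed perturbation threshold to maximize a predictor's error. The constant-velocity predictor, step sizes, and finite-difference gradients here are illustrative assumptions for a self-contained example, not the paper's actual PTFA or DLRA implementations.

```python
import numpy as np

def predict(history):
    """Toy constant-velocity trajectory predictor (a stand-in for the
    learned model under attack). history: (T, 2) array of xy positions."""
    v = history[-1] - history[-2]                       # last-step velocity
    return history[-1] + v * np.arange(1, 6)[:, None]   # 5 future steps

def ade(pred, gt):
    """Average displacement error between predicted and true trajectories."""
    return np.mean(np.linalg.norm(pred - gt, axis=-1))

def threshold_attack(history, gt, eps=0.5, step=0.1, iters=20):
    """Iteratively perturb the observed history to increase prediction
    error, clipping the perturbation to an L-inf threshold `eps` so the
    attacked trajectory stays physically plausible and inconspicuous.
    Gradients are estimated by finite differences in this sketch."""
    adv = history.copy()
    for _ in range(iters):
        base = ade(predict(adv), gt)
        grad = np.zeros_like(adv)
        # finite-difference gradient of the error w.r.t. each coordinate
        for idx in np.ndindex(adv.shape):
            h = adv.copy()
            h[idx] += 1e-4
            grad[idx] = (ade(predict(h), gt) - base) / 1e-4
        adv = adv + step * np.sign(grad)                    # ascend the error
        adv = history + np.clip(adv - history, -eps, eps)   # enforce threshold
    return adv
```

A DLRA-style variant would adjust `step` dynamically across iterations (e.g., decaying it as the perturbation approaches the threshold); a fixed step is used here for brevity.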