Administered by China Association for Science and Technology
Sponsored by China Society of Automotive Engineers
Published by AUTO FAN Magazine Co. Ltd.

Automotive Engineering ›› 2024, Vol. 46 ›› Issue (11): 1973-1982. doi: 10.19562/j.chinasae.qcgc.2024.11.004


Pedestrian Trajectory Prediction Method Based on Multi-information Fusion Network

Song Gao1,2, Jianglin Zhou2, Bolin Gao1, Jian Lu2, He Wang2, Yueyun Xu2

  1. School of Vehicles and Mobility, Tsinghua University, Beijing 100000
  2. National Innovation Center of Intelligent and Connected Vehicles, Beijing 102600
  • Received: 2024-05-28 Revised: 2024-07-01 Online: 2024-11-25 Published: 2024-11-22
  • Contact: Bolin Gao E-mail: gaobolin@tsinghua.edu.cn

Abstract:

With the continuous development of autonomous driving technology, accurately predicting the future trajectories of pedestrians has become a critical element in ensuring system safety and reliability. However, most existing studies on pedestrian trajectory prediction rely on fixed camera perspectives, which limits comprehensive observation of pedestrian movement and makes those methods unsuitable for direct application to pedestrian trajectory prediction from the ego-vehicle perspective of autonomous vehicles. To address this problem, this paper proposes a pedestrian trajectory prediction method for the ego-vehicle perspective based on a Multi-Pedestrian Information Fusion Network (MPIFN), which achieves accurate prediction of pedestrians' future trajectories by integrating pedestrians' social information, local environmental information, and temporal information. A local environmental information extraction module that combines deformable convolution with conventional convolution and pooling operations is constructed to extract local information from complex environments more effectively. By dynamically adjusting the positions of convolutional kernels, this module enhances the model's adaptability to irregular and complex shapes. Meanwhile, a pedestrian spatiotemporal information extraction module and a multimodal feature fusion module are developed to comprehensively integrate social and environmental information. Experimental results show that the proposed method achieves state-of-the-art performance on two ego-vehicle driving datasets, JAAD and PSI. Specifically, on the JAAD dataset, the Center Final Mean Squared Error (CF_MSE) is 4 063 and the Center Mean Squared Error (C_MSE) is 829.
On the PSI dataset, the Average Root Mean Square Error (ARB) and Final Root Mean Square Error (FRB) metrics also perform strongly, reaching 18.08/29.21/44.98 and 25.27/54.62/93.09 for prediction horizons of 0.5 s, 1.0 s, and 1.5 s, respectively.
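To make the center-based metrics concrete, the following is a minimal sketch of how CF_MSE and C_MSE can be computed from predicted and ground-truth bounding-box trajectories. It assumes the common convention on JAAD-style benchmarks that both metrics are squared errors (in pixels squared) between box centers, with C_MSE averaged over the prediction horizon and CF_MSE taken at the final step; the exact definitions used in the paper may differ, so the box format and helper names here are illustrative assumptions.

```python
# Illustrative sketch (assumption): center-based MSE metrics for
# bounding-box trajectory prediction, with boxes as (x1, y1, x2, y2).

def center(box):
    """Return the center (cx, cy) of a box given as (x1, y1, x2, y2)."""
    x1, y1, x2, y2 = box
    return ((x1 + x2) / 2.0, (y1 + y2) / 2.0)

def c_mse(pred_boxes, gt_boxes):
    """Squared center error averaged over all steps of the horizon."""
    errs = []
    for p, g in zip(pred_boxes, gt_boxes):
        (px, py), (gx, gy) = center(p), center(g)
        errs.append((px - gx) ** 2 + (py - gy) ** 2)
    return sum(errs) / len(errs)

def cf_mse(pred_boxes, gt_boxes):
    """Squared center error at the final predicted step only."""
    (px, py), (gx, gy) = center(pred_boxes[-1]), center(gt_boxes[-1])
    return (px - gx) ** 2 + (py - gy) ** 2

# Tiny example with a 3-step horizon (hypothetical pixel coordinates).
pred = [(0, 0, 10, 10), (2, 2, 12, 12), (4, 4, 14, 14)]
gt   = [(0, 0, 10, 10), (3, 3, 13, 13), (6, 6, 16, 16)]
print(c_mse(pred, gt))   # mean squared center error over the horizon
print(cf_mse(pred, gt))  # squared center error at the last step
```

The RMSE-style ARB/FRB figures reported for PSI follow the same average-versus-final split, but with a square root applied to the accumulated error.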

Key words: autonomous driving, pedestrian trajectory prediction, multiple pedestrian information fusion network, ego-vehicle