Administered by China Association for Science and Technology
Sponsored by China Society of Automotive Engineers
Published by AUTO FAN Magazine Co. Ltd.

Automotive Engineering ›› 2025, Vol. 47 ›› Issue (2): 292-300. doi: 10.19562/j.chinasae.qcgc.2025.02.009

Enhanced Two-Stream Transformer Model for Remaining Useful Life Prediction of Diesel Engines

Xi Zhang1, Ying Yang1, Chaojun Chen2, Chunfeng Wang2, Lei Yang3

  1. School of Computer, Electronics and Information, Guangxi University, Nanning 530004
  2. Process and Engineering Department, Guangxi Yuchai Machinery Co., Ltd., Yulin 537005
  3. Guangxi Academy of Science, Nanning 530007
  • Received: 2024-08-14  Revised: 2024-10-16  Online: 2025-02-25  Published: 2025-02-21
  • Contact: Ying Yang  E-mail: yingy2004@126.com

Abstract:

Transformer-based models have made significant progress in remaining useful life (RUL) prediction. However, existing Transformer models struggle to extract local features and fail to account for the varying importance of temporal and spatial input features. To address these problems, this paper proposes an enhanced two-stream Transformer model reinforced by a local feature extraction module and an interaction fusion module. First, the local feature extraction module captures local features from both the temporal and spatial streams, compensating for the Transformer's deficiency in local feature extraction. Then, the two-stream Transformer extracts long-term dependencies in the temporal and spatial dimensions, enhancing complementary learning between the two streams. Finally, the interaction fusion module captures stream-level interactions through bilinear fusion, further improving prediction performance. Experiments comparing multiple models on two real-world datasets from a diesel engine manufacturer demonstrate that the proposed model reduces the evaluation metrics RMSE and Score by at least 3.23% and 5.89%, respectively.
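The abstract's interaction fusion module combines the temporal-stream and spatial-stream features through bilinear fusion. The paper's exact formulation is not given here, so the following is only a minimal NumPy sketch of generic bilinear fusion: each output channel k is computed as z_k = h_t^T W_k h_s, where h_t and h_s stand in for the two streams' feature vectors and W is a hypothetical learnable weight tensor.

```python
import numpy as np

def bilinear_fusion(h_t, h_s, W):
    """Bilinear fusion of two feature vectors.

    h_t : (d_t,)  temporal-stream feature (placeholder name)
    h_s : (d_s,)  spatial-stream feature (placeholder name)
    W   : (d_out, d_t, d_s)  weight tensor, one bilinear form per output channel

    Returns z with z[k] = h_t @ W[k] @ h_s, capturing pairwise
    interactions between every temporal and spatial feature dimension.
    """
    return np.einsum('i,kij,j->k', h_t, W, h_s)

rng = np.random.default_rng(0)
d_t, d_s, d_out = 8, 8, 4
h_t = rng.standard_normal(d_t)            # stand-in temporal embedding
h_s = rng.standard_normal(d_s)            # stand-in spatial embedding
W = rng.standard_normal((d_out, d_t, d_s))
z = bilinear_fusion(h_t, h_s, W)
print(z.shape)                            # fused representation, shape (4,)
```

In a full model the fused vector z would feed a regression head that outputs the RUL estimate; unlike concatenation or addition, the bilinear form models every pairwise interaction between the two streams' feature dimensions, which is the stream-level interaction the abstract refers to.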

Key words: remaining useful life prediction, Transformer encoder, convolutional neural network, feature fusion, sliding window