Administered by China Association for Science and Technology
Sponsored by China Society of Automotive Engineers
Published by AUTO FAN Magazine Co. Ltd.

Automotive Engineering ›› 2021, Vol. 43 ›› Issue (11): 1602-1610. doi: 10.19562/j.chinasae.qcgc.2021.11.005


Multi-level and Multi-modal Target Detection Based on Feature Fusion

Teng Cheng1, Lei Sun1, Dengchao Hou1, Qin Shi1, Junning Zhang2, Jiong Chen3, He Huang1

  1. School of Automobile and Traffic Engineering, Hefei University of Technology, Hefei 230041
    2. College of Electronic Engineering, National University of Defense Technology, Hefei 230037
    3. NIO Automotive Technology (Anhui) Company Limited, Hefei 230071
  • Received: 2021-07-05  Revised: 2021-08-02  Online: 2021-11-25  Published: 2021-11-22
  • Contact: Teng Cheng  E-mail: cht616@hfut.edu.cn

Abstract:

To address the low robustness of environment perception and the difficulty of identifying small targets in autonomous driving under complex environments, a multi-level and multi-modal fusion method based on feature fusion is proposed in this paper. Firstly, the image and point cloud modalities are mapped to the same dimension, and hierarchical features for targets of different sizes are extracted. On this basis, multi-modal, multi-level feature fusion is carried out. Then, six comparative experiments are designed to verify the effectiveness of each module. Finally, the Waymo dataset and NIO real-vehicle data are used for training and testing. The test results show that the detection mAP of the proposed network is improved by 23.1% compared with that of YOLO V3.
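For illustration only, since the abstract does not give the exact network design, the following minimal PyTorch sketch shows one common way to realize multi-level, multi-modal feature fusion: per-level 1x1 convolutions project image and (already gridded) point-cloud feature maps to a shared channel dimension, and a 3x3 convolution blends the concatenated modalities at each scale. The class name, channel counts, and the assumption that the point-cloud features are pre-projected to image-aligned maps are all hypothetical, not the authors' implementation.

import torch
import torch.nn as nn

class MultiLevelMultiModalFusion(nn.Module):
    # Hypothetical sketch: fuse camera and projected LiDAR features at several scales.
    def __init__(self, img_channels=(256, 512, 1024), pc_channels=(64, 128, 256), fused_channels=256):
        super().__init__()
        # 1x1 convolutions map both modalities to the same channel dimension per level.
        self.img_proj = nn.ModuleList(nn.Conv2d(c, fused_channels, 1) for c in img_channels)
        self.pc_proj = nn.ModuleList(nn.Conv2d(c, fused_channels, 1) for c in pc_channels)
        # 3x3 convolutions blend the concatenated modalities at each level.
        self.fuse = nn.ModuleList(
            nn.Sequential(
                nn.Conv2d(2 * fused_channels, fused_channels, 3, padding=1),
                nn.BatchNorm2d(fused_channels),
                nn.ReLU(inplace=True),
            )
            for _ in img_channels
        )

    def forward(self, img_feats, pc_feats):
        # img_feats / pc_feats: lists of per-level feature maps with matching spatial sizes.
        fused = []
        for i, (fi, fp) in enumerate(zip(img_feats, pc_feats)):
            fi = self.img_proj[i](fi)
            fp = self.pc_proj[i](fp)
            fused.append(self.fuse[i](torch.cat([fi, fp], dim=1)))
        return fused  # one fused map per detection level (large / medium / small targets)

if __name__ == "__main__":
    model = MultiLevelMultiModalFusion()
    img_feats = [torch.randn(1, c, s, s) for c, s in zip((256, 512, 1024), (64, 32, 16))]
    pc_feats = [torch.randn(1, c, s, s) for c, s in zip((64, 128, 256), (64, 32, 16))]
    print([o.shape for o in model(img_feats, pc_feats)])

The fused per-level maps would then feed detection heads at each scale, which is the usual way small targets are handled by the finer-resolution levels.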

Key words: autonomous driving, environmental perception, hierarchical feature fusion, multimodal fusion, small target detection