Administered by China Association for Science and Technology
Sponsored by China Society of Automotive Engineers
Published by AUTO FAN Magazine Co. Ltd.

Highlights

    Domain Adaptive Visual Object Detection for Autonomous Driving Based on Multi-granularity Relation Reasoning
    Jinhui Suo, Xiaowei Wang, Peiwen Jiang, Chi Ding, Ming Gao, Yougang Bian
    Automotive Engineering    2025, 47 (2): 201-210.   DOI: 10.19562/j.chinasae.qcgc.2025.02.001

    Most existing domain adaptive visual object detection algorithms are built on two-stage detectors and fail to exploit the semantic topological relationships between different elements in the image space, resulting in suboptimal cross-domain adaptation performance. Therefore, this paper proposes a domain adaptive visual object detection algorithm based on multi-granularity relation reasoning. Firstly, a coarse-grained patch relation reasoning module is proposed, which uses a coarse-grained patch graph structure to capture the topological relationship between foreground and background and to perform cross-domain adaptation on foreground areas. Then, a fine-grained semantic relation reasoning module is designed, which reasons over a fine-grained semantic graph structure to enhance cross-domain multi-category semantic dependencies. Finally, a granularity-induced feature alignment module is proposed to adjust the weight of feature alignment according to the affinity of the graph nodes, thereby improving the adaptability of the detection model to overall scene changes. Experimental results on multiple cross-domain autonomous driving scenarios verify the robustness and real-time performance of the proposed algorithm.
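    The granularity-induced alignment idea described above — weighting each node's contribution to the feature-alignment loss by its cross-domain affinity — can be sketched roughly as follows. This is a minimal illustrative stand-in, not the paper's implementation: the cosine-affinity weighting, the softmax temperature `tau`, and the nearest-node matching are all assumptions made for the sketch.

    ```python
    import numpy as np

    def affinity_weighted_alignment(src_feats, tgt_feats, tau=1.0):
        """Toy granularity-induced feature alignment (illustrative sketch).

        Each source node's weight is derived from its mean cosine affinity
        to the target-domain nodes; the loss is the weighted squared
        distance between each source node and its most affine target node.
        """
        def normalize(x):
            return x / (np.linalg.norm(x, axis=1, keepdims=True) + 1e-8)

        s, t = normalize(src_feats), normalize(tgt_feats)
        affinity = s @ t.T                       # cross-domain cosine affinities
        w = np.exp(affinity.mean(axis=1) / tau)  # higher affinity -> higher weight
        w /= w.sum()                             # softmax over source nodes
        nearest = affinity.argmax(axis=1)        # match each source node to a target node
        sq_dist = ((src_feats - tgt_feats[nearest]) ** 2).sum(axis=1)
        return float((w * sq_dist).sum())
    ```

    With identical source and target node features the loss vanishes, while dissimilar domains yield a positive, affinity-weighted alignment penalty.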

    Construction Method for Multimodal Rainy Scene Fusion in Autonomous Driving Sample Library
    Zhaolong Dong, He Huang, Zhanyi Li, Lan Yang, Huifeng Wang
    Automotive Engineering    2025, 47 (2): 211-221.   DOI: 10.19562/j.chinasae.qcgc.2025.02.002

    To address the problems of difficult and uncontrollable data acquisition and the limited number of available rainy scene samples for training autonomous driving perception systems, a multimodal fusion-based algorithm for constructing rainy traffic scenes is proposed. Firstly, rainy scenes are analyzed and decomposed into two components for reconstruction: a rain line model and a raindrop model. Secondly, a stochastic multi-source fusion-based rain line model is proposed, which integrates rain effects of multiple directions and densities. Next, a heterogeneous mapping-based raindrop model is proposed to achieve realistic convex transparency mapping for individual raindrops, coupled with a collision-prevention design to mitigate the cumulative errors of multiple raindrops in the same area. Finally, the two models are integrated to reconstruct rainy scenes from various foundational forms. The experimental results show that as rainfall intensity increases, interference and detail in the constructed images notably increase, with metrics such as entropy and average gradient rising before declining, while PSNR and SSIM continuously decrease, indicating significant image quality degradation and conditions approaching realistic rainy scenes.
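    The rain line component described above amounts to rasterising many short, bright streaks with stochastic origins and a shared fall direction, then compositing them over the clean image. The following is a simplified single-channel sketch of that idea, not the paper's model: the streak parameters (`density`, `length`, `angle_deg`, `intensity`) and the screen-blend compositing are assumptions made for illustration.

    ```python
    import numpy as np

    def add_rain_lines(img, density=0.002, length=12, angle_deg=75.0,
                       intensity=0.8, seed=0):
        """Composite stochastic rain streaks over a grayscale image.

        `img` is a float array in [0, 1] of shape (H, W). Streak origins
        are sampled uniformly; each streak is rasterised pixel by pixel
        along a shared direction, then screen-blended over the image so
        the result stays in [0, 1].
        """
        rng = np.random.default_rng(seed)
        h, w = img.shape
        layer = np.zeros((h, w), dtype=float)
        n = int(density * h * w)                 # number of streaks
        dy = np.cos(np.deg2rad(angle_deg))       # fall direction (from vertical)
        dx = np.sin(np.deg2rad(angle_deg))
        for y0, x0 in zip(rng.uniform(0, h, n), rng.uniform(0, w, n)):
            for t in range(length):              # rasterise one streak
                y, x = int(y0 + t * dy), int(x0 + t * dx)
                if 0 <= y < h and 0 <= x < w:
                    layer[y, x] = intensity
        return 1.0 - (1.0 - img) * (1.0 - layer)  # screen blend
    ```

    Raising `density` or `intensity` mimics heavier rainfall: the streak layer covers more pixels, which is consistent with the reported increase in entropy and average gradient and the drop in PSNR/SSIM relative to the clean image.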
