Automotive Engineering ›› 2023, Vol. 45 ›› Issue (5): 777-785. doi: 10.19562/j.chinasae.qcgc.2023.05.007

Special Topic: Intelligent and Connected Vehicle Technology: Perception & HMI & Evaluation (2023)


  • Funding: Guided Science and Technology Program of Fujian Province (2022H0007); Natural Science Foundation of Fujian Province (2021J01559)

An Improved YOLO Algorithm Supporting Anti-illumination Target Detection

Yujie Yao1, Yuhui Peng1, Zehui Chen1, Weikun He1, Qing Wu1, Wei Huang1, Wenqiang Chen2

  1. School of Mechanical Engineering and Automation, Fuzhou University, Fuzhou 350116
    2. Fujian HanTeWin Intelligent Technology Co., Ltd., Fuzhou 350028
  • Received: 2022-11-10 Revised: 2022-12-19 Online: 2023-05-25 Published: 2023-05-26
  • Contact: Yuhui Peng E-mail: pengyuhui@fzu.edu.cn


Abstract:

To address the low detection accuracy and poor real-time performance of existing deep learning target detection algorithms under complex illumination, an anti-illumination target detection network model, YOLO-RLG, based on the YOLO algorithm is proposed. First, the RGB input data is converted to HSV; the S channel, which is highly robust to illumination, is separated from the HSV data and merged with the RGB data to form RGBS input, giving the input data anti-illumination capability. Second, the backbone of YOLOv4 is replaced with GhostNet, and the allocation ratio between ordinary convolutions and cheap convolutions is adjusted to improve detection speed while preserving detection accuracy. Finally, the model's loss function is improved by replacing CIoU with EIoU, which raises target detection accuracy and algorithm robustness. Experimental results on the KITTI and VOC datasets show that, compared with the original network model, FPS increases by 22.54 and 17.84 f/s respectively, model size decreases by 210.3 MB, average precision (AP) improves by 0.83% and 1.31%, and the algorithm's anti-illumination capability is significantly enhanced.
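The RGBS construction described above can be sketched as follows. This is an illustrative NumPy version, assuming 8-bit RGB input and the standard HSV saturation formula S = (max − min) / max; the function name `rgb_to_rgbs` and the output normalization are assumptions, not the authors' implementation.

```python
import numpy as np

def rgb_to_rgbs(rgb: np.ndarray) -> np.ndarray:
    """Append the HSV saturation (S) channel to an RGB image.

    rgb: H x W x 3 uint8 array with values in [0, 255].
    Returns an H x W x 4 float32 array in [0, 1], ordered (R, G, B, S).
    """
    rgb_f = rgb.astype(np.float32) / 255.0
    cmax = rgb_f.max(axis=-1)
    cmin = rgb_f.min(axis=-1)
    # Standard HSV saturation: S = (max - min) / max, defined as 0 where max = 0.
    s = np.where(cmax > 0, (cmax - cmin) / np.maximum(cmax, 1e-12), 0.0)
    return np.concatenate([rgb_f, s[..., None]], axis=-1).astype(np.float32)
```

A fully saturated pixel (e.g. pure red) yields S = 1, while any gray pixel yields S = 0, so the added channel carries chromatic information that is largely insensitive to brightness changes.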

Key words: machine vision, anti-illumination image processing, Ghostnet network, loss function
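The CIoU-to-EIoU substitution mentioned in the abstract can be illustrated with a minimal sketch. This follows the published EIoU formulation (an IoU term plus center-distance, width, and height penalties normalized by the smallest enclosing box), not the paper's code; the corner-coordinate box format and the helper name `eiou_loss` are assumptions.

```python
def eiou_loss(box_a, box_b, eps=1e-9):
    """EIoU loss between two boxes given as (x1, y1, x2, y2).

    L_EIoU = 1 - IoU
             + center_dist^2 / enclose_diag^2
             + (w_a - w_b)^2 / enclose_w^2
             + (h_a - h_b)^2 / enclose_h^2
    """
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b

    # Intersection and union for the IoU term.
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    iou = inter / (area_a + area_b - inter + eps)

    # Smallest enclosing box and its squared diagonal.
    cw = max(ax2, bx2) - min(ax1, bx1)
    ch = max(ay2, by2) - min(ay1, by1)
    c2 = cw * cw + ch * ch + eps

    # Squared distance between box centers.
    d2 = (((ax1 + ax2) - (bx1 + bx2)) ** 2
          + ((ay1 + ay2) - (by1 + by2)) ** 2) / 4.0

    # Separate width and height penalties: this is what distinguishes
    # EIoU from CIoU, which couples them into one aspect-ratio term.
    dw2 = ((ax2 - ax1) - (bx2 - bx1)) ** 2
    dh2 = ((ay2 - ay1) - (by2 - by1)) ** 2

    return 1.0 - iou + d2 / c2 + dw2 / (cw * cw + eps) + dh2 / (ch * ch + eps)
```

Because width and height errors are penalized directly rather than through a coupled aspect-ratio term, the gradient pushes each side length toward the target independently, which is the behavior the abstract credits for the accuracy and robustness gains.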