Automotive Engineering ›› 2022, Vol. 44 ›› Issue (9): 1327-1338. doi: 10.19562/j.chinasae.qcgc.2022.09.004
Special topic: Intelligent Connected Vehicle Technologies: Perception & HMI & Evaluation (2022)
Jie Hu1,2,3, Boyuan Xu1,2,3, Zongquan Xiong1,2,3, Minjie Chang1,2,3, Di Guo1,2,3, Lihao Xie1,2,3
Received: 2022-03-22
Revised: 2022-04-28
Online: 2022-09-25
Published: 2022-09-21
Contact: Jie Hu, E-mail: auto_hj@163.com
Abstract: In unsupervised domain-adaptive object detection, the conflict between domain discriminability and domain invariance causes negative domain transfer, compounded by multi-scale problems. This paper proposes a multi-scale mask classification domain-adaptive network (MMCN) that mitigates negative domain transfer. First, image-level domain-adversarial training is applied to several intermediate layers of the backbone. Next, region proposal masks are imposed on the image-level feature maps as complementary information that enriches the instance features. Finally, a per-class instance-level domain classifier is introduced, so that the network extracts as much effective domain-invariant information as possible while preserving domain discriminability. Validation on the Cityscapes and FoggyCityscapes datasets shows that the proposed network raises the mean average precision (mAP) of cross-domain detection by 13.2 percentage points, indicating a significant improvement in domain adaptation capability.
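The three mechanisms named in the abstract all hinge on gradient reversal: image-level domain classifiers attached to intermediate backbone features, and one instance-level domain classifier per object class. The sketch below is a minimal PyTorch illustration under those assumptions; module names, channel widths, and the lambda coefficient are illustrative choices, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; scales gradients by -lambd going back."""

    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Reversed gradient for x, no gradient for lambd.
        return -ctx.lambd * grad_output, None


def grad_reverse(x, lambd=1.0):
    return GradReverse.apply(x, lambd)


class ImageLevelDomainClassifier(nn.Module):
    """Per-location source/target classifier on one intermediate feature map."""

    def __init__(self, in_channels):
        super().__init__()
        self.conv1 = nn.Conv2d(in_channels, 256, kernel_size=1)
        self.conv2 = nn.Conv2d(256, 1, kernel_size=1)

    def forward(self, feat, lambd=1.0):
        x = grad_reverse(feat, lambd)          # adversarial coupling to the backbone
        x = F.relu(self.conv1(x))
        return torch.sigmoid(self.conv2(x))    # domain probability per location


class PerClassInstanceDomainClassifier(nn.Module):
    """One small domain head per object class, so alignment is class-wise."""

    def __init__(self, feat_dim, num_classes):
        super().__init__()
        self.heads = nn.ModuleList(
            nn.Sequential(nn.Linear(feat_dim, 256), nn.ReLU(), nn.Linear(256, 1))
            for _ in range(num_classes)
        )

    def forward(self, roi_feats, class_ids, lambd=1.0):
        # roi_feats: (N, feat_dim) pooled instance features
        # class_ids: (N,) predicted category index of each proposal
        x = grad_reverse(roi_feats, lambd)
        logits = torch.stack([h(x).squeeze(-1) for h in self.heads], dim=1)
        # Each instance is scored by the domain head of its own class.
        return logits.gather(1, class_ids.unsqueeze(1)).squeeze(1)
```

In training, source and target images would carry domain labels 0 and 1, a binary cross-entropy loss on these outputs trains the classifiers, and the reversed gradients push the backbone and instance features toward domain invariance; the detection losses themselves are computed on labeled source images only.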
Jie Hu, Boyuan Xu, Zongquan Xiong, Minjie Chang, Di Guo, Lihao Xie. Cross-Domain Object Detection Algorithm Based on Multi-scale Mask Classification Domain Adaptive Network [J]. Automotive Engineering, 2022, 44(9): 1327-1338.
Table 1 Domain-adaptive object detection results for Cityscapes→FoggyCityscapes (per-class AP/%)

| Method | Person | Rider | Car | Truck | Bus | Train | Motorcycle | Bicycle | mAP/% |
|---|---|---|---|---|---|---|---|---|---|
| Faster R-CNN | 24.1 | 33.1 | 34.3 | 4.1 | 22.3 | 3.0 | 15.3 | 26.5 | 20.3 |
| DA-Faster[19] | 25.0 | 31.0 | 40.5 | 22.1 | 35.3 | 20.2 | 20.0 | 27.1 | 27.6 |
| StrongWeak[26] | 29.9 | 42.3 | 43.5 | 24.5 | 36.2 | 32.6 | 30.0 | 35.3 | 34.3 |
| DD-MRL[29] | 30.8 | 40.5 | 44.3 | 27.2 | 38.4 | 34.5 | 28.4 | 32.2 | 34.6 |
| ATF[30] | 34.6 | 47.0 | 50.0 | 23.7 | 43.3 | 38.7 | 33.4 | 38.8 | 38.7 |
| VDD[31] | 33.4 | 44.0 | 51.7 | 33.9 | 52.0 | 34.7 | 34.2 | 36.8 | 40.0 |
| MMCN (ours) | 33.4 | 46.8 | 51.9 | 29.1 | 48.4 | 43.2 | 36.0 | 37.4 | 40.8 |
Table 2 Ablation on the weight of the negative-sample loss for other categories

| Method | Weight | Person | Rider | Car | Truck | Bus | Train | Motorcycle | Bicycle | mAP/% |
|---|---|---|---|---|---|---|---|---|---|---|
| MMCN | 0.1 | 33.5 | 44.6 | 50.9 | 28.0 | 44.5 | 38.3 | 34.3 | 35.1 | 38.6 |
| | 0.2 | 33.4 | 47.1 | 50.9 | 30.1 | 44.9 | 35.6 | 31.9 | 36.1 | 38.8 |
| | 0.3 | 26.3 | 41.5 | 32.4 | 21.9 | 32.4 | 9.4 | 28.3 | 23.4 | 28.4 |
| | 0.4 | 33.4 | 46.1 | 50.1 | 26.1 | 41.7 | 23.0 | 32.6 | 32.5 | 35.7 |
| | 0.5 | 33.5 | 45.2 | 50.5 | 33.4 | 43.6 | 40.8 | 30.9 | 36.4 | 39.3 |
| | 0.6 | 26.0 | 43.1 | 43.8 | 20.2 | 34.3 | 14.6 | 23.4 | 22.6 | 28.5 |
| | 0.7 | 32.9 | 46.5 | 50.3 | 29.7 | 45.2 | 36.7 | 29.2 | 35.4 | 38.2 |
| | 0.8 | 33.4 | 46.2 | 50.9 | 27.3 | 46.8 | 34.8 | 33.0 | 35.3 | 38.5 |
| | 0.9 | 33.6 | 46.3 | 50.7 | 29.8 | 46.2 | 31.6 | 33.3 | 37.4 | 38.6 |
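Table 2 sweeps the weight given, within each class-specific domain classifier, to the loss of negative samples belonging to other categories; 0.5 yields the best mAP. A hedged sketch of one plausible form of this weighting, as a weighted binary cross-entropy (the exact loss form is an assumption, not taken from the paper):

```python
import torch
import torch.nn.functional as F


def class_wise_domain_loss(logits, domain_labels, is_same_class, w=0.5):
    """Per-instance domain BCE; instances whose category does not match the
    classifier head's class are negatives down-weighted by w (Table 2 sweeps
    w from 0.1 to 0.9)."""
    per_sample = F.binary_cross_entropy_with_logits(
        logits, domain_labels, reduction="none")
    weights = torch.where(is_same_class,
                          torch.ones_like(per_sample),
                          w * torch.ones_like(per_sample))
    return (weights * per_sample).mean()
```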
Table 3 Ablation on intermediate layer positions

| Method | Intermediate layer 1 | Intermediate layer 2 | Person | Rider | Car | Truck | Bus | Train | Motorcycle | Bicycle | mAP/% |
|---|---|---|---|---|---|---|---|---|---|---|---|
| MMCN | 7 | 9 | 33.7 | 46.0 | 50.9 | 28.2 | 46.6 | 36.4 | 34.6 | 36.3 | 39.1 |
| | 5 | 9 | 33.5 | 46.7 | 50.7 | 26.8 | 43.9 | 24.0 | 32.9 | 36.2 | 36.8 |
| | 7 | 8 | 33.1 | 45.4 | 50.9 | 25.3 | 45.7 | 19.7 | 35.0 | 36.8 | 36.5 |
| | 6 | 8 | 33.3 | 47.0 | 51.3 | 30.2 | 46.3 | 23.9 | 29.2 | 36.5 | 37.2 |
| | 5 | 8 | 33.4 | 45.0 | 51.0 | 31.1 | 45.5 | 22.9 | 35.5 | 36.4 | 37.6 |
| | 7 | 10 | 33.3 | 46.6 | 51.1 | 29.4 | 44.8 | 35.0 | 34.0 | 36.0 | 38.8 |
| | 6 | 10 | 33.3 | 44.7 | 51.0 | 30.0 | 44.4 | 37.3 | 33.4 | 35.8 | 38.7 |
| | 5 | 10 | 32.9 | 44.6 | 48.6 | 23.4 | 39.2 | 25.8 | 30.1 | 32.4 | 34.6 |
| | 6 | 9 | 33.4 | 46.8 | 51.9 | 29.1 | 48.4 | 43.2 | 36.0 | 37.4 | 40.8 |
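Table 3 varies which two intermediate backbone layers receive image-level domain classifiers, with layers 6 and 9 performing best (40.8% mAP). A minimal sketch of tapping two intermediate feature maps of a VGG16 backbone for this purpose; the mapping from the paper's layer numbering to torchvision module indices is an assumption:

```python
import torch.nn as nn
from torchvision.models import vgg16


class MultiScaleTaps(nn.Module):
    """VGG16 backbone that exposes two intermediate feature maps so that
    image-level domain classifiers can be attached to them via a GRL."""

    def __init__(self, tap_indices=(16, 23)):
        # 16 and 23 are hypothetical torchvision indices standing in for the
        # paper's "intermediate layer 6" and "layer 9"; the true mapping is
        # not given in the excerpt.
        super().__init__()
        self.features = vgg16(weights=None).features
        self.tap_indices = set(tap_indices)

    def forward(self, x):
        taps = []
        for i, layer in enumerate(self.features):
            x = layer(x)
            if i in self.tap_indices:
                taps.append(x)   # each tap feeds one image-level domain classifier
        return x, taps
```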
Table 4 Ablation on region proposal mask position

| Method | Mask position | Person | Rider | Car | Truck | Bus | Train | Motorcycle | Bicycle | mAP/% |
|---|---|---|---|---|---|---|---|---|---|---|
| MMCN | | 35.4 | 45.0 | 51.2 | 31.8 | 45.6 | 36.8 | 32.9 | 38.0 | 39.6 |
| | | 34.2 | 45.3 | 51.4 | 24.4 | 48.2 | 31.2 | 37.5 | 37.4 | 38.7 |
| | | 33.7 | 46.6 | 51.6 | 29.1 | 43.4 | 39.8 | 30.5 | 38.5 | 39.2 |
| | | 25.7 | 42.5 | 35.9 | 18.5 | 28.3 | 18.0 | 23.3 | 22.1 | 26.8 |
| | | 33.4 | 45.3 | 51.2 | 29.6 | 44.6 | 33.5 | 28.9 | 34.7 | 37.7 |
| | | 34.7 | 47.6 | 51.2 | 28.1 | 48.2 | 35.2 | 33.1 | 36.8 | 39.4 |
| | | 33.3 | 45.8 | 51.2 | 32.1 | 47.3 | 38.0 | 33.8 | 37.3 | 39.9 |
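Table 4 varies where the region proposal mask is applied in the network. As a hedged illustration of the mask itself, the sketch below rasterizes proposal boxes into a binary mask on the feature grid and gates the image-level feature map with it; the stride-16 assumption matches a VGG16 backbone but is not confirmed by the excerpt:

```python
import torch


def proposal_mask(boxes, feat_h, feat_w, stride=16):
    """Binary region-proposal mask on the feature grid: 1 inside any proposal,
    0 elsewhere. `boxes` holds (N, 4) image-space (x1, y1, x2, y2) proposals;
    `stride` is the backbone's downsampling factor."""
    mask = torch.zeros(feat_h, feat_w)
    for x1, y1, x2, y2 in (boxes / stride).long().tolist():
        mask[max(y1, 0):min(y2 + 1, feat_h),
             max(x1, 0):min(x2 + 1, feat_w)] = 1.0
    return mask


# Hypothetical usage: gate an image-level feature map with the mask so that
# instance features are complemented by proposal-region context.
# masked = feature_map * proposal_mask(rois, feature_map.shape[-2],
#                                      feature_map.shape[-1])
```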
[1] REDMON J, FARHADI A. YOLOv3: an incremental improvement [EB/OL]. arXiv preprint arXiv:1804.02767, 2018.
[2] REN S, HE K, GIRSHICK R, et al. Faster R-CNN: towards real-time object detection with region proposal networks [J]. Advances in Neural Information Processing Systems, 2015, 28.
[3] QIU X P. Neural Networks and Deep Learning [J]. Journal of Chinese Information Processing, 2020(7): 1.
[4] GANIN Y, USTINOVA E, AJAKAN H, et al. Domain-adversarial training of neural networks [J]. Journal of Machine Learning Research, 2016, 17(59): 1-35.
[5] LONG M, ZHU H, WANG J, et al. Unsupervised domain adaptation with residual transfer networks [C]. Proc. of the Neural Information Processing Systems, 2016.
[6] TZENG E, HOFFMAN J, SAENKO K, et al. Adversarial discriminative domain adaptation [C]. Proc. of the IEEE Conference on Computer Vision and Pattern Recognition, 2017.
[7] TZENG E, HOFFMAN J, ZHANG N, et al. Deep domain confusion: maximizing for domain invariance [EB/OL]. arXiv preprint arXiv:1412.3474, 2014.
[8] SAITO K, USHIKU Y, HARADA T, et al. Adversarial dropout regularization [C]. Proc. of the International Conference on Learning Representations, 2018.
[9] LIU M Y, BREUEL T, KAUTZ J. Unsupervised image-to-image translation networks [C]. Proc. of the Neural Information Processing Systems, 2017.
[10] HOFFMAN J, TZENG E, PARK T, et al. CyCADA: cycle-consistent adversarial domain adaptation [C]. Proc. of the International Conference on Machine Learning, 2018.
[11] CHEN C Q, XIE W P, HUANG W B, et al. Progressive feature alignment for unsupervised domain adaptation [C]. Proc. of the IEEE Conference on Computer Vision and Pattern Recognition, 2019: 627-636.
[12] GANIN Y, LEMPITSKY V. Unsupervised domain adaptation by backpropagation [C]. Proc. of the International Conference on Machine Learning, 2015: 1180-1189.
[13] LONG M S, CAO Z J, WANG J M, et al. Conditional adversarial domain adaptation [C]. Proc. of the Neural Information Processing Systems, 2018: 1640-1650.
[14] SHU R, HUNG H B, NARUI H, et al. A DIRT-T approach to unsupervised domain adaptation [C]. Proc. of the International Conference on Learning Representations, 2018.
[15] TZENG E, HOFFMAN J, SAENKO K, et al. Adversarial discriminative domain adaptation [C]. Proc. of the IEEE Conference on Computer Vision and Pattern Recognition, 2017.
[16] XIE S A, ZHENG Z B, CHEN L, et al. Learning semantic representations for unsupervised domain adaptation [C]. Proc. of the International Conference on Machine Learning, 2018: 627-636.
[17] BEN-DAVID S, BLITZER J, CRAMMER K, et al. A theory of learning from different domains [J]. Machine Learning, 2010, 79(1-2): 151-175.
[18] BEN-DAVID S, BLITZER J, CRAMMER K, et al. Analysis of representations for domain adaptation [C]. Proc. of the Neural Information Processing Systems, 2007.
[19] CHEN Y, LI W, SAKARIDIS C, et al. Domain adaptive Faster R-CNN for object detection in the wild [C]. Proc. of the IEEE Conference on Computer Vision and Pattern Recognition, 2018: 3339-3348.
[20] ZHU X, PANG J, YANG C, et al. Adapting object detectors via selective cross-domain alignment [C]. Proc. of the IEEE Conference on Computer Vision and Pattern Recognition, 2019: 687-696.
[21] LI Y J, DAI X, MA C Y, et al. Cross-domain object detection via adaptive self-training [EB/OL]. arXiv preprint, 2021.
[22] FUJII K, KERA H, KAWAMOTO K. Adversarially trained object detector for unsupervised domain adaptation [EB/OL]. arXiv preprint, 2021.
[23] ZHUANG C, HAN X, HUANG W, et al. iFAN: image-instance full alignment networks for adaptive object detection [C]. Proceedings of the AAAI Conference on Artificial Intelligence, 2020, 34(7): 13122-13129.
[24] REDKO I, MORVANT E, HABRARD A, et al. A survey on domain adaptation theory [EB/OL]. arXiv preprint, 2020.
[25] SIMONYAN K, ZISSERMAN A. Very deep convolutional networks for large-scale image recognition [C]. Proc. of the International Conference on Learning Representations, 2015.
[26] SAITO K, USHIKU Y, HARADA T, et al. Strong-weak distribution alignment for adaptive object detection [C]. Proc. of the IEEE Conference on Computer Vision and Pattern Recognition, 2019: 6956-6965.
[27] LIN T Y, GOYAL P, GIRSHICK R, et al. Focal loss for dense object detection [J]. IEEE Transactions on Pattern Analysis & Machine Intelligence, 2017, PP(99): 2999-3007.
[28] ZHANG H, TIAN Y, WANG K, et al. Synthetic-to-real domain adaptation for object instance segmentation [C]. Proc. of the 2019 International Joint Conference on Neural Networks, 2019: 1-7.
[29] KIM T, JEONG M, KIM S, et al. Diversify and match: a domain adaptive representation learning paradigm for object detection [C]. Proc. of the IEEE Conference on Computer Vision and Pattern Recognition, 2019: 12456-12465.
[30] HE Z, ZHANG L. Domain adaptive object detection via asymmetric tri-way Faster-RCNN [C]. Proc. of the European Conference on Computer Vision, 2020: 309-324.
[31] WU A, LIU R, HAN Y, et al. Vector-decomposed disentanglement for domain-invariant object detection [C]. Proceedings of the IEEE/CVF International Conference on Computer Vision, 2021: 9342-9351.
[32] CHEN H Y, WANG P H, LIU C H, et al. Complement objective training [EB/OL]. arXiv preprint arXiv:1903.01182, 2019.