[1] XU D, DING Z, HE X, et al. Learning from naturalistic driving data for human-like autonomous highway driving[J]. IEEE Transactions on Intelligent Transportation Systems, 2020, 22(12): 7341-7354.
[2] EMUNA R, BOROWSKY A, BIESS A. Deep reinforcement learning for human-like driving policies in collision avoidance tasks of self-driving cars[J]. arXiv preprint arXiv:2006.04218, 2020.
[3] HECKER S, DAI D, VAN GOOL L. Learning accurate, comfortable and human-like driving[J]. arXiv preprint arXiv:1903.10995, 2019.
[4] SAMA K, MORALES Y, LIU H, et al. Extracting human-like driving behaviors from expert driver data using deep learning[J]. IEEE Transactions on Vehicular Technology, 2020, 69(9): 9315-9329.
[5] LIANG Y, LI Y, YU Y, et al. Path-following control of autonomous vehicles considering coupling effects and multi-source system uncertainties[J]. Automotive Innovation, 2021, 4(3): 284-300.
[6] GUO N, ZHANG X, ZOU Y. Real-time predictive control of path following to stabilize autonomous electric vehicles under extreme drive conditions[J]. Automotive Innovation, 2022, 5(4): 453-470.
[7] LI S E. Reinforcement learning for sequential decision and optimal control[M]. Springer, 2023.
[8] WANG W, ZHANG Y, GAO J, et al. GOPS: a general optimal control problem solver for autonomous driving and industrial control applications[J]. Communications in Transportation Research, 2023, 3: 100096.
[9] HECKER S, DAI D, LINIGER A, et al. Learning accurate and human-like driving using semantic maps and attention[C]. 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2020: 2346-2353.
[10] CODEVILLA F, SANTANA E, LÓPEZ A M, et al. Exploring the limitations of behavior cloning for autonomous driving[C]. Proceedings of the IEEE/CVF International Conference on Computer Vision, 2019: 9329-9338.
[11] GU T, DOLAN J M. Toward human-like motion planning in urban environments[C]. 2014 IEEE Intelligent Vehicles Symposium Proceedings. IEEE, 2014: 350-355.
[12] HANG P, LV C, XING Y, et al. Human-like decision making for autonomous driving: a noncooperative game theoretic approach[J]. IEEE Transactions on Intelligent Transportation Systems, 2020, 22(4): 2076-2087.
[13] SAMA K, MORALES Y, LIU H, et al. Extracting human-like driving behaviors from expert driver data using deep learning[J]. IEEE Transactions on Vehicular Technology, 2020, 69(9): 9315-9329.
[14] SONG X L, SHENG X, CAO H T, et al. Lane-change behavior decision-making of intelligent vehicle based on imitation learning and reinforcement learning[J]. Automotive Engineering, 2021, 43(1): 59-67.
[15] XU Y, GAO W, HSU D. Receding horizon inverse reinforcement learning[J]. Advances in Neural Information Processing Systems, 2022, 35: 27880-27892.
[16] LIU J, BOYLE L N, BANERJEE A G. An inverse reinforcement learning approach for customizing automated lane change systems[J]. IEEE Transactions on Vehicular Technology, 2022, 71(9): 9261-9271.
[17] HO J, ERMON S. Generative adversarial imitation learning[J]. Advances in Neural Information Processing Systems, 2016, 29.
[18] KUDERER M, GULATI S, BURGARD W. Learning driving styles for autonomous vehicles from demonstration[C]. 2015 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2015.
[19] KALAKRISHNAN M, PASTOR P, RIGHETTI L, et al. Learning objective functions for manipulation[C]. 2013 IEEE International Conference on Robotics and Automation. IEEE, 2013: 1331-1336.
[20] LEVINE S, KOLTUN V. Continuous inverse optimal control with locally optimal examples[J]. arXiv preprint, 2012.
[21] ENGLERT P, VIEN N A, TOUSSAINT M. Inverse KKT: learning cost functions of manipulation tasks from demonstrations[J]. The International Journal of Robotics Research, 2017: 1474-1488.
[22] JIN W, KULIC D, LIN J F-S, et al. Inverse optimal control for multiphase cost functions[J]. IEEE Transactions on Robotics, 2019: 1387-1398.
[23] JIN W, KULIC D, MOU S, et al. Inverse optimal control from incomplete trajectory observations[J]. The International Journal of Robotics Research, 2018.
[24] LIANG Z, JIN W, MOU S. An iterative method for inverse optimal control[C]. 2022 13th Asian Control Conference (ASCC), 2022.
[25] JIN W, WANG Z, YANG Z, et al. Pontryagin differentiable programming: an end-to-end learning and control framework[J]. Advances in Neural Information Processing Systems, 2020, 33: 7979-7992.
[26] GUO L, JIA Y. Inverse model predictive control (IMPC) based modeling and prediction of human-driven vehicles in mixed traffic[J]. IEEE Transactions on Intelligent Vehicles, 2020, 6(3): 501-512.
[27] GE Q, SUN Q, LI S, et al. Numerically stable dynamic bicycle model for discrete-time control[C]. 2021 IEEE Intelligent Vehicles Symposium Workshops (IV Workshops). IEEE, 2021: 128-134.
[28] LI S, LI K, RAJAMANI R, et al. Model predictive multi-objective vehicular adaptive cruise control[J]. IEEE Transactions on Control Systems Technology, 2011, 19(3): 556-566.
[29] LI S, JIA Z, LI K, et al. Fast online computation of a model predictive controller and its application to fuel economy-oriented adaptive cruise control[J]. IEEE Transactions on Intelligent Transportation Systems, 2015, 16(3): 1199-1209.
[30] ZHAO F, WANG J, ZHANG T L, et al. Model predictive control method for vehicle platoon under cloud control scenes[J]. Automotive Engineering, 2022, 44(2): 179-189.
[31] WANG M, TANG X L, YANG K, et al. A motion planning method for autonomous vehicles considering prediction risk[J]. Automotive Engineering, 2023, 45(8): 1362-1372.