[1] WAN S, GU R, UMER T, et al. Toward offloading internet of vehicles applications in 5G networks[J]. IEEE Transactions on Intelligent Transportation Systems, 2021, 22(7): 4151-4159.
[2] MUNAWAR S, ALI Z, WAQAS M, et al. Cooperative computational offloading in mobile edge computing for vehicles: a model-based DNN approach[J]. IEEE Transactions on Vehicular Technology, 2023, 72(3): 3376-3391.
[3] TANG F, MAO B, KATO N, et al. Comprehensive survey on machine learning in vehicular network: technology, applications and challenges[J]. IEEE Communications Surveys & Tutorials, 2021, 23(3): 2027-2057.
[4] CHEN J, XING H, XIAO Z, et al. A DRL agent for jointly optimizing computation offloading and resource allocation in MEC[J]. IEEE Internet of Things Journal, 2021, 8(24): 17508-17524.
[5] 赵海涛, 张唐伟, 陈跃, 等. 基于DQN的车载边缘网络任务分发卸载算法[J]. 通信学报, 2020, 41(10): 172-178.
ZHAO H T, ZHANG T W, CHEN Y, et al. Task distribution offloading algorithm of vehicle edge network based on DQN[J]. Journal on Communications, 2020, 41(10): 172-178.
[6] HUANG L, BI S, ZHANG Y J A. Deep reinforcement learning for online computation offloading in wireless powered mobile-edge computing networks[J]. IEEE Transactions on Mobile Computing, 2020, 19(11): 2581-2593.
[7] ZHOU H, JIANG K, LIU X, et al. Deep reinforcement learning for energy-efficient computation offloading in mobile-edge computing[J]. IEEE Internet of Things Journal, 2022, 9(2): 1517-1530.
[8] 卢海峰, 顾春华, 罗飞, 等. 基于深度强化学习的移动边缘计算任务卸载研究[J]. 计算机研究与发展, 2020, 57(7): 1539-1554.
LU H F, GU C H, LUO F, et al. Research on task offloading based on deep reinforcement learning in mobile edge computing[J]. Journal of Computer Research and Development, 2020, 57(7): 1539-1554.
[9] LIU Y, YU H, XIE S, et al. Deep reinforcement learning for offloading and resource allocation in vehicle edge computing and networks[J]. IEEE Transactions on Vehicular Technology, 2019, 68(11): 11158-11168.
[10] KE H, WANG J, DENG L, et al. Deep reinforcement learning-based adaptive computation offloading for MEC in heterogeneous vehicular networks[J]. IEEE Transactions on Vehicular Technology, 2020, 69(7): 7916-7929.
[11] LIN J, HUANG S, ZHANG H, et al. A deep-reinforcement-learning-based computation offloading with mobile vehicles in vehicular edge computing[J]. IEEE Internet of Things Journal, 2023, 10(17): 15501-15514.
[12] QIU B, WANG Y, XIAO H, et al. Deep reinforcement learning-based adaptive computation offloading and power allocation in vehicular edge computing networks[J]. IEEE Transactions on Intelligent Transportation Systems, 2024.
[13] 何杰, 马强. 基于深度强化学习的C-V2X任务卸载研究[J/OL]. 计算机工程: 1-11 [2024-07-20]. https://doi.org/10.19678/j.issn.1000-3428.0068425.
HE J, MA Q. Research on C-V2X task offloading based on deep reinforcement learning[J/OL]. Computer Engineering: 1-11 [2024-07-20]. https://doi.org/10.19678/j.issn.1000-3428.0068425.
[14] YANG H, WEI Z, FENG Z, et al. Intelligent computation offloading for MEC-based cooperative vehicle infrastructure system: a deep reinforcement learning approach[J]. IEEE Transactions on Vehicular Technology, 2022, 71(7): 7665-7679.
[15] 沈乐. 基于DQN-DDPG的空地协作边缘计算任务卸载与资源分配研究[J/OL]. 软件导刊: 1-8 [2024-05-09]. http://kns.cnki.net/kcms/detail/42.1671.TP.20240130.1638.016.html.
SHEN L. Task offloading and resource allocation based on DQN-DDPG for aerial-ground cooperative mobile edge computing[J/OL]. Software Guide: 1-8 [2024-05-09]. http://kns.cnki.net/kcms/detail/42.1671.TP.20240130.1638.016.html.
[16] LIN N, TANG H, ZHAO L, et al. A PDDQNLP algorithm for energy efficient computation offloading in UAV-assisted MEC[J]. IEEE Transactions on Wireless Communications, 2023, 22(12): 8876-8890.
[17] MEI H, YANG K, LIU Q, et al. 3D-trajectory and phase-shift design for RIS-assisted UAV systems using deep reinforcement learning[J]. IEEE Transactions on Vehicular Technology, 2022, 71(3): 3020-3029.
[18] SEID A M, BOATENG G O, ANOKYE S, et al. Collaborative computation offloading and resource allocation in multi-UAV-assisted IoT networks: a deep reinforcement learning approach[J]. IEEE Internet of Things Journal, 2021, 8(15): 12203-12218.
[19] SHI H, TIAN Y, LI H, et al. Task offloading and trajectory scheduling for UAV-enabled MEC networks: an MADRL algorithm with prioritized experience replay[J]. Ad Hoc Networks, 2024, 154: 103371.
[20] GUO Y, MA D, SHE H, et al. Deep deterministic policy gradient-based intelligent task offloading for vehicular computing with priority experience playback[J]. IEEE Transactions on Vehicular Technology, 2024. doi: 10.1109/TVT.2024.3378919.
[21] HE X, LU H, DU M, et al. QoE-based task offloading with deep reinforcement learning in edge-enabled Internet of vehicles[J]. IEEE Transactions on Intelligent Transportation Systems, 2021, 22(4): 2252-2261.
[22] 刘国志, 代飞, 莫启, 等. 车辆边缘计算环境下基于深度强化学习的服务卸载方法[J]. 计算机集成制造系统, 2022, 28(10): 3304-3315.
LIU G Z, DAI F, MO Q, et al. Service offloading method with deep reinforcement learning in edge computing empowered Internet of vehicles[J]. Computer Integrated Manufacturing Systems, 2022, 28(10): 3304-3315.
[23] FENG J, YU F R, PEI Q, et al. Cooperative computation offloading and resource allocation for blockchain-enabled mobile-edge computing: a deep reinforcement learning approach[J]. IEEE Internet of Things Journal, 2020, 7(7): 6214-6228.
[24] 喻鹏, 张俊也, 李文璟, 等. 移动边缘网络中基于双深度Q学习的高能效资源分配方法[J]. 通信学报, 2020, 41(12): 148-161.
YU P, ZHANG J Y, LI W J, et al. Energy-efficient resource allocation method in mobile edge network based on double deep Q-learning[J]. Journal on Communications, 2020, 41(12): 148-161.