Computer Engineering

Research on C-V2X task offloading based on deep reinforcement learning

  

Published: 2024-04-09


Abstract: The rapid development of driverless and assisted driving places high demands on in-vehicle computing performance, and task offloading combined with mobile edge computing offers a solution. However, making fast and efficient offloading decisions remains a major challenge, and existing research gives insufficient consideration to the overall system benefit of task offloading. To address these problems, a distributed task offloading system model for Cellular Vehicle-to-Everything (C-V2X) based on software-defined networking is designed using a vehicle-road-air architecture, and a task offloading control algorithm based on deep reinforcement learning is proposed. Cost models are constructed for three task-processing modes: local computing, edge computing, and satellite computing. The objective function takes vehicle energy consumption and resource leasing cost on the user side, together with task processing delay and server load balance on the server side, as joint optimization objectives. Considering constraints such as the maximum expected task delay and the maximum server load ratio, the task offloading problem is formulated as a mixed-integer nonlinear programming (MINLP) problem and then modeled as a Markov decision process with a mixed discrete-continuous action space. Finally, offloading decisions covering task scheduling, resource leasing, and power control are obtained with deep reinforcement learning algorithms. Compared with traditional schemes based on particle swarm optimization and genetic algorithms, the proposed scheme reduces single-decision latency by more than 45% while achieving comparable decision benefits.
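To make the formulation above concrete, the following is a minimal Python sketch of the mixed discrete-continuous decision step the abstract describes: a task is either computed locally or offloaded to an edge server or satellite (discrete choice), together with continuous transmit-power and resource-leasing actions, and the reward is the negative weighted sum of vehicle energy, leasing cost, processing delay, and server load imbalance, with penalties for violating the delay and load-ratio constraints. All class names, state fields, weights, and numerical constants below are illustrative assumptions, not values from the paper.

```python
# Minimal sketch of the hybrid-action offloading MDP described in the abstract.
# Constants, state fields, and weights are illustrative assumptions only.
import numpy as np

LOCAL, EDGE, SATELLITE = 0, 1, 2  # discrete offloading targets

class OffloadingEnv:
    """One decision step: pick a target (discrete) plus transmit power and
    leased CPU fraction (continuous), then receive a cost-based reward."""

    def __init__(self, n_edge_servers=4, seed=0):
        self.rng = np.random.default_rng(seed)
        self.n_edge = n_edge_servers
        # Hypothetical weights for the joint objective (energy, leasing
        # cost, delay, load imbalance); the paper's actual weights differ.
        self.w = dict(energy=0.3, lease=0.2, delay=0.4, balance=0.1)
        self.max_delay = 0.5          # s, assumed per-task deadline
        self.max_load = 0.9           # assumed maximum server load ratio
        self.reset()

    def reset(self):
        self.load = self.rng.uniform(0.1, 0.7, self.n_edge)  # edge loads
        return self._obs()

    def _obs(self):
        # Next task: size (Mbit), required cycles (Gcycles), channel gain.
        self.task = self.rng.uniform(0.5, 5.0)
        self.cycles = self.task * 0.8
        self.gain = self.rng.uniform(0.2, 1.0)
        return np.concatenate(([self.task, self.cycles, self.gain], self.load))

    def step(self, target, server, power, cpu_frac):
        """target in {LOCAL, EDGE, SATELLITE}; power, cpu_frac in (0, 1]."""
        if target == LOCAL:
            delay = self.cycles / 1.0                  # 1 Gcycle/s local CPU
            energy, lease = 0.5 * self.cycles, 0.0
        else:
            rate = 10.0 * power * self.gain + 1e-3     # toy uplink rate, Mbit/s
            prop = 0.12 if target == SATELLITE else 0.0  # extra satellite RTT
            delay = self.task / rate + self.cycles / (8.0 * cpu_frac) + prop
            energy = power * self.task / rate
            lease = cpu_frac * (2.0 if target == SATELLITE else 1.0)
            if target == EDGE:
                self.load[server] = min(1.0, self.load[server] + 0.1 * cpu_frac)
        balance = float(np.std(self.load))
        cost = (self.w["energy"] * energy + self.w["lease"] * lease
                + self.w["delay"] * delay + self.w["balance"] * balance)
        # Constraint violations (deadline, load ratio) enter as penalties.
        if delay > self.max_delay or np.any(self.load > self.max_load):
            cost += 5.0
        return self._obs(), -cost, False, {"delay": delay}

env = OffloadingEnv()
obs = env.reset()
obs, reward, done, info = env.step(EDGE, server=1, power=0.6, cpu_frac=0.5)
print(f"reward={reward:.3f}, delay={info['delay']:.3f}s")
```

A deep reinforcement learning agent for such an environment would typically pair a discrete policy head for the offloading target with continuous heads for power and leased resources, matching the mixed discrete-continuous action space described in the abstract.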
