Future Internet, Vol. 14, No. 8 (2022)
ARTICLE

Energy Saving Strategy of UAV in MEC Based on Deep Reinforcement Learning

Zhiqiang Dai    
Gaochao Xu    
Ziqi Liu    
Jiaqi Ge and Wei Wang    

Abstract

Unmanned aerial vehicles (UAVs) are portable, safe, and highly adaptable. In the case of a maritime disaster, they can be used for personnel search and rescue, real-time monitoring, and disaster assessment. However, the power, computing capacity, and other resources of UAVs are often limited. Therefore, this paper combines UAVs with mobile edge computing (MEC) and designs a deep reinforcement learning-based online task offloading (DOTO) algorithm. The algorithm obtains an online offloading strategy that maximizes the residual energy of the UAV by jointly optimizing the UAV's time and communication resources. The DOTO algorithm adopts time division multiple access (TDMA) to offload and schedule the UAV computing tasks, integrates wireless power transfer (WPT) to supply power to the UAV, calculates the residual energy corresponding to each offloading action through convex optimization, and uses an adaptive K method to reduce the computational complexity of the algorithm. The simulation results show that, for the energy-saving goal of maximizing the residual energy of UAVs in MEC, the proposed DOTO algorithm provides the UAV with an online task offloading strategy that is superior to other traditional benchmark schemes. In particular, when an individual UAV exits the system due to insufficient power or failure, or a new UAV joins the system, the algorithm adjusts promptly and automatically without manual intervention, and exhibits good stability and adaptability.
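The abstract's combination of a learned offloading policy, a per-action convex-optimization evaluation, and an adaptive K method resembles DROO-style order-preserving quantization. The sketch below is illustrative only, under that assumption: `relaxed` stands for a policy network's output in [0, 1]^N, `residual_energy` is a toy stand-in for the paper's convex resource-allocation step, and all function names are assumptions, not the authors' code.

```python
# Illustrative DROO-style quantization with an adaptive K (not the paper's code).

def quantize(relaxed, k):
    """Generate up to k candidate binary offloading actions from a relaxed
    action in [0, 1]^N using order-preserving quantization."""
    n = len(relaxed)
    # First candidate: round every entry at the 0.5 threshold.
    candidates = [tuple(1 if x > 0.5 else 0 for x in relaxed)]
    # Further candidates: flip the entries closest to 0.5 (most uncertain first).
    order = sorted(range(n), key=lambda i: abs(relaxed[i] - 0.5))
    for i in order[:k - 1]:
        flipped = list(candidates[0])
        flipped[i] = 1 - flipped[i]
        candidates.append(tuple(flipped))
    return candidates

def residual_energy(action):
    # Toy stand-in for the convex optimization that, given a binary offloading
    # action, allocates TDMA time / communication resources and returns the
    # UAV's residual energy. Here: favor offloading, minus a fixed cost.
    return sum(action) - 0.1 * len(action)

def best_action(relaxed, k):
    """Evaluate k candidates, keep the best, and shrink K adaptively."""
    candidates = quantize(relaxed, k)
    scores = [residual_energy(a) for a in candidates]
    best = max(range(len(candidates)), key=lambda i: scores[i])
    # Adaptive K: keep only as many candidates as were needed to find the
    # best one, so fewer convex problems are solved in later time frames.
    k_next = max(best + 1, 1)
    return candidates[best], k_next
```

The adaptive-K rule here is a simplification; the point is that K (the number of convex problems solved per time frame) tracks how deep in the candidate list the best action tends to appear, which is what reduces the per-frame computational cost.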
