Future Internet, Vol. 14, Issue 2 (2022)
ARTICLE
TITLE

Task Offloading Based on LSTM Prediction and Deep Reinforcement Learning for Efficient Edge Computing in IoT

Youpeng Tu, Haiming Chen, Linjie Yan and Xinyan Zhou

Abstract

In IoT (Internet of Things) edge computing, task offloading can lead to additional transmission delays and transmission energy consumption. To reduce the cost of the resources required for task offloading and improve the utilization of server resources, in this paper we model the task offloading problem as a joint decision-making problem for cost minimization, which integrates the processing latency, the processing energy consumption, and the task throw rate of latency-sensitive tasks. An Online Predictive Offloading (OPO) algorithm based on Deep Reinforcement Learning (DRL) and Long Short-Term Memory (LSTM) networks is proposed to solve this offloading decision problem. In the training phase, the algorithm predicts the load of the edge server in real time with an LSTM network, which effectively improves the convergence accuracy and convergence speed of the DRL algorithm during offloading. In the testing phase, the LSTM network is used to predict the characteristics of the next task, and the DRL decision model then allocates computational resources for that task in advance, further reducing the task's response delay and enhancing the offloading performance of the system. The experimental evaluation shows that the algorithm effectively reduces the average latency by 6.25%, the offloading cost by 25.6%, and the task throw rate by 31.7%.
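The abstract describes a two-stage pipeline: an LSTM forecasts the edge-server load (during training) or the next task's characteristics (during testing), and a DRL agent then chooses where to execute the task. The sketch below is a minimal illustration of that pipeline, assuming PyTorch; all class and variable names (LoadPredictor, OffloadPolicy, etc.) are hypothetical and are not taken from the paper's implementation.

```python
# Minimal sketch of an LSTM-predictor feeding a DQN-style offloading policy.
# Illustrative only; names and dimensions are assumptions, not the authors' code.
import torch
import torch.nn as nn

class LoadPredictor(nn.Module):
    """LSTM that predicts the next edge-server load (or next task features)
    from a short history window."""
    def __init__(self, in_dim=4, hidden_dim=32):
        super().__init__()
        self.lstm = nn.LSTM(in_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, in_dim)

    def forward(self, history):               # history: (batch, window, in_dim)
        out, _ = self.lstm(history)
        return self.head(out[:, -1])           # predicted next load/features

class OffloadPolicy(nn.Module):
    """DQN-style network that scores offloading actions
    (e.g., 0 = execute locally, 1..K = offload to edge server k)."""
    def __init__(self, state_dim=8, n_actions=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 64), nn.ReLU(),
            nn.Linear(64, n_actions),
        )

    def forward(self, state):
        return self.net(state)                  # Q-value per offloading action

# Usage: predict the upcoming load, concatenate it with the current task's
# features, and greedily pick the action with the highest Q-value.
predictor, policy = LoadPredictor(), OffloadPolicy()
history = torch.randn(1, 10, 4)                # last 10 load observations
task_features = torch.randn(1, 4)              # e.g., size, deadline, CPU cycles
predicted_load = predictor(history)
state = torch.cat([task_features, predicted_load], dim=-1)
action = policy(state).argmax(dim=-1)          # chosen offloading target
print("offload decision:", action.item())
```

In this reading, the predictor's output widens the DRL state so the agent can reserve resources before the task actually arrives, which is how the abstract attributes the reduction in response delay.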

SUBJECTS
INFRASTRUCTURE