Title
A Deep Learning Approach for Optimizing Edge Computing for IoT Environments /
Author
Seliman, Moshira Abd El Naby Ebrahim
Committee
Researcher / مشيرة عبد النبي ابراهيم سليمان
Supervisor / هدى قرشي محمد
Examiner / حسن طاهر دره
Examiner / احمد حسن محمد
Publication Date
2024.
Number of Pages
134 p.
Language
English
Degree
Doctorate (Ph.D.)
Specialization
Electrical and Electronic Engineering
Award Date
1/1/2024
Place of Award
Ain Shams University - Faculty of Engineering - Electrical Engineering (Computers)
Table of Contents
Only 14 pages out of 134 are available for public view.

Abstract

With the rapid expansion and heterogeneity of Internet of Things (IoT) devices, delay-sensitive and computationally intensive applications such as smart homes, object recognition, and smart grids are continually emerging. The cloud computing paradigm can overcome the resource limitations of IoT devices by transferring their computation-intensive tasks to a more powerful cloud server. However, the cloud computing architecture may introduce high latency, which is unsuitable for IoT applications that require fast response times. Edge computing was designed to address this issue by placing an edge device near the IoT devices, supplying computing resources with lower latency than the cloud. Edge computing is a distributed architecture paradigm that processes locally collected data; its primary aim is to reduce data overload, response time, and bandwidth consumption in IoT systems. However, when the edge server is flooded with requests, it cannot complete all of the tasks offloaded from the devices in time. In such cases, the edge server assigns some of the requested tasks to the cloud server, using the more powerful cloud resources to better facilitate the offloading process.
Managing task offloading is one of the most significant challenges in edge computing-based IoT systems, as it requires optimizing communication and resource allocation jointly with the edge devices. The primary purpose of this study is to reduce the latency and energy consumption of tasks in IoT edge-cloud systems. The offloading problem is formulated as a Markov Decision Process (MDP). Deep Reinforcement Learning (DRL) allows edge systems to handle task offloading more effectively because of its efficiency in exploiting massive amounts of distributed data. Offloading can be managed by reducing link cost and energy consumption, guaranteeing higher bandwidth, and enforcing decentralization.
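To make the formulation concrete, the minimal sketch below shows how such an offloading MDP can reward an agent by trading task latency against energy consumption. It is an illustration only: the three placements, the toy cost model, and the weight w are assumptions for this example, not the exact formulation used in the thesis.

```python
# Illustrative sketch only (not the thesis formulation): the reward of an
# offloading MDP that trades task latency against energy consumption.
# The three placements, the toy cost model, and the weight `w` are assumptions.

ACTIONS = ("local", "edge", "cloud")

def cost(task_cycles, data_bits, action):
    """Toy latency/energy estimate for executing one task at a given placement."""
    cpu_hz   = {"local": 1e9,  "edge": 5e9, "cloud": 2e10}[action]   # processing speed
    link_bps = {"local": None, "edge": 1e7, "cloud": 2e6}[action]    # uplink rate
    tx_time  = 0.0 if link_bps is None else data_bits / link_bps     # transmission delay
    latency  = tx_time + task_cycles / cpu_hz                        # total completion time
    energy   = 1e-9 * task_cycles if action == "local" else 1e-7 * data_bits / 8
    return latency, energy

def reward(latency_s, energy_j, w=0.5):
    """Negative weighted cost: the DRL agent learns a policy that maximizes this."""
    return -(w * latency_s + (1.0 - w) * energy_j)

# Example: reward for offloading a 1e8-cycle task with 1 Mbit of input data to the edge.
lat, en = cost(task_cycles=1e8, data_bits=1e6, action="edge")
print(reward(lat, en))
```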
This study proposes an effective dynamic task-offloading decision mechanism, deployed at the network edge, that determines where the tasks of IoT devices should be offloaded by considering multiple factors affecting their completion. To give the task-offloading system more flexibility, Unmanned Aerial Vehicle (UAV) technology is utilized by deploying the edge server on a UAV so that it can serve mobile IoT devices in different areas. Additionally, the cooperation of edge and fog computing with cloud computing is investigated to meet high-demand requirements and optimize network resources in complex computational offloading environments. Simulation results show that the proposed dynamic offloading mechanism improves task completion time compared to naïve techniques.
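For comparison with such a learned policy, the following sketch shows a naïve, hand-crafted offloading rule of the kind the proposed mechanism is evaluated against; the placement estimates and the edge_load congestion factor are hypothetical.

```python
# Illustrative sketch only: a naïve, hand-crafted offloading rule used here as a
# point of comparison. Given per-placement estimates of latency and energy, it
# greedily picks the cheapest placement; `edge_load` is a hypothetical congestion
# factor for a busy (possibly UAV-mounted) edge server.

def choose_placement(estimates, edge_load=0.0, w=0.5):
    """estimates maps 'local'/'edge'/'cloud' to an (expected latency, energy) pair."""
    best_action, best_cost = None, float("inf")
    for action, (latency, energy) in estimates.items():
        if action == "edge":
            latency *= 1.0 + edge_load            # queueing penalty when the edge is busy
        c = w * latency + (1.0 - w) * energy      # weighted latency/energy cost
        if c < best_cost:
            best_action, best_cost = action, c
    return best_action

# Example with made-up estimates: a heavily loaded edge server loses to the cloud.
print(choose_placement(
    {"local": (0.50, 0.10), "edge": (0.12, 0.02), "cloud": (0.15, 0.03)},
    edge_load=0.9,
))
```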
This thesis is divided into seven chapters as listed below:
Chapter 1: Presents a general introduction to the thesis scope and objectives and the main contributions of this study.
Chapter 2: Sheds light on deep learning and its different models, and highlights the advantages of using deep learning algorithms in task offloading.
Chapter 3: Provides a literature review of related research on offloading issues.
Chapter 4: Describes a deep learning-based offloading model for multi-UAV-aided mobile edge computing; the architecture and algorithm are explained in detail.
Chapter 5: Elaborates on a dual-critic deep reinforcement learning approach for task offloading in edge-fog-cloud environments (a minimal illustrative sketch of the dual-critic idea follows this chapter list).
Chapter 6: Presents the experimental results of this study to evaluate the proposed model.
Chapter 7: Summarizes the conducted work and discusses future work.
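For readers unfamiliar with the dual-critic idea mentioned for Chapter 5, the toy sketch below illustrates its core principle, maintaining two critics and bootstrapping from the more pessimistic (minimum) of their estimates, as in clipped double-Q learning. It is not the thesis architecture: the tabular critics, actions, and rewards here are hypothetical stand-ins for the deep networks and environment used in the study.

```python
# Illustrative sketch only: the dual-critic principle shown on a tiny
# single-state problem. Two critics are kept, and the learning target
# bootstraps from the minimum of their estimates to curb over-estimation.
import random

ACTIONS = ("local", "edge", "cloud")                       # hypothetical offloading choices
q1 = {a: random.uniform(-0.01, 0.01) for a in ACTIONS}     # critic 1 (tabular stand-in)
q2 = {a: random.uniform(-0.01, 0.01) for a in ACTIONS}     # critic 2 (tabular stand-in)
alpha, gamma = 0.1, 0.9

def dual_critic_update(action, reward_value):
    """One update step: both critics move toward a shared pessimistic target."""
    greedy = max(ACTIONS, key=lambda a: q1[a] + q2[a])            # action both critics favour
    target = reward_value + gamma * min(q1[greedy], q2[greedy])   # clipped (min) estimate
    for q in (q1, q2):
        q[action] += alpha * (target - q[action])

for _ in range(2000):
    a = random.choice(ACTIONS)
    # Made-up noisy rewards: offloading to the edge is best on average here.
    r = {"local": -0.3, "edge": -0.1, "cloud": -0.2}[a] + random.gauss(0.0, 0.05)
    dual_critic_update(a, r)

print(max(ACTIONS, key=lambda a: q1[a]))   # typically converges to 'edge' in this toy setup
```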