
Detailed Record

Author: 陳宜良 (Yi-Liang Chen)
Title: 基於強化學習的移動邊緣計算任務卸載於車聯網
Title (English): Reinforcement Learning-based Mobile Edge Computing Task Offloading in Internet of Vehicles
Advisor: 趙涵捷 (Han-Chieh Chao)
Oral Examination Committee: 卓信宏 (Hsin-Hung Cho), 曾繁勛 (Fan-Hsun Tseng), 簡暐哲 (Wei-Che Chien), 黃士耘 (Shih-Yun Huang)
Degree: Master's
Institution: National Dong Hwa University
Department: Department of Electrical Engineering
Student ID: 610923013
Year of Publication: 2023 (ROC year 112)
Graduation Academic Year: 111 (2022-2023)
Language: Chinese
Number of Pages: 44
Keywords: 5G; Internet of Vehicles; task offloading; mobile edge computing (MEC); reinforcement learning
Abstract:
In recent years, communication technology has advanced and the fifth generation of mobile communications (5th Generation Wireless System, 5G) has matured. Its greater bandwidth supports faster transmission, connects more devices, and serves more complex usage environments. The ultra-reliable low-latency communication offered by 5G can provide real-time transmission for the Internet of Vehicles, helping vehicles handle the many situations that arise while driving and even enabling autonomous driving. These applications, which combine video multimedia with a variety of sensors, generate large volumes of computation-intensive, delay-sensitive data that a vehicle's own computing power cannot handle. To satisfy the applications' computing and latency requirements, Mobile Edge Computing (MEC) is an effective solution.

Because on-board computing power is limited, a vehicle cannot process the massive data produced by video analytics in time. Cloud computing has traditionally been used to help, but the transmission delay to the cloud can be too long to meet the latency requirements of vehicular services. The MEC architecture was therefore proposed: servers are deployed at the network edge, close to the vehicles, to provide computing power, low-latency transmission, and data storage. Handling vehicular service requests at the MEC effectively reduces transmission delay while meeting the services' computing demands. However, MEC's computing capacity is still limited compared with the cloud; if every vehicle within coverage sent its service requests to the MEC, the load would degrade the server's effective computing power and violate the services' delay constraints. Task offloading is therefore necessary to keep the MEC from overloading and to reduce task latency. This thesis proposes an offloading method based on reinforcement learning that adapts task assignment to the changing vehicular environment. Each MEC server periodically reports its state to the cloud, and the cloud uses those reports to train and update the reinforcement learning model. A vehicle first sends its task to an MEC server; guided by the model, an overloaded server can forward the task to a neighboring MEC server, or send computation-heavy, delay-tolerant tasks to the cloud, thereby satisfying vehicular service requests while minimizing the overall task delay and energy consumption.
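The abstract states the optimization goal, minimizing overall task delay and energy consumption subject to the services' delay constraints, but this record does not reproduce the formal model (presumably given in Chapter 3, Problem Definition). A plausible weighted-sum formulation, in which every symbol is an assumption introduced here for illustration, would be:

$$\min_{\{a_i\}} \sum_{i=1}^{N} \bigl( w_T\, T_i(a_i) + w_E\, E_i(a_i) \bigr) \quad \text{s.t. } T_i(a_i) \le T_i^{\max},\ i = 1, \dots, N,$$

where $a_i \in \{\text{serving MEC},\ \text{neighboring MEC},\ \text{cloud}\}$ is the offloading decision for task $i$, $T_i$ and $E_i$ are the resulting delay and energy consumption, $T_i^{\max}$ is the task's delay constraint, and $w_T, w_E \ge 0$ weight the two objectives.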
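The abstract describes the control loop (MEC servers report state, the cloud trains the model, overloaded servers redirect tasks) but this record does not name the specific algorithm. The following minimal sketch shows one way such a policy could be learned with tabular Q-learning; the environment model, cost constants, and every function name here are hypothetical illustrations, not the thesis's implementation.

```python
import random
from collections import defaultdict

# Hypothetical sketch: a tabular Q-learning agent decides, per task, whether
# the serving MEC computes it locally, migrates it to a neighboring MEC, or
# sends it to the cloud. All constants below are illustrative assumptions.
ACTIONS = ["local_mec", "neighbor_mec", "cloud"]

def simulate_task(action, mec_load):
    """Return (delay, energy) for one task under a toy cost model."""
    if action == "local_mec":
        return 10 + 40 * mec_load, 2.0   # queueing delay grows with local load
    if action == "neighbor_mec":
        return 20 + 15 * mec_load, 3.0   # extra hop, but less local queueing
    return 60, 5.0                       # cloud: long transmission, ample compute

def discretize(load):
    return min(int(load * 4), 3)         # four coarse load levels as the state

q_table = defaultdict(lambda: [0.0] * len(ACTIONS))
lr, gamma, eps = 0.1, 0.9, 0.2

mec_load = 0.5
for step in range(10_000):
    state = discretize(mec_load)
    # epsilon-greedy action selection
    if random.random() < eps:
        action = random.randrange(len(ACTIONS))
    else:
        action = max(range(len(ACTIONS)), key=lambda i: q_table[state][i])
    delay, energy = simulate_task(ACTIONS[action], mec_load)
    reward = -(delay + 10 * energy)      # jointly penalize delay and energy
    # local computation raises the serving MEC's load; offloading relieves it
    mec_load = min(1.0, mec_load + 0.1) if action == 0 else max(0.0, mec_load - 0.05)
    next_state = discretize(mec_load)
    q_table[state][action] += lr * (
        reward + gamma * max(q_table[next_state]) - q_table[state][action]
    )

for s in range(4):
    best = ACTIONS[max(range(len(ACTIONS)), key=lambda i: q_table[s][i])]
    print(f"load level {s}: best action = {best}")
```

With these toy costs the agent tends to keep tasks on the serving MEC at low load and migrate them to a neighbor as the load rises, mirroring the overload-avoidance behavior the abstract describes.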
Table of Contents:
Chapter 1  Introduction  1
Chapter 2  Background  6
Chapter 3  Problem Definition  14
Chapter 4  Reinforcement-Learning-Based Task Offloading Strategy  20
Chapter 5  Simulation Experiments  25
Chapter 6  Conclusion  34