C.-Y. Wei, A. C.-S. Huang, C.-Y. Chen and J.-Y. Chen, "QoS-Aware Hybrid Scheduling for Geographical Zone-Based Resource Allocation in Cellular Vehicle-to-Vehicle Communications," in IEEE Communications Letters, vol. 22, no. 3, pp. 610-613, March 2018.
H. Ye, G. Y. Li and B.-H. F. Juang, "Deep Reinforcement Learning Based Resource Allocation for V2V Communications," in IEEE Transactions on Vehicular Technology, vol. 68, no. 4, pp. 3163-3173, April 2019.
Y. Hou, X. Wu, X. Tang, X. Qin and M. Zhou, "Radio Resource Allocation and Power Control Scheme in V2V Communications Network," in IEEE Access, vol. 9, pp. 34529-34540, 2021.
P. Qin, Y. Fu, X. Feng, X. Zhao, S. Wang and Z. Zhou, "Energy-Efficient Resource Allocation for Parked-Cars-Based Cellular-V2V Heterogeneous Networks," in IEEE Internet of Things Journal, vol. 9, no. 4, pp. 3046-3061, Feb. 2022.
3rd Generation Partnership Project, "Release 14 Description; Summary of Rel-14 Work Items (Release 14)," 3GPP, Tech. Rep. TR 21.914, May 2018.
M. H. C. Garcia et al., "A Tutorial on 5G NR V2X Communications," in IEEE Communications Surveys & Tutorials, vol. 23, no. 3, pp. 1972-2026, Third Quarter 2021, doi: 10.1109/COMST.2021.3057017.
A. Sabharwal, P. Schniter, D. Guo, D. W. Bliss, S. Rangarajan and R. Wichman, "In-Band Full-Duplex Wireless: Challenges and Opportunities," in IEEE Journal on Selected Areas in Communications, vol. 32, no. 9, pp. 1637-1652, Sept. 2014.
H. Ye and G. Y. Li, "Deep Reinforcement Learning Based Distributed Resource Allocation for V2V Broadcasting," in Proc. 2018 14th International Wireless Communications & Mobile Computing Conference (IWCMC), 2018, pp. 440-445.
H. Ye and G. Y. Li, "Deep Reinforcement Learning for Resource Allocation in V2V Communications," in Proc. 2018 IEEE International Conference on Communications (ICC), 2018, pp. 1-6.
X. Zhang, M. Peng, S. Yan and Y. Sun, "Deep-Reinforcement-Learning-Based Mode Selection and Resource Allocation for Cellular V2X Communications," in IEEE Internet of Things Journal, vol. 7, no. 7, pp. 6380-6391, July 2020.
V. Mnih et al., "Human-level control through deep reinforcement learning," Nature, vol. 518, no. 7540, pp. 529-533, 2015.
W. Du and S. Ding, "A survey on multi-agent deep reinforcement learning: From the perspective of challenges and applications," Artif. Intell. Rev., vol. 54, pp. 3215-3238, Nov. 2020.
Q. Yang, Y. Liu, T. Chen and Y. Tong, "Federated machine learning: Concept and applications," ACM Trans. Intell. Syst. Technol., vol. 10, no. 2, pp. 1-19, Jan. 2019.
B. McMahan, E. Moore, D. Ramage, S. Hampson and B. A. y Arcas, "Communication-efficient learning of deep networks from decentralized data," in Proc. Artif. Intell. Statist. (AISTATS), 2017, pp. 1273-1282.
S. Ek, F. Portet, P. Lalanda and G. Vega, "A Federated Learning Aggregation Algorithm for Pervasive Computing: Evaluation and Comparison," in Proc. 2021 IEEE International Conference on Pervasive Computing and Communications (PerCom), 2021.
H. Wang, M. Yurochkin, Y. Sun, D. Papailiopoulos and Y. Khazaeni, "Federated Learning with Matched Averaging," in Proc. International Conference on Learning Representations (ICLR), 2020.
3rd Generation Partnership Project, "Study on LTE support for Vehicle-to-Everything (V2X) services," 3GPP, Tech. Rep. TR 22.885, Dec. 2015.
3rd Generation Partnership Project, "Study on LTE-based V2X services," 3GPP, Tech. Rep. TR 36.885, Jul. 2016.
3rd Generation Partnership Project, "Study on evaluation methodology of new Vehicle-to-Everything (V2X) use cases for LTE and NR," 3GPP, Tech. Rep. TR 37.885, Jun. 2019.
3rd Generation Partnership Project, "Study on NR Vehicle-to-Everything (V2X)," 3GPP, Tech. Rep. TR 38.885, Mar. 2019.
3rd Generation Partnership Project, "NR; Physical channels and modulation," 3GPP, Tech. Spec. TS 38.211, Jun. 2022.
CELTIC CP5-026 WINNER+ Project, "Final channel models," CELTIC, Deliverable D5.3, Jun. 2010.