[1] 3GPP. Evolved Universal Terrestrial Radio Access (E-UTRA) and Evolved Universal Terrestrial Radio Access Network (E-UTRAN); overall description; stage 2 (v14.3.0, release 14)[EB/OL]. [2020-07-01]. https://www.arib.or.jp/IMT-2000/.
[2] BAZZI A, MASINI B M, ZANELLA A, et al. On the performance of IEEE 802.11p and LTE-V2V for the cooperative awareness of connected vehicles[J]. IEEE Transactions on Vehicular Technology, 2017, 66(11): 10419-10432.
[3] MOLINA-MASEGOSA R, GOZALVEZ J. LTE-V for sidelink 5G V2X vehicular communications: a new 5G technology for short-range vehicle-to-everything communications[J]. IEEE Vehicular Technology Magazine, 2017, 12(4): 30-39.
[4] NABIL A, KAUR K, DIETRICH C, et al. Performance analysis of sensing-based semi-persistent scheduling in C-V2X networks[C]//Proceedings of the 88th IEEE Vehicular Technology Conference. Washington D.C., USA: IEEE Press, 2018: 1-5.
[5] KIM J, LEE J, MOON S, et al. A position-based resource allocation scheme for V2V communication[J]. Wireless Personal Communications, 2018, 98(1): 1569-1586.
[6] GONZALEZ-MARTIN M, SEPULCRE M. Analytical models of the performance of C-V2X mode 4 vehicular communications[J]. IEEE Transactions on Vehicular Technology, 2018, 68(2): 1155-1166.
[7] MOLINA-MASEGOSA R, GOZALVEZ J, SEPULCRE M. Configuration of the C-V2X Mode-4 sidelink PC5 interface for vehicular communications[C]//Proceedings of the 14th International Conference on Mobile Ad-Hoc and Sensor Networks. Washington D.C., USA: IEEE Press, 2018: 43-48.
[8] BAZZI A, CECCHINI G, ZANELLA A, et al. Study of the impact of PHY and MAC parameters in 3GPP C-V2V Mode 4[J]. IEEE Access, 2018, 7: 1685-1698.
[9] JUNG S Y, CHEON H R, KIM J H. Reducing consecutive collisions in sensing-based semi-persistent scheduling for cellular-V2X[C]//Proceedings of the 90th IEEE Vehicular Technology Conference. Washington D.C., USA: IEEE Press, 2019: 1-5.
[10] WANG X, BERRY R A, VUKOVIC I, et al. A fixed-point model for semi-persistent scheduling of vehicular safety messages[C]//Proceedings of the 88th IEEE Vehicular Technology Conference. Washington D.C., USA: IEEE Press, 2018: 1-5.
[11] ABANTO-LEON L F, KOPPELAAR A, DE GROOT S H. Enhanced C-V2X Mode-4 subchannel selection[C]//Proceedings of the 88th IEEE Vehicular Technology Conference. Washington D.C., USA: IEEE Press, 2018: 112-121.
[12] YU X, CHEN X D, WANG Z, et al. Resource allocation algorithm of vehicular network based on LTE-V2X[J]. Computer Engineering, 2021, 47(2): 188-193. (in Chinese)
[13] QIAN Z H, TIAN C S, WANG X, et al. Research on channel selection and power control strategy in D2D network[J]. Journal of Electronics and Information Technology, 2019, 41(10): 2287-2293. (in Chinese)
[14] HUANG J, HUANG S, XING C C, et al. Game-theoretic power control mechanisms for device-to-device communications underlaying cellular system[J]. IEEE Transactions on Vehicular Technology, 2018, 67(6): 4890-4900.
[15] XU C B, WU J. Power optimization control scheme based on clustering in ultra-dense networks[J]. Computer Engineering, 2019, 45(1): 55-60. (in Chinese)
[16] YE H, LI G Y, JUANG B H F. Deep reinforcement learning based resource allocation for V2V communications[J]. IEEE Transactions on Vehicular Technology, 2019, 68(4): 3163-3173.
[17] ZHANG X, PENG M, YAN S, et al. Deep reinforcement learning based mode selection and resource allocation for cellular V2X communications[J]. IEEE Internet of Things Journal, 2019, 23(6): 2372-2385.
[18] TOGHI B, SAIFUDDIN M, MAHJOUB H, et al. Multiple access in cellular V2X: performance analysis in highly congested vehicular networks[C]//Proceedings of the 88th IEEE Vehicular Networking Conference. Washington D.C., USA: IEEE Press, 2018: 57-68.
[19] ARULKUMARAN K, DEISENROTH M P, BRUNDAGE M, et al. Deep reinforcement learning: a brief survey[J]. IEEE Signal Processing Magazine, 2017, 34(6): 26-38.
[20] LIU J W, GAO F, LUO X L. Review of deep reinforcement learning based on value function and policy gradient[J]. Chinese Journal of Computers, 2019, 42(6): 1406-1438. (in Chinese)