[1] 国家电网. 泛在电力物联网白皮书2019(全文)[EB/OL]. [2025-03-23]. http://mp.weixin.qq.com/s?__biz=MzI2NDQ2ODMzMA==&mid=2247484615&idx=2&sn=825adf8d5a714813b4ae424bb6f8f6b1&chksm=eaad66c2dddaefd4317a2fccf681a321f318a077c0eaf604a00803deb491acc4a293d432d091#rd.
STATE GRID CORPORATION OF CHINA. White paper on ubiquitous power Internet of Things 2019 (full text)[EB/OL]. [2025-03-23]. http://mp.weixin.qq.com/s?__biz=MzI2NDQ2ODMzMA==&mid=2247484615&idx=2&sn=825adf8d5a714813b4ae424bb6f8f6b1&chksm=eaad66c2dddaefd4317a2fccf681a321f318a077c0eaf604a00803deb491acc4a293d432d091#rd.
[2] 邓祥力,廖玥琳,朱宏业,等.基于数字孪生模型电流动态
时间规整差异度的变压器早期故障辨识[J].电力系统保
护与控制,2023,51(12):156-167.
DENG X L, LIAO Y L, ZHU H Y, et al. Early fault
identification of transformers based on dynamic time
warping discrepancy of digital twin model currents [J].
Power System Protection and Control, 2023, 51(12):
156-167.
[3] QIN P, FU Y, XIE Y, et al. Multi-agent learning-based
optimal task offloading and UAV trajectory planning for
AGIN-power IoT[J]. IEEE Transactions on
Communications, 2023, 71(7): 4005-4017.
[4] LIAO H, WANG Z, ZHOU Z, et al. Blockchain and
semi-distributed learning-based secure and low-latency
computation offloading in space-air-ground-integrated
power IoT[J]. IEEE Journal of Selected Topics in Signal
Processing, 2022, 16(3): 381-394.
[5] CHEN N, HE J, YIN Z, et al. 6G service-oriented
space-air-ground integrated network: A survey[J].
Chinese Journal of Aeronautics, 2022, 35(9): 1-18.
[6] BEDI G, VENAYAGAMOORTHY G K, SINGH R, et al. Review of Internet of Things (IoT) in electric power and energy systems[J]. IEEE Internet of Things Journal, 2018, 5(2): 847-870.
[7] CHENG Z, LI W M, CHEN N, et al. Deep reinforcement
learning-based joint task and energy offloading in
UAV-aided 6G intelligent edge networks[J]. Computer
Communications, 2022, 192: 234-244.
[8] LIU J, ZHAO X, QIN P, et al. Joint dynamic task
offloading and resource scheduling for WPT enabled
space-air-ground power internet of things[J]. IEEE
Transactions on Network Science and Engineering, 2022,
9(2): 660-677.
[9] SEID A M, LU J, ABISHU H N, et al.
Blockchain-enabled task offloading with energy
harvesting in multi-UAV-assisted IoT networks: a
multi-agent DRL approach[J]. IEEE Journal on Selected
Areas in Communications, 2022, 40(12): 3517-3532.
[10] WU M, SU L, CHEN J, et al. Development and prospect
of wireless power transfer technology used to power
unmanned aerial vehicle[J]. Electronics, 2022, 11(15):
72-97.
[11] LAKEW D, TRAN T, DAO N, CHO S. Intelligent
self-optimization for task offloading in
LEO-MEC-assisted energy-harvesting-UAV systems[J].
IEEE Transactions on Network Science and Engineering,
2024, 11(6): 5135-5148.
[12] LIU Y, HAN F, ZHAO S. Flexible and reliable multiuser
SWIPT IoT network enhanced by UAV-mounted
intelligent reflecting surface[J]. IEEE Transactions on
Reliability, 2022, 71(2): 1092-1103.
[13] PENG H, WANG L C. Energy harvesting reconfigurable
intelligent surface for UAV based on robust deep
reinforcement learning[J]. IEEE Transactions on Wireless
Communications, 2023, 22(10): 6826-6838.
[14] BI S, HUANG L, WANG H, et al. Stable online
computation offloading via Lyapunov-guided deep
reinforcement learning[C]//ICC 2021 - IEEE International
Conference on Communications. Montreal, QC, Canada:
IEEE, 2021: 1-7.
[15] LOWE R, WU Y, TAMAR A, et al. Multi-agent
actor-critic for mixed cooperative-competitive
environments[EB/OL]. arXiv, 2017.
[16] 田霖, 苏智杰, 冯婉媚, 等. 面向多无人机携能网络的
轨迹与资源规划算法[J]. 西安电子科技大学学报, 2021,
48(6): 115-122.
TIAN L, SU Z J, FENG W M, et al. Trajectory and
resource planning algorithm for multi-UAV
energy-carrying networks[J]. Journal of Xidian University,
2021, 48(6): 115-122.
[17] PENG X, HAN Z, XIE W, et al. Deep reinforcement
learning for shared offloading strategy in vehicle edge
computing[J]. IEEE Systems Journal, 2022: 1-12.
[18] MINE H, TABATA Y. On a set of optimal policies in
continuous time Markovian decision problem[J]. Journal
of Mathematical Analysis and Applications, 1971, 34(1):
53-66.
[19] OUAMRI M A, BARB G, SINGH D, et al. Nonlinear
energy-harvesting for D2D networks underlaying UAV
with SWIPT using MADQN[J]. IEEE Communications
Letters, 2023, 27(7): 1804-1808.
[20] LI X, WEI X, CHEN S, et al. Multi-agent deep
reinforcement learning based resource management in
SWIPT enabled cellular networks with H2H/M2M
co-existence[J]. Ad Hoc Networks, 2023, 149(3):
243-256.
[21] Data source: physical flows on the Belgian grid[EB/OL].
(2022-01-01) [2023-07-15].
https://www.elia.be/en/grid-data/transmission/physical-flows-on-the-belgian-grid.
[22] HUANG L, BI S, ZHANG Y J A. Deep reinforcement
learning for online computation offloading in wireless
powered mobile-edge computing networks[J]. IEEE
Transactions on Mobile Computing, 2020, 19(11):
2581-2593.
[23] FENG W, TANG J, YU Y, et al. UAV-enabled SWIPT in
IoT networks for emergency communications[J]. IEEE
Wireless Communications, 2020, 27(5): 140-147.
[24] HUANG F, CHEN J, WANG H, et al.
Multiple-UAV-assisted SWIPT in Internet of Things: User
association and power allocation[J]. IEEE Access, 2019,
7: 124244-124255.
[25] XU X, LIU K, DAI P, et al. Joint task offloading and
resource optimization in NOMA-based vehicular edge
computing: A game-theoretic DRL approach[J]. Journal
of Systems Architecture, 2023, 134(1): 102-113.