[1] D V Gadasin, A V Shvedov, A V Koltsova. Cluster Model for Edge Computing[C]//2020 International Conference on Engineering Management of Communication and Technology, Austria: IEEE Press, 2020: 1-4.
[2] K Lee, M Lam, K Ramchandran. Speeding Up Distributed Machine Learning Using Codes[J]. IEEE Transactions on Information Theory, 2018, 64(3): 1514-1529.
[3] G S Bhathal, A S Dhiman. Big Data Solution: Improvised Distributions Framework of Hadoop[C]//2018 Second International Conference on Intelligent Computing and Control Systems, India: IEEE Press, 2018: 35-38.
[4] T R Krishna, T Ragunath, S K Battula. Performance Evaluation of Read and Write Operations in Hadoop Distributed File System[C]//Sixth International Symposium on Parallel Architectures, Algorithms and Programming, China, 2014: 110-113.
[5] P Merla, Y Liang. Data analysis using hadoop MapReduce environment[C]//2017 IEEE International Conference on Big Data, USA: IEEE Press, 2017: 4783-4785.
[6] Y Zhao, J Wu, C Liu. A data aware caching for big-data applications using the MapReduce framework[J]. Tsinghua Science and Technology, 2014, 19(1): 39-50.
[7] G Yang. The Application of MapReduce in the Cloud Computing[C]//2011 2nd International Symposium on Intelligence Information Processing and Trusted Computing, China, 2011: 154-156.
[8] L Chen, X Zhang, L Sun. Image Parallel Processing by Using MapReduce[C]//2021 International Conference on Information Science, Parallel and Distributed Systems, China: IEEE Press, 2021: 246-250.
[9] J Dean, L A Barroso. The tail at scale[J]. Communications of the ACM, 2013, 56(2): 74-80.
[10] Yang Xiao. Coding-Based Distributed Computing Theory and Technology[D]. Southeast University, 2020. DOI: 10.27014/d.cnki.gdnau.2020.000457. (in Chinese)
[11] S Ichimura, T Nagai. Threaded Accurate Matrix-Matrix Multiplications with Sparse Matrix-Vector Multiplications[C]//2018 IEEE International Parallel and Distributed Processing Symposium Workshops, Canada, 2018: 1093-1102.
[12] W T Chang, R Tandon. Random Sampling for Distributed Coded Matrix Multiplication[C]//ICASSP 2019 International Conference on Acoustics, Speech and Signal Processing, UK: IEEE Press, 2019: 8187-8191.
[13] T T Luong, N N Cuong, L T Dung. The preservation of the coefficient of fixed points of an MDS matrix under direct exponent transformation[C]//2015 International Conference on Advanced Technologies for Communications, Vietnam, 2015: 111-116.
[14] Ji Zhongming. Task Delay Optimization of Multi-Point Collaborative Computing in Edge Distributed Scenarios[D]. University of Science and Technology of China, 2022. (in Chinese)
[15] Wang Yan, Li Nianshuang. Survey of Coding Technology for Improving the Performance of Large-Scale Distributed Machine Learning[J]. Journal of Computer Research and Development, 2020, 57(3): 542-561. (in Chinese)
[16] K Lee, C Suh, K Ramchandran. High-dimensional coded matrix multiplication[C]//2017 IEEE International Symposium on Information Theory, 2017: 2418-2422.
[17] Yuan Yanfei. Research on Distributed Computing Methods Based on Coding Technology[D]. Xidian University, 2021. DOI: 10.27389/d.cnki.gxadu.2021.000877. (in Chinese)
[18] Q Yu, M A Maddah-Ali, A S Avestimehr. Polynomial Codes: an Optimal Design for High-Dimensional Coded Matrix Multiplication[J]. IEEE Transactions on Information Theory, 2017, 6(8): 82-99.
[19] H Jeong, A Devulapalli, F P Calmon. ϵ-Approximate Coded Matrix Multiplication Is Nearly Twice as Efficient as Exact Multiplication[J]. IEEE Journal on Selected Areas in Information Theory, 2021, 2(3): 845-854.
[20] H T Kung. Fast Evaluation and Interpolation[R]. Carnegie Mellon University, Department of Computer Science, 1973.
[21] S Dutta, M Fahim, F Haddadpour, et al. On the Optimal Recovery Threshold of Coded Matrix Multiplication[J]. IEEE Transactions on Information Theory, 2020, 66(1): 278-301.
[22] N Liu, K Li, M Tao. Code Design and Latency Analysis of Distributed Matrix Multiplication with Straggling Servers in Fading Channels[J]. China Communications, 2021, 18(10): 15-24.
[23] M Luby. LT codes[C]//Proceedings of the 43rd Annual IEEE Symposium on Foundations of Computer Science, 2002: 271-282.
[24] Meng Yunxiao, Niu Fanglin. Design of LT Code Degree Distribution Based on Source Code Length[J]. Information & Communications, 2018, 40(7): 4-9. (in Chinese)
[25] A K Pradhan, A Heidarzadeh, K R Narayanan. Factored LT and Factored Raptor Codes for Large-Scale Distributed Matrix Multiplication[J]. IEEE Journal on Selected Areas in Information Theory, 2021, 2(3): 893-906.
[26] A Severinson, A Graell i Amat. Block-Diagonal and LT Codes for Distributed Computing With Straggling Servers[J]. IEEE Transactions on Communications, 2019, 67(3): 1739-1753.