[1] GADASIN D V, SHVEDOV A V, KOLTSOVA A V. Cluster model for edge computing[C]//Proceedings of 2020 International Conference on Engineering Management of Communication and Technology. Washington D. C., USA: IEEE Press, 2020: 1-4.
[2] LEE K, LAM M, RAMCHANDRAN K. Speeding up distributed machine learning using codes[J]. IEEE Transactions on Information Theory, 2018, 64(3): 1514-1529.
[3] BHATHAL G S, DHIMAN A S. Big data solution: improvised distributions framework of Hadoop[C]//Proceedings of the 2nd International Conference on Intelligent Computing and Control Systems. Washington D. C., USA: IEEE Press, 2018: 35-38.
[4] KRISHNA T R, RAGUNATH T, BATTULA S K. Performance evaluation of read and write operations in Hadoop distributed file system[C]//Proceedings of the 6th International Symposium on Parallel Architectures, Algorithms and Programming. Washington D. C., USA: IEEE Press, 2014: 110-113.
[5] MERLA P, LIANG Y. Data analysis using Hadoop MapReduce environment[C]//Proceedings of 2017 IEEE International Conference on Big Data. Washington D. C., USA: IEEE Press, 2017: 4783-4785.
[6] ZHAO Y, WU J, LIU C. A data aware caching for big-data applications using the MapReduce framework[J]. Tsinghua Science and Technology, 2014, 19(1): 39-50. doi: 10.1109/TST.2014.6733207
[7] YANG G. The application of MapReduce in the cloud computing[C]//Proceedings of the 2nd International Symposium on Intelligence Information Processing and Trusted Computing. Washington D. C., USA: IEEE Press, 2011: 154-156.
[8] CHEN L, ZHANG X, SUN L. Image parallel processing by using MapReduce[C]//Proceedings of 2021 International Conference on Information Science, Parallel and Distributed Systems. Washington D. C., USA: IEEE Press, 2021: 246-250.
[9] DEAN J, BARROSO L A. The tail at scale[J]. Communications of the ACM, 2013, 56(2): 74-80. doi: 10.1145/2408776.2408794
[10] 杨逍. 基于编码的分布式计算理论与技术[D]. 南京: 东南大学, 2020.
YANG X. Theory and technology of coding-based distributed computing[D]. Nanjing: Southeast University, 2020. (in Chinese)
[11] ICHIMURA S, NAGAI T. Threaded accurate matrix-matrix multiplications with sparse matrix-vector multiplications[C]//Proceedings of 2018 IEEE International Parallel and Distributed Processing Symposium. Washington D. C., USA: IEEE Press, 2018: 1093-1102.
[12] CHANG W T, TANDON R. Random sampling for distributed coded matrix multiplication[C]//Proceedings of 2019 IEEE International Conference on Acoustics, Speech and Signal Processing. Washington D. C., USA: IEEE Press, 2019: 8187-8191.
[13] LUONG T T, CUONG N N, DUNG L T. The preservation of the coefficient of fixed points of an MDS matrix under direct exponent transformation[C]//Proceedings of 2015 International Conference on Advanced Technologies for Communications. Washington D. C., USA: IEEE Press, 2015: 111-116.
[14] 季忠铭. 边缘分布式场景中多点协同计算的任务时延优化[D]. 合肥: 中国科学技术大学, 2022.
JI Z M. Task delay optimization of multi-point collaborative computing in edge distributed scenarios[D]. Hefei: University of Science and Technology of China, 2022. (in Chinese)
[15] 王艳, 李念爽. 编码技术改进大规模分布式机器学习性能综述[J]. 计算机研究与发展, 2020, 57(3): 542-561.
WANG Y, LI N S. Survey of coding technology for improving the performance of large-scale distributed machine learning[J]. Journal of Computer Research and Development, 2020, 57(3): 542-561. (in Chinese)
[16] LEE K, SUH C, RAMCHANDRAN K. High-dimensional coded matrix multiplication[C]//Proceedings of 2017 IEEE International Symposium on Information Theory. Washington D. C., USA: IEEE Press, 2017: 2418-2422.
[17] 苑燕飞. 基于编码技术的分布式计算方法研究[D]. 西安: 西安电子科技大学, 2021.
YUAN Y F. Research on distributed computing methods based on encoding technology[D]. Xi'an: Xidian University, 2021. (in Chinese)
[18] YU Q, MADDAH-ALI M A, AVESTIMEHR A S. Polynomial codes: an optimal design for high-dimensional coded matrix multiplication[C]//Proceedings of the 31st International Conference on Neural Information Processing Systems. Red Hook, USA: Curran Associates, 2017.
[19] JEONG H, DEVULAPALLI A, CALMON F P. ε-approximate coded matrix multiplication is nearly twice as efficient as exact multiplication[J]. IEEE Journal on Selected Areas in Information Theory, 2021, 2(3): 845-854. doi: 10.1109/JSAIT.2021.3099811
[20] KUNG H T. Fast evaluation and interpolation[R]. Pittsburgh, USA: Carnegie Mellon University, Department of Computer Science, 1973.
[21] DUTTA S, FAHIM M, HADDADPOUR F, et al. On the optimal recovery threshold of coded matrix multiplication[J]. IEEE Transactions on Information Theory, 2020, 66(1): 278-301. doi: 10.1109/TIT.2019.2929328
[22] LIU N, LI K, TAO M. Code design and latency analysis of distributed matrix multiplication with straggling servers in fading channels[J]. China Communications, 2021, 18(10): 15-24. doi: 10.23919/JCC.2021.10.002
[23] LUBY M. LT codes[C]//Proceedings of the 43rd Annual IEEE Symposium on Foundations of Computer Science. Washington D. C., USA: IEEE Press, 2002: 271-282.
[24] 孟云霄, 牛芳琳. 关于信源原始码长的LT码度分布设计[J]. 信息通信, 2018, 40(7): 4-9.
MENG Y X, NIU F L. Design of LT code degree distribution based on original source code length[J]. Information & Communications, 2018, 40(7): 4-9. (in Chinese)
[25] PRADHAN A K, HEIDARZADEH A, NARAYANAN K R. Factored LT and factored raptor codes for large-scale distributed matrix multiplication[J]. IEEE Journal on Selected Areas in Information Theory, 2021, 2(3): 893-906.
[26] SEVERINSON A, GRAELL I AMAT A, ROSNES E. Block-diagonal and LT codes for distributed computing with straggling servers[J]. IEEE Transactions on Communications, 2019, 67(3): 1739-1753.