[1] LI Y C, CAI Q S. The rapid development of machine learning[J]. Computer Science, 1988, 18(1): 30-34. (in Chinese)
[2] KONECNY J, MCMAHAN H B, RAMAGE D, et al. Federated optimization: distributed machine learning for on-device intelligence[EB/OL].[2024-05-15]. http://arxiv.org/abs/1610.02527.
[3] KONECNY J, MCMAHAN H B, YU F X, et al. Federated learning: strategies for improving communication efficiency[EB/OL].[2024-05-15]. http://arxiv.org/abs/1610.05492.
[4] WANG Z B, SONG M K, ZHANG Z F, et al. Beyond inferring class representatives: user-level privacy leakage from federated learning[C]//Proceedings of the IEEE Conference on Computer Communications. Washington D. C., USA: IEEE Press, 2019: 2512-2520.
[5] YIN H X, MALLYA A, VAHDAT A, et al. See through gradients: image batch recovery via GradInversion[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. Washington D. C., USA: IEEE Press, 2021: 16337-16346.
[6] ZHAO B, MOPURI K R, BILEN H. iDLG: improved deep leakage from gradients[EB/OL].[2024-05-15]. http://arxiv.org/abs/2001.02610.
[7] TRAMER F, KURAKIN A, PAPERNOT N, et al. Ensemble adversarial training: attacks and defenses[EB/OL].[2024-05-15]. http://arxiv.org/abs/1705.07204.
[8] ZHANG C T, SU Y T. Copyright protection technology for information products: digital watermarking[J]. Telecommunications Science, 1998, 14(12): 16-18. (in Chinese)
[9] SUN S H, LU Z M. Digital watermarking processing techniques[J]. Acta Electronica Sinica, 2000, 38(8): 85-90. (in Chinese)
[10] UCHIDA Y, NAGAI Y, SAKAZAWA S, et al. Embedding watermarks into deep neural networks[C]//Proceedings of the ACM International Conference on Multimedia Retrieval. New York, USA: ACM Press, 2017: 269-277.
[11] ADI Y, BAUM C, CISSE M, et al. Turning your weakness into a strength: watermarking deep neural networks by backdooring[C]//Proceedings of the 27th USENIX Security Symposium. Berkeley, USA: USENIX Association, 2018: 1615-1631.
[12] FAN L X, NG K W, CHAN C S, et al. DeepIP: deep neural network intellectual property protection with passports[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2021, 58(2): 325-334.
[13] ZHAO Y, PANG T, DU C, et al. A recipe for watermarking diffusion models[EB/OL].[2024-05-15]. http://arxiv.org/abs/2303.10137.
[14] YUAN C S, GUO Q, FU Z J. Copyright protection algorithm for deepfake fingerprint detection models based on differential privacy[J]. Journal on Communications, 2022, 43(9): 181-193. (in Chinese)
[15] LI B W, FAN L X, GU H L, et al. FedIPR: ownership verification for federated deep neural network models[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2023, 45(4): 4521-4536.
[16] SHAO S, YANG W, GU H, et al. FedTracker: furnishing ownership verification and traceability for federated learning model[EB/OL].[2024-05-15]. http://arxiv.org/abs/2211.07160.
[17] AHMADI M, NOURMOHAMMADI R. zkFDL: an efficient and privacy-preserving decentralized federated learning with zero knowledge proof[C]//Proceedings of the 3rd IEEE International Conference on AI in Cybersecurity. Washington D. C., USA: IEEE Press, 2024: 1-10.
[18] YANG W, ZHU G, YIN Y, et al. FedSOV: federated model secure ownership verification with unforgeable signature[EB/OL].[2024-05-15]. http://arxiv.org/abs/2305.06085.
[19] WU T, LI X H, MIAO Y B, et al. CITS-MEW: multi-party entangled watermark in cooperative intelligent transportation system[J]. IEEE Transactions on Intelligent Transportation Systems, 2022, 24(3): 3528-3540.
[20] NIE H W, LU S F. FedCRMW: federated model ownership verification with compression-resistant model watermarking[J]. Expert Systems with Applications, 2024, 249: 123776.
[21] TEKGUL B G A, XIA Y X, MARCHAL S, et al. WAFFLE: watermarking in federated learning[C]//Proceedings of the 40th International Symposium on Reliable Distributed Systems. Washington D. C., USA: IEEE Press, 2021: 310-320.
[22] ZHENG X, DONG Q H, FU A M. WMDefense: using watermark to defense Byzantine attacks in federated learning[C]//Proceedings of the IEEE Conference on Computer Communications Workshops. Washington D. C., USA: IEEE Press, 2022: 1-6.
[23] YANG Q, LIU Y, CHEN T J, et al. Federated machine learning: concept and applications[J]. ACM Transactions on Intelligent Systems and Technology, 2019, 10(2): 1-19.
[24] ZHOU X L. Distributed computer architecture[J]. Journal of Harbin University of Science and Technology, 1980, 2(1): 48-55. (in Chinese)
[25] YUAN Y, TIAN Y, JIANG Q B. Distributed training method for deep neural networks[J]. Application of Electronic Technique, 2023, 49(3): 48-53. (in Chinese)
[26] MCMAHAN B, MOORE E, RAMAGE D, et al. Federated learning of deep networks using model averaging[EB/OL].[2024-05-15]. http://arxiv.org/abs/1602.05629.
[27] WU Y, CAI S, XIAO X, et al. Privacy preserving vertical federated learning for tree-based models[EB/OL].[2024-05-15]. http://arxiv.org/abs/2008.06170.
[28] YANG H W, HE H, ZHANG W Z, et al. FedSteg: a federated transfer learning framework for secure image steganalysis[J]. IEEE Transactions on Network Science and Engineering, 2020, 8(2): 1084-1094.
[29] NASR M, SHOKRI R, HOUMANSADR A. Comprehensive privacy analysis of deep learning[C]//Proceedings of the 2019 IEEE Symposium on Security and Privacy. Washington D. C., USA: IEEE Press, 2019: 433-445.
[30] PAILLIER P. Public-key cryptosystems based on composite degree residuosity classes[M]. Berlin, Germany: Springer, 1999.
[31] DWORK C, MCSHERRY F, NISSIM K, et al. Calibrating noise to sensitivity in private data analysis[M]. Berlin, Germany: Springer, 2006.
[32] CRAMER R, SHOUP V. Universal hash proofs and a paradigm for adaptive chosen ciphertext secure public-key encryption[C]//Proceedings of the International Conference on the Theory and Applications of Cryptographic Techniques. Berlin, Germany: Springer, 2002: 45-64.
[33] MASHHADI P S, NOWACZYK S, PASHAMI S. Parallel orthogonal deep neural network[J]. Neural Networks, 2021, 140: 167-183.
[34] SIMONYAN K, ZISSERMAN A. Very deep convolutional networks for large-scale image recognition[EB/OL].[2024-05-15]. http://arxiv.org/abs/1409.1556.
[35] RUSSAKOVSKY O, DENG J, SU H, et al. ImageNet large scale visual recognition challenge[J]. International Journal of Computer Vision, 2015, 115(3): 211-252.
[36] KRIZHEVSKY A, SUTSKEVER I, HINTON G E. ImageNet classification with deep convolutional neural networks[C]//Proceedings of the Advances in Neural Information Processing Systems. Cambridge, USA: MIT Press, 2012: 25-33.
[37] KRIZHEVSKY A. Learning multiple layers of features from tiny images[D]. Toronto, Canada: University of Toronto, 2009.
[38] MCMAHAN B, MOORE E, RAMAGE D, et al. Communication-efficient learning of deep networks from decentralized data[EB/OL].[2024-05-15]. http://arxiv.org/abs/1602.05629.
[39] HOADLEY B. Asymptotic properties of maximum likelihood estimators for the independent not identically distributed case[J]. The Annals of Mathematical Statistics, 1971, 42(6): 1977-1991.