[1] 李亚超, 熊德意, 张民. 神经机器翻译综述[J]. 计算机学报, 2018, 41(12): 2734–2755.
Li Y C, Xiong D Y, Zhang M. A Survey of Neural Machine Translation[J]. Chinese Journal of Computers, 2018, 41(12): 2734–2755. (in Chinese)
[2] Feng Q, He D, Liu Z, et al. SecureNLP: A system for multi-party privacy-preserving natural language processing[J]. IEEE Transactions on Information Forensics and Security, 2020, 15: 3709–3721.
[3] Luo J, Zhang Y, Zhang Z, et al. SecFormer: Fast and Accurate Privacy-Preserving Inference for Transformer Models via SMPC[C]//Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (ACL 2024). Bangkok, Thailand: Association for Computational Linguistics, 2024: 13333–13348.
[4] Chen A. Privacy-Preserving Natural Language Dataset Generation[D]. Cambridge: Massachusetts Institute of Technology, 2023.
[5] Raeini M. Privacy-preserving large language models (PPLLMs)[J/OL]. Available at SSRN 4512071, 2023: 1–12 [2025-04-03]. https://ssrn.com/abstract=4512071.
[6] Mahendran D, Luo C, Mcinnes B T. Privacy-preservation in the context of natural language processing[J]. IEEE Access, 2021, 9: 147600–147612.
[7] Liu Y, Su Q. PPTIF: Privacy-Preserving Transformer Inference Framework for Language Translation[J]. IEEE Access, 2024.
[8] Xue L, Constant N, Roberts A, et al. mT5: A massively multilingual pre-trained text-to-text transformer[C]//Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics (ACL 2021). Stroudsburg, PA, USA: Association for Computational Linguistics, 2021: 1116–1129.
[9] Zhang X, Zhang Z, Wang L, et al. Differentially private machine translation[C]//Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing. Stroudsburg, PA, USA: Association for Computational Linguistics, 2020: 3006–3015.
[10] Liu Y, Zhang X, Wang L, et al. Adaptive privacy budget for sequence-to-sequence models in machine translation[C]//Proceedings of the 2023 Annual Conference on Neural Information Processing Systems. Red Hook, NY, USA: Curran Associates, 2023: 1124–1135.
[11] Zhang M, Xu J. Byte-level subword embeddings for efficient privacy-preserving NLP[C]//Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing (EMNLP 2024). Stroudsburg, PA, USA: Association for Computational Linguistics, 2024: 567–579.
[12] Vu D N L, Igamberdiev T, Habernal I. Granularity is crucial when applying differential privacy to text: An investigation for neural machine translation[C]//Findings of the Association for Computational Linguistics: EMNLP 2024. Stroudsburg, PA, USA: Association for Computational Linguistics, 2024: 507–527.
[13] Wang T, Zhang L, Liu Y. SLDP-FT: A selective privacy-preserving framework for fine-tuning large language models[C]//Proceedings of the 2024 International Conference on Machine Learning and Artificial Intelligence (MLAI 2024). Red Hook, NY, USA: Curran Associates, 2024: 1156–1170.
[14] Sutskever I, Vinyals O, Le Q V. Sequence to sequence learning with neural networks[J]. Advances in Neural Information Processing Systems, 2014, 27: 3104–3112.
[15] Dwork C, McSherry F, Nissim K, et al. Calibrating noise to sensitivity in private data analysis[C]//Proceedings of the Theory of Cryptography Conference. Berlin, Germany: Springer, 2006: 265–284.
[16] Asoodeh S, Diaz M. Convergence of privacy loss in differentially private non-convex optimization[J]. Journal of Machine Learning Research, 2024, 25(123): 1–35.
[17] Dupuy C, Arava R, Gupta R, et al. An efficient DP-SGD mechanism for large scale NLU models[C]//ICASSP 2022 – 2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). Singapore: IEEE, 2022: 4118–4122.
[18] Abadi M, Chu A, Goodfellow I, et al. Deep learning with differential privacy[C]//Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security (CCS). New York, NY, USA: ACM, 2016: 308–318.
[19] Papernot N, Song S, Mironov I, et al. Scalable private learning with PATE[C]//Proceedings of the 6th International Conference on Learning Representations (ICLR). Vancouver, Canada: OpenReview, 2018.
[20] Liu F. Generalized Gaussian mechanism for differential privacy[J]. IEEE Transactions on Knowledge and Data Engineering, 2018, 31(4): 747–756.
[21] Lee J, Kifer D, Roth A, et al. When Are Adaptive Gradient Methods Beneficial for Differentially Private Learning[C]//Proceedings of the 39th International Conference on Machine Learning (ICML). Baltimore, MD, USA: PMLR, 2022: 12665–12684.
[22] Devlin J, Chang M W, Lee K, et al. BERT: Pre-training of deep bidirectional transformers for language understanding[C]//Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT 2019). Stroudsburg, PA, USA: Association for Computational Linguistics, 2019: 4171–4186.
[23] Conneau A, Khandelwal K, Goyal N, et al. Unsupervised cross-lingual representation learning at scale[C]//Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics (ACL 2020). Stroudsburg, PA, USA: Association for Computational Linguistics, 2020: 8440–8451.
[24] Rahimi A, Li Y, Cohn T. Massively multilingual transfer for NER[C]//Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics (ACL 2019). Stroudsburg, PA, USA: Association for Computational Linguistics, 2019: 151–163.
[25] Arivazhagan N, Bapna A, Firat O, et al. Massively multilingual neural machine translation in the wild: Findings and challenges[C]//Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics (ACL 2019). Stroudsburg, PA, USA: Association for Computational Linguistics, 2019: 1945–1957.
[26] Hu J, Ruder S, Siddhant A, et al. XTREME: A massively multilingual multi-task benchmark for evaluating cross-lingual generalisation[C]//Proceedings of the 2020 International Conference on Machine Learning (ICML). Vienna, Austria: PMLR, 2020: 4411–4421.
[27] Carlucci A, Vardi M Y. Evaluating membership inference attacks against private machine learning models[C]//Proceedings of the 2020 IEEE/ACM International Symposium on Low Power Electronics and Design (ISLPED). Boston, MA, USA: IEEE, 2020: 1–6.
[28] Shokri R, Stronati M, Song C, et al. Membership inference attacks against machine learning models[C]//Proceedings of the 2017 IEEE Symposium on Security and Privacy (S&P). Los Alamitos, CA, USA: IEEE Computer Society, 2017: 3–18.