
Computer Engineering ›› 2025, Vol. 51 ›› Issue (1): 156-163. doi: 10.19678/j.issn.1000-3428.0068486

• Cyberspace Security •

Differential Cryptanalysis Based on Transformer Model and Attention Mechanism

XIAO Chaoen*, LI Zifan, ZHANG Lei, WANG Jianxin, QIAN Siyuan

  1. Department of Electronics and Communications Engineering, Beijing Electronic Science and Technology Institute, Beijing 100071, China
  • Received: 2023-09-28  Online: 2025-01-15  Published: 2025-01-18
  • Contact: XIAO Chaoen
  • Supported by: Fundamental Research Funds for the Central Universities (3282024009)

Abstract:

In differential analysis-based cryptographic attacks, Bayesian optimization is typically used to verify whether partially decrypted data exhibit differential characteristics. The prevailing approach is to train a differential distinguisher using deep learning; however, as the number of encryption rounds increases, the accuracy of the differential characteristics tends to decrease linearly. Therefore, a new differential characteristic discrimination method is proposed that combines the attention mechanism with side-channel analysis. Based on the difference relationships among multiple rounds of ciphertext, a differential distinguisher for the SPECK32/64 algorithm is trained on a Transformer model. For key recovery, a new attack scheme is designed that exploits the property that the previous round's ciphertext pair has the greatest influence on the ciphertext pair to be distinguished. In the key recovery attack on the SPECK32/64 algorithm, 26 chosen plaintext-ciphertext pairs are used, and with the help of the 20th-round ciphertext pairs the 65 536 candidate keys for the 22nd round are narrowed down to no more than 17, completing the recovery of the subkeys of the last two rounds. The experimental results show that this method achieves an attack success rate of 90% and effectively addresses the difficulty of recognizing ciphertext differential features caused by the increase in the number of encryption rounds.
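
The abstract does not specify the authors' network architecture, data format, or training setup. As a rough illustration of the general approach, the sketch below generates SPECK32/64 ciphertext pairs from plaintexts with a fixed input difference (assumed here to be (0x0040, 0x0000), a common choice in related work) and trains a small Transformer-encoder distinguisher in PyTorch to separate them from random pairs; the function names, the tokenization, and the hyperparameters are illustrative assumptions rather than details from the paper.

```python
# Minimal, illustrative sketch (not the authors' implementation): SPECK32/64
# encryption, Gohr-style pair generation, and a small Transformer distinguisher.
import numpy as np
import torch
import torch.nn as nn

MASK16 = 0xFFFF

def rol(x, r):  # 16-bit rotate left
    return ((x << r) | (x >> (16 - r))) & MASK16

def ror(x, r):  # 16-bit rotate right
    return ((x >> r) | (x << (16 - r))) & MASK16

def speck_key_schedule(key, rounds=22):
    """Expand a 64-bit key (four 16-bit words, most significant first)."""
    l, k = [key[2], key[1], key[0]], [key[3]]
    for i in range(rounds - 1):
        l.append(((k[i] + ror(l[i], 7)) & MASK16) ^ i)
        k.append(rol(k[i], 2) ^ l[i + 3])
    return k

def speck_encrypt(x, y, round_keys):
    """Encrypt one 32-bit block, given as two 16-bit words (x, y)."""
    for rk in round_keys:
        x = ((ror(x, 7) + y) & MASK16) ^ rk
        y = rol(y, 2) ^ x
    return x, y

def make_pairs(n, rounds, diff=(0x0040, 0x0000), seed=0):
    """Label 1: ciphertexts of plaintext pairs with the fixed input difference.
    Label 0: ciphertexts of unrelated random plaintext pairs."""
    rng = np.random.default_rng(seed)
    X = np.zeros((n, 4), dtype=np.int64)
    labels = rng.integers(0, 2, size=n)
    for i in range(n):
        rk = speck_key_schedule(list(rng.integers(0, 1 << 16, size=4)), rounds)
        p0 = rng.integers(0, 1 << 16, size=2)
        if labels[i]:  # real pair: second plaintext differs by the chosen difference
            p1 = (p0[0] ^ diff[0], p0[1] ^ diff[1])
        else:          # random pair: unrelated second plaintext
            p1 = rng.integers(0, 1 << 16, size=2)
        X[i, :2] = speck_encrypt(int(p0[0]), int(p0[1]), rk)
        X[i, 2:] = speck_encrypt(int(p1[0]), int(p1[1]), rk)
    return torch.tensor(X), torch.tensor(labels, dtype=torch.float32)

class TransformerDistinguisher(nn.Module):
    """Treats the four 16-bit ciphertext words (c0x, c0y, c1x, c1y) as a
    length-4 token sequence and classifies real vs. random pairs."""
    def __init__(self, d_model=64, nhead=4, num_layers=2):
        super().__init__()
        self.embed = nn.Embedding(1 << 16, d_model)           # one token per 16-bit word
        self.pos = nn.Parameter(torch.zeros(1, 4, d_model))   # learned positional encoding
        layer = nn.TransformerEncoderLayer(d_model, nhead, 4 * d_model,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers)
        self.head = nn.Linear(d_model, 1)

    def forward(self, tokens):                    # tokens: (batch, 4) int64
        h = self.encoder(self.embed(tokens) + self.pos)
        return self.head(h.mean(dim=1)).squeeze(-1)   # logit for "real pair"

if __name__ == "__main__":
    # Toy full-batch training loop; a real experiment would use far more data,
    # mini-batching, more epochs, and more encryption rounds.
    X, y = make_pairs(5000, rounds=5)
    model = TransformerDistinguisher()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.BCEWithLogitsLoss()
    for epoch in range(10):
        logits = model(X)
        loss = loss_fn(logits, y)
        opt.zero_grad()
        loss.backward()
        opt.step()
        acc = ((logits > 0).float() == y).float().mean().item()
        print(f"epoch {epoch}: loss={loss.item():.4f}  train acc={acc:.3f}")
```

Embedding each 16-bit ciphertext word as a single token is only one plausible tokenization; bit-level tokens, or feeding ciphertext pairs from several consecutive rounds to match the multi-round difference relationship described in the abstract, would be equally reasonable variations.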

Key words: Transformer model, attention mechanism, differential distinguisher, SPECK32/64 algorithm, key recovery attack
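
For the key recovery step, the abstract states that the 65 536 candidate subkeys of the last round are reduced to no more than 17. The sketch below, which reuses rol, ror, MASK16, and a trained TransformerDistinguisher from the sketch above, shows a brute-force version of this idea: each candidate subkey is used to undo the last round, and the candidates whose decrypted pairs receive the highest average distinguisher score are kept. The simple mean-score ranking and the keep=17 cutoff are illustrative simplifications, not the paper's exact procedure.

```python
def speck_decrypt_one_round(x, y, rk):
    """Invert one SPECK32/64 round under a candidate round key rk."""
    y = ror(y ^ x, 2)
    x = rol(((x ^ rk) - y) & MASK16, 7)
    return x, y

def rank_last_round_keys(model, ct_pairs, keep=17):
    """ct_pairs: (n, 4) int64 tensor of ciphertext words (c0x, c0y, c1x, c1y).
    Scores every 16-bit candidate subkey by the mean distinguisher output on
    the one-round-decrypted pairs and returns the `keep` best candidates.
    Note: the plain Python loop over all 65 536 candidates is only illustrative."""
    scores = torch.zeros(1 << 16)
    model.eval()
    with torch.no_grad():
        for rk in range(1 << 16):
            dec = []
            for c0x, c0y, c1x, c1y in ct_pairs.tolist():
                d0 = speck_decrypt_one_round(c0x, c0y, rk)
                d1 = speck_decrypt_one_round(c1x, c1y, rk)
                dec.append([*d0, *d1])
            scores[rk] = torch.sigmoid(model(torch.tensor(dec))).mean()
    return torch.topk(scores, keep).indices   # surviving candidate subkeys
```

A second pass of the same ranking, applied after stripping the last round with a recovered subkey, would extend this to the second-to-last round key, in line with the two rounds of subkeys recovered in the abstract.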