Computer Engineering

Causal Counterfactual-Based Time-Series Model Adversarial Attack Algorithm

Published: 2026-04-29

Abstract: Deep neural networks have been widely applied to critical time-series tasks such as medical diagnosis, intelligent sensing, and autonomous driving, yet their security vulnerabilities are becoming increasingly apparent. Existing studies show that deep time-series models are also susceptible to adversarial attacks. However, most existing adversarial attack methods for time-series models focus on norm-bounded numerical perturbations and ignore the causal dependencies and dynamic evolution inherent in the data-generating process. As a result, the generated adversarial examples may deviate from feasible system dynamics and lack practicality in real-world scenarios. Generating effective adversarial examples that still adhere to temporal causal dynamics has therefore become an important challenge in time-series adversarial research. To address this challenge, this paper proposes TCADE (Temporal Causal ADversarial Examples), a method that explicitly models the causal structure of time-series data and performs counterfactual reasoning under causal intervention constraints. By formulating an adversarial attack as a feasible intervention on the underlying system, TCADE generates adversarial examples that effectively mislead model predictions while remaining consistent with the system's causal relationships and dynamic evolution. Experimental results demonstrate that TCADE achieves strong attack effectiveness in black-box settings, and that the generated adversarial sequences conform to the underlying causal generation mechanism. This work provides a systematic evaluation of the vulnerability of time-series models under realistic, feasible black-box attacks, and offers practical insights for improving model robustness.
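The abstract does not give TCADE's equations, but the core idea of "attack as feasible intervention" can be illustrated with counterfactual reasoning on a toy structural causal model (SCM). The sketch below is an assumption-laden illustration, not the paper's algorithm: it assumes a linear autoregressive SCM x_t = a * x_{t-1} + u_t, abducts the exogenous noise from an observed series, intervenes on a single noise term, and replays the causal dynamics, so the perturbed sequence still satisfies the system's generative mechanism exactly.

```python
import numpy as np

def simulate(a, u, x0=0.0):
    """Roll out the SCM x_t = a * x_{t-1} + u_t from a noise sequence u."""
    x = [x0]
    for u_t in u:
        x.append(a * x[-1] + u_t)
    return np.array(x[1:])

def counterfactual_attack(x, a, t_star, delta):
    """Counterfactual perturbation: abduct noise, intervene, regenerate.

    Abduction recovers the exogenous terms u_t = x_t - a * x_{t-1};
    the intervention shifts one exogenous term (a feasible perturbation
    of the system's inputs); regeneration replays the same dynamics.
    """
    x_prev = np.concatenate(([0.0], x[:-1]))
    u = x - a * x_prev          # abduction
    u_cf = u.copy()
    u_cf[t_star] += delta       # intervention on one exogenous term
    return simulate(a, u_cf)    # counterfactual prediction

rng = np.random.default_rng(0)
a = 0.8
u = rng.normal(size=20)
x = simulate(a, u)
x_adv = counterfactual_attack(x, a, t_star=5, delta=2.0)

# The attacked series still obeys x_t = a * x_{t-1} + u_t: its recovered
# noise matches the original everywhere except the intervened step,
# and the history before the intervention is untouched.
resid = x_adv[1:] - a * x_adv[:-1]
print(np.allclose(x_adv[:5], x[:5]))   # → True
print(np.allclose(resid[6:], u[7:]))   # → True
```

In contrast, an unconstrained norm-bounded perturbation added directly to x would generally violate the recurrence x_t = a * x_{t-1} + u_t's noise statistics, which is the infeasibility the abstract argues against.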
