[1] EIMER T, BIEDENKAPP A, HUTTER F, et al. Self-paced context evaluation for contextual reinforcement learning[C]// Proceedings of the 38th International Conference on Machine Learning. Cambridge: Proceedings of Machine Learning Research, 2021: 2948-2958.
[2] TIAN Y, PEI K, JANA S, et al. DeepTest: automated testing of deep-neural-network-driven autonomous cars[C]//2018 IEEE/ACM 40th International Conference on Software Engineering (ICSE). Gothenburg: IEEE, 2018.
[3] SZEGEDY C, ZAREMBA W, SUTSKEVER I, et al. Intriguing properties of neural networks[C]//Proceedings of the 2nd International Conference on Learning Representations. Banff: ICLR, 2014.
[4] GOODFELLOW I J, SHLENS J, SZEGEDY C. Explaining and harnessing adversarial examples[C]//Proceedings of the 3rd International Conference on Learning Representations. San Diego: ICLR, 2015.
[5] MADRY A, MAKELOV A, SCHMIDT L, et al. Towards deep learning models resistant to adversarial attacks[C]//Proceedings of the 6th International Conference on Learning Representations. Vancouver: ICLR, 2018.
[6] CARLINI N, WAGNER D. Towards evaluating the robustness of neural networks[C]//2017 IEEE Symposium on Security and Privacy. San Jose: IEEE, 2017: 39-57.
[7] MOOSAVI-DEZFOOLI S M, FAWZI A, FROSSARD P. DeepFool: a simple and accurate method to fool deep neural networks[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Las Vegas: IEEE, 2016: 2574-2582.
[8] CROCE F, HEIN M. Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks[C]//Proceedings of the 37th International Conference on Machine Learning. Cambridge: Proceedings of Machine Learning Research, 2020: 2206-2216.
[9] HOSSEINI H, POOVENDRAN R. Semantic adversarial examples[C]//2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops. Salt Lake City: IEEE, 2018: 1695-1702.
[10] BROWN T B, MANÉ D, ROY A, et al. Adversarial patch[C]//Advances in Neural Information Processing Systems. Vol. 30. Long Beach: Curran Associates, 2017.
[11] CAI R, ZHU Y, QIAO J, et al. Where and how to attack? A causality-inspired recipe for generating counterfactual adversarial examples[C]//Proceedings of the AAAI Conference on Artificial Intelligence. Palo Alto: AAAI Press, 2024: 11132-11140.
[12] LIU S W, LI Z, WANG G M, et al. Adversarial example generation algorithm based on Transformer and GAN[J]. Computer Engineering, 2024, 50(2): 180-187. (in Chinese)
[13] BAI Z X, WANG H J. Adversarial example generation method based on improved genetic algorithm[J]. Computer Engineering, 2023, 49(5): 139-149. (in Chinese)
[14] CHEN X N, HU J M, ZHANG B J, et al. Black-box adversarial attack starting point promotion method based on transferability between models[J]. Computer Engineering, 2021, 47(8): 162-169. (in Chinese)
[15] SCHÖLKOPF B, LOCATELLO F, BAUER S, et al. Toward causal representation learning[J]. Proceedings of the IEEE, 2021, 109(5): 612-634.
[16] ZHENG X, ARAGAM B, RAVIKUMAR P K, et al. DAGs with NO TEARS: continuous optimization for structure learning[C]//Advances in Neural Information Processing Systems 31. Montréal: Curran Associates, 2018: 9472-9483.
[17] YU Y, CHEN J, GAO T, et al. DAG-GNN: DAG structure learning with graph neural networks[C]//Proceedings of the 36th International Conference on Machine Learning. Cambridge: Proceedings of Machine Learning Research, 2019: 7154-7163.
[18] CAI R, QIAO J, ZHANG Z, et al. SELF: structural equational likelihood framework for causal discovery[C]//Proceedings of the AAAI Conference on Artificial Intelligence. Palo Alto: AAAI Press, 2018: 159-166.
[19] YANG M, LIU F, CHEN Z, et al. CausalVAE: disentangled representation learning via neural structural causal models[C]//2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Nashville: IEEE, 2021: 9588-9597.
[20] WANG T, ZHOU Z. Active causal effect identification with expert knowledge[J]. Scientia Sinica Informationis, 2023, 53(12): 2341-2356.
[21] ZHAO H. On learning invariant representations for domain adaptation[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2019, 41(12): 3001-3016.
[22] BAREINBOIM E, FORNEY A, PEARL J. Bandits with unobserved confounders: a causal approach[C]//Advances in Neural Information Processing Systems 28 (NeurIPS 2015). Montreal: Curran Associates, 2015: 1342-1350.
[23] MUEEN A, KEOGH E, YOUNG N. Logical-shapelets: an expressive primitive for time series classification[C]//Proceedings of the 17th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. San Diego: ACM, 2011: 1154-1162.
[24] BLANKERTZ B, CURIO G, MÜLLER K R. Classifying single trial EEG: towards brain computer interfacing[C]//Advances in Neural Information Processing Systems. Cambridge: MIT Press, 2001.
[25] ZENG A, CHEN M, ZHANG L, et al. Are transformers effective for time series forecasting?[C]//Proceedings of the AAAI Conference on Artificial Intelligence. Palo Alto: AAAI Press, 2023: 11121-11129.
[26] HOCHREITER S, SCHMIDHUBER J. Long short-term memory[J]. Neural Computation, 1997, 9(8): 1735-1780.
[27] LAI G, CHANG W C, YANG Y, et al. Modeling long- and short-term temporal patterns with deep neural networks[C]//Proceedings of the 41st International ACM SIGIR Conference on Research and Development in Information Retrieval. Ann Arbor: ACM, 2018: 95-104.
[28] VASWANI A, SHAZEER N, PARMAR N, et al. Attention is all you need[C]//Advances in Neural Information Processing Systems. Long Beach: Curran Associates, 2017.
[29] GU W, ZHONG R, ZHANG J, et al. Towards imperceptible adversarial attacks for time series classification with local perturbations and frequency analysis[EB/OL]. arXiv: 2503.19519, 2025.
[30] RUNGE J, NOWACK P, KRETSCHMER M, et al. Detecting causal associations in large nonlinear time series datasets[J]. Science Advances, 2019, 5(11): eaau4996.
[31] KIM H, LEE Y, LEE W, et al. Towards undetectable adversarial attack on time series classification[J]. Information Sciences, 2025, 715: 122216.