[1] ATHALYE A,ENGSTROM L,ILYAS A,et al.Synthesizing robust adversarial examples[C]//Proceedings of International Conference on Machine Learning.New York,USA:ACM Press,2018:284-293.
[2] SZEGEDY C,ZAREMBA W,SUTSKEVER I,et al.Intriguing properties of neural networks[EB/OL].[2020-04-01].https://arxiv.org/abs/1312.6199.
[3] SU J W,VARGAS D V,SAKURAI K.One pixel attack for fooling deep neural networks[J].IEEE Transactions on Evolutionary Computation,2019,23(5):828-841.
[4] NGUYEN A,YOSINSKI J,CLUNE J.Deep neural networks are easily fooled:high confidence predictions for unrecognizable images[C]//Proceedings of International Conference on Computer Vision and Pattern Recognition.Washington D.C.,USA:IEEE Press,2015:427-436.
[5] GOODFELLOW I J,SHLENS J,SZEGEDY C.Explaining and harnessing adversarial examples[EB/OL].[2020-04-01].https://arxiv.org/abs/1412.6572.
[6] PAPERNOT N,MCDANIEL P D,GOODFELLOW I J,et al.Practical black-box attacks against machine learning[C]//Proceedings of 2017 ACM Asia Conference on Computer and Communications Security.New York,USA:ACM Press,2017:506-519.
[7] GUNNING D,AHA D W.DARPA's explainable artificial intelligence (XAI) program[J].AI Magazine,2019,40(2):44-58.
[8] RAJPURKAR P,IRVIN J,ZHU K,et al.CheXNet:radiologist-level pneumonia detection on chest X-rays with deep learning[EB/OL].[2020-04-01].https://arxiv.org/abs/1711.05225.
[9] ZHANG Zizhao,XIE Yuanpu,XING Fuyong,et al.MDNet:a semantically and visually interpretable medical image diagnosis network[C]//Proceedings of International Conference on Computer Vision and Pattern Recognition.Washington D.C.,USA:IEEE Press,2017:3549-3557.
[10] KIM J,ROHRBACH A,DARRELL T,et al.Textual explanations for self-driving vehicles[C]//Proceedings of European Conference on Computer Vision.Berlin,Germany:Springer,2018:577-593.
[11] DORAN D,SCHULZ S,BESOLD T R.What does explainable AI really mean? A new conceptualization of perspectives[EB/OL].[2020-04-01].https://arxiv.org/abs/1710.00794.
[12] SAMEK W,WIEGAND T,MÜLLER K R.Explainable artificial intelligence:understanding,visualizing and interpreting deep learning models[EB/OL].[2020-04-01].https://arxiv.org/abs/1708.08296.
[13] ZHANG Quanshi,ZHU Songchun.Visual interpretability for deep learning:a survey[J].Frontiers of Information Technology & Electronic Engineering,2018,19(1):27-39.
[14] MILLER T.Explanation in artificial intelligence:insights from the social sciences[EB/OL].[2020-04-01].https://arxiv.org/abs/1706.07269.
[15] KIM B,KOYEJO O,KHANNA R.Examples are not enough,learn to criticize! Criticism for interpretability[C]//Proceedings of International Conference on Neural Information Processing Systems.Cambridge,USA:MIT Press,2016:2280-2288.
[16] STOICA I,SONG D,POPA R A,et al.A Berkeley view of systems challenges for AI[EB/OL].[2020-04-01].https://arxiv.org/abs/1712.05855.
[17] XIONG Hongkai,GAO Xing,LI Shaohui,et al.Interpretable structured multi-modal deep neural network[J].Pattern Recognition and Artificial Intelligence,2018,31(1):1-11.(in Chinese)熊红凯,高星,李劭辉,等.可解释化、结构化、多模态化的深度神经网络[J].模式识别与人工智能,2018,31(1):1-11.
[18] HE Huacan.Refinding the interpretability of artificial intelligence[J].CAAI Transactions on Intelligent Systems,2019,14(3):1-21.(in Chinese)何华灿.重新找回人工智能的可解释性[J].智能系统学报,2019,14(3):1-21.
[19] CASTELVECCHI D.Can we open the black box of AI?[J].Nature News,2016,538(7623):20-23.
[20] GUIDOTTI R,MONREALE A,TURINI F,et al.A survey of methods for explaining black box models[J].ACM Computing Surveys,2018,51(5):1-42.
[21] CHAKRABORTY S,TOMSETT R,RAGHAVENDRA R,et al.Interpretability of deep learning models:a survey of results[C]//Proceedings of IEEE SmartWorld,Ubiquitous Intelligence & Computing,Advanced & Trusted Computed,Scalable Computing & Communications,Cloud & Big Data Computing,Internet of People and Smart City Innovation.Washington D.C.,USA:IEEE Press,2017:1-6.
[22] MURDOCH W J,SINGH C,KUMBIER K,et al.Interpretable machine learning:definitions,methods,and applications[EB/OL].[2020-04-01].https://arxiv.org/abs/1901.04592.
[23] DU Mengnan,LIU Ninghao,HU Xia.Techniques for interpretable machine learning[J].Communications of the ACM,2019,63(1):68-77.
[24] MONTAVON G,SAMEK W,MÜLLER K R.Methods for interpreting and understanding deep neural networks[J].Digital Signal Processing,2018,73:1-15.
[25] GILPIN L H,BAU D,YUAN B Z,et al.Explaining explanations:an approach to evaluating interpretability of machine learning[EB/OL].[2020-04-01].https://arxiv.org/abs/1806.00069.
[26] SIMONYAN K,VEDALDI A,ZISSERMAN A.Deep inside convolutional networks:visualising image classification models and saliency maps[EB/OL].[2020-04-01].https://arxiv.org/abs/1312.6034.
[27] BACH S,BINDER A,MONTAVON G,et al.On pixel-wise explanations for non-linear classifier decisions by layer-wise relevance propagation[J].PLOS ONE,2015,10(7):1-6.
[28] ESCALANTE H J,ESCALERA S,GUYON I,et al.Explainable and interpretable models in computer vision and machine learning[M].Berlin,Germany:Springer,2018.
[29] GAO Yingying,ZHU Weibin.Deep neural networks with visible intermediate layers[J].Acta Automatica Sinica,2015,41(9):1627-1637.(in Chinese)高莹莹,朱维彬.深层神经网络中间层可见化建模[J].自动化学报,2015,41(9):1627-1637.
[30] ZEILER M D,FERGUS R.Visualizing and understanding convolutional networks[C]//Proceedings of European Conference on Computer Vision.Berlin,Germany:Springer,2014:818-833.
[31] ZHOU B L,KHOSLA A,LAPEDRIZA A,et al.Learning deep features for discriminative localization[C]//Proceedings of International Conference on Computer Vision and Pattern Recognition.Washington D.C.,USA:IEEE Press,2016:2921-2929.
[32] ZHOU B L,BAU D,OLIVA A,et al.Interpreting deep visual representations via network dissection[J].IEEE Transactions on Pattern Analysis & Machine Intelligence,2019,41(9):2131-2145.
[33] YOSINSKI J,CLUNE J,NGUYEN A,et al.Understanding neural networks through deep visualization[EB/OL].[2020-04-01].https://arxiv.org/abs/1506.06579.
[34] OLAH C,MORDVINTSEV A,SCHUBERT L.Feature visualization[J].Distill,2017,2(11):1-10.
[35] KOH P W,LIANG P.Understanding black-box predictions via influence functions[EB/OL].[2020-04-01].https://arxiv.org/abs/1703.04730.
[36] PETSIUK V,DAS A,SAENKO K.RISE:randomized input sampling for explanation of black-box models[EB/OL].[2020-04-01].https://arxiv.org/abs/1806.07421.
[37] CHANG Chunhao,CREAGER E,GOLDENBERG A,et al.Explaining image classifiers by counterfactual generation[EB/OL].[2020-04-01].https://arxiv.org/abs/1807.08024.
[38] DABKOWSKI P,GAL Y.Real time image saliency for black box classifiers[EB/OL].[2020-04-01].https://arxiv.org/abs/1705.07857.
[39] LANDECKER W,THOMURE M D,BETTENCOURT L M A,et al.Interpreting individual classifications of hierarchical networks[C]//Proceedings of International Conference on Computational Intelligence and Data Mining.Washington D.C.,USA:IEEE Press,2013:32-38.
[40] SELVARAJU R R,COGSWELL M,DAS A,et al.Grad-CAM:visual explanations from deep networks via gradient-based localization[J].International Journal of Computer Vision,2020,128(2):336-359.
[41] SUNDARARAJAN M,TALY A,YAN Q Q.Axiomatic attribution for deep networks[EB/OL].[2020-04-01].https://arxiv.org/abs/1703.01365.
[42] SHRIKUMAR A,GREENSIDE P,KUNDAJE A.Learning important features through propagating activation differences[EB/OL].[2020-04-01].https://arxiv.org/abs/1704.02685.
[43] KINDERMANS P-J,SCHÜTT K T,ALBER M,et al.Learning how to explain neural networks:PatternNet and PatternAttribution[EB/OL].[2020-04-01].https://arxiv.org/abs/1705.05598v2.
[44] ZHANG Quanshi,WU Yingnian,ZHU Songchun.Interpretable convolutional neural networks[C]//Proceedings of International Conference on Computer Vision and Pattern Recognition.Washington D.C.,USA:IEEE Press,2018:8827-8836.
[45] ZHANG Quanshi,WU Yingnian,ZHU Songchun.Interpretable CNNs[EB/OL].[2020-04-01].https://arxiv.org/abs/1901.02413v1.
[46] RAVANELLI M,BENGIO Y.Interpretable convolutional filters with SincNet[EB/OL].[2020-04-01].https://arxiv.org/abs/1811.09725.
[47] RIBEIRO M T,SINGH S,GUESTRIN C."Why should I trust you?":explaining the predictions of any classifier[C]//Proceedings of ACM SIGKDD International Conference on Knowledge Discovery and Data Mining.New York,USA:ACM Press,2016:1135-1144.
[48] WU M,HUGHES M C,PARBHOO S,et al.Beyond sparsity:tree regularization of deep models for interpretability[C]//Proceedings of AAAI Conference on Artificial Intelligence.Palo Alto,USA:AAAI Press,2018:1670-1678.
[49] LIU X,WANG X G,MATWIN S.Interpretable deep convolutional neural networks via meta-learning[EB/OL].[2020-04-01].https://arxiv.org/abs/1802.00560.
[50] ZHANG Quanshi,CAO Ruiming,SHI Feng,et al.Interpreting CNN knowledge via an explanatory graph[EB/OL].[2020-04-01].https://arxiv.org/abs/1708.01785.
[51] ZHANG Quanshi,YANG Yu,WU Yingnian,et al.Interpreting CNNs via decision trees[EB/OL].[2020-04-01].https://arxiv.org/abs/1802.00121v1.
[52] ZHANG Quanshi,YANG Yu,LIU Yuchen,et al.Unsupervised learning of neural networks to explain neural networks[EB/OL].[2020-04-01].https://arxiv.org/abs/1805.07468.
[53] HENDRICKS L A,AKATA Z,ROHRBACH M,et al.Generating visual explanations[C]//Proceedings of European Conference on Computer Vision.Berlin,Germany:Springer,2016:3-19.
[54] BARRATT S.InterpNET:neural introspection for interpretable deep learning[EB/OL].[2020-04-01].https://arxiv.org/abs/1710.09511.
[55] PARK D H,HENDRICKS L A,AKATA Z,et al.Multimodal explanations:justifying decisions and pointing to the evidence[C]//Proceedings of International Conference on Computer Vision and Pattern Recognition.Washington D.C.,USA:IEEE Press,2018:8779-8788.
[56] DONG H P,HENDRICKS L A,AKATA Z,et al.Attentive explanations:justifying decisions and pointing to the evidence[EB/OL].[2020-04-01].https://arxiv.org/abs/1612.04757.
[57] KARPATHY A,JOHNSON J,LI F F.Visualizing and understanding recurrent networks[EB/OL].[2020-04-01].https://arxiv.org/abs/1506.02078.
[58] LI J W,CHEN X L,HOVY E,et al.Visualizing and understanding neural models in NLP[EB/OL].[2020-04-01].https://arxiv.org/abs/1506.01066.
[59] KÁDÁR A,CHRUPAŁA G,ALISHAHI A.Representation of linguistic form and function in recurrent neural networks[J].Computational Linguistics,2017,43(4):761-780.
[60] KÁDÁR A,CHRUPAŁA G,ALISHAHI A.Linguistic analysis of multi-modal recurrent neural networks[C]//Proceedings of the 4th Workshop on Vision and Language.Stroudsburg,USA:ACL Press,2015:8-9.
[61] STROBELT H,GEHRMANN S,PFISTER H,et al.LSTMVis:a tool for visual analysis of hidden state dynamics in recurrent neural networks[J].IEEE Transactions on Visualization and Computer Graphics,2018,24(1):667-676.
[62] GUPTA P,SCHÜTZE H.LISA:explaining recurrent neural network judgments via layer-wise semantic accumulation and example to pattern transformation[C]//Proceedings of 2018 Conference on Empirical Methods in Natural Language Processing.Brussels,Belgium:ACL,2018:154-164.
[63] TANG Zhiyuan,SHI Ying,WANG Dong,et al.Memory visualization for gated recurrent neural networks in speech recognition[C]//Proceedings of 2017 IEEE International Conference on Acoustics,Speech and Signal Processing.Washington D.C.,USA:IEEE Press,2017:2736-2740.
[64] MING Yao,CAO Shaozu,ZHANG Ruixiang,et al.Understanding hidden memories of recurrent neural networks[C]//Proceedings of 2017 IEEE Conference on Visual Analytics Science and Technology.Washington D.C.,USA:IEEE Press,2017:13-24.
[65] PETERS M E,NEUMANN M,IYYER M,et al.Deep contextualized word representations[EB/OL].[2020-04-01].https://arxiv.org/abs/1802.05365.
[66] BAHDANAU D,CHO K,BENGIO Y.Neural machine translation by jointly learning to align and translate[EB/OL].[2020-04-01].https://arxiv.org/abs/1409.0473.
[67] XU K,BA J,KIROS R,et al.Show,attend and tell:neural image caption generation with visual attention[C]//Proceedings of International Conference on Machine Learning.New York,USA:ACM Press,2015:2048-2057.
[68] HERMANN K M,KOCISKY T,GREFENSTETTE E,et al.Teaching machines to read and comprehend[C]//Proceedings of International Conference on Neural Information Processing Systems.Cambridge,USA:MIT Press,2015:1693-1701.
[69] ZENG Ming,GAO Haoxiang,YU Tong,et al.Understanding and improving recurrent networks for human activity recognition by continuous attention[C]//Proceedings of International Symposium on Wearable Computers.Washington D.C.,USA:IEEE Press,2018:56-63.
[70] LIANG Xiaodan,LIN Liang,SHEN Xiaohui,et al.Interpretable structure-evolving LSTM[C]//Proceedings of International Conference on Computer Vision and Pattern Recognition.Washington D.C.,USA:IEEE Press,2017:2175-2184.
[71] WISDOM S,POWERS T,PITTON J,et al.Interpretable recurrent neural networks using sequential sparse recovery[EB/OL].[2020-04-01].https://arxiv.org/abs/1611.07252.
[72] RADFORD A,METZ L,CHINTALA S.Unsupervised representation learning with deep convolutional generative adversarial networks[EB/OL].[2020-04-01].https://arxiv.org/abs/1511.06434.
[73] ZHU Junyan,KRÄHENBÜHL P,SHECHTMAN E,et al.Generative visual manipulation on the natural image manifold[C]//Proceedings of European Conference on Computer Vision.Berlin,Germany:Springer,2016:597-613.
[74] BROCK A,LIM T,RITCHIE J M,et al.Neural photo editing with introspective adversarial networks[EB/OL].[2020-04-01].https://arxiv.org/abs/1609.07093.
[75] BAU D,ZHU J Y,STROBELT H,et al.GAN dissection:visualizing and understanding generative adversarial networks[EB/OL].[2020-04-01].https://arxiv.org/abs/1811.10597.
[76] SPRINGENBERG J T,DOSOVITSKIY A,BROX T,et al.Striving for simplicity:the all convolutional net[EB/OL].[2020-04-01].https://arxiv.org/abs/1412.6806.
[77] LAPUSCHKIN S,BINDER A,MONTAVON G,et al.The LRP toolbox for artificial neural networks[J].Journal of Machine Learning Research,2016,17(1):3938-3942.
[78] LAPUSCHKIN S,BINDER A,MONTAVON G,et al.Analyzing classifiers:fisher vectors and deep neural networks[C]//Proceedings of International Conference on Computer Vision and Pattern Recognition.Washington D.C.,USA:IEEE Press,2016:2912-2920.
[79] SAMEK W,BINDER A,MONTAVON G,et al.Evaluating the visualization of what a deep neural network has learned[J].IEEE Transactions on Neural Networks and Learning Systems,2017,28(11):2660-2673.
[80] ARRAS L,HORN F,MONTAVON G,et al."What is relevant in a text document?":an interpretable machine learning approach[J].PLOS ONE,2017,12(8):1-13.
[81] ARRAS L,MONTAVON G,MÜLLER K-R,et al.Explaining recurrent neural network predictions in sentiment analysis[EB/OL].[2020-04-01].https://arxiv.org/abs/1706.07206.
[82] SRINIVASAN V,LAPUSCHKIN S,HELLGE C,et al.Interpretable human action recognition in compressed domain[C]//Proceedings of IEEE International Conference on Acoustics,Speech and Signal Processing.Washington D.C.,USA:IEEE Press,2017:1692-1696.
[83] SMILKOV D,THORAT N,KIM B,et al.SmoothGrad:removing noise by adding noise[EB/OL].[2020-04-01].https://arxiv.org/abs/1706.03825.
[84] GHORBANI A,ABID A,ZOU J.Interpretation of neural networks is fragile[C]//Proceedings of AAAI Conference on Artificial Intelligence.Palo Alto,USA:AAAI Press,2018:3681-3688.
[85] YIN Bangjie,TRAN L,LI Haoxiang,et al.Towards interpretable face recognition[EB/OL].[2020-04-01].https://arxiv.org/abs/1805.00611.
[86] KUO C C J,ZHANG M,LI S Y,et al.Interpretable convolutional neural networks via feedforward design[EB/OL].[2020-04-01].https://arxiv.org/abs/1810.02786.
[87] ZHAROV Y,KORZHENKOV D,SHVECHIKOV P,et al.YASENN:explaining neural networks via partitioning activation sequences[EB/OL].[2020-04-01].https://arxiv.org/abs/1811.02783.
[88] HOCHREITER S,SCHMIDHUBER J.Long short-term memory[J].Neural Computation,1997,9(8):1735-1780.
[89] SCHUSTER M,PALIWAL K K.Bidirectional recurrent neural networks[J].IEEE Transactions on Signal Processing,1997,45(11):2673-2681.
[90] CHO K,VAN MERRIËNBOER B,GULCEHRE C,et al.Learning phrase representations using RNN encoder-decoder for statistical machine translation[EB/OL].[2020-04-01].https://arxiv.org/abs/1406.1078.
[91] GOODFELLOW I,POUGET-ABADIE J,MIRZA M,et al.Generative adversarial nets[C]//Proceedings of International Conference on Neural Information Processing Systems.Cambridge,USA:MIT Press,2014:2672-2680.
[92] MIRZA M,OSINDERO S.Conditional generative adversarial nets[EB/OL].[2020-04-01].https://arxiv.org/abs/1411.1784.
[93] CHEN X,DUAN Y,HOUTHOOFT R,et al.InfoGAN:interpretable representation learning by information maximizing generative adversarial nets[C]//Proceedings of International Conference on Neural Information Processing Systems.Cambridge,USA:MIT Press,2016:2172-2180.
[94] ZHANG Han,XU Tao,LI Hongsheng,et al.StackGAN:text to photo-realistic image synthesis with stacked generative adversarial networks[C]//Proceedings of IEEE International Conference on Computer Vision.Washington D.C.,USA:IEEE Press,2017:5907-5915.
[95] ARJOVSKY M,CHINTALA S,BOTTOU L.Wasserstein GAN[EB/OL].[2020-04-01].https://arxiv.org/abs/1701.07875.
[96] YU Lantao,ZHANG Weinan,WANG Jun,et al.SeqGAN:sequence generative adversarial nets with policy gradient[C]//Proceedings of AAAI Conference on Artificial Intelligence.Palo Alto,USA:AAAI Press,2017:2852-2858.
[97] NGUYEN A T,CLUNE J,BENGIO Y,et al.Plug & play generative networks:conditional iterative generation of images in latent space[C]//Proceedings of International Conference on Computer Vision and Pattern Recognition.Washington D.C.,USA:IEEE Press,2017:3510-3520.
[98] HOFFMAN R R,MUELLER S T,KLEIN G,et al.Metrics for explainable AI:challenges and prospects[EB/OL].[2020-04-01].https://arxiv.org/abs/1812.04608.
[99] BAU D,ZHOU Bolei,KHOSLA A,et al.Network dissection:quantifying interpretability of deep visual representations[C]//Proceedings of International Conference on Computer Vision and Pattern Recognition.Washington D.C.,USA:IEEE Press,2017:3319-3327.
[100] DENKOWSKI M,LAVIE A.Meteor universal:language specific translation evaluation for any target language[C]//Proceedings of the 9th Workshop on Statistical Machine Translation.Stroudsburg,USA:ACL,2014:376-380.
[101] KRIZHEVSKY A,SUTSKEVER I,HINTON G E.ImageNet classification with deep convolutional neural networks[C]//Proceedings of International Conference on Neural Information Processing Systems.Cambridge,USA:MIT Press,2012:1097-1105.
[102] SIMONYAN K,ZISSERMAN A.Very deep convolutional networks for large-scale image recognition[EB/OL].[2020-04-01].https://arxiv.org/abs/1409.1556.
[103] CHEN Xianjie,MOTTAGHI R,LIU Xiaobai,et al.Detect what you can:detecting and representing objects using holistic models and body parts[C]//Proceedings of International Conference on Computer Vision and Pattern Recognition.Washington D.C.,USA:IEEE Press,2014:1971-1978.
[104] WAH C,BRANSON S,WELINDER P,et al.The Caltech-UCSD Birds-200-2011 dataset[EB/OL].[2020-04-01].http://www.vision.caltech.edu/visipedia/CUB-200-2011.html.
[105] ZHANG Quanshi,CAO Ruiming,WU Yingnian,et al.Growing interpretable part graphs on ConvNets via multi-shot learning[C]//Proceedings of AAAI Conference on Artificial Intelligence.Palo Alto,USA:AAAI Press,2017:2898-2906.
[106] LI Hao,XU Zheng,TAYLOR G,et al.Visualizing the loss landscape of neural nets[EB/OL].[2020-04-01].https://arxiv.org/abs/1712.09913.
[107] XU Bo,XIE Chenhao,ZHANG Yi,et al.Learning defining features for categories[C]//Proceedings of International Joint Conference on Artificial Intelligence.San Francisco,USA:Morgan Kaufmann Press,2016:3924-3930.
[108] ZHANG Y,XIAO Y H,HWANG S W,et al.Entity suggestion with conceptual explanation[C]//Proceedings of International Joint Conference on Artificial Intelligence.San Francisco,USA:Morgan Kaufmann Press,2017:4244-4250.