[1] GOODFELLOW I, BENGIO Y, COURVILLE A.Deep learning[M].Cambridge, USA:MIT Press, 2016:326-366.
[2] KRIZHEVSKY A, SUTSKEVER I, HINTON G E.ImageNet classification with deep convolutional neural networks[J].Communications of the ACM, 2017, 60(6):84-90.
[3] SZEGEDY C, LIU W, JIA Y, et al.Going deeper with convolutions[C]//Proceedings of 2015 IEEE Conference on Computer Vision and Pattern Recognition.Washington D.C., USA:IEEE Press, 2015:1-9.
[4] HE K M, ZHANG X, REN S, et al.Deep residual learning for image recognition[C]//Proceedings of 2016 IEEE Conference on Computer Vision and Pattern Recognition.Washington D.C., USA:IEEE Press, 2016:770-778.
[5] ZOU Y X, YU J S, CHEN Z H, et al.Convolution neural networks model compression based on feature selection for image classification[J].Control Theory & Applications, 2017, 34(6):746-752.
[6] YE Z, XIAO S B.Compression of convolutional neural network applied to image classification[J].Journal of Beijing Information Science & Technology University, 2018, 33(3):52-56.(in Chinese)
[7] HAN S, POOL J, TRAN J, et al.Learning both weights and connections for efficient neural networks[C]//Proceedings of the 28th Conference on Neural Information Processing Systems.Montreal, Canada:MIT Press, 2015:1135-1143.
[8] HAN S, LIU X Y, MAO H Z, et al.EIE:efficient inference engine on compressed deep neural network[J].Computer Architecture News, 2016, 44(3):243-254.
[9] DENTON E L, ZAREMBA W, BRUNA J, et al.Exploiting linear structure within convolutional networks for efficient evaluation[C]//Proceedings of the 28th Conference on Neural Information Processing Systems.Montreal, Canada:Morgan Kaufmann Press, 2014:1269-1277.
[10] RASTEGARI M, ORDONEZ V, REDMON J, et al.XNOR-Net:ImageNet classification using binary convolutional neural networks[C]//Proceedings of European Conference on Computer Vision.Berlin, Germany:Springer, 2016:525-542.
[11] ZHANG X, ZOU J, MING X, et al.Efficient and accurate approximations of nonlinear convolutional networks[C]//Proceedings of IEEE Conference on Computer Vision and Pattern Recognition.Washington D.C., USA:IEEE Press, 2015:1984-1992.
[12] LIU Z, LI J G, SHEN Z Q.Learning efficient convolutional networks through network slimming[C]//Proceedings of the 16th IEEE International Conference on Computer Vision.Washington D.C., USA:IEEE Press, 2017:2755-2763.
[13] HUANG G, LIU Z, VAN DER MAATEN L, et al.Densely connected convolutional networks[C]//Proceedings of 2017 IEEE Conference on Computer Vision and Pattern Recognition.Washington D.C., USA:IEEE Press, 2017:2261-2269.
[14] VANHOUCKE V, SENIOR A, MAO M Z.Improving the speed of neural networks on CPUs[C]//Proceedings of Deep Learning and Unsupervised Feature Learning Workshop.Cambridge, USA:MIT Press, 2011:1-4.
[15] WU J, CONG L, WANG Y, et al.Quantized convolutional neural networks for mobile devices[C]//Proceedings of 2016 IEEE Conference on Computer Vision and Pattern Recognition.Washington D.C., USA:IEEE Press, 2016:4820-4828.
[16] WEN W, WU C, WANG Y, et al.Learning structured sparsity in deep neural networks[C]//Proceedings of NIPS'16.Cambridge, USA:MIT Press, 2016:1-9.
[17] CHANGPINYO S, SANDLER M, ZHMOGINOV A.The power of sparsity in convolutional neural networks[C]//Proceedings of International Conference on Learning Representations.Toulon, France:[s.n.], 2017:1-13.
[18] HAO Z, ALVAREZ J M, PORIKLI F.Less is more:towards compact CNNs[C]//Proceedings of European Conference on Computer Vision.Berlin, Germany:Springer, 2016:1-16.
[19] HAN S, MAO H, DALLY W J.Deep compression:compressing deep neural networks with pruning, trained quantization and Huffman coding[C]//Proceedings of International Conference on Learning Representations.San Juan, Puerto Rico:[s.n.], 2016:1-14.
[20] GUO Y, YAO A, CHEN Y.Dynamic network surgery for efficient DNNs[C]//Proceedings of NIPS'16.Cambridge, USA:MIT Press, 2016:1379-1387.
[21] SIMONYAN K, ZISSERMAN A.Very deep convolutional networks for large-scale image recognition[C]//Proceedings of International Conference on Learning Representations.San Diego, USA:[s.n.], 2015:1-14.
[22] HE K, ZHANG X, REN S, et al.Deep residual learning for image recognition[C]//Proceedings of Conference on Computer Vision and Pattern Recognition.Washington D.C., USA:IEEE Press, 2016:1-12.
[23] BELLO I, ZOPH B, VASUDEVAN V, et al.Neural optimizer search with reinforcement learning[C]//Proceedings of the 34th International Conference on Machine Learning.New York, USA:ACM Press, 2017:1-16.
[24] BAKER B, GUPTA O, NAIK N.Designing neural network architectures using reinforcement learning[C]//Proceedings of International Conference on Learning Representations.Toulon, France:[s.n.], 2017:1-5.
[25] TKACHENKO R, IZONIN I.Model and principles for the implementation of neural-like structures based on geometric data transformations[C]//Proceedings of ICCSEEA'18.Berlin, Germany:Springer, 2018:578-587.
[26] IZONIN I, TKACHENKO R, KRYVINSKA N, et al.Multiple linear regression based on coefficients identification using non-iterative SGTM neural-like structure[C]//Proceedings of the 15th International Work-Conference on Artificial Neural Networks.Gran Canaria, Spain:[s.n.], 2019:467-479.
[27] ZHU J, HASTIE T.Classification of gene microarrays by penalized logistic regression[J].Biostatistics, 2004, 5(3):427-443.
[28] ZECHNER M, GRANITZER M.Accelerating K-means on the graphics processor via CUDA[C]//Proceedings of the 1st International Conference on Intensive Applications and Services.Washington D.C., USA:IEEE Press, 2009:7-15.
[29] KWAK N.Principal component analysis based on L1-norm maximization[J].IEEE Transactions on Pattern Analysis and Machine Intelligence, 2008, 30(9):1672-1680.
[30] SRIVASTAVA N, HINTON G, KRIZHEVSKY A, et al.Dropout:a simple way to prevent neural networks from overfitting[J].Journal of Machine Learning Research, 2014, 15(1):1929-1958.
[31] HUANG G, SUN Y, LIU Z, et al.Deep networks with stochastic depth[C]//Proceedings of European Conference on Computer Vision.Berlin, Germany:Springer, 2016:1-13.