[1] ZHANG X Y, ZHOU X Y, LIN M X, et al. ShuffleNet: an extremely efficient convolutional neural network for mobile devices[C]//Proceedings of 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Washington D.C., USA: IEEE Press, 2018: 6848-6856.
[2] YANG M J, LIANG Y L, DU M H. YOLO pruning algorithm based on parameter subspace and scaling factor[J]. Computer Engineering, 2021, 47(2): 111-117. (in Chinese)
[3] JADERBERG M, VEDALDI A, ZISSERMAN A. Speeding up convolutional neural networks with low rank expansions[EB/OL]. [2020-05-11]. https://arxiv.org/abs/1405.3866v1.
[4] ZHU C Z, HAN S, MAO H Z, et al. Trained ternary quantization[EB/OL]. [2020-05-11]. https://arxiv.org/abs/1612.01064v3.
[5] HINTON G, VINYALS O, DEAN J. Distilling the knowledge in a neural network[J]. Computer Science, 2015, 14(7): 38-39.
[6] NAKKIRAN P, KAPLUN G, BANSAL Y, et al. Deep double descent: where bigger models and more data hurt[EB/OL]. [2020-05-11]. https://arxiv.org/abs/1912.02292.
[7] LECUN Y, DENKER J S, SOLLA S A. Optimal brain damage[C]//Advances in Neural Information Processing Systems. San Mateo, USA: Morgan Kaufmann, 1990: 598-605.
[8] HAN S, POOL J, TRAN J, et al. Learning both weights and connections for efficient neural networks[C]//Advances in Neural Information Processing Systems. Red Hook, USA: Curran Associates, 2015: 1135-1143.
[9] GUO Y, YAO A, CHEN Y, et al. Dynamic network surgery for efficient DNNs[C]//Advances in Neural Information Processing Systems. Red Hook, USA: Curran Associates, 2016: 1387-1395.
[10] LI H, KADAV A, DURDANOVIC I, et al. Pruning filters for efficient ConvNets[EB/OL]. [2020-05-11]. https://arxiv.org/pdf/1608.08710.pdf.
[11] HU H Y, PENG R, TAI Y W, et al. Network trimming: a data-driven neuron pruning approach towards efficient deep architectures[EB/OL]. [2020-05-11]. https://arxiv.org/pdf/1607.03250.pdf.
[12] WANG H, ZHANG W H, WONG K Y M, et al. Encoding multisensory information in modular neural networks[C]//Proceedings of 2017 International Conference on Neural Information Processing. Berlin, Germany: Springer, 2017: 658-665.
[13] YE J B, LU X, LIN Z, et al. Rethinking the smaller-norm-less-informative assumption in channel pruning of convolution layers[EB/OL]. [2020-05-11]. https://arxiv.org/abs/1802.00124.
[14] LUO J H, WU J X, LIN W Y. ThiNet: a filter level pruning method for deep neural network compression[C]//Proceedings of 2017 IEEE International Conference on Computer Vision. Washington D.C., USA: IEEE Press, 2017: 5068-5076.
[15] ZHUANG Z, TAN M, ZHUANG B, et al. Discrimination-aware channel pruning for deep neural networks[C]//Advances in Neural Information Processing Systems. Red Hook, USA: Curran Associates, 2018: 883-894.
[16] HE Y, LIU P, WANG Z W, et al. Filter pruning via geometric median for deep convolutional neural networks acceleration[C]//Proceedings of 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Washington D.C., USA: IEEE Press, 2019: 4335-4344.
[17] LIU Z, LI J G, SHEN Z Q, et al. Learning efficient convolutional networks through network slimming[C]//Proceedings of 2017 IEEE International Conference on Computer Vision. Washington D.C., USA: IEEE Press, 2017: 2755-2763.
[18] REDMON J, FARHADI A. YOLOv3: an incremental improvement[EB/OL]. [2020-05-11]. https://arxiv.org/pdf/1804.02767.pdf.
[19] HE K M, ZHANG X Y, REN S Q, et al. Deep residual learning for image recognition[C]//Proceedings of 2016 IEEE Conference on Computer Vision and Pattern Recognition. Washington D.C., USA: IEEE Press, 2016: 770-778.
[20] HUANG Z Z, WANG X J, LUO P. Convolution-weight-distribution assumption: rethinking the criteria of channel pruning[EB/OL]. [2020-05-11]. https://arxiv.org/abs/2004.11627.
[21] IOFFE S, SZEGEDY C. Batch normalization: accelerating deep network training by reducing internal covariate shift[C]//Proceedings of the 32nd International Conference on Machine Learning. Lille, France: JMLR.org, 2015: 448-456.
[22] SUN X, REN X C, MA S M, et al. meProp: sparsified back propagation for accelerated deep learning with reduced overfitting[C]//Proceedings of the 34th International Conference on Machine Learning. Sydney, Australia: PMLR, 2017: 3299-3308.