[1] SZEGEDY C,LIU W,JIA Y Q,et al.Going deeper with convolutions[C]//Proceedings of IEEE Conference on Computer Vision and Pattern Recognition.Washington D.C.,USA:IEEE Press,2015:1-9.
[2] KRIZHEVSKY A,SUTSKEVER I,HINTON G E.ImageNet classification with deep convolutional neural networks[J].Communications of the ACM,2017,60(6):84-90.
[3] SIMONYAN K,ZISSERMAN A.Very deep convolutional networks for large-scale image recognition[C]//Proceedings of the 3rd International Conference on Learning Representations.San Diego,USA:[s.n.],2015:1-10.
[4] HE K M,ZHANG X Y,REN S Q,et al.Deep residual learning for image recognition[C]//Proceedings of IEEE Conference on Computer Vision and Pattern Recognition.Washington D.C.,USA:IEEE Press,2016:770-778.
[5] 郭子博,高瑛珂,胡航天,等.基于混合架构的卷积神经网络算法加速研究[J].计算机工程与应用,2022,58(6):88-94.
GUO Z B,GAO Y K,HU H T,et al.Research on acceleration of convolutional neural network algorithm based on hybrid architecture[J].Computer Engineering and Applications,2022,58(6):88-94.(in Chinese)
[6] HOWARD A G,ZHU M L,CHEN B,et al.MobileNets:efficient convolutional neural networks for mobile vision applications[EB/OL].[2022-01-10].https://arxiv.org/abs/1704.04861.
[7] SANDLER M,HOWARD A,ZHU M L,et al.MobileNetV2:inverted residuals and linear bottlenecks[C]//Proceedings of IEEE/CVF Conference on Computer Vision and Pattern Recognition.Washington D.C.,USA:IEEE Press,2018:4510-4520.
[8] WU J X,LENG C,WANG Y H,et al.Quantized convolutional neural networks for mobile devices[C]//Proceedings of IEEE Conference on Computer Vision and Pattern Recognition.Washington D.C.,USA:IEEE Press,2016:4820-4828.
[9] LUO J H,WU J X,LIN W Y.ThiNet:a filter level pruning method for deep neural network compression[C]//Proceedings of IEEE International Conference on Computer Vision.Washington D.C.,USA:IEEE Press,2017:5068-5076.
[10] ROMERO A,BALLAS N,KAHOU S E,et al.FitNets:hints for thin deep nets[EB/OL].[2022-01-10].https://arxiv.org/abs/1412.6550.
[11] WANG W X,FU C,GUO J S,et al.COP:customized deep model compression via regularized correlation-based filter-level pruning[EB/OL].[2022-01-10].https://arxiv.org/abs/1906.10337.
[12] LIU Z,LI J G,SHEN Z Q,et al.Learning efficient convolutional networks through network slimming[C]//Proceedings of IEEE International Conference on Computer Vision.Washington D.C.,USA:IEEE Press,2017:2755-2763.
[13] CHIN T W,DING R Z,ZHANG C,et al.Towards efficient model compression via learned global ranking[C]//Proceedings of IEEE/CVF Conference on Computer Vision and Pattern Recognition.Washington D.C.,USA:IEEE Press,2020:1515-1525.
[14] FRANKLE J,CARBIN M.The lottery ticket hypothesis:finding sparse,trainable neural networks[EB/OL].[2022-01-10].https://arxiv.org/abs/1803.03635.
[15] MAENE J,LI M X,MOENS M F.Towards understanding iterative magnitude pruning:why lottery tickets win[EB/OL].[2022-01-10].https://arxiv.org/abs/2106.06955.
[16] SAVARESE P,SILVA H,MAIRE M.Winning the lottery with continuous sparsification[EB/OL].[2022-01-10].https://arxiv.org/abs/1912.04427v4.
[17] MALACH E,YEHUDAI G,SHALEV-SHWARTZ S,et al.Proving the lottery ticket hypothesis:pruning is all you need[C]//Proceedings of the 37th International Conference on Machine Learning.[S.l.]:PMLR,2020:6682-6691.
[18] ZHOU H,LAN J,LIU R,et al.Deconstructing lottery tickets:zeros,signs,and the supermask[EB/OL].[2022-01-10].https://arxiv.org/abs/1905.01067.
[19] FRANKLE J,DZIUGAITE G K,ROY D M,et al.Stabilizing the lottery ticket hypothesis[EB/OL].[2022-01-10].https://arxiv.org/abs/1903.01611.
[20] PRAKASH A,STORER J,FLORENCIO D,et al.RePr:improved training of convolutional filters[C]//Proceedings of IEEE/CVF Conference on Computer Vision and Pattern Recognition.Washington D.C.,USA:IEEE Press,2019:10658-10667.
[21] HE Y,KANG G L,DONG X Y,et al.Soft filter pruning for accelerating deep convolutional neural networks[EB/OL].[2022-01-10].https://arxiv.org/abs/1808.06866.
[22] MISHRA A,LATORRE J A,POOL J,et al.Accelerating sparse deep neural networks[EB/OL].[2022-01-10].https://arxiv.org/abs/2104.08378.
[23] GOU J P,YU B S,MAYBANK S J,et al.Knowledge distillation:a survey[J].International Journal of Computer Vision,2021,129(6):1789-1819.
[24] HINTON G,VINYALS O,DEAN J.Distilling the knowledge in a neural network[EB/OL].[2022-01-10].https://arxiv.org/abs/1503.02531.
[25] XU Y G,QIU X P,ZHOU L G,et al.Improving BERT fine-tuning via self-ensemble and self-distillation[EB/OL].[2022-01-10].https://arxiv.org/abs/2002.10345.
[26] KIM K,JI B,YOON D,et al.Self-knowledge distillation with progressive refinement of targets[C]//Proceedings of IEEE/CVF International Conference on Computer Vision.Washington D.C.,USA:IEEE Press,2021:6547-6556.
[27] SCARDAPANE S,COMMINIELLO D,HUSSAIN A,et al.Group sparse regularization for deep neural networks[J].Neurocomputing,2017,241:81-89.
[28] 韦越,陈世超,朱凤华,等.基于稀疏正则化的卷积神经网络模型剪枝方法[J].计算机工程,2021,47(10):61-66.
WEI Y,CHEN S C,ZHU F H,et al.Pruning method for convolutional neural network models based on sparse regularization[J].Computer Engineering,2021,47(10):61-66.(in Chinese)
[29] WIMMER P,MEHNERT J,CONDURACHE A.COPS:controlled pruning before training starts[EB/OL].[2022-01-10].https://arxiv.org/abs/2107.12673.
[30] LI Y W,GU S H,MAYER C,et al.Group sparsity:the hinge between filter pruning and decomposition for network compression[C]//Proceedings of IEEE/CVF Conference on Computer Vision and Pattern Recognition.Washington D.C.,USA:IEEE Press,2020:8015-8024.
[31] LIN M B,JI R R,WANG Y,et al.HRank:filter pruning using high-rank feature map[C]//Proceedings of IEEE/CVF Conference on Computer Vision and Pattern Recognition.Washington D.C.,USA:IEEE Press,2020:1526-1535.
[32] ZHAO C L,NI B B,ZHANG J,et al.Variational convolutional neural network pruning[C]//Proceedings of IEEE/CVF Conference on Computer Vision and Pattern Recognition.Washington D.C.,USA:IEEE Press,2020:2775-2784.
[33] GUO Y,YUAN H,TAN J C,et al.GDP:stabilized neural network pruning via gates with differentiable polarization[C]//Proceedings of IEEE/CVF International Conference on Computer Vision.Washington D.C.,USA:IEEE Press,2021:5219-5230.
[34] WANG Y L,ZHANG X L,XIE L X,et al.Pruning from scratch[J].Proceedings of the AAAI Conference on Artificial Intelligence,2020,34(7):12273-12280.
[35] LIN M B,JI R R,ZHANG Y X,et al.Channel pruning via automatic structure search[EB/OL].[2022-01-10].https://arxiv.org/abs/2001.08565v1.
[36] TANG Y H,WANG Y H,XU Y X,et al.SCOP:scientific control for reliable neural network pruning[EB/OL].[2022-01-10].https://arxiv.org/abs/2010.10732.
[37] CHIN T W,DING R Z,ZHANG C,et al.Towards efficient model compression via learned global ranking[C]//Proceedings of IEEE/CVF Conference on Computer Vision and Pattern Recognition.Washington D.C.,USA:IEEE Press,2020:1515-1525.
[38] LEE N,AJANTHAN T,TORR P H S.SNIP:single-shot network pruning based on connection sensitivity[EB/OL].[2022-01-10].https://arxiv.org/abs/1810.02340.
[39] MARQUES-SILVA J P,SAKALLAH K A.GRASP-a new search algorithm for satisfiability[M]//KUEHLMANN A.The best of ICCAD.Boston,USA:Springer US,2003:73-89.