[1] BULAT A, KOSSAIFI J, TZIMIROPOULOS G, et al.Toward fast and accurate human pose estimation via soft-gated skip connections[C]//Proceedings of the 15th IEEE International Conference on Automatic Face and Gesture Recognition.Washington D.C., USA:IEEE Press, 2020:8-15.
[2] HAN W, ZHANG Z D, ZHANG Y, et al.ContextNet:improving convolutional neural networks for automatic speech recognition with global context[EB/OL].[2021-03-05].https://arxiv.org/abs/2005.03191v3.
[3] TORFI A, SHIRVANI R A, KENESHLOO Y, et al.Natural language processing advancements by deep learning:a survey[EB/OL].[2021-03-05].https://arxiv.org/abs/2003.01200.
[4] KINGSBURY B E D, SAINATH T N, SINDHWANI V.Low-rank matrix factorization for deep belief network training with high-dimensional output targets:US9262724[P].2016-02-16.
[5] GONG R H, LIU X L, JIANG S H, et al.Differentiable soft quantization:bridging full-precision and low-bit neural networks[EB/OL].[2021-03-05].https://arxiv.org/abs/1908.05033.
[6] WANG W H, WEI F R, DONG L, et al.MiniLM:deep self-attention distillation for task-agnostic compression of pre-trained transformers[EB/OL].[2021-03-05].https://arxiv.org/abs/2002.10957.
[7] MA N N, ZHANG X Y, ZHENG H T, et al.ShuffleNet V2:practical guidelines for efficient CNN architecture design[C]//Proceedings of the 15th European Conference on Computer Vision.Berlin, Germany:Springer, 2018:122-138.
[8] YOU Z H, YAN K, YE J M, et al.Gate decorator:global filter pruning method for accelerating deep convolutional neural networks[EB/OL].[2021-03-05].https://arxiv.org/abs/1909.08174.
[9] HE Y, DING Y H, LIU P, et al.Learning filter pruning criteria for deep convolutional neural networks acceleration[C]//Proceedings of IEEE/CVF Conference on Computer Vision and Pattern Recognition.Washington D.C., USA:IEEE Press, 2020:2006-2015.
[10] GUO Y W, YAO A B, CHEN Y R.Dynamic network surgery for efficient DNNs[C]//Proceedings of the 30th International Conference on Neural Information Processing Systems.New York, USA:ACM Press, 2016:1387-1395.
[11] LUO J H, WU J X, LIN W Y.ThiNet:a filter level pruning method for deep neural network compression[C]//Proceedings of IEEE International Conference on Computer Vision.Washington D.C., USA:IEEE Press, 2017:5068-5076.
[12] HE Y, LIU P, WANG Z W, et al.Filter pruning via geometric median for deep convolutional neural networks acceleration[C]//Proceedings of IEEE/CVF Conference on Computer Vision and Pattern Recognition.Washington D.C., USA:IEEE Press, 2019:4335-4344.
[13] HE Y, DONG X Y, KANG G L, et al.Asymptotic soft filter pruning for deep convolutional neural networks[EB/OL].[2021-03-05].https://arxiv.org/abs/1808.07471.
[14] LU H W, XIA H F, YUAN X T.Dynamic network pruning via filter attention mechanism and feature scaling factor[J].Journal of Chinese Computer Systems, 2019, 40(9):1832-1838.(in Chinese)
[15] GAN L, LI J, SHEN H F.Research on the acceleration method of residual network for embedded system[J].Journal of Chinese Computer Systems, 2020, 41(11):2314-2320.(in Chinese)
[16] LIU Z, LI J G, SHEN Z Q, et al.Learning efficient convolutional networks through network slimming[C]//Proceedings of IEEE International Conference on Computer Vision.Washington D.C., USA:IEEE Press, 2017:2755-2763.
[17] SCHENCK C, FOX D.SPNets:differentiable fluid dynamics for deep neural networks[EB/OL].[2021-03-05].https://arxiv.org/abs/1806.06094.
[18] HE Y H, LIN J, LIU Z J, et al.AMC:AutoML for model compression and acceleration on mobile devices[C]//Proceedings of the 15th European Conference on Computer Vision.Berlin, Germany:Springer, 2018:815-832.
[19] YANG T J, HOWARD A, CHEN B, et al.NetAdapt:platform-aware neural network adaptation for mobile applications[C]//Proceedings of European Conference on Computer Vision.Berlin, Germany:Springer, 2018:289-304.
[20] LI B L, WU B W, SU J, et al.EagleEye:fast sub-net evaluation for efficient neural network pruning[C]//Proceedings of European Conference on Computer Vision.Berlin, Germany:Springer, 2020:639-654.
[21] LIN S H, JI R R, LI Y C, et al.Accelerating convolutional networks via global & dynamic filter pruning[C]//Proceedings of the 27th International Joint Conference on Artificial Intelligence.New York, USA:ACM Press, 2018:2425-2432.
[22] PASZKE A, GROSS S, CHINTALA S, et al.Automatic differentiation in PyTorch[EB/OL].[2021-03-05].https://openreview.net/pdf?id=BJJsrmfCZ.
[23] DONG X Y, HUANG J S, YANG Y, et al.More is less:a more complicated network with less inference complexity[C]//Proceedings of IEEE Conference on Computer Vision and Pattern Recognition.Washington D.C., USA:IEEE Press, 2017:1895-1903.
[24] LI H, KADAV A, DURDANOVIC I, et al.Pruning filters for efficient ConvNets[EB/OL].[2021-03-05].https://arxiv.org/abs/1608.08710.