[1] WANG S C. Artificial neural network[M]. Berlin, Germany: Springer, 2003.
[2] DOERSCH C, GUPTA A, EFROS A A. Unsupervised visual representation learning by context prediction[C]//Proceedings of IEEE International Conference on Computer Vision. Washington D.C., USA: IEEE Press, 2015: 1422-1430.
[3] GIDARIS S, SINGH P, KOMODAKIS N. Unsupervised representation learning by predicting image rotations[EB/OL]. [2021-01-12]. http://arxiv.org/abs/1803.07728.
[4] ZHOU Z H. A brief introduction to weakly supervised learning[J]. National Science Review, 2018, 5(1): 44-53.
[5] RABINOVICH E, SZNAJDER B, SPECTOR A, et al. Learning concept abstractness using weak supervision[EB/OL]. [2021-01-12]. https://arxiv.org/pdf/1809.01285.pdf.
[6] ARACHIE C, HUANG B. Adversarial label learning[C]//Proceedings of AAAI Conference on Artificial Intelligence. Palo Alto, USA: AAAI Press, 2019: 3183-3190.
[7] MUHAMMAD U R, YANG Y, HOSPEDALES T M, et al. Goal-driven sequential data abstraction[C]//Proceedings of IEEE/CVF International Conference on Computer Vision. Washington D.C., USA: IEEE Press, 2019: 71-80.
[8] KRIZHEVSKY A, SUTSKEVER I, HINTON G E. ImageNet classification with deep convolutional neural networks[C]//Proceedings of the 25th International Conference on Neural Information Processing Systems. Lake Tahoe, USA: Curran Associates Inc., 2012: 1097-1105.
[9] HAN K, GUO J, ZHANG C, et al. Attribute-aware attention model for fine-grained representation learning[C]//Proceedings of the 26th ACM International Conference on Multimedia. New York, USA: ACM Press, 2018: 2040-2048.
[10] REN S, HE K, GIRSHICK R, et al. Faster R-CNN: towards real-time object detection with region proposal networks[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2017, 39(6): 1137-1149.
[11] LIN T Y, GOYAL P, GIRSHICK R, et al. Focal loss for dense object detection[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2020, 42(2): 318-327.
[12] CHEN L C, PAPANDREOU G, KOKKINOS I, et al. Semantic image segmentation with deep convolutional nets and fully connected CRFs[J]. Computer Science, 2014(4): 357-361.
[13] HE K, ZHANG X, REN S, et al. Deep residual learning for image recognition[C]//Proceedings of IEEE Conference on Computer Vision and Pattern Recognition. Washington D.C., USA: IEEE Press, 2016: 770-778.
[14] LECUN Y, DENKER J S, SOLLA S A. Optimal brain damage[C]//Proceedings of Advances in Neural Information Processing Systems. New York, USA: ACM Press, 1990: 598-605.
[15] HAN S, MAO H, DALLY W J. Deep compression: compressing deep neural networks with pruning, trained quantization and Huffman coding[EB/OL]. [2021-01-12]. https://www.researchgate.net/publication/319770334_Deep_Compression_Compressing_Deep_Neural_Networks_with_Pruning_Trained_Quantization_and_Huffman_Coding.
[16] CHENG Y, WANG D, ZHOU P, et al. A survey of model compression and acceleration for deep neural networks[EB/OL]. [2021-01-12]. https://arxiv.org/abs/1710.09282.
[17] DENG L, LI G, HAN S, et al. Model compression and hardware acceleration for neural networks: a comprehensive survey[J]. Proceedings of the IEEE, 2020, 108(4): 485-532.
[18] KRIZHEVSKY A, SUTSKEVER I, HINTON G E. ImageNet classification with deep convolutional neural networks[C]//Proceedings of Advances in Neural Information Processing Systems. New York, USA: ACM Press, 2012: 1097-1105.
[19] IANDOLA F N, HAN S, MOSKEWICZ M W, et al. SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5 MB model size[EB/OL]. [2021-01-12]. https://arxiv.org/abs/1602.07360.
[20] HOWARD A G, ZHU M L, CHEN B, et al. MobileNets: efficient convolutional neural networks for mobile vision applications[EB/OL]. [2021-01-12]. https://arxiv.org/abs/1704.04861.
[21] SANDLER M, HOWARD A, ZHU M, et al. MobileNetV2: inverted residuals and linear bottlenecks[C]//Proceedings of IEEE Conference on Computer Vision and Pattern Recognition. Washington D.C., USA: IEEE Press, 2018: 4510-4520.
[22] ZHANG X, ZHOU X, LIN M, et al. ShuffleNet: an extremely efficient convolutional neural network for mobile devices[C]//Proceedings of IEEE Conference on Computer Vision and Pattern Recognition. Washington D.C., USA: IEEE Press, 2018: 6848-6856.
[23] MA N, ZHANG X, ZHENG H T, et al. ShuffleNet V2: practical guidelines for efficient CNN architecture design[C]//Proceedings of European Conference on Computer Vision. Berlin, Germany: Springer, 2018: 116-131.
[24] CHEN Y, FAN H, XU B, et al. Drop an octave: reducing spatial redundancy in convolutional neural networks with octave convolution[C]//Proceedings of IEEE/CVF International Conference on Computer Vision. Washington D.C., USA: IEEE Press, 2019: 3435-3444.
[25] HAN K, WANG Y, TIAN Q, et al. GhostNet: more features from cheap operations[C]//Proceedings of IEEE/CVF Conference on Computer Vision and Pattern Recognition. Washington D.C., USA: IEEE Press, 2020: 1580-1589.
[26] ZOPH B, VASUDEVAN V, SHLENS J, et al. Learning transferable architectures for scalable image recognition[C]//Proceedings of IEEE Conference on Computer Vision and Pattern Recognition. Washington D.C., USA: IEEE Press, 2018: 8697-8710.
[27] TAN M, CHEN B, PANG R, et al. MnasNet: platform-aware neural architecture search for mobile[C]//Proceedings of IEEE/CVF Conference on Computer Vision and Pattern Recognition. Washington D.C., USA: IEEE Press, 2019: 2820-2828.
[28] SIMONYAN K, ZISSERMAN A. Very deep convolutional networks for large-scale image recognition[EB/OL]. [2021-01-12]. https://arxiv.org/pdf/1409.1556.pdf.
[29] CHOLLET F. Xception: deep learning with depthwise separable convolutions[C]//Proceedings of IEEE Conference on Computer Vision and Pattern Recognition. Washington D.C., USA: IEEE Press, 2017: 1251-1258.
[30] SZEGEDY C, VANHOUCKE V, IOFFE S, et al. Rethinking the Inception architecture for computer vision[C]//Proceedings of IEEE Conference on Computer Vision and Pattern Recognition. Washington D.C., USA: IEEE Press, 2016: 2818-2826.
[31] SZEGEDY C, LIU W, JIA Y, et al. Going deeper with convolutions[C]//Proceedings of IEEE Conference on Computer Vision and Pattern Recognition. Washington D.C., USA: IEEE Press, 2015: 1-9.
[32] ZAREMBA W, SUTSKEVER I, VINYALS O. Recurrent neural network regularization[EB/OL]. [2021-01-12]. https://www.researchgate.net/publication/265469170_Recurrent_Neural_Network_Regularization.
[33] TAN M X, LE Q V. MixConv: mixed depthwise convolutional kernels[EB/OL]. [2021-01-12]. https://arxiv.org/abs/1907.09595v3.
[34] 李志军, 杨楚皙, 刘丹, 等. 基于深度卷积神经网络的信息流增强图像压缩方法[J]. 吉林大学学报(工学版), 2020, 50(5): 1788-1795. LI Z J, YANG C X, LIU D, et al. Deep convolutional networks based image compression with enhancement of information flow[J]. Journal of Jilin University (Engineering and Technology Edition), 2020, 50(5): 1788-1795. (in Chinese)
[35] SUN Y, WANG X, TANG X. Sparsifying neural network connections for face recognition[C]//Proceedings of IEEE Conference on Computer Vision and Pattern Recognition. Washington D.C., USA: IEEE Press, 2016: 4856-4864.
[36] LUO J H, WU J X. An entropy-based pruning method for CNN compression[EB/OL]. [2021-01-12]. http://arxiv.org/abs/1706.05791.
[37] 胡黄水, 赵思远, 刘清雪, 等. 基于动量因子优化学习率的BP神经网络PID参数整定算法[J]. 吉林大学学报(理学版), 2020, 58(6): 1415-1420. HU H S, ZHAO S Y, LIU Q X, et al. BP neural network PID parameter tuning algorithm based on momentum factor optimized learning rate[J]. Journal of Jilin University (Science Edition), 2020, 58(6): 1415-1420. (in Chinese)
[38] HAN S, POOL J, TRAN J, et al. Learning both weights and connections for efficient neural networks[EB/OL]. [2021-01-12]. https://www.researchgate.net/publication/277959043_Learning_both_Weights_and_Connections_for_Efficient_Neural_Networks.
[39] SAU B B, BALASUBRAMANIAN V N. Deep model compression: distilling knowledge from noisy teachers[EB/OL]. [2021-01-12]. https://arxiv.org/pdf/1610.09650.pdf.
[40] WEN W, XU C, WU C, et al. Coordinating filters for faster deep neural networks[C]//Proceedings of IEEE International Conference on Computer Vision. Washington D.C., USA: IEEE Press, 2017: 658-666.
[41] HINTON G, VINYALS O, DEAN J. Distilling the knowledge in a neural network[EB/OL]. [2021-01-12]. http://arxiv.org/pdf/1503.02531.
[42] MITCHELL T M. Machine learning[M]. [S.l.]: McGraw-Hill Press, 2003.
[43] SILVER D, HUANG A, MADDISON C J, et al. Mastering the game of Go with deep neural networks and tree search[J]. Nature, 2016, 529(7587): 484-489.
[44] XIONG W, DROPPO J, HUANG X, et al. Toward human parity in conversational speech recognition[J]. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 2017, 25(12): 2410-2423.
[45] YAO Q M, WANG M S, ESCALANTE H J, et al. Taking human out of learning applications: a survey on automated machine learning[EB/OL]. [2021-01-12]. http://export.arxiv.org/abs/1810.13306.
[46] PEARSON K. On lines and planes of closest fit to systems of points in space[J]. The London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science, 1901, 2(11): 559-572.
[47] FISHER R A. The use of multiple measurements in taxonomic problems[J]. Annals of Eugenics, 1936, 7(2): 179-188.
[48] VINCENT P, LAROCHELLE H, BENGIO Y, et al. Extracting and composing robust features with denoising autoencoders[C]//Proceedings of the 25th International Conference on Machine Learning. Helsinki, Finland: [s.n.], 2008: 1096-1103.
[49] KATZ G, SHIN E C R, SONG D. ExploreKit: automatic feature generation and selection[C]//Proceedings of the 16th International Conference on Data Mining. Washington D.C., USA: IEEE Press, 2016: 979-984.
[50] KANTER J M, VEERAMACHANENI K. Deep feature synthesis: towards automating data science endeavors[C]//Proceedings of IEEE International Conference on Data Science and Advanced Analytics. Washington D.C., USA: IEEE Press, 2015: 1-10.
[51] SMITH M G, BULL L. Genetic programming with a genetic algorithm for feature construction and selection[J]. Genetic Programming and Evolvable Machines, 2005, 6(3): 265-281.
[52] ELAD M, AHARON M. Image denoising via sparse and redundant representations over learned dictionaries[J]. IEEE Transactions on Image Processing, 2006, 15(12): 3736-3745.
[53] ZEILER M D, KRISHNAN D, TAYLOR G W, et al. Deconvolutional networks[C]//Proceedings of 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition. Washington D.C., USA: IEEE Press, 2010: 2528-2535.
[54] YU K, ZHANG T, GONG Y. Nonlinear learning using local coordinate coding[EB/OL]. [2021-01-12]. https://www.mlpack.org/papers/lcc.pdf.
[55] HE Y, LIN J, LIU Z, et al. AMC: AutoML for model compression and acceleration on mobile devices[C]//Proceedings of European Conference on Computer Vision. Berlin, Germany: Springer, 2018: 784-800.
[56] ZHONG Z, YAN J J, LIU C L. Practical network blocks design with Q-learning[EB/OL]. [2021-01-12]. http://arxiv.org/pdf/1708.05552.
[57] WATKINS C J C H. Learning from delayed rewards[J]. Robotics and Autonomous Systems, 1995, 15(4): 233-235.
[58] ZOPH B, LE Q V. Neural architecture search with reinforcement learning[EB/OL]. [2021-01-12]. https://www.researchgate.net/publication/309738632_Neural_Architecture_Search_with_Reinforcement_Learning.
[59] CAI H, CHEN T Y, ZHANG W N, et al. Reinforcement learning for architecture search by network transformation[EB/OL]. [2021-01-12]. http://arxiv.org/pdf/1707.04873.
[60] CANZIANI A, PASZKE A, CULURCIELLO E. An analysis of deep neural network models for practical applications[EB/OL]. [2021-01-12]. https://arxiv.org/pdf/1605.07678.pdf.