[1] A. Krizhevsky, I. Sutskever, and G. E. Hinton. ImageNet classification with deep convolutional neural networks. In NIPS, pages 1097–1105, 2012.
[2] Wenlong Cao, RuiJian Wu, and Min Li. A review of neural network modeling methods. In AROC, 2019.
[3] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D.
Anguelov, D. Erhan, et al. Going deeper with convolutions.
In CVPR, pages 1–9, 2015.
[4] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual
learning for image recognition. In CVPR, 2016.
[5] YueXian Zou, JiaSheng Yu, ZeHan Chen, and Yi Wang. Convolution neural networks model compression based on feature selection for image classification. Control Theory & Applications, 2017.
[6] Zi Ye and Shibin Xiao. Compression of convolutional neural network applied to image classification. Journal of Beijing Information Science & Technology University, 2018.
[7] Song Han, Jeff Pool, John Tran, and William Dally. Learning both weights and connections for efficient neural network. In NIPS, 2015.
[8] Song Han, Xingyu Liu, Huizi Mao, Jing Pu, Ardavan Pedram, Mark A. Horowitz, and William J. Dally. EIE: Efficient inference engine on compressed deep neural network. In ISCA, 2016.
[9] E. L. Denton, W. Zaremba, J. Bruna, Y. LeCun, and R.
Fergus. Exploiting linear structure within convolutional
networks for efficient evaluation. In NIPS, 2014.
[10] W. Chen, J. T. Wilson, S. Tyree, K. Q. Weinberger, and Y.
Chen. Compressing neural networks with the hashing trick.
In ICML, 2015.
[11] M. Rastegari, V. Ordonez, J. Redmon, and A. Farhadi. XNOR-Net: ImageNet classification using binary convolutional neural networks. In ECCV, 2016.
[12] M. Courbariaux and Y. Bengio. BinaryNet: Training deep neural networks with weights and activations constrained to +1 or −1. arXiv preprint arXiv:1602.02830, 2016.
[13] G. Huang, D. Chen, T. Li, F. Wu, L. van der Maaten, and K.
Q. Weinberger. Multi-scale dense convolutional networks
for efficient prediction. arXiv preprint arXiv:1703.09844,
2017.
[14] E. L. Denton, W. Zaremba, J. Bruna, Y. LeCun, and R. Fergus. Exploiting linear structure within convolutional networks for efficient evaluation. In NIPS, pages 1269–1277, 2014.
[15] M. Jaderberg, A. Vedaldi, and A. Zisserman. Speeding up convolutional neural networks with low rank expansions. In BMVC, 2014.
[16] X. Zhang, J. Zou, X. Ming, K. He, and J. Sun. Efficient and accurate approximations of nonlinear convolutional networks. In CVPR, pages 1984–1992, 2015.
[17] S. Han, J. Pool, J. Tran, and W. J. Dally. Learning both weights and connections for efficient neural network. In NIPS, pages 1135–1143, 2015.
[18] Zhuang Liu, Jianguo Li, and Zhiqiang Shen. Learning efficient convolutional networks through network slimming. In ICCV, 2017.
[19] G. Huang, Z. Liu, K. Q. Weinberger, and L. van der
Maaten. Densely connected convolutional networks. In
CVPR, 2017.
[20] V. Vanhoucke, A. Senior, and M. Z. Mao. Improving the speed of neural networks on CPUs. In NIPS Deep Learning and Unsupervised Feature Learning Workshop, 2011.
[21] M. Rastegari, V. Ordonez, J. Redmon, and A. Farhadi. XNOR-Net: ImageNet classification using binary convolutional neural networks. In ECCV, 2016.
[22] J. Wu, C. Leng, Y. Wang, Q. Hu, and J. Cheng. Quantized convolutional neural networks for mobile devices. In CVPR, pages 4820–4828, 2016.
[23] W. Wen, C. Wu, Y. Wang, Y. Chen, and H. Li. Learning
structured sparsity in deep neural networks. In NIPS, 2016.
[24] S. Changpinyo, M. Sandler, and A. Zhmoginov. The power
of sparsity in convolutional neural networks. arXiv
preprint arXiv:1702.06257, 2017.
[25] H. Zhou, J. M. Alvarez, and F. Porikli. Less is more: Towards compact CNNs. In ECCV, 2016.
[26] S. Han, H. Mao, and W. J. Dally. Deep compression: Compressing deep neural networks with pruning, trained quantization and Huffman coding. In ICLR, 2016.
[27] Y. Guo, A. Yao, and Y. Chen. Dynamic network surgery for efficient DNNs. In NIPS, pages 1379–1387, 2016.
[28] K. Simonyan and A. Zisserman. Very deep convolutional
networks for large-scale image recognition. In ICLR,
2015.
[29] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual
learning for image recognition. In CVPR, 2016.
[30] J. Jin, Z. Yan, K. Fu, N. Jiang, and C. Zhang. Neural network architecture optimization through submodularity and supermodularity. arXiv preprint arXiv:1609.00074, 2016.
[31] B. Zoph and Q. V. Le. Neural architecture search with
reinforcement learning. In ICLR, 2017.
[32] B. Baker, O. Gupta, N. Naik, and R. Raskar. Designing
neural network architectures using reinforcement learning.
In ICLR, 2017.
[33] R. Tkachenko and I. Izonin. Model and principles for the implementation of neural-like structures based on geometric data transformations. In Advances in Computer Science for Engineering and Education (ICCSEEA 2018), Advances in Intelligent Systems and Computing, vol. 754, pages 578–587. Springer, Cham, 2018.
[34] I. Izonin, R. Tkachenko, N. Kryvinska, P. Tkachenko, and M. Greguš. Multiple linear regression based on coefficients identification using non-iterative SGTM neural-like structure. 2019. DOI: 10.1007/978-3-030-20521-8_39.
[35] Lei Wu. Numerical methods for p-regularization problems. Ph.D. dissertation, College of Mathematics and Econometrics, Hunan University, Changsha, 2013.
[36] J. Zhu and T. Hastie. Classification of gene microarrays by penalized logistic regression. Biostatistics, 5(3):427–443, 2004.
[37] M. Zechner and M. Granitzer. Accelerating K-means on the graphics processor via CUDA. In Proceedings of the 1st International Conference on Intensive Applications and Services, pages 7–15. IEEE, 2009.
[38] N. Kwak. Principal component analysis based on L1-norm maximization. IEEE Transactions on Pattern Analysis and Machine Intelligence, 30(9):1672–1680, 2008.
[39] N. Srivastava, G. Hinton, A. Krizhevsky, et al. Dropout: A simple way to prevent neural networks from overfitting. Journal of Machine Learning Research, 15(1):1929–1958, 2014.
[40] A. Krizhevsky and G. Hinton. Learning multiple layers of features from tiny images. Technical report, http://www.cs.toronto.edu/~kriz/cifar.html, 2009.
[41] Y. Netzer, T. Wang, A. Coates, A. Bissacco, B. Wu, and A. Y. Ng. Reading digits in natural images with unsupervised feature learning. In NIPS Workshop on Deep Learning and Unsupervised Feature Learning, http://ufldl.stanford.edu/housenumbers/, 2011.
[42] I. Goodfellow, D. Warde-Farley, M. Mirza, A. Courville, and Y. Bengio. Maxout networks. In ICML, 2013.
[43] G. Huang, Y. Sun, Z. Liu, D. Sedra, and K. Q. Weinberger. Deep networks with stochastic depth. In ECCV, 2016.
[44] M. Lin, Q. Chen, and S. Yan. Network in network. In ICLR, 2014.
[45] Y. LeCun, C. Cortes, and C. J. C. Burges. The MNIST database of handwritten digits. http://yann.lecun.com/exdb/mnist/