[1] GUPTA S,AGRAWAL A,GOPALAKRISHNAN K,et al.Deep learning with limited numerical precision[C]//Proceedings of International Conference on Machine Learning.New York,USA:ACM Press,2015:1737-1746. [2] COURBARIAUX M,BENGIO Y,DAVID J P.BinaryConnect:training deep neural networks with binary weights during propagation[C]//Proceedings of International Conference on Neural Information Processing Systems.Cambridge,USA:MIT Press,2015:3123-3131. [3] COURBARIAUX M,HUBARA I,SOUDRY D,et al.Binarized neural networks:training deep neural networks with weights and activations constrained to +1 or -1[EB/OL].[2020-02-10].https://arxiv.org/abs/1602.02830. [4] RASTEGARI M,ORDONEZ V,REDMON J,et al.XNOR-Net:ImageNet classification using binary convolutional neural networks[C]//Proceedings of European Conference on Computer Vision.Berlin,Germany:Springer,2016:525-542. [5] CAI Zhaowei,HE Xiaodong,SUN Jian,et al.Deep learning with low precision by half-wave Gaussian quantization[C]//Proceedings of IEEE Conference on Computer Vision and Pattern Recognition.Washington D.C.,USA:IEEE Press,2017:5918-5926. [6] LI Fengfu,ZHANG Bo,LIU Bin.Ternary weight networks[EB/OL].[2020-02-10].https://arxiv.org/abs/1605.04711. [7] ZHU Chenzhuo,HAN Song,MAO Huizi,et al.Trained ternary quantization[EB/OL].[2020-02-10].https://arxiv.org/pdf/1612.01064.pdf. [8] ZHOU Shuchang,WU Yuxin,NI Zekun,et al.Dorefa-Net:training low bitwidth convolutional neural networks with low bitwidth gradients[EB/OL].[2020-02-10].https://arxiv.org/pdf/1606.06160.pdf. [9] ZHOU Aojun,YAO Anbang,GUO Yiwen,et al.Incremental network quantization:towards lossless CNNs with low-precision weights[EB/OL].[2020-02-10].https://arxiv.org/pdf/1702.03044.pdf. [10] HU Qinghao,WANG Peisong,CHENG Jian.From hashing to CNNs:training binary weight networks via hashing[C]//Proceedings of the 32nd AAAI Conference on Artificial Intelligence.Palo Alto,USA:AAAI Press,2018:3247-3254. [11] LIN D D,TALATHI S S.Overcoming challenges in fixed point training of deep convolutional networks[EB/OL].[2020-02-10].https://arxiv.org/abs/1607.02241. [12] GLOROT X,BORDES A,BENGIO Y.Deep sparse rectifier neural networks[C]//Proceedings of the 14th International Conference on Artificial Intelligences and Statistics.Washington D.C.,USA:IEEE Press,2011:315-323. [13] LLOYD S.Least squares quantization in PCM[J].IEEE Transactions on Information Theory,1982,28(2):129-137. [14] IOFFE S,SZEGEDY C.Batch normalization:accelerating deep network training by reducing internal covariate shift[EB/OL].[2020-02-10].https://arxiv.org/pdf/1502.03167.pdf. [15] PASCANU R,MIKOLOV T,BENGIO Y.On the difficulty of training recurrent neural networks[C]//Proceedings of International Conference on Machine Learning.Washington D.C.,USA:IEEE Press,2013:1310-1318. [16] LI Zefan,NI Bingbing.Performance guaranteed network acceleration via high-order residual quantization[EB/OL].[2020-02-10].https://arxiv.org/abs/1708.08687. [17] WU Jiaxiang,LENG Cong,WANG Yuhang,et al.Quantized convolutional neural networks for mobile devices[C]//Proceedings of IEEE Conference on Computer Vision and Pattern Recognition.Washington D.C.,USA:IEEE Press,2016:4820-4828. [18] SHEN Fumin,SHEN Chunhua,LIU Wei,et al.Supervised discrete hashing[C]//Proceedings of IEEE Computer Vision and Pattern Recognition.Washington D.C.,USA:IEEE Press,2015:37-45. [19] RUSSAKOVSKY O,DENG J,SU H,et al.ImageNet large scale visual recognition challenge[J].International Journal of Computer Vision,2015,115(3):211-252. [20] KRIZHEVSKY A,SUTSKEVER I,HINTON G E.ImageNet classification with deep convolutional neural networks[J].Communications of the ACM,2017,60(6):84-90. [21] KINGMA D P,BA J.Adam:a method for stochastic optimization[EB/OL].[2020-02-10].https://arxiv.org/pdf/1412.6980.pdf. |