[1] HE K M, ZHANG X Y, REN S Q, et al.Deep residual learning for image recognition[C]//Proceedings of IEEE Conference on Computer Vision and Pattern Recognition.Washington D.C., USA:IEEE Computer Society, 2016:770-778. [2] LIU W, ANGUELOV D, ERHAN D, et al.SSD:single shot MultiBox detector[C]//Proceedings of European Conference on Computer Vision.Berlin, Germany:Springer, 2016:21-37. [3] ZHANG C, LI P, SUN G Y, et al.Optimizing FPGA-based accelerator design for deep convolutional neural networks[C]//Proceedings of 2015 ACM/SIGDA International Symposium on Field-Programmable Gate Arrays.New York, USA:ACM Press, 2015:161-170. [4] GHAFFARI S, SHARIFIAN S.FPGA-based convolutional neural network accelerator design using high level synthesize[C]//Proceedings of the 2nd International Conference of Signal Processing and Intelligent Systems.Washington D.C., USA:IEEE Press, 2016:1-6. [5] CHEN Y H, KRISHNA T, EMER J S, et al.Eyeriss:an energy-efficient reconfigurable accelerator for deep convolutional neural networks[J].IEEE Journal of Solid-State Circuits, 2017, 52(1):127-138. [6] COURBARIAUX M, BENGIO Y, DAVID J P.Training deep neural networks with low precision multiplications[EB/OL].(2015-09-23)[2021-01-02].https://arxiv.org/pdf/1412.7024.pdf. [7] WESS M, DINAKARRAO S M P, JANTSCH A.Weighted quantization-regularization in DNNs for weight memory minimization toward HW implementation[J].IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 2018, 37(11):2929-2939. [8] GONG C, LI T, LU Y, et al.μL2Q:an ultra-low loss quantization method for DNN compression[C]//Proceedings of 2019 International Joint Conference on Neural Networks.Washington D.C., USA:IEEE Press, 2019:1-8. [9] LI H, KADAV A, DURDANOVIC I, et al.Pruning filters for efficient ConvNets[EB/OL].(2017-03-10)[2021-01-02].https://arxiv.org/pdf/1608.08710.pdf. [10] POLINO A, PASCANU R, ALISTARH D.Model compression via distillation and quantization[EB/OL].(2018-02-15)[2021-01-02].https://arxiv.org/pdf/1802.05668.pdf. [11] SHAWAHNA A, SAIT S M, EL-MALEH A.FPGA-based accelerators of deep learning networks for learning and classification:a review[J].IEEE Access, 2019, 7:7823-7859. [12] ZHAO R Z, LUK W, NIU X Y, et al.Hardware acceleration for machine learning[C]//Proceedings of 2017 IEEE Computer Society Annual Symposium on VLSI.Washington D.C., USA:IEEE Press, 2017:645-650. [13] CHEN T S, DU Z D, SUN N H, et al.DianNao:a small-footprint high-throughput accelerator for ubiquitous machine-learning[J].ACM SIGPLAN Notices, 2014, 49(4):269-284. [14] SUN S, JIANG H J, YIN M C, et al.Design of efficient CNN accelerator based on Zynq platform[C]//Proceedings of the 15th International Conference on Computer Science and Education.Washington D.C., USA:IEEE Press, 2020:489-493. [15] IOFFE S.Batch renormalization:towards reducing minibatch dependence in batch-normalized models[EB/OL].(2017-03-30)[2021-01-02].https://arxiv.org/pdf/1702.03275.pdf. [16] STANKOVIĆ I, BRAJOVIĆ M, DAKOVIĆ M, et al.Quantization in compressive sensing:a signal processing approach[J].IEEE Access, 2020, 8:50611-50625. [17] ZHOU Y L, CHEN L, XIE R, et al.Low-precision CNN model quantization based on optimal scaling factor estimation[C]//Proceedings of 2019 IEEE International Symposium on Broadband Multimedia Systems and Broadcasting.Washington D.C., USA:IEEE Press, 2019:1-5. [18] CHIEN J T, CHANG S T.M-ARY quantized neural networks[C]//Proceedings of 2020 IEEE International Conference on Multimedia and Expo.Washington D.C., USA:IEEE Press, 2020:1-6. [19] GUO K Y, SUI L Z, QIU J T, et al.Angel-Eye:a complete design flow for mapping CNN onto embedded FPGA[J].IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 2018, 37(1):35-47. [20] MAO W D, WANG J C, LIN J, et al.Methodology for efficient reconfigurable architecture of generative neural network[C]//Proceedings of 2019 IEEE International Symposium on Circuits and Systems.Washington D.C., USA:IEEE Press, 2019:1-5. [21] QIU J T, WANG J, YAO S, et al.Going deeper with embedded FPGA platform for convolutional neural network[C]//Proceedings of 2016 ACM/SIGDA International Symposium.New York, USA:ACM Press, 2016:26-35. |