
Computer Engineering ›› 2019, Vol. 45 ›› Issue (9): 188-193. doi: 10.19678/j.issn.1000-3428.0052195

• Artificial Intelligence and Recognition Technology •

Convolutional Neural Network Compression Method Based on LZW Encoding

LIU Chongyang, LIU Qinrang

  1. China National Digital Switching System Engineering and Technological R & D Center, Zhengzhou 450002, China
  • Received: 2018-07-24  Revised: 2018-09-14  Online: 2019-09-15  Published: 2019-09-03
  • About the authors: LIU Chongyang (1994-), male, M.S. candidate; his research interests include artificial intelligence and deep learning. LIU Qinrang, research fellow, Ph.D.
  • Funding:
    National Science and Technology Major Project of China (2016ZX01012101); National Natural Science Foundation of China (61572520, 61521003).

Convolutional Neural Network Compression Method Based on LZW Encoding

LIU Chongyang, LIU Qinrang   

  1. China National Digital Switching System Engineering and Technological R & D Center, Zhengzhou 450002, China
  • Received: 2018-07-24  Revised: 2018-09-14  Online: 2019-09-15  Published: 2019-09-03

Abstract: To address the problem that Convolutional Neural Networks (CNN) are difficult to port to embedded platforms because of their large number of parameters, a CNN compression method based on LZW encoding is proposed. The model size is compressed by two methods: floating-point to fixed-point conversion and pruning. The weights are quantized by k-means clustering, and LZW encoding is applied on this basis. Experiments on the MNIST dataset show that pruning achieves better compression than floating-point to fixed-point conversion, and that applying LZW encoding after pruning and quantization reaches a compression ratio of 25.338.

Keywords: Convolutional Neural Network (CNN), LZW encoding, floating-point to fixed-point conversion, model pruning, k-means clustering quantization

Abstract: Aiming at the problem that Convolutional Neural Networks (CNN) have a large number of parameters and are difficult to transplant to embedded platforms, a CNN compression method based on Lempel-Ziv-Welch (LZW) encoding is proposed. The model size is compressed in two ways: floating-point to fixed-point conversion and pruning. The weights are quantized by k-means clustering, and LZW encoding is performed on this basis. Experimental results on the MNIST dataset show that pruning achieves better compression than floating-point to fixed-point conversion, and that applying LZW encoding after pruning and quantization reaches a compression ratio of 25.338.

Key words: Convolutional Neural Network (CNN), Lempel-Ziv-Welch (LZW) encoding, floating-point to fixed-point conversion, model pruning, k-means clustering quantization
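The pipeline named in the abstract — k-means quantization of the weights, followed by LZW coding of the resulting cluster-index stream — can be sketched as below. This is a minimal illustration under stated assumptions, not the paper's implementation: the function names, the default cluster count k=16, and the Lloyd's-iteration details are the editor's choices for the sake of a runnable example.

```python
import numpy as np

def kmeans_quantize(weights, k=16, iters=20, seed=0):
    """Lloyd's k-means on the flattened weights.
    Returns (cluster index per weight, codebook of k centroids)."""
    rng = np.random.default_rng(seed)
    w = weights.ravel().astype(float)
    centroids = rng.choice(w, size=k, replace=False)
    for _ in range(iters):
        # Assign each weight to its nearest centroid, then recompute means.
        idx = np.abs(w[:, None] - centroids[None, :]).argmin(axis=1)
        for j in range(k):
            members = w[idx == j]
            if members.size:
                centroids[j] = members.mean()
    return idx, centroids

def lzw_encode(symbols, alphabet_size):
    """Classic LZW over integer symbols in [0, alphabet_size)."""
    table = {(s,): s for s in range(alphabet_size)}
    next_code = alphabet_size
    out, seq = [], ()
    for s in symbols:
        cand = seq + (s,)
        if cand in table:
            seq = cand            # extend the current phrase
        else:
            out.append(table[seq])  # emit code for the known prefix
            table[cand] = next_code  # learn the new phrase
            next_code += 1
            seq = (s,)
    if seq:
        out.append(table[seq])
    return out

def lzw_decode(codes, alphabet_size):
    """Inverse of lzw_encode; handles the standard cScSc corner case."""
    table = {s: (s,) for s in range(alphabet_size)}
    next_code = alphabet_size
    prev = table[codes[0]]
    out = list(prev)
    for c in codes[1:]:
        entry = table[c] if c in table else prev + (prev[0],)
        out.extend(entry)
        table[next_code] = prev + (entry[0],)
        next_code += 1
        prev = entry
    return out
```

After quantization every weight equals one of k codebook values, so the network can be stored as the small codebook plus the index stream; because pruning maps many weights to the same (zero) cluster, that stream is highly repetitive, which is what makes LZW coding effective on it.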
