
Computer Engineering ›› 2008, Vol. 34 ›› Issue (5): 254-256. doi: 10.3969/j.issn.1000-3428.2008.05.089

• Engineering Application Technology and Implementation •


Optimized Digital VLSI Design for Computation Components of Neural Network

LI Ang, WU Wei, QIAN Yi, WANG Qin   

  1. (Information Engineering School, University of Science & Technology Beijing, Beijing 100083)
  • Received:1900-01-01 Revised:1900-01-01 Online:2008-03-05 Published:2008-03-05

Abstract: In digital VLSI implementations of neural networks, computation components such as the activation function and the multiply-accumulate unit are the difficult parts of the design. Unlike traditional approaches built from multipliers and adders, the LMN method proposed in this paper starts from a look-up table (the function's truth table): logic-minterm minimization extracts the simplest logic expression of the function, from which a gate-level circuit with a regular structure can be generated directly, and apart from wire delay the circuit incurs only a few gate delays. The method is introduced using nonlinear functions as the example. Results show that when the fixed-point word length is small, the method achieves better performance in both speed and error.

Keywords: neural network, VLSI design, nonlinear function, logic minimization

Abstract: The design of computation components is a difficult part of the VLSI implementation of neural networks. Differing from traditional methods that use multipliers and adders, the LMN method proposed in this paper uses logic minimization to compress the function's look-up table (truth table) effectively and produces the simplest logic expressions. These expressions can be translated straightforwardly into gate-level circuits with a regular structure and only a few gate delays. Nonlinear activation functions are taken as the example to introduce the method. Results show that with a limited number of fixed-point bits, this method achieves better performance in speed and error.
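The pipeline described in the abstract — tabulate a fixed-point function as a truth table, minimize each output bit's logic, and read off a two-level gate circuit — can be sketched as follows. This is a minimal illustration, not the authors' LMN tool: the 4-bit sigmoid quantization, the input range [-4, 4), and the use of a simple Quine-McCluskey pass with a greedy cover are all assumptions made here for demonstration.

```python
# Sketch of LUT-plus-logic-minimization (hypothetical parameters, not
# the paper's exact LMN implementation): build the truth table of a
# quantized sigmoid, then minimize each output bit to a sum-of-products
# cover, where each implicant string ('0'/'1'/'-') is one AND gate.
import math
from itertools import combinations

IN_BITS, OUT_BITS = 4, 4  # assumed word lengths

def sigmoid_fixed(x_code):
    # Interpret the 4-bit code as x in [-4, 4); return a 4-bit output code.
    x = (x_code - 2 ** (IN_BITS - 1)) * (8.0 / 2 ** IN_BITS)
    y = 1.0 / (1.0 + math.exp(-x))
    return min(int(y * 2 ** OUT_BITS), 2 ** OUT_BITS - 1)

def combine(a, b):
    # Merge two implicants that differ in exactly one specified bit.
    diff = [i for i, (p, q) in enumerate(zip(a, b)) if p != q]
    if len(diff) == 1 and '-' not in (a[diff[0]], b[diff[0]]):
        i = diff[0]
        return a[:i] + '-' + a[i + 1:]
    return None

def prime_implicants(minterms):
    # Repeatedly merge adjacent implicants; unmerged ones are prime.
    terms = {format(m, f'0{IN_BITS}b') for m in minterms}
    primes = set()
    while terms:
        merged, used = set(), set()
        for a, b in combinations(sorted(terms), 2):
            c = combine(a, b)
            if c:
                merged.add(c)
                used.update((a, b))
        primes |= terms - used
        terms = merged
    return primes

def covers(imp, m):
    return all(c == '-' or c == b
               for c, b in zip(imp, format(m, f'0{IN_BITS}b')))

def minimize(minterms):
    # Greedy cover: pick the prime implicant covering the most
    # still-uncovered minterms until all are covered.
    primes, cover, left = prime_implicants(minterms), [], set(minterms)
    while left:
        best = max(primes, key=lambda p: sum(covers(p, m) for m in left))
        cover.append(best)
        left -= {m for m in left if covers(best, m)}
    return cover

# One minimized SOP cover per output bit of the look-up table.
for bit in range(OUT_BITS):
    ms = [x for x in range(2 ** IN_BITS) if (sigmoid_fixed(x) >> bit) & 1]
    print(f'y{bit}:', minimize(ms) if ms else 'constant 0')
```

Each string in a cover maps to one AND gate over the non-'-' inputs, and the strings of a cover feed one OR gate, giving the fixed two-level depth (plus wiring) that the abstract refers to; a production flow would use a full minimizer rather than this greedy cover.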

Key words: neural network, VLSI, nonlinear function, logic minimization
