
Computer Engineering (计算机工程) ›› 2010, Vol. 36 ›› Issue (15): 179-181. doi: 10.3969/j.issn.1000-3428.2010.15.063

• Artificial Intelligence and Recognition Technology •

CUDA Technology Based Recognition Algorithm of Convolutional Neural Networks

ZHANG Jia-kang, CHEN Qing-kui

  1. (School of Optical-Electrical and Computer Engineering, University of Shanghai for Science and Technology, Shanghai 200093)
  • Online: 2010-08-05  Published: 2010-08-25
  • About the authors: ZHANG Jia-kang (b. 1985), male, M.S. candidate; main research interest: parallel computing. CHEN Qing-kui, professor and doctoral supervisor.
  • Funding:
    National Natural Science Foundation of China (60573108); Shanghai Municipal Education Commission Development Fund (09YZ428); Shanghai Municipal Education Commission Scientific Research Innovation Fund Key Project (08ZZ76); Shanghai Leading Academic Discipline Construction Project (S30501)

Abstract: To address the question of whether the Graphics Processing Unit(GPU), a stream processor with high floating-point computing performance, is suitable for neural networks, this paper proposes a parallel recognition algorithm for Convolutional Neural Networks(CNNs). The algorithm adopts Compute Unified Device Architecture(CUDA) technology, defines parallel data structures on it, and describes the mechanism for mapping computing tasks onto CUDA. Experimental results show that the peak average floating-point throughput of the parallel recognition algorithm implemented on a GPU with the GTX200 hardware architecture is nearly 60 times that of the serial algorithm on a CPU, making the GPU better suited to related neural network applications.

Key words: stream processor, Single-Instruction Multiple-Thread(SIMT), GTX200 hardware architecture, Compute Unified Device Architecture(CUDA) technology, Convolutional Neural Networks(CNNs)
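The abstract describes mapping a CNN's computing tasks onto CUDA. As rough orientation only (the paper's actual kernels and network configuration are not reproduced here), the sketch below gives a serial C++ reference of one convolutional layer, with comments noting the thread mapping a CUDA version would typically use; all sizes, names, and the sigmoid activation are illustrative assumptions, not taken from the paper.

```cpp
#include <cmath>
#include <vector>

// Serial reference of one convolutional layer (valid convolution + sigmoid),
// the kind of step the paper parallelizes on the GPU. In a typical CUDA
// mapping, each output pixel is computed by one thread, with thread blocks
// tiling the output feature map.
std::vector<float> conv_layer(const std::vector<float>& in, int in_w, int in_h,
                              const std::vector<float>& kernel, int k,
                              float bias) {
    int out_w = in_w - k + 1;
    int out_h = in_h - k + 1;
    std::vector<float> out(static_cast<size_t>(out_w) * out_h);
    for (int y = 0; y < out_h; ++y) {       // CUDA analogue: blockIdx.y/threadIdx.y
        for (int x = 0; x < out_w; ++x) {   // CUDA analogue: blockIdx.x/threadIdx.x
            float s = bias;
            for (int i = 0; i < k; ++i)     // accumulate the k*k window
                for (int j = 0; j < k; ++j)
                    s += kernel[i * k + j] * in[(y + i) * in_w + (x + j)];
            out[y * out_w + x] = 1.0f / (1.0f + std::exp(-s)); // sigmoid
        }
    }
    return out;
}
```

Because every output pixel is independent, the two outer loops are what the GPU executes concurrently; the 60x figure in the abstract reflects this data parallelism on the GTX200 architecture.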
