Abstract: To address the question of whether the Graphics Processing Unit (GPU), a stream processor with high floating-point performance, is suitable for neural networks, this paper proposes a parallel recognition algorithm for Convolutional Neural Networks (CNNs). The algorithm adopts Compute Unified Device Architecture (CUDA) technology, defines the parallel data structures used on the device, and describes the mechanism for mapping computing tasks onto CUDA. Experimental results show that, on a GPU with the GTX200 hardware architecture, the parallel recognition algorithm achieves a peak average floating-point throughput nearly 60 times that of the serial algorithm on a CPU, indicating that stream-processor-based GPUs are better suited than CPUs to neural network applications.
Key words: stream processor; Single-Instruction Multiple-Thread (SIMT); GTX200 hardware architecture; Compute Unified Device Architecture (CUDA) technology; Convolutional Neural Networks (CNNs)
CLC number:
ZHANG Jia-Kang, CHEN Qing-Kui. CUDA Technology Based Recognition Algorithm of Convolutional Neural Networks[J]. Computer Engineering, 2010, 36(15): 179-181.