
Computer Engineering ›› 2022, Vol. 48 ›› Issue (12): 134-139,149. doi: 10.19678/j.issn.1000-3428.0063354

• Artificial Intelligence and Pattern Recognition •

A Fast and Progressive Convolutional Neural Architecture Search Algorithm

ZHAO Liang, FANG Wei   

  1. School of Artificial Intelligence and Computer Science, Jiangnan University, Wuxi, Jiangsu 214122, China
  • Received: 2021-11-25 Revised: 2021-12-27 Published: 2021-12-28

  • About the authors: ZHAO Liang (b. 1997), male, Master's student; his research interests include neural architecture search, pattern recognition, and computer vision. FANG Wei, professor, Ph.D.
  • Funding:
    National Key R&D Program of China (2017YFC1601800); National Natural Science Foundation of China (62073155, 61673194, 62106088); Guangdong Provincial Key Laboratory Project (2020B121201001).

Abstract: Manually designing a Convolutional Neural Network (CNN) architecture is extremely difficult and demands a high level of expertise. Gradient-based differentiable search is fast and efficient, but it suffers from a large depth gap between the search and evaluation stages and from low stability. To address these issues, a Fast and Progressive Neural Architecture Search (FPNAS) algorithm that combines progressive search with a greedy index is proposed. By progressively deepening the architecture during the search stage, the searched network gradually approaches the evaluation network, narrowing the depth gap; using the greedy index as the edge-selection criterion strengthens the correlation between search and evaluation and improves the stability of the search. To reduce the high computational cost of neural architecture search, a progressive dataset-dividing method is also introduced, which trains each search stage on a different proportion of the dataset, thereby shortening the search time. Taking accuracy and search time as evaluation indicators, the experimental results show that, compared with the PDARTS and SGAS algorithms, the CNN architectures found by FPNAS are more stable and the search time is reduced by 0.19 and 0.14 GPU days, respectively. On the CIFAR-10 dataset, the highest accuracy reaches 97.7%.
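The two mechanisms named in the abstract can be sketched in a few lines. This is an illustrative sketch only, not the paper's implementation: the exact form of the greedy index and the per-stage split ratios are not given here, so the confidence-margin score and the `fraction` parameter below are assumptions.

```python
import math
import random

def stage_subset(dataset, fraction, seed=0):
    """Sample a fraction of the training data for one search stage.
    (Illustrative: the paper's actual per-stage ratios are not specified here.)"""
    rng = random.Random(seed)
    k = max(1, int(len(dataset) * fraction))
    return rng.sample(list(dataset), k)

def greedy_edge_score(alphas):
    """One plausible 'greedy index' for an edge: the confidence margin of the
    softmax over its candidate operations' architecture weights. A larger
    margin means the edge's operation choice is more certain, so that edge
    can be decided first. (Assumed form, not taken from the paper.)"""
    exps = [math.exp(a) for a in alphas]
    z = sum(exps)
    probs = sorted((e / z for e in exps), reverse=True)
    return probs[0] - probs[1]  # gap between top-1 and top-2 operations

# Usage: later search stages see more data; the most certain edge is fixed first.
subset = stage_subset(range(50000), fraction=0.25)
edges = {"e1": [3.0, 0.1, 0.1], "e2": [0.5, 0.4, 0.3]}
most_certain = max(edges, key=lambda e: greedy_edge_score(edges[e]))
```

In a DARTS-style search, each edge of the cell holds a vector of architecture weights over candidate operations; ranking edges by such a certainty score and committing the most certain ones early is the general idea behind greedy edge selection.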

Key words: deep learning, Convolutional Neural Network (CNN), differentiable architecture search, progressive architecture search, dataset dividing method



CLC Number: