Computer Engineering ›› 2019, Vol. 45 ›› Issue (10): 203-207, 214. doi: 10.19678/j.issn.1000-3428.0052466

• Artificial Intelligence and Recognition Technology •

Analysis of Individual Convergence Bound for Gradient-Biased Stochastic DA Optimization Method

ZHANG Menghan (a), WANG Hai (a), LIU Xin (b), BAO Lei (a)

  1. a. Department of Information Engineering; b. Department of Basic Courses, PLA Army Academy of Artillery and Air Defense, Hefei 230031, China
  • Received: 2018-08-22  Revised: 2018-09-28  Online: 2019-10-15  Published: 2018-11-09
  • About the authors: ZHANG Menghan (born 1994), male, M.S. candidate, research interests: machine learning and pattern recognition; WANG Hai, M.S. candidate; LIU Xin, professor; BAO Lei, lecturer, Ph.D.
  • Fund program: National Natural Science Foundation of China (61673394).



Abstract: When samples do not satisfy the independent and identically distributed assumption, the gradient estimation becomes biased during the iterative process, and the optimal individual convergence bound cannot be guaranteed under the interference of noise. To address this, a linear-interpolation stochastic Dual Averaging (DA) optimization method is proposed. A proof of convergence of the DA method is given, and, under biased gradient estimation, an individual convergence bound in which the bias does not accumulate is derived for the linear-interpolation stochastic DA method, ensuring the individual convergence accuracy of the optimization method under the regularized loss function structure. Experimental results show that, compared with stochastic accelerated methods, the proposed method achieves a faster individual convergence rate and higher convergence accuracy.
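For background, the Dual Averaging (DA) scheme the abstract builds on maintains a running sum of (sub)gradients and maps it back to the primal space through a strongly convex prox term. The sketch below shows plain, unaccelerated DA in the scalar case with an l2 prox term (which gives the primal step a closed form), not the paper's linear-interpolation variant; the names `dual_averaging` and `gamma` are illustrative assumptions, not from the paper.

```python
import math

def dual_averaging(grad, w0, T, gamma=1.0):
    """Plain Dual Averaging sketch (scalar case).

    At step t, accumulate the (possibly noisy) subgradient into z,
    then solve  w_{t+1} = argmin_w  z*w + (gamma*sqrt(t)/2)*w**2,
    whose closed form is  w_{t+1} = -z / (gamma*sqrt(t)).
    """
    w, z = w0, 0.0
    for t in range(1, T + 1):
        z += grad(w)                      # running sum of subgradients
        w = -z / (gamma * math.sqrt(t))   # closed-form primal update
    return w

# Example: minimize f(w) = (w - 1)^2; the last iterate drifts toward 1.
w_final = dual_averaging(lambda w: 2.0 * (w - 1.0), 0.0, 5000)
```

Note that `w_final` is the last iterate, not an average over iterates; the paper's individual convergence analysis concerns exactly this last-iterate behavior, and its contribution is bounding it when the gradient estimate is biased.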

Key words: Dual Averaging (DA) method, stochastic optimization, individual convergence, gradient-biased estimation, optimal convergence rate
