
Computer Engineering ›› 2019, Vol. 45 ›› Issue (10): 189-195. doi: 10.19678/j.issn.1000-3428.0050909

• Artificial Intelligence and Recognition Technology •


Elastic-net Regularization Multi Kernel Learning Algorithm Based on AdaBoost

REN Shengbing, XIE Ruliang   

  1. School of Software, Central South University, Changsha 410075, China
  • Received:2018-03-22 Revised:2018-06-27 Online:2019-10-15 Published:2018-09-27
  • About the authors: REN Shengbing (born 1969), male, associate professor, Ph.D.; his main research interests include pattern recognition, image processing, and trusted software. XIE Ruliang, M.S. candidate.
  • Supported by: the Independent Exploration and Innovation Project for Graduate Students of Central South University (1053320170432).

Abstract: In regularized multi-kernel learning, sparse kernel function weights lead to the loss of useful information and degraded generalization performance, while selecting all kernel functions through a non-sparse model produces more redundant information and is sensitive to noise. To address these problems, an elastic-net regularized multi-kernel learning algorithm based on the AdaBoost framework is proposed. When a basic classifier is selected at each iteration, the kernel function weights are constrained by elastic-net regularization, that is, mixed L1-norm and Lp-norm constraints. Each basic classifier is constructed from an optimal convex combination of multiple basic kernels and is integrated into the final strong classifier. Experimental results show that the proposed algorithm balances the sparsity and non-sparsity of the kernel function weights while preserving the advantages of ensemble algorithms. Compared with the L1-MKL and Lp-MKL algorithms, it obtains a classifier with higher classification accuracy in fewer iterations.
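The abstract describes the training procedure only at a high level, so the following is a minimal, self-contained Python sketch of the idea rather than the authors' implementation: an AdaBoost loop whose weak learners are SVMs trained on a precomputed convex combination of base kernels, where each candidate weight vector beta is scored under an elastic-net penalty lam1*||beta||_1 + lam2*||beta||_p^p. The base kernel set, the grid search over candidate weights (a crude stand-in for the convex optimization the paper presumably solves), and all hyper-parameter values are illustrative assumptions, not the authors' choices.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.metrics.pairwise import linear_kernel, polynomial_kernel, rbf_kernel
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=200, n_features=10, random_state=0)
y = 2 * y - 1  # AdaBoost expects labels in {-1, +1}

# Base kernels K_m (linear, degree-2 polynomial, two RBF widths) -- an assumed set.
kernel_fns = [
    lambda A, B: linear_kernel(A, B),
    lambda A, B: polynomial_kernel(A, B, degree=2),
    lambda A, B: rbf_kernel(A, B, gamma=0.1),
    lambda A, B: rbf_kernel(A, B, gamma=1.0),
]
K_train = [k(X, X) for k in kernel_fns]
M = len(K_train)

lam1, lam2, p = 0.1, 0.1, 2.0  # elastic-net hyper-parameters (illustrative values)
T = 10                         # number of boosting rounds

# Candidate kernel-weight vectors: sparse unit vectors plus random non-negative
# mixtures; a toy substitute for optimizing the weights under the elastic-net term.
candidates = list(np.eye(M)) + [rng.dirichlet(np.ones(M)) for _ in range(10)]

D = np.full(len(y), 1.0 / len(y))  # AdaBoost sample weights
ensemble = []                      # list of (alpha_t, classifier_t, beta_t)

for t in range(T):
    best = None
    for beta in candidates:
        K = sum(b * Km for b, Km in zip(beta, K_train))  # combined kernel
        clf = SVC(C=1.0, kernel="precomputed").fit(K, y, sample_weight=D)
        pred = clf.predict(K)
        err = float(D @ (pred != y))                     # weighted training error
        # Selection objective: error + lam1*||beta||_1 + lam2*||beta||_p^p
        obj = err + lam1 * np.abs(beta).sum() + lam2 * (np.abs(beta) ** p).sum()
        if best is None or obj < best[0]:
            best = (obj, err, beta, clf, pred)
    _, err, beta, clf, pred = best
    err = np.clip(err, 1e-10, 1 - 1e-10)
    if err >= 0.5:                     # no weak learner better than chance: stop
        break
    alpha = 0.5 * np.log((1 - err) / err)
    D *= np.exp(-alpha * y * pred)     # re-weight samples, emphasizing mistakes
    D /= D.sum()
    ensemble.append((alpha, clf, beta))

def predict(X_new):
    """Strong classifier: sign of the alpha-weighted vote of the weak learners."""
    score = np.zeros(len(X_new))
    for alpha, clf, beta in ensemble:
        K_new = sum(b * k(X_new, X) for b, k in zip(beta, kernel_fns))
        score += alpha * clf.predict(K_new)
    return np.sign(score)

print("training accuracy:", np.mean(predict(X) == y))

In this sketch the elastic-net term only re-ranks a fixed candidate set; the paper's contribution presumably lies in solving for the kernel weights directly under the mixed L1/Lp constraint at each boosting round, which would replace the inner grid search.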

Key words: ensemble learning, multi kernel learning, elastic-net regularization, weak classifier, sparsity

CLC Number: