Computer Engineering ›› 2019, Vol. 45 ›› Issue (1): 192-198, 205. doi: 10.19678/j.issn.1000-3428.0048673

• Artificial Intelligence and Recognition Technology •

Variable Samples Learning Least Squares Support Vector Machine Algorithm

JIA Erkenbieke, YUAN Jie

  1. School of Electrical Engineering, Xinjiang University, Urumqi 830047, China
  • Received: 2017-09-14  Online: 2019-01-15  Published: 2019-01-15
  • About the authors: JIA Erkenbieke (born 1990), male, M.S. candidate; his research interests include machine learning and pattern recognition. YUAN Jie (corresponding author), associate professor, Ph.D.
  • Foundation items: National Natural Science Foundation of China (61863033); Natural Science Foundation of Xinjiang Uygur Autonomous Region (2016D01C032)

Abstract:

To increase the sparseness of the solution of the Least Squares Support Vector Machine (LS-SVM) algorithm and to improve its computational efficiency, a variable samples learning LS-SVM algorithm is proposed. Some samples are randomly selected from the training set as the initial working set, and the training process is divided into two stages: sample increment and sample reduction. In the sample increment stage, specific samples are selected according to the KKT conditions, added to the working set, and trained. In the sample reduction stage, a Negative Slack Variable Pruning Strategy (NSVPS) and a Dual Objective Function Deviation Pruning Strategy (DOFDPS) are applied to prune the working set. On this basis, the samples remaining in the working set are used to construct the classifier. Experimental results show that, compared with the SMO, SMO-new, and ISLS-SVM algorithms, the proposed algorithm achieves higher sparsity and faster operation with no loss of precision.

Key words: Least Squares Support Vector Machine (LS-SVM), sparseness, variable samples learning, pre-pruning, KKT condition
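The two-stage training loop outlined in the abstract can be illustrated with a minimal sketch. This is not the paper's implementation: the margin-violation rule used for sample increment stands in for the paper's KKT-based selection, and pruning the samples with the smallest |α| stands in for the NSVPS/DOFDPS strategies; the RBF kernel width, regularization value, batch size, and keep ratio are illustrative assumptions. Only the standard LS-SVM linear system for (α, b) is taken as given.

```python
import numpy as np

def rbf_kernel(A, B, sigma=1.0):
    # Gaussian (RBF) kernel matrix between row-sample matrices A and B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def lssvm_fit(X, y, gamma=10.0):
    """Solve the standard LS-SVM linear system for (alpha, b)."""
    n = len(y)
    K = rbf_kernel(X, X)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gamma   # K + I/gamma block
    rhs = np.concatenate(([0.0], y))
    sol = np.linalg.solve(A, rhs)
    return sol[1:], sol[0]              # alpha, b

def decision(Xw, alpha, b, X):
    # Decision values of the model (working set Xw, coefficients alpha, bias b) on X.
    return rbf_kernel(X, Xw) @ alpha + b

def variable_sample_lssvm(X, y, init=10, batch=10, keep_ratio=0.9):
    # Random initial working set, then alternate increment and reduction stages.
    idx = np.random.permutation(len(y))
    work, rest = list(idx[:init]), list(idx[init:])
    alpha, b = lssvm_fit(X[work], y[work])
    while rest:
        # Increment stage: add the unseen samples that violate the margin most
        # (a stand-in for the paper's KKT-condition-based selection).
        viol = 1.0 - y[rest] * decision(X[work], alpha, b, X[rest])
        chosen = set(np.argsort(-viol)[:batch].tolist())
        work += [s for i, s in enumerate(rest) if i in chosen]
        rest = [s for i, s in enumerate(rest) if i not in chosen]
        alpha, b = lssvm_fit(X[work], y[work])
        # Reduction stage: prune the samples with the smallest |alpha|
        # (a stand-in for the paper's NSVPS and DOFDPS pruning strategies).
        keep = np.argsort(-np.abs(alpha))[: int(len(work) * keep_ratio)]
        work = [work[i] for i in keep]
        alpha, b = lssvm_fit(X[work], y[work])
    return X[work], alpha, b
```

A short usage example: on two well-separated Gaussian clusters, the loop trains on a pruned working set that is smaller than the full training set, which is the sparsity effect the abstract describes.

```python
np.random.seed(0)
X = np.vstack([np.random.randn(30, 2) + 3, np.random.randn(30, 2) - 3])
y = np.concatenate([np.ones(30), -np.ones(30)])
Xw, alpha, b = variable_sample_lssvm(X, y)
pred = np.sign(decision(Xw, alpha, b, X))
print(len(Xw), (pred == y).mean())
```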

CLC number: