
Computer Engineering ›› 2009, Vol. 35 ›› Issue (17): 172-174. doi: 10.3969/j.issn.1000-3428.2009.17.059

• Artificial Intelligence and Recognition Technology •


Improvement on NPKMR Method Based on Norm-r Loss Function

ZHANG Ling, ZHU Jia-gang   

  1. (School of Information Technology, Jiangnan University, Wuxi 214122)
  • Received:1900-01-01 Revised:1900-01-01 Online:2009-09-05 Published:2009-09-05


Abstract: In the Non-Positive definite Kernel Machine Regression (NPKMR) method, only the total regression error is minimized while the regression error of each individual sample is ignored, so the accuracy and generalization performance of NPKMR are unsatisfactory. To improve both, this paper proposes constraining the regression error of each sample in addition to the total regression error, by introducing a norm-r loss function and slack variables. Experimental results show that the improvement on the NPKMR method is effective and feasible.
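To illustrate the idea of penalizing each sample's regression error through a norm-r loss with slack variables, the following is a minimal sketch, not the authors' exact NPKMR formulation. Assumptions beyond the abstract: an indefinite sigmoid kernel tanh(a·x·z + b) stands in for the non-positive definite kernel, the per-sample loss is the epsilon-insensitive form max(0, |e| − ε)^r (so the slack ξ = max(0, |e| − ε)), a plain ridge penalty on the coefficients replaces the paper's regularizer, and the hyperparameter values are arbitrary.

```python
import numpy as np
from scipy.optimize import minimize

# Toy 1-D regression data: noisy sine samples.
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(40, 1))
y = np.sin(X).ravel() + 0.1 * rng.standard_normal(40)

def sigmoid_kernel(A, B, a=0.5, b=-0.2):
    # tanh kernel: a classic example of a non-positive definite kernel.
    return np.tanh(a * A @ B.T + b)

K = sigmoid_kernel(X, X)
r, eps, C = 1.5, 0.05, 10.0   # norm-r exponent, insensitive zone, trade-off

def objective(alpha):
    # f(x_i) = sum_j alpha_j k(x_i, x_j); slack_i is the amount by which
    # sample i's absolute error exceeds eps, penalized to the power r.
    resid = np.abs(y - K @ alpha)
    slack = np.maximum(0.0, resid - eps)          # per-sample slack variable
    return 0.5 * alpha @ alpha + C * np.sum(slack ** r)

res = minimize(objective, np.zeros(len(y)), method="L-BFGS-B")
alpha = res.x
train_rmse = np.sqrt(np.mean((K @ alpha - y) ** 2))
```

Because every sample's slack enters the objective, no single point's error can grow large without being penalized, which is the mechanism the abstract credits for the improved accuracy and generalization over minimizing the total error alone.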

Key words: machine regression, non-positive definite kernel, norm-r loss function, slack variable
