Abstract:
In the Non-Positive Kernel Machine Regression (NPKMR) method, only the total regression error is constrained while the regression error of each individual sample is ignored, so its accuracy and generalization performance are unsatisfactory. To improve both, this paper proposes constraining each sample's regression error in addition to the total regression error, introducing a norm-r loss function and slack variables for this purpose. Experimental results show that this improvement on the NPKMR method is effective and feasible, raising regression accuracy and generalization performance.
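A minimal sketch of what such a per-sample constraint could look like, assuming an SVR-style formulation with a regularizer $\Omega(f)$, trade-off constant $C$, and insensitivity width $\varepsilon$ (these symbols are assumptions for illustration, not quoted from the paper):

$$\min_{f,\;\xi}\;\; \Omega(f) + C\sum_{i=1}^{l}\xi_i^{\,r}\qquad \text{s.t.}\quad |y_i - f(x_i)| \le \varepsilon + \xi_i,\;\; \xi_i \ge 0,\; i=1,\dots,l$$

Here each slack variable $\xi_i$ bounds the regression error of the $i$-th sample, and the norm-$r$ term $\sum_i \xi_i^{\,r}$ penalizes these per-sample violations rather than only the aggregate error.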
Key words:
machine regression,
non-positive kernel,
norm-r loss function,
slack variable
ZHANG Ling; ZHU Jia-gang. Improvement on NPKMR Method Based on Norm-r Loss Function[J]. Computer Engineering, 2009, 35(17): 172-174.