Computer Engineering, 2019, Vol. 45, Issue (7): 208-211  DOI: 10.19678/j.issn.1000-3428.0050800

Cite this article

ZHUANG Lichun, ZHANG Zhengjun, ZHANG Naijin, et al. Improved UDEED Algorithm Based on Nonlinear Logistic Model[J]. Computer Engineering, 2019, 45(7): 208-211. DOI: 10.19678/j.issn.1000-3428.0050800.

Funding

National Natural Science Foundation of China (61773014)

Corresponding author

ZHANG Zhengjun (corresponding author), Associate Professor, Ph.D.

About the authors

ZHUANG Lichun (b. 1994), female, M.S. candidate; her main research interest is data mining. E-mail: zlc@njust.edu.cn;
ZHANG Naijin, M.S. candidate;
LI Jundi, M.S. candidate

Article history

Received: 2018-03-15
Revised: 2018-05-04
Improved UDEED Algorithm Based on Nonlinear Logistic Model
ZHUANG Lichun, ZHANG Zhengjun, ZHANG Naijin, LI Jundi
School of Science, Nanjing University of Science and Technology, Nanjing 210094, China
Abstract: To address the problem that the linear Logistic model in the UDEED algorithm has poor classification prediction accuracy, an improved nonlinear Logistic model algorithm with a polynomial kernel is proposed based on Taylor expansion. The estimation method for the kernel function parameters of the nonlinear Logistic model is studied, the calculation rules of the loss function are updated, and the improved UDEED model is solved by gradient descent to classify and predict the datasets. Experimental results show that, compared with the UDEED algorithm, the improved algorithm improves the accuracy of classification prediction.
Key words: UDEED algorithm; nonlinear Logistic model; semi-supervised learning; unlabeled data; gradient descent
0 Overview

Ensemble learning is one of the hot directions in machine learning: it trains weak learners and combines them into a strong learner with better generalization ability, as in the Adaboost and Bagging algorithms[1-3]. Most ensemble learning algorithms are supervised, i.e., the labels of the training samples are known; in practical applications, however, labeled data are scarce while unlabeled data are relatively easy to obtain[4]. Studying semi-supervised learning[5-6] over unlabeled data is therefore of great significance.

Reference [7] proposed the iterative Co-training algorithm, which uses one learner to label unlabeled samples and feeds the data whose classification results have high confidence to the other learner as new samples. References [8-9] proposed the SSMBOOST and ASSEMBLE algorithms, whose basic principle is to use unlabeled data in ensemble learning to maximize the classification decision margin. Building on Co-training, reference [10] proposed the semi-supervised ensemble algorithm Tri-training, in which the newly labeled examples for each classifier are provided collaboratively by the other two classifiers, and diversity among the classifiers can be guaranteed by introducing random forests; Tri-training was later extended to Co-Forest[11]. All of these algorithms first pseudo-label the unlabeled data to enlarge the labeled training set and then update the ensemble learner. Reference [12] proposed the UDEED algorithm, whose base classifiers achieve high accuracy on labeled samples.

This paper reviews the UDEED algorithm and the Logistic function[13], constructs a polynomial-kernel nonlinear Logistic model[14-15], and on this basis proposes an improved UDEED algorithm based on the polynomial-kernel Logistic function.

1 UDEED Algorithm and the Logistic Regression Model
1.1 UDEED Algorithm

Let X = ℝ^d denote the d-dimensional input space and Y = {+1, -1} the output space. L = {(x_i, y_i) | 1 ≤ i ≤ L} contains L labeled training samples, and U = {x_i | L+1 ≤ i ≤ L+u} contains u unlabeled training samples. When the ensemble consists of m base classifiers {f_k | 1 ≤ k ≤ m}, each f_k : X → [-1, 1] maps the d-dimensional input space X to the output interval [-1, 1] and adopts the linear Logistic model:

$ {f_k}\left( x \right) = 2 \cdot \frac{1}{{1 + {{\rm{e}}^{ - \left( {\mathit{\boldsymbol{w}}_k^{\rm{T}} \times \mathit{\boldsymbol{x}} + {b_k}} \right)}}}} - 1,1 \le k \le m $ (1)
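As a minimal illustration (our own sketch, not code from the paper), Eq. (1) is simply a logistic sigmoid rescaled from (0, 1) to (-1, 1); in NumPy, with hypothetical names:

```python
import numpy as np

def base_classifier(w, b, X):
    """Eq. (1): scaled logistic base classifier with outputs in (-1, 1).

    w: (d,) weight vector; b: scalar bias; X: (n, d) matrix of samples.
    """
    return 2.0 / (1.0 + np.exp(-(X @ w + b))) - 1.0
```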

The basic idea of UDEED is to maximize the fit of the base classifiers on the labeled training set while simultaneously maximizing their diversity on the unlabeled training data. UDEED therefore produces the base classifiers f = (f_1, f_2, …, f_m) by minimizing the loss function in Eq. (2).

$ V\left( {\mathit{\boldsymbol{f}},L,D} \right) = {V_{{\rm{emp}}}}\left( {\mathit{\boldsymbol{f}},L} \right) + r \cdot {V_{{\rm{div}}}}\left( {\mathit{\boldsymbol{f}},D} \right) $ (2)

where r is the parameter balancing the empirical error of the ensemble against its diversity, and V_emp(f, L) denotes the empirical error of the ensemble on the labeled training set:

$ \begin{array}{l} {V_{{\rm{emp}}}}\left( {\mathit{\boldsymbol{f}},L} \right) = \frac{1}{m}\sum\limits_{k = 1}^m {l\left( {{f_k},L} \right)} = \\ \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\frac{1}{{mL}} \cdot \sum\limits_{k = 1}^m {\sum\limits_{i = 1}^L { - BLH\left( {{f_k}\left( {{\mathit{\boldsymbol{x}}_i}} \right),{y_i}} \right)} } \end{array} $ (3)
$ \begin{array}{l} BLH\left( f_k\left( \boldsymbol{x}_i \right),y_i \right) = \ln \left( \frac{1 + f_k\left( \boldsymbol{x}_i \right)}{2} \right)^{\frac{1 + y_i}{2}}\left( \frac{1 - f_k\left( \boldsymbol{x}_i \right)}{2} \right)^{\frac{1 - y_i}{2}} = \\ \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\; - \frac{1 + y_i}{2}\ln \left( 1 + {\rm{e}}^{ - \boldsymbol{w}_k^{\rm{T}}\boldsymbol{x}_i} \right) - \frac{1 - y_i}{2}\ln \left( 1 + {\rm{e}}^{\boldsymbol{w}_k^{\rm{T}}\boldsymbol{x}_i} \right) \end{array} $ (4)

V_div(f, D) denotes the diversity of the ensemble on the unlabeled training set, defined as:

$ {V_{{\rm{div}}}}\left( {\mathit{\boldsymbol{f}},D} \right) = \frac{2}{{m\left( {m - 1} \right)}} \cdot \sum\limits_{p = 1}^{m - 1} {\sum\limits_{q = p + 1}^m {d\left( {{f_p},{f_q},D} \right)} } $ (5)
$ d\left( {{f_p},{f_q},D} \right) = \frac{1}{{\left| D \right|}}\sum\limits_{x \in D} {{f_p}\left( \mathit{\boldsymbol{x}} \right){f_q}\left( \mathit{\boldsymbol{x}} \right)} $ (6)
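To make Eqs. (3)-(6) concrete, the following NumPy sketch (ours; the array shapes are assumptions) evaluates the empirical error and the ensemble diversity:

```python
import numpy as np
from itertools import combinations

def v_emp(F, y):
    """V_emp(f, L) of Eq. (3): negative mean of the BLH terms in Eq. (4).

    F: (m, L) array with F[k, i] = f_k(x_i) in (-1, 1); y: (L,) labels in {-1, +1}.
    """
    blh = (1 + y) / 2 * np.log((1 + F) / 2) + (1 - y) / 2 * np.log((1 - F) / 2)
    return -blh.mean()

def v_div(F_u):
    """V_div(f, D) of Eqs. (5)-(6): mean pairwise product of outputs on D.

    F_u: (m, |D|) array of base-classifier outputs on the unlabeled set.
    """
    m = F_u.shape[0]
    total = sum(np.mean(F_u[p] * F_u[q]) for p, q in combinations(range(m), 2))
    return 2.0 / (m * (m - 1)) * total
```

The full loss of Eq. (2) is then v_emp(F, y) + r * v_div(F_u).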

UDEED is solved by gradient descent, seeking the ensemble that minimizes the loss function via Eq. (7).

$ {\mathit{\boldsymbol{f}}^ * } = \mathop {\arg \min }\limits_f V\left( {\mathit{\boldsymbol{f}},L,D} \right) $ (7)

The gradient of V(f, L, D) is taken with respect to the model parameters Θ = {w_k | 1 ≤ k ≤ m}:

$ \frac{{\partial V}}{{\partial \mathit{\Theta }}} = \left[ {\frac{{\partial V}}{{\partial {\mathit{\boldsymbol{w}}_1}}},\frac{{\partial V}}{{\partial {\mathit{\boldsymbol{w}}_2}}}, \cdots ,\frac{{\partial V}}{{\partial {\mathit{\boldsymbol{w}}_m}}}} \right] $ (8)
$ \begin{array}{l} \frac{{\partial V}}{{\partial {\mathit{\boldsymbol{w}}_k}}} = - \frac{1}{{mL}}\sum\limits_{i = 1}^L {\frac{{\partial BLH\left( {{f_k}\left( {{\mathit{\boldsymbol{x}}_i}} \right),{y_i}} \right)}}{{\partial {\mathit{\boldsymbol{w}}_k}}}} + \\ \;\;\;\;\;\;\;\;\;\;\frac{{2\gamma }}{{m\left( {m - 1} \right)}}\sum\limits_{k' = 1,k' \ne k}^m {\frac{{\partial d\left( {{f_k},{f_{k'}},D} \right)}}{{\partial {\mathit{\boldsymbol{w}}_k}}}} \end{array} $ (9)
$ \begin{array}{l} \frac{{\partial BLH\left( {{f_k}\left( {{\mathit{\boldsymbol{x}}_i}} \right),{y_i}} \right)}}{{\partial {\mathit{\boldsymbol{w}}_k}}} = \left( {\frac{{\left( {1 + {y_i}} \right)\left( {1 - {f_k}\left( {{\mathit{\boldsymbol{x}}_i}} \right)} \right)}}{4} - } \right.\\ \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\left. {\frac{{\left( {1 - {y_i}} \right)\left( {1 + {f_k}\left( {{\mathit{\boldsymbol{x}}_i}} \right)} \right)}}{4}} \right) \cdot {\mathit{\boldsymbol{x}}_i} \end{array} $ (10)
$ \frac{{\partial d\left( {{f_k},{f_{k'}},D} \right)}}{{\partial {\mathit{\boldsymbol{w}}_k}}} = \frac{1}{{2\left| D \right|}} \cdot \sum\limits_{\mathit{\boldsymbol{x}} \in D} {{f_{k'}}\left( \mathit{\boldsymbol{x}} \right)} \cdot \left( {1 - {f_k}{{\left( \mathit{\boldsymbol{x}} \right)}^2}} \right) \cdot \mathit{\boldsymbol{x}} $ (11)

In UDEED, each base classifier f_k is initialized by minimizing the objective $\frac{1}{2}{\left\| {{\boldsymbol{w}_k}} \right\|^2} + \lambda \cdot \sum\limits_{i = 1}^L - BLH\left({{f_k}\left({\boldsymbol{x}_i^k} \right), y_i^k} \right)$, i.e., each base classifier is learned from a bootstrap sample L_k (λ is the parameter balancing model complexity).

1.2 Logistic Model

The classical binomial Logistic model is given by:

$ P\left( {Y = 1\left| \mathit{\boldsymbol{x}} \right.} \right) = \frac{{\exp \left( {\mathit{\boldsymbol{w}} \cdot \mathit{\boldsymbol{x}}} \right)}}{{1 + \exp \left( {\mathit{\boldsymbol{w}} \cdot \mathit{\boldsymbol{x}}} \right)}}\\ P\left( {Y = 0\left| \mathit{\boldsymbol{x}} \right.} \right) = \frac{1}{{1 + \exp \left( {\mathit{\boldsymbol{w}} \cdot \mathit{\boldsymbol{x}}} \right)}} $ (12)

where $\boldsymbol{x} = (x^{(1)}, x^{(2)}, \cdots, x^{(d)}, 1) \in \mathbb{R}^{d+1}$ is the input vector, $Y \in \{0, 1\}$ the output variable, $\boldsymbol{w} = (w^{(1)}, w^{(2)}, \cdots, w^{(d+1)}) \in \mathbb{R}^{d+1}$ the weight vector, and $\boldsymbol{w} \cdot \boldsymbol{x}$ the inner product of w and x.

When learning the Logistic model, set:

$P(Y = 1|\mathit{\boldsymbol{x}}) = \pi (\mathit{\boldsymbol{x}}),P(Y = 0|\mathit{\boldsymbol{x}}) = 1 - \pi (\mathit{\boldsymbol{x}})$ (13)

The log-likelihood function is:

$ \begin{array}{*{20}{c}} {L\left( \boldsymbol{w} \right) = \sum\limits_{i = 1}^N \left[ y_i\ln \pi \left( \boldsymbol{x}_i \right) + \left( 1 - y_i \right)\ln \left( 1 - \pi \left( \boldsymbol{x}_i \right) \right) \right] = }\\ {\sum\limits_{i = 1}^N \left[ y_i\left( \boldsymbol{w} \cdot \boldsymbol{x}_i \right) - \ln \left( 1 + \exp \left( \boldsymbol{w} \cdot \boldsymbol{x}_i \right) \right) \right]} \end{array} $ (14)

Maximizing L(w) yields the estimate of w; the problem thus becomes an optimization whose objective is the log-likelihood. The model is usually solved by gradient descent or quasi-Newton methods.
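As a sketch of this classical procedure (our own code; the gradient of Eq. (14) is $\sum_i (y_i - \pi(\boldsymbol{x}_i))\boldsymbol{x}_i$), logistic regression by gradient ascent might look like:

```python
import numpy as np

def fit_logistic(X, y, lr=0.1, n_iter=200):
    """Maximize the log-likelihood of Eq. (14) by gradient ascent.

    X: (n, d+1) inputs whose last column is 1 (absorbs the bias);
    y: (n,) labels in {0, 1}.
    """
    w = np.zeros(X.shape[1])
    for _ in range(n_iter):
        pi = 1.0 / (1.0 + np.exp(-(X @ w)))  # pi(x_i) = P(Y = 1 | x_i)
        w += lr * X.T @ (y - pi) / len(y)    # averaged gradient of Eq. (14)
    return w
```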

2 Improved UDEED Algorithm Based on the Logistic Model
2.1 Kernel-Based Nonlinear Logistic Model

Define the nonlinear Logistic model as:

$ y = \frac{1}{{1 + {{\rm{e}}^{ - G\left( {\mathit{\boldsymbol{x}};\mathit{\boldsymbol{\omega }}} \right)}}}} $ (15)

where G(x; ω) is nonlinear. Since any sufficiently smooth function can be approximated by its Taylor expansion, suppose G(x; ω) is n-times differentiable at some point $\boldsymbol{x}_k$; the Taylor expansion of the multivariate function G(x; ω) at $\boldsymbol{x}_k$ is then:

$ \begin{array}{l} f\left( \mathit{\boldsymbol{x}} \right) = f\left( {{\mathit{\boldsymbol{x}}_k}} \right) + {\left[ {\nabla f\left( {{\mathit{\boldsymbol{x}}_k}} \right)} \right]^{\rm{T}}}\left( {\mathit{\boldsymbol{x}} - {\mathit{\boldsymbol{x}}_k}} \right) + \\ \;\;\;\;\;\;\;\;\;\;\;\frac{1}{{2!}}{\left[ {\mathit{\boldsymbol{x}} - {\mathit{\boldsymbol{x}}_k}} \right]^{\rm{T}}}H\left( {{\mathit{\boldsymbol{x}}_k}} \right)\left[ {\mathit{\boldsymbol{x}} - {\mathit{\boldsymbol{x}}_k}} \right] + o\left( {{\mathit{\boldsymbol{x}}_k}} \right) \end{array} $ (16)

where:

$ H\left( {{\mathit{\boldsymbol{x}}_k}} \right) = \left[ {\begin{array}{*{20}{c}} {\frac{{{\partial ^2}f\left( {{\mathit{\boldsymbol{x}}_k}} \right)}}{{\partial \mathit{\boldsymbol{x}}_1^2}}}&{\frac{{{\partial ^2}f\left( {{\mathit{\boldsymbol{x}}_k}} \right)}}{{\partial {\mathit{\boldsymbol{x}}_1}\partial {\mathit{\boldsymbol{x}}_2}}}}& \cdots &{\frac{{{\partial ^2}f\left( {{\mathit{\boldsymbol{x}}_k}} \right)}}{{\partial {\mathit{\boldsymbol{x}}_1}\partial {\mathit{\boldsymbol{x}}_d}}}}\\ {\frac{{{\partial ^2}f\left( {{\mathit{\boldsymbol{x}}_k}} \right)}}{{\partial {\mathit{\boldsymbol{x}}_2}\partial {\mathit{\boldsymbol{x}}_1}}}}&{\frac{{{\partial ^2}f\left( {{\mathit{\boldsymbol{x}}_k}} \right)}}{{\partial \mathit{\boldsymbol{x}}_2^2}}}& \cdots &{\frac{{{\partial ^2}f\left( {{\mathit{\boldsymbol{x}}_k}} \right)}}{{\partial {\mathit{\boldsymbol{x}}_2}\partial {\mathit{\boldsymbol{x}}_d}}}}\\ \vdots & \vdots &{}& \vdots \\ {\frac{{{\partial ^2}f\left( {{\mathit{\boldsymbol{x}}_k}} \right)}}{{\partial {\mathit{\boldsymbol{x}}_d}\partial {\mathit{\boldsymbol{x}}_1}}}}&{\frac{{{\partial ^2}f\left( {{\mathit{\boldsymbol{x}}_k}} \right)}}{{\partial {\mathit{\boldsymbol{x}}_d}\partial {\mathit{\boldsymbol{x}}_2}}}}& \cdots &{\frac{{{\partial ^2}f\left( {{\mathit{\boldsymbol{x}}_k}} \right)}}{{\partial \mathit{\boldsymbol{x}}_d^2}}} \end{array}} \right] $

G(x; ω)的泰勒展开式可知, 当 xd维数据时, 拟合G(x; ω)需要(d+d2)个参数; 当 x 为含有百维的数据时, 参数空间较大。假定 x k各属性特征之间无交叉效应, 即在多项式核函数模型当中交叉系数为0, 则H(x k)为对角矩阵, 可表示为:

$ H\left( {{\mathit{\boldsymbol{x}}_k}} \right) = \left[ {\begin{array}{*{20}{c}} {{\lambda _1}}&{}&{}&{}\\ {}&{{\lambda _2}}&{}&{}\\ {}&{}& \ddots &{}\\ {}&{}&{}&{{\lambda _d}} \end{array}} \right] $

Let $b = \frac{1}{{2!}}\boldsymbol{x}_k^{\rm{T}}H\left({{\boldsymbol{x}_k}} \right){\boldsymbol{x}_k} - {\left[ {\nabla f\left({{\boldsymbol{x}_k}} \right)} \right]^{\rm{T}}}{\boldsymbol{x}_k}$, $\boldsymbol{\theta}_1^{\rm{T}} = {\left[ {\nabla f\left({{\boldsymbol{x}_k}} \right)} \right]^{\rm{T}}} - \boldsymbol{x}_k^{\rm{T}}H\left({{\boldsymbol{x}_k}} \right)$ and $\boldsymbol{\theta}_2^{\rm{T}} = \frac{1}{2}\left({{\lambda _1}, {\lambda _2}, \cdots, {\lambda _d}} \right)$, and define $\boldsymbol{x}^2 = (x_1^2, x_2^2, \cdots, x_d^2)$; then:

$ \begin{array}{l} f\left( \mathit{\boldsymbol{x}} \right) = \left[ {{{\left[ {\nabla f\left( {{\mathit{\boldsymbol{x}}_k}} \right)} \right]}^{\rm{T}}} - \mathit{\boldsymbol{x}}_k^{\rm{T}}H\left( {{\mathit{\boldsymbol{x}}_k}} \right)} \right]\mathit{\boldsymbol{x}} + \frac{1}{2}{\mathit{\boldsymbol{x}}^{\rm{T}}}H\left( {{\mathit{\boldsymbol{x}}_k}} \right)\mathit{\boldsymbol{x}} + \\ \;\;\;\;\;\;\;\;\;\;\;\frac{1}{{2!}}\mathit{\boldsymbol{x}}_k^{\rm{T}}H\left( {{\mathit{\boldsymbol{x}}_k}} \right){\mathit{\boldsymbol{x}}_k} - {\left[ {\nabla f\left( {{\mathit{\boldsymbol{x}}_k}} \right)} \right]^{\rm{T}}}{\mathit{\boldsymbol{x}}_k} + f\left( {{\mathit{\boldsymbol{x}}_k}} \right) = \\ \;\;\;\;\;\;\;\;\;\;\;\theta _1^{\rm{T}}\mathit{\boldsymbol{x}} + \mathit{\boldsymbol{\theta }}_2^{\rm{T}}{\mathit{\boldsymbol{x}}^2} + b \end{array} $ (17)

Since b is a constant, let $\boldsymbol{x} = (x_1, x_2, \cdots, x_d, 1) \in \mathbb{R}^{d+1}$, $\boldsymbol{x}^2 = (x_1^2, x_2^2, \cdots, x_d^2, 1)$, $\boldsymbol{w}_1^{\rm{T}} = (\boldsymbol{\theta}_1^{\rm{T}}, \theta_{d+1}) \in \mathbb{R}^{d+1}$ and $\boldsymbol{w}_2^{\rm{T}} = (\boldsymbol{\theta}_2^{\rm{T}}, \gamma_{d+1}) \in \mathbb{R}^{d+1}$. Then $f(\boldsymbol{x}) = \sum\limits_{d = 1}^2 \boldsymbol{w}_d^{\rm{T}}\boldsymbol{x}^d$ (where $\theta_{d+1} + \gamma_{d+1} = b$), i.e., G(x; ω) can be expressed as $\sum\limits_{d = 1}^2 \boldsymbol{w}_d^{\rm{T}}\boldsymbol{x}^d$, which is called the polynomial kernel; see Table 1.

Table 1 Kernel function expressions
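In implementation terms (a sketch under the notation above, not the authors' code), the polynomial kernel with zero cross terms amounts to a fixed feature expansion, after which the model is linear again:

```python
import numpy as np

def poly_features(X):
    """Expand each sample x to (x, 1, x^2, 1) so that
    G(x; w) = w1^T (x, 1) + w2^T (x^2, 1) is a single linear form."""
    ones = np.ones((X.shape[0], 1))
    return np.hstack([X, ones, X ** 2, ones])
```

Combined with the fit_logistic sketch of Section 1.2, fit_logistic(poly_features(X), y) would fit the nonlinear model of Eq. (15).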
2.2 Improved UDEED Algorithm

This paper replaces the standard Logistic function used by the UDEED base classifiers with a polynomial-kernel nonlinear Logistic function. The base classifier becomes:

$ {f_k}\left( \boldsymbol{x} \right) = 2 \cdot \frac{1}{1 + {\rm{e}}^{ - \sum\limits_{d = 1}^s \boldsymbol{w}_{k,d}^{\rm{T}}\boldsymbol{x}^d}} - 1\left( {1 \le k \le m} \right) $ (18)

The model objective is:

$ {\mathit{\boldsymbol{f}}^ * } = \mathop {\arg \min }\limits_f V\left( {\mathit{\boldsymbol{f}},L,D} \right) $ (19)

The nonlinear Logistic model is generally solved by approximate algorithms; we have:

$ P(Y = 1|\mathit{\boldsymbol{x}}) = \frac{1}{{1 + \exp \left( { - \sum\limits_{d = 1}^s {\mathit{\boldsymbol{\omega }}_d^{\rm{T}}} {\mathit{\boldsymbol{x}}^d}} \right)}} $ (20)
$ P(Y = 0|\boldsymbol{x}) = \frac{\exp \left( { - \sum\limits_{d = 1}^s {\boldsymbol{\omega }_d^{\rm{T}}} {\boldsymbol{x}^d}} \right)}{{1 + \exp \left( { - \sum\limits_{d = 1}^s {\boldsymbol{\omega }_d^{\rm{T}}} {\boldsymbol{x}^d}} \right)}} $ (21)

P(Y=1| x)=π(x), 则P(Y=0| x)=1-π(x), 其对数极大似然函数为:

$ \begin{array}{l} L\left( \boldsymbol{w} \right) = \sum\limits_{i = 1}^N \left[ y_i\ln \pi \left( \boldsymbol{x}_i \right) + \left( 1 - y_i \right)\ln \left( 1 - \pi \left( \boldsymbol{x}_i \right) \right) \right] = \\ \;\;\;\;\;\;\;\;\;\;\;\sum\limits_{i = 1}^N \left[ y_i\left( \sum\limits_{d = 1}^s \boldsymbol{\omega}_d^{\rm{T}}\boldsymbol{x}_i^d \right) - \ln \left( 1 + \exp \left( \sum\limits_{d = 1}^s \boldsymbol{\omega}_d^{\rm{T}}\boldsymbol{x}_i^d \right) \right) \right] \end{array} $ (22)

The gradient of this model is:

$ \frac{\partial L\left( \boldsymbol{w} \right)}{\partial \boldsymbol{w}_d} = \sum\limits_{i = 1}^N \left[ y_i \cdot \boldsymbol{x}_i^d - \frac{1}{1 + \exp \left( - \sum\limits_{d = 1}^s \boldsymbol{\omega}_d^{\rm{T}}\boldsymbol{x}_i^d \right)} \cdot \boldsymbol{x}_i^d \right] $ (23)

L(w)求极大值, 得到 w 的估计值。该模型的求解通常采用梯度下降法或者拟牛顿法。改进的UDEED算法产生基分类器 f =(f1, f2, …, fm)通过最小化式(23)的损失函数。

$ \begin{array}{l} V\left( \boldsymbol{f},L,D \right) = V_{\rm{emp}}\left( \boldsymbol{f},L \right) + r \cdot V_{\rm{div}}\left( \boldsymbol{f},D \right) = \\ \;\;\;\;\;\;\;\;\frac{1}{mL} \cdot \sum\limits_{k = 1}^m \sum\limits_{i = 1}^L \left[ \frac{1 + y_i}{2}\ln \left( 1 + {\rm{e}}^{ - \sum\limits_{d = 1}^s \boldsymbol{\omega}_{k,d}^{\rm{T}}\boldsymbol{x}_i^d} \right) + \frac{1 - y_i}{2}\ln \left( 1 + {\rm{e}}^{\sum\limits_{d = 1}^s \boldsymbol{\omega}_{k,d}^{\rm{T}}\boldsymbol{x}_i^d} \right) \right] + \\ \;\;\;\;\;\;\;\;\frac{2\gamma}{m\left( m - 1 \right)} \cdot \sum\limits_{p = 1}^{m - 1} \sum\limits_{q = p + 1}^m \frac{1}{\left| D \right|}\sum\limits_{\boldsymbol{x} \in D} \left( \frac{2}{1 + {\rm{e}}^{ - \sum\limits_{d = 1}^s \boldsymbol{\omega}_{p,d}^{\rm{T}}\boldsymbol{x}^d}} - 1 \right) \cdot \left( \frac{2}{1 + {\rm{e}}^{ - \sum\limits_{d = 1}^s \boldsymbol{\omega}_{q,d}^{\rm{T}}\boldsymbol{x}^d}} - 1 \right) \end{array} $ (24)

The gradient of V(f, L, D) is again taken with respect to the model parameters Θ = {w_k | 1 ≤ k ≤ m}:

$ \frac{{\partial V}}{{\partial \mathit{\Theta }}} = \left[ {\frac{{\partial V}}{{\partial {\mathit{\boldsymbol{w}}_1}}},\frac{{\partial V}}{{\partial {\mathit{\boldsymbol{w}}_2}}}, \cdots ,\frac{{\partial V}}{{\partial {\mathit{\boldsymbol{w}}_m}}}} \right] $ (25)
$ \begin{array}{l} \frac{{\partial V}}{{\partial {\mathit{\boldsymbol{w}}_k}}} = - \frac{1}{{mL}}\sum\limits_{i = 1}^L {\frac{{\partial BLH\left( {{f_k}\left( {{\mathit{\boldsymbol{x}}_i}} \right),{y_i}} \right)}}{{\partial {\mathit{\boldsymbol{w}}_k}}}} + \\ \;\;\;\;\;\;\;\;\;\frac{{2\gamma }}{{m\left( {m - 1} \right)}}\sum\limits_{k' = 1,k' \ne k}^m {\frac{{\partial d\left( {{f_k},{f_{k'}},D} \right)}}{{\partial {\mathit{\boldsymbol{w}}_k}}}} \end{array} $ (26)
$ \begin{array}{*{20}{c}} {\frac{{\partial BLH\left( {{f_k}\left( {{\mathit{\boldsymbol{x}}_i}} \right),{y_i}} \right)}}{{\partial {\mathit{\boldsymbol{w}}_k}}} = \left( {\frac{{\left( {1 + {y_i}} \right)\left( {1 - {f_k}\left( {{\mathit{\boldsymbol{x}}_i}} \right)} \right)}}{4} - } \right.}\\ {\left. {\frac{{\left( {1 - {y_i}} \right)\left( {1 + {f_k}\left( {{\mathit{\boldsymbol{x}}_i}} \right)} \right)}}{4}} \right) \cdot {\mathit{\boldsymbol{x}}_i}} \end{array} $ (27)
$ \frac{{\partial d\left( {{f_k},{f_{k'}},D} \right)}}{{\partial {\mathit{\boldsymbol{w}}_k}}} = \frac{1}{{2\left| D \right|}} \cdot \sum\limits_{x \in D} {{f_{k'}}\left( \mathit{\boldsymbol{x}} \right)} \cdot \left( {1 - {f_k}{{\left( \mathit{\boldsymbol{x}} \right)}^2}} \right) \cdot \mathit{\boldsymbol{x}} $ (28)

The improved UDEED algorithm is as follows:

Input: labeled training samples L = {(x_i, y_i) | 1 ≤ i ≤ L}, unlabeled training samples U = {x_i | L+1 ≤ i ≤ L+u}, gradient descent learning rate alpha = 0.25, number of iterations set to 30 in both stages

Output: $H(\boldsymbol{x}) = {\rm{sign}}\left({\frac{1}{m}\sum\limits_{k = 1}^m {{h_k}} (\boldsymbol{x})} \right)$

1. for k = 1, 2, …, m do f_k = f(L, L_bs)

// initialize f_k: L_bs denotes a bootstrap sample drawn from L, and f_k = f(L, L_bs) means the base classifier of Eq. (18) is fitted on L_bs by gradient descent, with Eq. (19) as the objective and Eq. (23) as the gradient

2. for all k: h_k = h(L, D, f_k)

// substitute the initialized f_k into V(f, L, D) and optimize the parameters further by gradient descent

3. end for
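Putting the pieces together, the following NumPy sketch is our reading of the pseudocode above; the function names, array shapes, and the degree-2 feature expansion are assumptions, not the authors' implementation:

```python
import numpy as np

def phi(X):
    """Polynomial-kernel feature map of Section 2.1: (x, 1, x^2, 1)."""
    ones = np.ones((X.shape[0], 1))
    return np.hstack([X, ones, X ** 2, ones])

def outputs(W, Z):
    """Eq. (18) outputs for all m base classifiers at once.

    W: (m, p) weights; Z: (n, p) expanded features; returns an (m, n) array.
    """
    return 2.0 / (1.0 + np.exp(-(W @ Z.T))) - 1.0

def improved_udeed(XL, yL, XU, m=40, gamma=1.0, alpha=0.25, n_iter=30, seed=0):
    """Step 1: initialize each f_k on a bootstrap sample of L.
    Step 2: jointly minimize V(f, L, D) of Eq. (24) by gradient descent
    using Eqs. (26)-(28). yL takes values in {-1, +1}."""
    rng = np.random.default_rng(seed)
    ZL, ZU = phi(XL), phi(XU)
    nL, p = ZL.shape
    W = np.zeros((m, p))
    for k in range(m):                             # step 1: initialization
        idx = rng.integers(0, nL, nL)              # bootstrap sample L_bs
        Zb, yb = ZL[idx], (yL[idx] + 1) / 2        # map labels to {0, 1}
        for _ in range(n_iter):
            pi = 1.0 / (1.0 + np.exp(-(Zb @ W[k])))
            W[k] += alpha * Zb.T @ (yb - pi) / nL  # gradient ascent, Eq. (23)
    for _ in range(n_iter):                        # step 2: joint refinement
        FL, FU = outputs(W, ZL), outputs(W, ZU)
        G = np.zeros_like(W)
        for k in range(m):
            # Eq. (27): gradient of the BLH terms on the labeled set
            c = ((1 + yL) * (1 - FL[k]) - (1 - yL) * (1 + FL[k])) / 4.0
            G[k] -= (c[:, None] * ZL).sum(axis=0) / (m * nL)
            # Eq. (28): diversity gradient on the unlabeled set
            others = FU.sum(axis=0) - FU[k]        # sum of f_k'(x) over k' != k
            dc = others * (1.0 - FU[k] ** 2) / (2.0 * ZU.shape[0])
            G[k] += 2.0 * gamma / (m * (m - 1)) * (dc[:, None] * ZU).sum(axis=0)
        W -= alpha * G                             # descend on V(f, L, D)
    return W

def predict(W, X):
    """Ensemble prediction H(x) = sign((1/m) * sum_k f_k(x))."""
    return np.sign(outputs(W, phi(X)).mean(axis=0))
```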

3 Experimental Results and Analysis

In a Python 3.3 environment, two commonly used UCI datasets are selected to compare the fitting performance of the improved UDEED algorithm against the original UDEED algorithm; the datasets are described in Table 2.

Table 2 Description of the UCI datasets

Each dataset is automatically split by cross-validation into a labeled set L, an unlabeled training set U, and a test set T, with T taking 50% of the data; $r = \frac{{|L|}}{{|L| + |U|}}$ denotes the proportion of labeled data in the training set. Here r is set to 0.25, meaning only a small amount of labeled data is available. Experimental results are averaged over 30 cross-validation runs to rule out randomness.
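A sketch of this split (our own helper; the paper gives no code, and the shuffling policy is an assumption):

```python
import numpy as np

def split_dataset(X, y, r=0.25, test_frac=0.5, seed=0):
    """Split a dataset into labeled L, unlabeled U, and test T as in Section 3:
    T takes test_frac of the samples, and a fraction r = |L| / (|L| + |U|)
    of the remaining training samples keeps its labels."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(y))
    n_test = int(test_frac * len(y))
    test, train = idx[:n_test], idx[n_test:]
    n_lab = int(r * len(train))
    lab, unlab = train[:n_lab], train[n_lab:]
    return (X[lab], y[lab]), X[unlab], (X[test], y[test])
```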

The base classifiers of the proposed algorithm use the polynomial-kernel nonlinear Logistic model, while those of the original UDEED algorithm use the linear Logistic model. Prediction performance on the test set varies with the ensemble size; Fig. 1 shows the effect of the number of base classifiers on prediction performance on the two datasets. With 40 base classifiers, Table 3 compares the accuracy of the two algorithms; the proposed algorithm achieves the better prediction performance.

Fig. 1 Prediction accuracy versus the number of base classifiers
Table 3 Comparison of the prediction accuracy of the two algorithms
4 Conclusion

This paper proposes an improved UDEED algorithm based on a nonlinear Logistic model: a polynomial nonlinear Logistic model is introduced to improve the base classifiers of the UDEED algorithm and thereby raise its recognition accuracy. Experimental results show that, compared with UDEED, the improved algorithm achieves higher accuracy. Future work will introduce the Adaboost algorithm to improve the adaptability of the nonlinear Logistic model.

References
[1] SCHAPIRE R E, SINGER Y. Improved boosting algorithms using confidence-rated predictions[C]//Proceedings of the 11th Annual Conference on Computational Learning Theory. New York, USA: ACM Press, 1998: 80-91.
[2] BREIMAN L. Bagging predictors[J]. Machine Learning, 1996, 24(2): 123-140.
[3] WU Xindong, KUMAR V, QUINLAN J R, et al. Top 10 algorithms in data mining[J]. Knowledge and Information Systems, 2007, 14(1): 1-37.
[4] SUN Bo, WANG Jiandong, CHEN Haiyan, et al. Diversity measures in ensemble learning[J]. Control and Decision, 2014(3): 385-395. (in Chinese)
[5] ZHOU Zhihua. When semi-supervised learning meets ensemble learning[C]//Proceedings of the 8th International Workshop on Multiple Classifier Systems. Berlin, Germany: Springer, 2009: 529-538.
[6] CHAPELLE O, SCHÖLKOPF B, ZIEN A. Semi-supervised learning[M]. Cambridge, USA: MIT Press, 2006: 217-235.
[7] BLUM A, MITCHELL T. Combining labeled and unlabeled data with co-training[C]//Proceedings of the 11th Annual Conference on Computational Learning Theory. New York, USA: ACM Press, 1998: 92-100.
[8] DALCHÉ-BUC F, GRANDVALET Y, AMBROISE C. Semi-supervised MarginBoost[EB/OL]. [2017-12-20]. http://papers.nips.cc/paper/2108-semi-supervised-marginboost.pdf.
[9] BENNETT K P, DEMIRIZ A, MACLIN R. Exploiting unlabeled data in ensemble methods[C]//Proceedings of the 8th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. New York, USA: ACM Press, 2002: 289-296.
[10] ZHOU Zhihua. Disagreement-based semi-supervised learning[J]. Acta Automatica Sinica, 2013, 39(11): 1871-1878. (in Chinese)
[11] LI Ming, ZHOU Zhihua. Improve computer-aided diagnosis with machine learning techniques using undiagnosed samples[J]. IEEE Transactions on Systems, Man, and Cybernetics, 2007, 37(6): 1088-1098. DOI: 10.1109/TSMCA.2007.904745.
[12] ZHANG Minling, ZHOU Zhihua. Exploiting unlabeled data to enhance ensemble diversity[C]//Proceedings of the IEEE International Conference on Data Mining. Washington D.C., USA: IEEE Computer Society, 2010: 619-628.
[13] LAMBERT D M, STOCK J R, ELLRAM L M. Fundamentals of logistics management[EB/OL]. [2017-12-20]. https://bbs.pinggu.org/a-1194068.html.
[14] MIAO Changqing. Some discussions on the nonlinear Logistic regression model[D]. Yangzhou: Yangzhou University, 2013. (in Chinese)
[15] XU Ying, PAN Youjun. A nonlinear Logistic model for predicting water cut[J]. Reservoir Evaluation and Development, 2015(5): 22-25. DOI: 10.3969/j.issn.2095-1426.2015.05.005. (in Chinese)