[1] VAPNIK V N. The nature of statistical learning theory[M]. Berlin, Germany: Springer, 1995.
[2] GAO Wei, ZHOU Zhihua. On the doubt about margin explanation of boosting[J]. Artificial Intelligence, 2013, 203: 1-18.
[3] ZHANG Teng, ZHOU Zhihua. Large margin distribution machine[C]//Proceedings of the 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. New York, USA: ACM Press, 2014: 313-322.
[4] WU Yichao, LIU Yufeng. Robust truncated hinge loss support vector machines[J]. Journal of the American Statistical Association, 2007, 102(479): 974-983.
[5] HUANG Xiaolin, SHI Lei. Support vector machine classifier with pinball loss[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2014, 36(5): 984-997.
[6] YANG Liming, DONG Hongwei. Support vector machine with truncated pinball loss and its application in pattern recognition[J]. Chemometrics and Intelligent Laboratory Systems, 2018, 177: 89-99.
[7] SHALEV-SHWARTZ S, SINGER Y. Pegasos: primal estimated sub-gradient solver for SVM[J]. Mathematical Programming, 2011, 127(1): 3-30.
[8] JOHNSON R, ZHANG T. Accelerating stochastic gradient descent using predictive variance reduction[C]//Proceedings of the 26th International Conference on Neural Information Processing Systems. Red Hook, USA: Curran Associates, 2013: 315-323.
[9] NESTEROV Y. Introductory lectures on convex optimization: a basic course[M]. Boston, USA: Kluwer Academic Publishers, 2004.
[10] SHANG Fanhua, LIU Yuanyuan. Fast stochastic variance reduced gradient method with momentum acceleration for machine learning[EB/OL]. [2019-09-10]. https://www.researchgate.net/publication.
[11] TSENG P. Approximation accuracy, gradient methods, and error bound for structured convex optimization[J]. Mathematical Programming, 2010, 125(2): 263-295.
[12] ALLEN-ZHU Zeyuan. Katyusha: the first direct acceleration of stochastic gradient methods[C]//Proceedings of the 49th Annual ACM SIGACT Symposium on Theory of Computing. New York, USA: ACM Press, 2017: 1200-1205.
[13] NGUYEN V C, XU Huan. Accelerated stochastic mirror descent algorithms for composite non-strongly convex optimization[J]. Journal of Optimization Theory and Applications, 2019, 18(2): 541-566.
[14] LIU Sijia, KAILKHURA B. Zeroth-order stochastic variance reduction for nonconvex optimization[C]//Proceedings of the 32nd Conference on Neural Information Processing Systems. Red Hook, USA: Curran Associates, 2018: 1-26.
[15] LIU Sijia, CHEN Jie. Zeroth-order online alternating direction method of multipliers: convergence analysis and applications[C]//Proceedings of the 21st International Conference on Artificial Intelligence and Statistics. PMLR, 2018: 288-297.
[16] ALLEN-ZHU Zeyuan. Katyusha: accelerated variance reduction for faster SGD[EB/OL]. [2019-09-10]. https://arxiv.org/abs/1603.05953v1.
[17] SHANG Fanhua, ZHOU Kaiwen. VR-SGD: a simple stochastic variance reduction method for machine learning[J]. IEEE Transactions on Knowledge and Data Engineering, 2018, 32(1): 188-202.
[18] CHEN Fanyong, ZHAN Jing. Large cost-sensitive margin distribution machine for imbalanced data classification[J]. Neurocomputing, 2017, 224: 45-57.
[19] ZHOU Yuhang, ZHOU Zhihua. Cost-sensitive large margin distribution machine[J]. Journal of Computer Research and Development, 2016, 53(9): 1964-1970. (in Chinese)
[20] TAO Wei, PAN Zhisong. The individual convergence of projected subgradient methods using Nesterov's step-size strategy[J]. Chinese Journal of Computers, 2018, 41(1): 164-176. (in Chinese)
[21] CHENG Yujia, TAO Wei. Optimal individual convergence rate of the Heavy-Ball-based momentum methods[J]. Journal of Computer Research and Development, 2019, 56(8): 1686-1694. (in Chinese)