Editor: 顾逸斐