[1] HINTON G, DENG L, YU D, et al. Deep neural networks for acoustic modeling in speech recognition: the shared views of four research groups[J]. IEEE Signal Processing Magazine, 2012, 29(6): 82-97.
[2] DAHL G E, YU D, DENG L, et al. Context-dependent pre-trained deep neural networks for large-vocabulary speech recognition[J]. IEEE Transactions on Audio, Speech, and Language Processing, 2012, 20(1): 30-42.
[3] QIAN Y, LIU J. Cross-lingual and ensemble MLPs strategies for low-resource speech recognition[C]//Proceedings of INTERSPEECH 2012. Portland, USA: [s.n.], 2012: 2582-2585.
[4] SCHULTZ T, WAIBEL A. Multilingual and crosslingual speech recognition[C]//Proceedings of DARPA Workshop on Broadcast News Transcription and Understanding. [S.l.]: DARPA, 1998: 259-262.
[5] HUANG J T, LI J, YU D, et al. Cross-language knowledge transfer using multilingual deep neural network with shared hidden layers[C]//Proceedings of 2013 IEEE International Conference on Acoustics, Speech and Signal Processing. Washington D.C., USA: IEEE Press, 2013: 7304-7308.
[6] PLAHL C, SCHLÜTER R, NEY H. Cross-lingual portability of Chinese and English neural network features for French and German LVCSR[C]//Proceedings of 2011 IEEE Workshop on Automatic Speech Recognition and Understanding. Washington D.C., USA: IEEE Press, 2011: 371-376.
[7] GHOSHAL A, SWIETOJANSKI P, RENALS S. Multilingual training of deep neural networks[C]//Proceedings of 2013 IEEE International Conference on Acoustics, Speech and Signal Processing. Washington D.C., USA: IEEE Press, 2013: 7319-7323.
[8] XU H, SU H, NI C, et al. Semi-supervised and cross-lingual knowledge transfer learnings for DNN hybrid acoustic models under low-resource conditions[C]//Proceedings of INTERSPEECH 2016. San Francisco, USA: [s.n.], 2016: 1315-1319.
[9] PAN S J, YANG Q. A survey on transfer learning[J]. IEEE Transactions on Knowledge and Data Engineering, 2010, 22(10): 1345-1359.
[10] RAINA R, BATTLE A, LEE H, et al. Self-taught learning: transfer learning from unlabeled data[C]//Proceedings of the 24th International Conference on Machine Learning. New York, USA: ACM Press, 2007: 759-766.
[11] DAI W, YANG Q, XUE G R, et al. Boosting for transfer learning[C]//Proceedings of the 24th International Conference on Machine Learning. New York, USA: ACM Press, 2007: 193-200.
[12] HEIGOLD G, VANHOUCKE V, SENIOR A, et al. Multilingual acoustic models using distributed deep neural networks[C]//Proceedings of 2013 IEEE International Conference on Acoustics, Speech and Signal Processing. Washington D.C., USA: IEEE Press, 2013: 8619-8623.
[13] AISIKAER Rouzi, YIN Shi, ZHANG Zhiyong, et al. THUYG-20: a free Uyghur speech database[J]. Journal of Tsinghua University (Science and Technology), 2017, 57(2): 182-187.
[14] WANG D, ZHANG X. THCHS-30: a free Chinese speech corpus[EB/OL]. [2017-06-01]. https://arxiv.org/pdf/1512.01882.pdf.
[15] PANAYOTOV V, CHEN G, POVEY D, et al. Librispeech: an ASR corpus based on public domain audio books[C]//Proceedings of 2015 IEEE International Conference on Acoustics, Speech and Signal Processing. Washington D.C., USA: IEEE Press, 2015: 5206-5210.
[16] POVEY D, GHOSHAL A, BOULIANNE G, et al. The Kaldi speech recognition toolkit[EB/OL]. [2017-06-01]. http://publications.idiap.ch/downloads/papers/2012/Povey_ASRU2011_2011.pdf.