[1]STURM B L.The state of the art ten years after a state of the art:future research in music information retrieval[J].Journal of New Music Research,2014,43(2):147-172.
[2]BHALKE D G,RAO C B R,BORMANE D S.Automatic musical instrument classification using fractional Fourier transform-based MFCC features and counter propagation neural network[J].Journal of Intelligent Information Systems,2016,46(3):1-22.
[3]YU L F,SU L,YANG Y H.Sparse cepstral codes and power scale for instrument identification[C]//Proceedings of IEEE International Conference on Acoustics,Speech and Signal Processing.Washington D.C.,USA:IEEE Press,2014:7460-7464.
[4]ISNARD V,SUIED C,LEMAITRE G.Auditory bubbles reveal sparse time-frequency cues subserving identification of musical voices and instruments[J].Journal of the Acoustical Society of America,2017,140(4):3267.
[5]BURRED J J,ROBEL A,SIKORA T.Dynamic spectral envelope modeling for timbre analysis of musical instrument sounds[J].IEEE Transactions on Audio,Speech and Language Processing,2010,18(3):663-674.
[6]HU Y,LIU G.Instrument identification and pitch estimation in multi-timbre polyphonic musical signals based on probabilistic mixture model decomposition[J].Journal of Intelligent Information Systems,2013,40(1):141-158.
[7]ARORA V,BEHERA L.Instrument identification using PLCA over stretched manifolds[C]//Proceedings of the 12th National Conference on Communications.Washington D.C.,USA:IEEE Press,2014:1-5.
[8]BENGIO Y,COURVILLE A,VINCENT P.Representation learning:a review and new perspectives[J].IEEE Transactions on Pattern Analysis and Machine Intelligence,2013,35(8):1798-1828.
[9]LI Y S,WANG F,ZHU Y Q,et al.Classification method for traditional Chinese musical instruments based on deep belief networks:CN106328121A[P].[2017-01-11].
[10]HAN Y,KIM J,LEE K,et al.Deep convolutional neural networks for predominant instrument recognition in polyphonic music[J].IEEE/ACM Transactions on Audio,Speech and Language Processing,2017,25(1):208-221.
[11]PEETERS G,GIORDANO B,SUSINI P,et al.The timbre toolbox:audio descriptors of musical signals[J].Journal of the Acoustical Society of America,2011,130(5):2902-2916.
[12]PONS J,SERRA X.Designing efficient architectures for modeling temporal features with convolutional neural networks[C]//Proceedings of IEEE International Conference on Acoustics,Speech and Signal Processing.Washington D.C.,USA:IEEE Press,2017:2472-2476.
[13]MEDDIS R,LOPEZ-POVEDA E A,FAY R R,et al.Computational models of the auditory system[M].Berlin,Germany:Springer,2010:135-149.
[14]KALCHBRENNER N,GREFENSTETTE E,BLUNSOM P.A convolutional neural network for modelling sentences[EB/OL].[2017-11-11].https://arxiv.org/pdf/1404.2188.pdf.
[15]LI L,WANG Y Y,LI X X.An improved wavelet energy entropy algorithm for speech endpoint detection[J].Computer Engineering,2017,43(5):268-274.
[16]University of Iowa Electronic Music Studios:a musical instrument database[EB/OL].[2017-11-09].http://theremin.music.uiowa.edu/MISflute.html.
[17]Google Inc.TensorFlow for deep learning[EB/OL].[2017-11-09].https://www.tensorflow.org.
[18]HE K,ZHANG X,REN S,et al.Delving deep into rectifiers:surpassing human-level performance on ImageNet classification[C]//Proceedings of 2015 IEEE International Conference on Computer Vision.Washington D.C.,USA:IEEE Press,2015:1026-1034.
[19]GLOROT X,BENGIO Y.Understanding the difficulty of training deep feedforward neural networks[J].Journal of Machine Learning Research,2010,9:249-256.
[20]KINGMA D P,BA J.Adam:a method for stochastic optimization[EB/OL].[2017-11-11].http://cn.arxiv.org/pdf/1412.6980v9.
[21]HINTON G E,SRIVASTAVA N,KRIZHEVSKY A,et al.Improving neural networks by preventing co-adaptation of feature detectors[EB/OL].[2017-11-11].https://arxiv.org/pdf/1207.0580.pdf.
[22]MAATEN L V D,HINTON G.Visualizing data using t-SNE[J].Journal of Machine Learning Research,2008,9:2579-2605.