[1] CUI X Y, CHEN Z, YIN F L. Speech enhancement based on simple recurrent unit network[J]. Applied Acoustics, 2020, 157: 107019.
[2] WEISS M, ASCHKENASY E, PARSONS T W. Study and development of the INTEL technique for improving speech intelligibility[EB/OL]. [2021-09-05]. https://www.semanticscholar.org/paper/Study-and-Development-of-the-INTEL-Technique-for-Weiss-Aschkenasy/0ab966a0d8be76591cbd44009a32f7ceb3d3f7ff.
[3] KAMATH S, LOIZOU P. A multi-band spectral subtraction method for enhancing speech corrupted by colored noise[C]//Proceedings of IEEE International Conference on Acoustics, Speech, and Signal Processing. Washington D.C., USA: IEEE Press, 2002: 4160-4164.
[4] MCAULAY R, MALPASS M. Speech enhancement using a soft-decision noise suppression filter[J]. IEEE Transactions on Acoustics, Speech, and Signal Processing, 1980, 28(2): 137-145.
[5] EPHRAIM Y, MALAH D. Speech enhancement using a minimum mean-square error log-spectral amplitude estimator[J]. IEEE Transactions on Acoustics, Speech, and Signal Processing, 1985, 33(2): 443-445.
[6] LIM J, OPPENHEIM A. All-pole modeling of degraded speech[J]. IEEE Transactions on Acoustics, Speech, and Signal Processing, 1978, 26(3): 197-210.
[7] LIM J S, OPPENHEIM A V. Enhancement and bandwidth compression of noisy speech[J]. Proceedings of the IEEE, 1979, 67(12): 1586-1604.
[8] HU Y, LOIZOU P C. Incorporating a psychoacoustical model in frequency domain speech enhancement[J]. IEEE Signal Processing Letters, 2004, 11(2): 270-273.
[9] JENSEN J, HEUSDENS R. Improved subspace-based single-channel speech enhancement using generalized super-Gaussian priors[J]. IEEE Transactions on Audio, Speech, and Language Processing, 2007, 15(3): 862-872.
[10] YANG C H, WANG J C, WANG J F, et al. Design and implementation of subspace-based speech enhancement under in-car noisy environments[J]. IEEE Transactions on Vehicular Technology, 2008, 57(3): 1466-1479.
[11] XU Y, DU J, HUANG Z, et al. Multi-objective learning and mask-based post-processing for deep neural network based speech enhancement[EB/OL]. [2021-09-05]. https://arxiv.org/pdf/1703.07172.pdf.
[12] LÜ S B, HU Y X, ZHANG S M, et al. DCCRN+: channel-wise subband DCCRN with SNR estimation for speech enhancement[EB/OL]. [2021-09-05]. https://arxiv.org/pdf/2106.08672v1.pdf.
[13] YUAN W H. Incorporating group update for speech enhancement based on convolutional gated recurrent network[J]. Speech Communication, 2021, 132: 32-39.
[14] ZHOU L M, GAO Y Y, WANG Z L, et al. Complex spectral mapping with attention based convolution recurrent neural network for speech enhancement[EB/OL]. [2021-09-05]. https://arxiv.org/abs/2104.05267.
[15] XU X M, HAO J J. AMFFCN: attentional multi-layer feature fusion convolution network for audio-visual speech enhancement[EB/OL]. [2021-09-05]. https://arxiv.org/abs/2101.06268.
[16] CUI X Y, CHEN Z, YIN F L. Multi-objective based multi-channel speech enhancement with BiLSTM network[J]. Applied Acoustics, 2021, 177: 107927.
[17] ZHANG Q, WANG D, ZHAO R, et al. Sensing to hear: speech enhancement for mobile devices using acoustic signals[J]. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, 2021, 5(3): 137.
[18] SUN K, ZHANG X Y. UltraSE: single-channel speech enhancement using ultrasound[C]//Proceedings of the 27th Annual International Conference on Mobile Computing and Networking. New York, USA: ACM Press, 2021: 160-173.
[19] YU G C, WANG Y T, WANG H, et al. A two-stage complex network using cycle-consistent generative adversarial networks for speech enhancement[J]. Speech Communication, 2021, 134: 42-54.
[20] 袁文浩, 梁春燕, 夏斌. 基于深度神经网络的因果形式语音增强模型[J]. 计算机工程, 2019, 45(8): 255-259.
YUAN W H, LIANG C Y, XIA B. Causal speech enhancement model based on deep neural network[J]. Computer Engineering, 2019, 45(8): 255-259. (in Chinese)
[21] LEI T, ZHANG Y. Training RNNs as fast as CNNs[EB/OL]. [2021-09-05]. https://arxiv.org/pdf/1709.02755v1.pdf.
[22] 袁文浩, 孙文珠, 夏斌, 等. 利用深度卷积神经网络提高未知噪声下的语音增强性能[J]. 自动化学报, 2018, 44(4): 751-759.
YUAN W H, SUN W Z, XIA B, et al. Improving speech enhancement in unseen noise using deep convolutional neural network[J]. Acta Automatica Sinica, 2018, 44(4): 751-759. (in Chinese)
[23] KOUNDINYA S, KARMAKAR A. Online speech enhancement by retraining of LSTM using SURE loss and policy iteration[J]. Neural Processing Letters, 2021, 53(5): 3237-3251.
[24] DAUPHIN Y N, FAN A, AULI M, et al. Language modeling with gated convolutional networks[EB/OL]. [2021-09-05]. https://arxiv.org/pdf/1612.08083.pdf.
[25] GAROFOLO J S, LAMEL L F, FISHER W M, et al. TIMIT acoustic-phonetic continuous speech corpus[EB/OL]. [2021-09-05]. https://catalog.ldc.upenn.edu/LDC93S1.
[26] HU G. 100 nonspeech environmental sounds[EB/OL]. [2021-09-05]. http://web.cse.ohio-state.edu/pnl/corpus/HuNonspeech/HuCorpus.html.
[27] VARGA A, STEENEKEN H J M. Assessment for automatic speech recognition: II. NOISEX-92: a database and an experiment to study the effect of additive noise on speech recognition systems[J]. Speech Communication, 1993, 12(3): 247-251.