[1] HAN Songqiao,HAO Xiaoling,HUANG Hailiang.An event-extraction approach for business analysis from online Chinese news[J].Electronic Commerce Research and Applications,2018,28(3):244-260.
[2] ZHANG Yijie,LI Peifeng,ZHU Qiaoming.Joint learning for identifying temporal and causal relations between events[J].Computer Engineering,2020,46(7):65-71.(in Chinese)张义杰,李培峰,朱巧明.面向事件时序和因果关系识别的联合学习方法[J].计算机工程,2020,46(7):65-71.
[3] XIANG Wei,WANG Bang.Survey of Chinese event extraction research[J].Computer Technology and Development,2020,30(2):1-6.(in Chinese)项威,王邦.中文事件抽取研究综述[J].计算机技术与发展,2020,30(2):1-6.
[4] ZHOU Xingfa.Causal relation extraction of Uyghur events[D].Xinjiang:Xinjiang University,2018.(in Chinese)周兴发.维吾尔语事件间因果关系抽取[D].新疆:新疆大学,2018.
[5] KIM H B,JOUNG J,KIM K.Semi-automatic extraction of technological causality from patents[J].Computers and Industrial Engineering,2018,115(1):532-542.
[6] ZHAO Sendong,LIU Ting,ZHAO Sicheng,et al.Event causality extraction based on connectives analysis[J].Neurocomputing,2016,173(1):1943-1950.
[7] ZHONG Jun,YU Long,TIAN Shengwei,et al.Causal relation extraction of Uyghur emergency events based on cascaded model[J].Acta Automatica Sinica,2014,40(4):771-779.(in Chinese)钟军,禹龙,田生伟,等.基于双层模型的维吾尔语突发事件因果关系抽取[J].自动化学报,2014,40(4):771-779.
[8] LI Peifeng,HUANG Yilong,ZHU Qiaoming.Global optimization to recognize causal relations between events[J].Journal of Tsinghua University(Science and Technology),2017,57(10):1042-1047.(in Chinese)李培峰,黄一龙,朱巧明.使用全局优化方法识别中文事件因果关系[J].清华大学学报(自然科学版),2017,57(10):1042-1047.
[9] TIAN Shengwei,ZHOU Xingfa,YU Long,et al.Causal relation extraction of Uyghur events based on bidirectional long short-term memory model[J].Journal of Electronics and Information Technology,2018,40(1):200-208.(in Chinese)田生伟,周兴发,禹龙,等.基于双向LSTM的维吾尔语事件因果关系抽取[J].电子与信息学报,2018,40(1):200-208.
[10] PENG Yuqing,SONG Chubai,YAN Qian,et al.Research on Chinese text classification based on hybrid model of VDCNN and LSTM[J].Computer Engineering,2018,44(11):190-196.(in Chinese)彭玉青,宋初柏,闫倩,等.基于VDCNN与LSTM混合模型的中文文本分类研究[J].计算机工程,2018,44(11):190-196.
[11] HE Kaiming,ZHANG Xiangyu,REN Shaoqing,et al.Deep residual learning for image recognition[C]//Proceedings of 2016 IEEE Conference on Computer Vision and Pattern Recognition.Washington D.C.,USA:IEEE Press,2016:25-32.
[12] DEVLIN J,CHANG M W,LEE K,et al.BERT:pre-training of deep bidirectional transformers for language understanding[EB/OL].[2019-12-25].https://arxiv.org/abs/1810.04805.
[13] VASWANI A,SHAZEER N,PARMAR N,et al.Attention is all you need[EB/OL].[2019-12-25].https://www.researchgate.net/publication/317558625_Attention_Is_All_You_Need.
[14] BA J L,KIROS J R,HINTON G E.Layer normalization[EB/OL].[2019-12-25].https://arxiv.org/abs/1607.06450.
[15] CHO K,VAN MERRIENBOER B,GULCEHRE C,et al.Learning phrase representations using RNN encoder-decoder for statistical machine translation[EB/OL].[2019-12-25].https://www.oalib.com/paper/4082023#.YD8uWPmuVB4.
[16] YANG Piao,DONG Wenyong.Chinese named entity recognition method based on BERT embedding[J].Computer Engineering,2020,46(4):40-45,52.(in Chinese)杨飘,董文永.基于BERT嵌入的中文命名实体识别方法[J].计算机工程,2020,46(4):40-45,52.
[17] RANA R.Gated recurrent unit for emotion classification from noisy speech[EB/OL].[2019-12-25].https://www.researchgate.net/publication/311842569_Gated_Recurrent_Unit_GRU_for_Emotion_Classification_from_Noisy_Speech.
[18] LI Lishuang,WAN Jia,ZHENG Jieqiong,et al.Biomedical event extraction based on GRU integrating attention mechanism[J].BMC Bioinformatics,2018,19(9):177-184.
[19] RUSSAKOVSKY O,DENG J,SU H,et al.ImageNet large scale visual recognition challenge[J].International Journal of Computer Vision,2015,115(3):211-252.
[20] LAMPLE G,BALLESTEROS M,SUBRAMANIAN S,et al.Neural architectures for named entity recognition[EB/OL].[2019-12-25].https://www.researchgate.net/publication/305334469_Neural_Architectures_for_Named_Entity_Recognition.
[21] FENG Chong,KANG Liqi,SHI Ge,et al.Causality extraction with GAN[J].Acta Automatica Sinica,2018,44(5):811-818.(in Chinese)冯冲,康丽琪,石戈,等.融合对抗学习的因果关系抽取[J].自动化学报,2018,44(5):811-818.