[1] SAEDI A, FATEMI A, ALI NEMATBAKHSH M. Representation-centric approach for classification of consumer health questions[J]. Expert Systems with Applications, 2023, 229: 120436.
[2] YILMAZ S, TOKLU S. A deep learning analysis on question classification task using Word2Vec representations[J]. Neural Computing and Applications, 2020, 32(7): 2909-2928.
[3] LIU J, YANG Y H, LV S Q, et al. Attention-based BiGRU-CNN for Chinese question classification[J]. Journal of Ambient Intelligence and Humanized Computing, 2019, 10(7): 2675-2683.
[4] ZHANG D, LEE W S. Question classification using support vector machines[C]//Proceedings of the 26th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval. New York, USA: ACM, 2003: 26-32.
[5] LIU L, YU Z T, GUO J Y, et al. Chinese question classification based on question property kernel[J]. International Journal of Machine Learning and Cybernetics, 2014, 5(5): 713-720.
[6] MISHRA M, MISHRA V K, SHARMA H R. Question classification using semantic, syntactic and lexical features[J]. International Journal of Web & Semantic Technology, 2013, 4(3): 39-47.
[7] YAN W, HUYAN W, BENGONG Y. Chinese text classification with feature fusion[J]. Data Analysis and Knowledge Discovery, 2021, 5(10): 1-14.
[8] 郑承宇, 王新, 王婷, 等. 基于迁移学习和集成学习的医疗文本分类[J]. 计算机技术与发展, 2022, 32(4): 28-33.
ZHENG C Y, WANG X, WANG T, et al. Medical text classification based on transfer learning and ensemble learning[J]. Computer Technology and Development, 2022, 32(4): 28-33. (in Chinese)
[9] KIM Y. Convolutional neural networks for sentence classification[EB/OL]. [2024-04-06]. https://arxiv.org/abs/1408.5882.
[10] 王海涛, 宋文, 王辉. 一种基于LSTM和CNN混合模型的文本分类方法[J]. 小型微型计算机系统, 2020, 41(6): 1163-1168.
WANG H T, SONG W, WANG H. A text classification method based on a hybrid LSTM and CNN model[J]. Journal of Chinese Computer Systems, 2020, 41(6): 1163-1168. (in Chinese)
[11] ZHANG L, WU Y, CHU Q, et al.
SA-model: multi-feature fusion poetic sentiment analysis based on a hybrid word vector model[J]. CMES-Computer Modeling in Engineering & Sciences, 2023, 137(1): 631-645.
[12] DEVLIN J, CHANG M W, LEE K, et al. BERT: pre-training of deep bidirectional transformers for language understanding[EB/OL]. [2024-04-06]. https://arxiv.org/abs/1810.04805.
[13] LIU Y, OTT M, GOYAL N, et al. RoBERTa: a robustly optimized BERT pretraining approach[EB/OL]. [2024-04-06]. https://arxiv.org/abs/1907.11692.
[14] LAN Z, CHEN M, GOODMAN S, et al. ALBERT: a lite BERT for self-supervised learning of language representations[EB/OL]. [2024-04-06]. https://arxiv.org/abs/1909.11942.
[15] SANH V, DEBUT L, CHAUMOND J, et al. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter[EB/OL]. [2024-04-06]. https://arxiv.org/abs/1910.01108.
[16] JOSHI M, CHEN D Q, LIU Y H, et al. SpanBERT: improving pre-training by representing and predicting spans[J]. Transactions of the Association for Computational Linguistics, 2020, 8: 64-77.
[17] QASIM R, BANGYAL W H, ALQARNI M A, et al. A fine-tuned BERT-based transfer learning approach for text classification[J]. Journal of Healthcare Engineering, 2022, 2022: 3498123.
[18] YANG Z C, YANG D Y, DYER C, et al. Hierarchical attention networks for document classification[C]//Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Stroudsburg, USA: ACL, 2016: 1480-1489.
[19] 贾旭东, 王莉. 基于多头注意力胶囊网络的文本分类模型[J]. 清华大学学报(自然科学版), 2020, 60(5): 415-421.
JIA X D, WANG L. Text classification model based on multi-head attention capsule network[J]. Journal of Tsinghua University (Science and Technology), 2020, 60(5): 415-421. (in Chinese)
[20] 李启行, 廖薇, 孟静雯. 基于注意力机制的双通道DAC-RNN文本分类模型[J]. 计算机工程与应用, 2022, 58(16): 157-163.
LI Q X, LIAO W, MENG J W. Dual-channel DAC-RNN text classification model based on attention mechanism[J].
Computer Engineering and Applications, 2022, 58(16): 157-163. (in Chinese)
[21] HONG S, KIM J, WOO H G, et al. Screening ideas in the early stages of technology development: a Word2Vec and convolutional neural network approach[J]. Technovation, 2022, 112: 102407.
[22] LI X Y, RAGA R C, SHI X M. GloVe-CNN-BiLSTM model for sentiment analysis on text reviews[J]. Journal of Sensors, 2022, 2022: 7212366.
[23] JATNIKA D, BIJAKSANA M A, SURYANI A A. Word2Vec model analysis for semantic similarities in English words[J]. Procedia Computer Science, 2019, 157: 160-167.
[24] TAN K L, LEE C P, LIM K M. RoBERTa-GRU: a hybrid deep learning model for enhanced sentiment analysis[J]. Applied Sciences, 2023, 13(6): 3915.
[25] LIU X L, ZHAO W, MA H Q. Research on domain-specific knowledge graph based on the RoBERTa-wwm-ext pretraining model[J]. Computational Intelligence and Neuroscience, 2022, 2022: 8656013.
[26] JIANG X C, SONG C, XU Y C, et al. Research on sentiment classification for netizens based on the BERT-BiLSTM-TextCNN model[J]. PeerJ Computer Science, 2022, 8: e1005.
[27] ALJOHANI N R, FAYOUMI A, HASSAN S U. A novel focal-loss and class-weight-aware convolutional neural network for the classification of in-text citations[J]. Journal of Information Science, 2023, 49(1): 79-92.
[28] 李建东, 傅佳, 李佳琦. 融合双向注意力和对比增强机制的多标签文本分类[J]. 计算机工程与应用, 2024, 60(16): 105-115.
LI J D, FU J, LI J Q. Multi-label text classification combining bidirectional attention and contrast enhancement mechanism[J]. Computer Engineering and Applications, 2024, 60(16): 105-115. (in Chinese)
[29] 徐逸舟, 林晓, 陆黎明. 基于分层式CNN的长文本情感分类模型[J]. 计算机工程与设计, 2022, 43(4): 1121-1126.
XU Y Z, LIN X, LU L M. Long text sentiment classification model based on hierarchical CNN[J]. Computer Engineering and Design, 2022, 43(4): 1121-1126. (in Chinese)
[30] LI X L, ZHANG Y Y, JIN J Y, et al. A model of integrating convolution and BiGRU dual-channel mechanism for Chinese medical text classifications[J]. PLoS One, 2023, 18(3): e0282824.
[31] CHEN N, SU X D, LIU T Y, et al. A benchmark dataset and case study for Chinese medical question intent classification[J]. BMC Medical Informatics and Decision Making, 2020, 20(Suppl 3): 125.
[32] ZHANG M J, PANG J C, CAI J H, et al. DPCNN-based models for text classification[C]//Proceedings of the 2023 IEEE 10th International Conference on Cyber Security and Cloud Computing (CSCloud)/IEEE 9th International Conference on Edge Computing and Scalable Cloud (EdgeCom). Washington D.C., USA: IEEE Press, 2023: 363-368.
[33] 刘心惠, 陈文实, 周爱, 等. 基于联合模型的多标签文本分类研究[J]. 计算机工程与应用, 2020, 56(14): 111-117.
LIU X H, CHEN W S, ZHOU A, et al. Research on multi-label text classification based on joint model[J]. Computer Engineering and Applications, 2020, 56(14): 111-117. (in Chinese)
[34] 刘勇, 杜建强, 罗计根, 等. 基于语义筛选的ALBERT-TextCNN中医文本多标签分类研究[J]. 现代信息科技, 2023, 7(19): 123-128.
LIU Y, DU J Q, LUO J G, et al. Research on multi-label classification of TCM texts with ALBERT-TextCNN based on semantic screening[J]. Modern Information Technology, 2023, 7(19): 123-128. (in Chinese)
[35] RASMY L, XIANG Y, XIE Z Q, et al. Med-BERT: pretrained contextualized embeddings on large-scale structured electronic health records for disease prediction[J]. NPJ Digital Medicine, 2021, 4: 86.
[36] JIANG T, WANG D Q, SUN L L, et al. LightXML: transformer with dynamic negative sampling for high-performance extreme multi-label text classification[C]//Proceedings of the AAAI Conference on Artificial Intelligence. Palo Alto, USA: AAAI Press, 2021: 7987-7994.
[37] 马雨萌, 黄金霞, 王昉, 等. 融合BERT与多尺度CNN的科技政策内容多标签分类研究[J]. 情报杂志, 2022, 41(11): 157-163.
MA Y M, HUANG J X, WANG F, et al. Research on the multi-label classification of science and technology policy content integrating BERT and multi-scale CNN[J]. Journal of Intelligence, 2022, 41(11): 157-163. (in Chinese)