[1] 刘小蝶, 朱筠, 晋耀红. 中文专利中有标记并列结构的自动识别研究[J]. 计算机工程, 2018, 44(6):162-168, 175. LIU X D, ZHU Y, JIN Y H. Research on automatic identification of marked parallel structures in Chinese patent[J]. Computer Engineering, 2018, 44(6):162-168, 175.(in Chinese) [2] SOUSA M J, JAMIL G, WALTER C E, et al. Big data analytics on patents for innovation public policies[J]. Expert Systems, 2023, 40(1):e12673. [3] 罗艺雄, 吕学强, 游新冬. 融合多特征的专利功效短语识别[J]. 中文信息学报, 2022, 36(12):139-148. LUO Y X, LÜ X Q, YOU X D. Patent efficacy phrase recognition based on multiple features[J]. Journal of Chinese Information Processing, 2022, 36(12):139-148.(in Chinese) [4] 殷亚珏, 高晓雅, 王晶晶, 等. 基于多视角注意力机制的专利匹配方法[J]. 中文信息学报, 2022, 36(7):106-113. YIN Y J, GAO X Y, WANG J J, et al. Patent matching with multi-view attentive network[J]. Journal of Chinese Information Processing, 2022, 36(7):106-113.(in Chinese) [5] KIM J, YOON J, PARK E, et al. Patent document clustering with deep embeddings[J]. Scientometrics, 2020, 123(2):563-577. [6] MASE H, MATSUBAYASHI T, OGAWA Y, et al. Proposal of two-stage patent retrieval method considering the claim structure[J]. ACM Transactions on Asian Language Information Processing, 2005, 4(2):190-206. [7] DEVLIN J, CHANG M W, LEE K, et al. BERT:pre-training of deep bidirectional transformers for language understanding[C]//Proceedings of Conference of the North American Chapter of the Association for Computational Linguistics.[S.l.]:Association for Computational Linguistics, 2019:4171-4186. [8] BINZ M, SCHULZ E. Using cognitive psychology to understand GPT-3[J]. Proceedings of the National Academy of Sciences of the United States of America, 2023, 120(6):e2218. [9] ALT C, GABRYSZAK A, HENNIG L. TACRED revisited:a thorough evaluation of the TACRED relation extraction task[C]//Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics.[S.l.]:Association for Computational Linguistics, 2020:1558-1569. [10] 张宁豫, 谢辛, 陈想, 等. 基于知识协同微调的低资源知识图谱补全方法[J]. 软件学报, 2022, 33(10):3531-3545. ZHANG N Y, XIE X, CHEN X, et al. Knowledge collaborative fine-tuning for low-resource knowledge graph completion[J]. Journal of Software, 2022, 33(10):3531-3545.(in Chinese) [11] BHARGAVA P, NG V. Commonsense knowledge reasoning and generation with pre-trained language models:a survey[J]. Proceedings of the AAAI Conference on Artificial Intelligence, 2022, 36(11):12317-12325. [12] SINGH G V, FIRDAUS M, MISHRA S, et al. Knowing what to say:towards knowledge grounded code-mixed response generation for open-domain conversations[J]. Knowledge-Based Systems, 2022, 249:108900. [13] HU S D, DING N, WANG H D, et al. Knowledgeable prompt-tuning:incorporating knowledge into prompt verbalizer for text classification[C]//Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics.[S.l.]:Association for Computational Linguistics, 2022:2225-2240. [14] YE H B, ZHANG N Y, DENG S M, et al. Ontology-enhanced prompt-tuning for few-shot learning[C]//Proceedings of the ACM Web Conference. New York,USA:ACM Press,2022:778-787. [15] FUJII A. Enhancing patent retrieval by citation analysis[C]//Proceedings of the 30th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval. New York,USA:ACM Press,2007:793-794. [16] KANG I S, NA S H, KIM J, et al. Cluster-based patent retrieval[J]. Information Processing & Management, 2007, 43(5):1173-1182. [17] LI S B, HU J, CUI Y X, et al. DeepPatent:patent classification with convolutional neural networks and word embedding[J]. Scientometrics, 2018, 117(2):721-744. [18] FANG L T, ZHANG L, WU H, et al. Patent2Vec:multi-view representation learning on patent-graphs for patent classification[J]. World Wide Web, 2021, 24(5):1791-1812. [19] GUU K, LEE K, TUNG Z, et al. Retrieval augmented language model pre-training[C]//Proceedings of International Conference on Machine Learning. New York,USA:ACM Press,2020:3929-3938. [20] LIU B, LIN T, LI M. Enhancing aspect-category sentiment analysis via syntactic data augmentation and knowledge enhancement[J]. Knowledge-Based Systems, 2023, 264:110339. [21] INCITTI F, URLI F, SNIDARO L. Beyond word embeddings:a survey[J]. Information Fusion, 2023, 89:418-436. [22] 陈彦桦, 李剑. 一种基于结构特征的树相似度计算方法[J]. 计算机工程, 2018, 44(11):197-201, 208. CHEN Y H, LI J. A tree similarity computation method based on structrue feature[J]. Computer Engineering, 2018, 44(11):197-201, 208.(in Chinese) [23] ROGERS A, GARDNER M, AUGENSTEIN I. QA dataset explosion:a taxonomy of NLP resources for question answering and reading comprehension[J]. ACM Computing Surveys, 2023, 55(10):1-45. [24] 陆晓蕾, 倪斌. 基于预训练语言模型的BERT-CNN多层级专利分类研究[J]. 中文信息学报, 2021, 35(11):70-79. LU X L, NI B. BERT-CNN:a hierarchical patent classifier based on pre-trained language model[J]. Journal of Chinese Information Processing, 2021, 35(11):70-79.(in Chinese) [25] 亢晓勉, 宗成庆. 基于篇章结构多任务学习的神经机器翻译[J]. 软件学报, 2022, 33(10):3806-3818. KANG X M, ZONG C Q. Neural machine translation based on multi-task learning of discourse structure[J]. Journal of Software, 2022, 33(10):3806-3818.(in Chinese) [26] LUND B D, WANG T. Chatting about ChatGPT:how may AI and GPT impact academia and libraries?[J]. Library Hi Tech News, 2023, 40(3):26-29. [27] SONG C Y, CAI F, WANG M R, et al. TaxonPrompt:Taxonomy-aware curriculum prompt learning for few-shot event classification[J]. Knowledge-Based Systems, 2023, 264:110290. [28] LIU X, JI K X, FU Y C, et al. P-tuning:prompt tuning can be comparable to fine-tuning across scales and tasks[C]//Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics.[S.l.]:Association for Computational Linguistics, 2022:61-68. [29] SCHICK T, SCHVTZE H. Exploiting cloze-questions for few-shot text classification and natural language inference[C]//Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics.[S.l.]:Association for Computational Linguistics, 2021:255-269. [30] LI X L, LIANG P. Prefix-tuning:optimizing continuous prompts for generation[C]//Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing.[S.l.]:Association for Computational Linguistics, 2021:4582-4597. [31] SUN T X, LIU X Y, QIU X P, et al. Paradigm shift in natural language processing[J]. Machine Intelligence Research, 2022, 19(3):169-183. [32] GROVER A, LESKOVEC J. node2vec:scalable feature learning for networks[C]//Proceedings of ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. New York,USA:ACM Press,2016:855-864. [33] MIKOLOV T, SUTSKEVER I, CHEN K, et al. Distributed representations of words and phrases and their compositionality[EB/OL].[2023-04-05].https://arxiv.org/abs/1310.4546. [34] PENNINGTON J, SOCHER R, MANNING C. GloVe:global vectors for word representation[C]//Proceedings of 2014 Conference on Empirical Methods in Natural Language Processing.[S.l.]:Association for Computational Linguistics, 2014:1532-1543. [35] JOULIN A, GRAVE E, BOJANOWSKI P, et al. Bag of tricks for efficient text classification[C]//Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics.[S.l.]:Association for Computational Linguistics, 2017:427-431. [36] REIMERS N, GUREVYCH I. Sentence-BERT:sentence embeddings using siamese BERT-networks[C]//Proceedings of 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing.[S.l.]:Association for Computational Linguistics, 2019:3982-3992. |