
Computer Engineering ›› 2022, Vol. 48 ›› Issue (6): 288-294. doi: 10.19678/j.issn.1000-3428.0061119

• Development Research and Engineering Application •

  • About the authors: WANG Gang (born 1994), male, master's student, research direction: natural language processing; SUN Yuanyuan (corresponding author), professor and doctoral supervisor; CHEN Yanguang, master's student; LIN Hongfei, professor and doctoral supervisor.
  • Funding:
    National Key Research and Development Program of China (2018YFC0830604).

Segmented Summarization Model for Legal Documents

WANG Gang, SUN Yuanyuan, CHEN Yanguang, LIN Hongfei   

  1. School of Computer Science and Technology, Dalian University of Technology, Dalian, Liaoning 116024, China
  • Received:2021-03-15 Revised:2021-07-15 Published:2021-07-27



Abstract: Text summarization is the process of condensing the content of a text and extracting its main points to form a summary. Existing text summarization models usually treat content selection and summary generation as independent steps. Although this separation can effectively improve the performance of sentence compression and fusion, some textual information is lost during extraction, reducing accuracy. Building on a pre-trained model and a document-level sentence encoder with a Transformer structure, a segmented summarization model that combines content extraction with summary generation is proposed. The BERT model is used to perform self-supervised learning on a large corpus, yielding word representations rich in semantic information. On top of the Transformer structure, a fully connected classifier assigns each sentence one of three labels, extracting the set of source sentences corresponding to each summary sentence. A Pointer-Generator (PG) network then compresses each source-sentence set into a single summary sentence, shortening both the input and output sequences. Experimental results show that, compared with directly generating the full summary, the model improves the average F1 of ROUGE-1, ROUGE-2, and ROUGE-L on generated sentences by 1.69 percentage points, effectively improving the accuracy of the generated sentences.
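The extraction step described above — a fully connected classifier assigning each sentence one of three labels, then collecting the source-sentence set behind each summary sentence — can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the B/I/O interpretation of the three labels (Begin a new set, Inside the current set, Outside the summary) and the random stand-ins for BERT sentence embeddings are assumptions for the sketch.

```python
import numpy as np

def label_sentences(sent_embs, W, b):
    """Fully connected classifier over sentence embeddings:
    assigns each sentence one of three tags (assumed 0=B, 1=I, 2=O)."""
    logits = sent_embs @ W + b      # shape: (n_sentences, 3)
    return logits.argmax(axis=1)

def group_into_sets(sentences, tags):
    """Collect the source-sentence set for each summary sentence:
    a B tag (0) opens a new set, I (1) extends it, O (2) is skipped."""
    sets, current = [], None
    for sent, tag in zip(sentences, tags):
        if tag == 0:                            # B: start a new set
            current = [sent]
            sets.append(current)
        elif tag == 1 and current is not None:  # I: extend current set
            current.append(sent)
        # tag == 2 (O): sentence belongs to no summary sentence
    return sets

# Toy run with random "embeddings" standing in for BERT outputs.
rng = np.random.default_rng(0)
sentences = ["s1", "s2", "s3", "s4"]
embs = rng.normal(size=(4, 8))
W, b = rng.normal(size=(8, 3)), np.zeros(3)
tags = label_sentences(embs, W, b)
sets = group_into_sets(sentences, tags)
```

Each set produced this way would then be handed to the Pointer-Generator network, which compresses it into a single summary sentence — so the generator sees short sentence sets rather than the whole document.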

Key words: judicial summarization, pre-trained model, Transformer encoder, sequence labeling, Pointer-Generator (PG) network, segmented summarization model

CLC Number: