
Computer Engineering ›› 2010, Vol. 36 ›› Issue (1): 12-14. doi: 10.3969/j.issn.1000-3428.2010.01.005

• Doctoral Dissertation •

Bigram Chinese Word Segmentation Model Based on Bayesian Network

刘 丹,方卫国,周 泓   

  1. (School of Economics and Management, Beihang University, Beijing 100083)
  • Received: 1900-01-01 Revised: 1900-01-01 Online: 2010-01-05 Published: 2010-01-05

Bigram Chinese Word Segmentation Model Based on Bayesian Network

LIU Dan, FANG Wei-guo, ZHOU Hong   

  1. (School of Economics and Management, Beihang University, Beijing 100083)
  • Received:1900-01-01 Revised:1900-01-01 Online:2010-01-05 Published:2010-01-05

Abstract: This paper proposes a Chinese word segmentation model based on Bayesian network. The model adopts a better-performing smoothing algorithm and simultaneously resolves overlapping and combinational segmentation ambiguities while recognizing transliterated names and person names. A character-synchronous Viterbi algorithm is applied to solve the model, which effectively improves segmentation efficiency while maintaining precision and recall. Experimental results show that, in the closed test, the model achieves a precision of 99.68% and a recall of 99.7%, at a segmentation speed of about 74 800 characters per second.
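For context, the bigram language model named in the title is conventionally written as the chain-rule approximation below; this is the standard textbook formulation only, not necessarily the exact Bayesian-network factorization or smoothing used in the paper. For a candidate segmentation $W = w_1 w_2 \cdots w_m$ of the input character string, with $w_0 = \langle s \rangle$ a sentence-start marker:

P(W) \approx \prod_{i=1}^{m} P(w_i \mid w_{i-1}), \qquad \hat{W} = \arg\max_{W} P(W)

The Viterbi algorithm recovers $\hat{W}$ by dynamic programming over the lattice of dictionary words covering the sentence.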

Keywords: Chinese word segmentation, Bayesian network, Viterbi algorithm, N-gram

Abstract: This paper proposes a Chinese word segmentation model based on Bayesian network, which adopts a better-performing smoothing algorithm and simultaneously resolves overlapping and combinational segmentation ambiguities while recognizing transliterated names and person names. A character-synchronous Viterbi algorithm is used to solve the model, which effectively improves segmentation efficiency while maintaining precision and recall. Experimental results show that the precision is 99.68% and the recall is 99.7% in the closed test, at a speed of about 74 800 characters per second.
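As an illustration of the general technique only (not the authors' model), the sketch below implements bigram Viterbi segmentation over a word lattice in Python. The dictionary, counts, and add-one smoothing are toy assumptions chosen for the example; the paper's Bayesian-network parameterization, smoothing algorithm, and name-recognition components are not reproduced.

# A minimal, illustrative sketch of bigram Viterbi word segmentation over a
# word lattice. Dictionary, counts, and smoothing are toy assumptions.
import math

# Toy counts (assumed for illustration only).
unigram = {"<s>": 2, "研究": 3, "生命": 2, "研究生": 1, "命": 1, "起源": 2, "</s>": 2}
bigram = {("<s>", "研究"): 2, ("研究", "生命"): 2, ("生命", "起源"): 2,
          ("<s>", "研究生"): 1, ("研究生", "命"): 1, ("命", "起源"): 1,
          ("起源", "</s>"): 2}
VOCAB = len(unigram)

def log_p(prev, word):
    """Add-one smoothed bigram log-probability P(word | prev)."""
    return math.log((bigram.get((prev, word), 0) + 1) /
                    (unigram.get(prev, 0) + VOCAB))

def segment(sentence, max_word_len=4):
    """Return the most probable segmentation under the toy bigram model."""
    n = len(sentence)
    # Dynamic-programming state: (end position, last word) -> (best log-prob, previous state).
    best = {(0, "<s>"): (0.0, None)}
    for end in range(1, n + 1):
        for start in range(max(0, end - max_word_len), end):
            word = sentence[start:end]
            if word not in unigram:
                continue
            for (pos, prev), (score, _) in list(best.items()):
                if pos != start:
                    continue
                cand = score + log_p(prev, word)
                key = (end, word)
                if key not in best or cand > best[key][0]:
                    best[key] = (cand, (pos, prev))
    # Pick the best complete path (adding the end-of-sentence transition) and trace it back.
    finals = [(s + log_p(w, "</s>"), (pos, w))
              for (pos, w), (s, _) in best.items() if pos == n]
    if not finals:
        return [sentence]
    _, state = max(finals)
    words = []
    while state is not None and state[1] != "<s>":
        words.append(state[1])
        state = best[state][1]
    return list(reversed(words))

print(segment("研究生命起源"))  # -> ['研究', '生命', '起源'] under the toy counts above

Keying the state on (end position, last word) is what makes the search exact for a bigram model; a state keyed on position alone would only be exact for a unigram model.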

Key words: Chinese word segmentation, Bayesian network, Viterbi algorithm, N-gram

CLC number: