Abstract: The Deep Web contains rich, high-quality information resources. Because no static links point directly to Deep Web pages, most current search engines cannot discover them; these pages can only be obtained by filling in and submitting query forms. This paper proposes a crawling strategy for a Deep Web crawler. The layered results of a Web page classifier guide the link information extractor in selecting promising links. The crawling depth is limited to three levels: links are extracted starting from the pages closest to the query form, and only links belonging to these three levels are followed, which reduces crawling time and improves the crawler's accuracy. Constraint conditions for the focused crawling algorithm are also designed. Experimental results show that this strategy can effectively download Deep Web pages and improve crawling efficiency.
Key words: Deep Web page, feedback mechanism, crawling strategy, focused crawler, Web database, classifier
CLC number:
LIU Hui, HUANG Kuan-Na, YU Jian-Qiao. Crawling Strategy of Deep Web Crawler[J]. Computer Engineering, 2012, 38(11): 284-286.
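As a rough illustration of the strategy summarized in the abstract, the following Python sketch performs a breadth-first crawl that stops after three levels and records pages containing query forms. It is only a sketch under stated assumptions, not the authors' implementation: the looks_promising keyword filter is a hypothetical stand-in for the paper's classifier-guided link extraction, and the seed URL is an arbitrary example.

from collections import deque
from html.parser import HTMLParser
from urllib.parse import urljoin
from urllib.request import urlopen

MAX_DEPTH = 3  # crawling depth limit described in the abstract


class LinkAndFormParser(HTMLParser):
    """Collects <a href> targets and notes whether the page contains a <form>."""

    def __init__(self):
        super().__init__()
        self.links = []
        self.has_form = False

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            href = dict(attrs).get("href")
            if href:
                self.links.append(href)
        elif tag == "form":
            self.has_form = True


def looks_promising(url):
    # Hypothetical stand-in for the paper's classifier: a simple keyword heuristic.
    return any(word in url.lower() for word in ("search", "query", "result", "list"))


def crawl(seed_url, max_depth=MAX_DEPTH):
    """Breadth-first crawl from seed_url, following links for at most max_depth levels."""
    seen = {seed_url}
    queue = deque([(seed_url, 0)])
    form_pages = []                      # candidate Deep Web entry points
    while queue:
        url, depth = queue.popleft()
        try:
            html = urlopen(url, timeout=10).read().decode("utf-8", "replace")
        except OSError:
            continue                     # skip pages that cannot be fetched
        parser = LinkAndFormParser()
        parser.feed(html)
        if parser.has_form:
            form_pages.append(url)       # page offers a query form
        if depth >= max_depth:
            continue                     # depth limit reached, do not go deeper
        for href in parser.links:
            target = urljoin(url, href)
            if target not in seen and looks_promising(target):
                seen.add(target)
                queue.append((target, depth + 1))
    return form_pages


if __name__ == "__main__":
    print(crawl("http://example.com/"))  # example seed URL only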