Article Scraping Program (An Example Analysis of Writing a scrapy Spider in Python)

优采云  Published: 2022-01-16 02:15

  This article introduces a simple spider scraper built on scrapy and looks at the techniques scrapy offers for implementing such a program. It should be of some reference value; readers who need it can read on.

  The example below describes a simple scrapy-based spider, shared here for your reference. The details are as follows:

  

# Standard Python library imports

# 3rd party imports
from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
from scrapy.selector import HtmlXPathSelector

# My imports
from poetry_analysis.items import PoetryAnalysisItem

HTML_FILE_NAME = r'.+\.html'


class PoetryParser(object):
    """
    Provides a common parsing method for poems formatted this one specific way.
    """
    date_pattern = r'(\d{2} \w{3,9} \d{4})'

    def parse_poem(self, response):
        hxs = HtmlXPathSelector(response)
        item = PoetryAnalysisItem()
        # All poetry text is in pre tags
        text = hxs.select('//pre/text()').extract()
        item['text'] = ''.join(text)
        item['url'] = response.url
        # head/title contains "title - a poem by author"
        title_text = hxs.select('//head/title/text()').extract()[0]
        item['title'], item['author'] = title_text.split(' - ')
        item['author'] = item['author'].replace('a poem by', '')
        for key in ['title', 'author']:
            item[key] = item[key].strip()
        # date_pattern is a class attribute, so it must be accessed via self
        item['date'] = hxs.select("//p[@class='small']/text()").re(self.date_pattern)
        return item


class PoetrySpider(CrawlSpider, PoetryParser):
    name = 'example.com_poetry'
    allowed_domains = ['www.example.com']
    root_path = 'someuser/poetry/'
    start_urls = ['http://www.example.com/someuser/poetry/recent/',
                  'http://www.example.com/someuser/poetry/less_recent/']
    # Crawl each start URL and hand every linked .html page to parse_poem
    rules = [Rule(SgmlLinkExtractor(allow=[start_urls[0] + HTML_FILE_NAME]),
                  callback='parse_poem'),
             Rule(SgmlLinkExtractor(allow=[start_urls[1] + HTML_FILE_NAME]),
                  callback='parse_poem')]
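
  The spider imports PoetryAnalysisItem from poetry_analysis.items, a file the article never shows. A minimal sketch of a compatible item definition, declaring only the fields that parse_poem assigns (the original project's actual definition is assumed), could look like this:

# poetry_analysis/items.py -- minimal sketch; only the fields used in
# parse_poem() are declared, since the original file is not shown above.
from scrapy.item import Item, Field

class PoetryAnalysisItem(Item):
    title = Field()
    author = Field()
    text = Field()
    url = Field()
    date = Field()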
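
  Note that the example targets an old Scrapy release: the scrapy.contrib package, SgmlLinkExtractor and HtmlXPathSelector were deprecated and later removed. A rough equivalent for current Scrapy versions, keeping the same made-up example.com URLs, might look like the sketch below (an illustration, not a drop-in replacement):

# Rough modern-Scrapy sketch of the same spider; module paths and selector
# calls follow current Scrapy, the target site is still the made-up example.com.
from scrapy.spiders import CrawlSpider, Rule
from scrapy.linkextractors import LinkExtractor

from poetry_analysis.items import PoetryAnalysisItem

HTML_FILE_NAME = r'.+\.html'


class PoetrySpider(CrawlSpider):
    name = 'example.com_poetry'
    allowed_domains = ['www.example.com']
    start_urls = ['http://www.example.com/someuser/poetry/recent/',
                  'http://www.example.com/someuser/poetry/less_recent/']
    # LinkExtractor replaces the removed SgmlLinkExtractor; a single rule
    # covers the .html pages linked from both start URLs.
    rules = [Rule(LinkExtractor(allow=[url + HTML_FILE_NAME for url in start_urls]),
                  callback='parse_poem')]

    date_pattern = r'(\d{2} \w{3,9} \d{4})'

    def parse_poem(self, response):
        # response.xpath() replaces the old HtmlXPathSelector wrapper
        item = PoetryAnalysisItem()
        item['text'] = ''.join(response.xpath('//pre/text()').getall())
        item['url'] = response.url
        title_text = response.xpath('//head/title/text()').get()
        item['title'], item['author'] = title_text.split(' - ')
        item['title'] = item['title'].strip()
        item['author'] = item['author'].replace('a poem by', '').strip()
        item['date'] = response.xpath("//p[@class='small']/text()").re(self.date_pattern)
        return item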

  I hope this article is helpful for your Python programming.
