Scraping paginated web pages with Scrapy: crawling a given URL and writing the data to a file (spider code snapshot)

优采云 Published: 2022-03-07 07:21


This post shows how to scrape data from a given URL with Scrapy and write it to a file. It should be a useful reference if you are stuck on the same problem.

Problem description

I am trying to crawl a given website in depth and scrape the text from all of its pages. I am using Scrapy to crawl the site.

This is how I run the spider: scrapy crawl stack_crawler -o items.json

The resulting items.json file is empty.

Here is the spider code snapshot:

# -*- coding: utf-8 -*-
import scrapy
from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule
#from tutorial.items import TutorialItem
from tutorial.items import DmozItem

class StackCrawlerSpider(CrawlSpider):
    name = 'stack_crawler'
    allowed_domains = ['http://www.dmoz.org']
    start_urls = ['http://www.dmoz.org/']

    rules = (
        Rule(LinkExtractor(allow=r'Items/'), callback='parse_item', follow=True),
    )

    def parse_item(self, response):
        i = TutorialItem()
        i['domain_id'] = response.xpath('//input[@id="sid"]/@value').extract()
        i['name'] = response.xpath('//div[@id="name"]').extract()
        i['description'] = response.xpath('//div[@id="description"]').extract()
        return i

Here is the log I get when I run the spider:

dummy-MacBook-Pro:spiders Dummy$ scrapy crawl stack_crawler -o items.json
2016-06-09 10:22:23 [scrapy] INFO: Scrapy 1.1.0 started (bot: tutorial)
2016-06-09 10:22:23 [scrapy] INFO: Overridden settings: {'NEWSPIDER_MODULE': 'tutorial.spiders', 'FEED_URI': 'items.json', 'SPIDER_MODULES': ['tutorial.spiders'], 'BOT_NAME': 'tutorial', 'ROBOTSTXT_OBEY': True, 'FEED_FORMAT': 'json'}
2016-06-09 10:22:23 [scrapy] INFO: Enabled extensions:
['scrapy.extensions.feedexport.FeedExporter',
 'scrapy.extensions.logstats.LogStats',
 'scrapy.extensions.telnet.TelnetConsole',
 'scrapy.extensions.corestats.CoreStats']
2016-06-09 10:22:23 [scrapy] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.robotstxt.RobotsTxtMiddleware',
 'scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
 'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
 'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
 'scrapy.downloadermiddlewares.retry.RetryMiddleware',
 'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
 'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
 'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
 'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
 'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
 'scrapy.downloadermiddlewares.chunked.ChunkedTransferMiddleware',
 'scrapy.downloadermiddlewares.stats.DownloaderStats']
2016-06-09 10:22:23 [scrapy] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
 'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
 'scrapy.spidermiddlewares.referer.RefererMiddleware',
 'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
 'scrapy.spidermiddlewares.depth.DepthMiddleware']
2016-06-09 10:22:23 [scrapy] INFO: Enabled item pipelines:
[]
2016-06-09 10:22:23 [scrapy] INFO: Spider opened
2016-06-09 10:22:23 [scrapy] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2016-06-09 10:22:23 [scrapy] DEBUG: Telnet console listening on 127.0.0.1:6024
2016-06-09 10:22:24 [scrapy] DEBUG: Crawled (200) (referer: None)
2016-06-09 10:22:24 [scrapy] DEBUG: Crawled (200) (referer: None)
2016-06-09 10:22:24 [scrapy] INFO: Closing spider (finished)
2016-06-09 10:22:24 [scrapy] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 430,
 'downloader/request_count': 2,
 'downloader/request_method_count/GET': 2,
 'downloader/response_bytes': 5694,
 'downloader/response_count': 2,
 'downloader/response_status_count/200': 2,
 'finish_reason': 'finished',
 'finish_time': datetime.datetime(2016, 6, 9, 4, 52, 24, 862900),
 'log_count/DEBUG': 3,
 'log_count/INFO': 7,
 'response_received_count': 2,
 'scheduler/dequeued': 1,
 'scheduler/dequeued/memory': 1,
 'scheduler/enqueued': 1,
 'scheduler/enqueued/memory': 1,
 'start_time': datetime.datetime(2016, 6, 9, 4, 52, 23, 483092)}
2016-06-09 10:22:24 [scrapy] INFO: Spider closed (finished)

Here is the item code snapshot:

import scrapy

class DmozItem(scrapy.Item):
    title = scrapy.Field()
    link = scrapy.Field()
    desc = scrapy.Field()

Can anyone help me figure out what I am doing wrong at the code level, so that I can get the data?

Recommended answer

I think you are new to Scrapy, and you have made several mistakes in your code. Your log already hints at the problem: only two requests were made (the robots.txt fetch and the start page), the scheduler enqueued nothing else, and 0 items were scraped, so your crawl never produced any follow-up requests and parse_item was never called. In detail:

1. Scrapy gives you the default callbacks parse and start_requests, so you do not need the LinkExtractor here. Define a parse method and handle the responses for start_urls directly in it.

2. You defined one item class in items.py (DmozItem) but instantiated a different one (TutorialItem) in the spider. The field names do not match, so there is a conflict.

3. The XPath expressions you chose for the field values are not correct for this site.

You should try this instead.

Spider code snapshot:

import scrapy
from lxml import html
from scrapy.spiders import CrawlSpider
from tutorial.items import DmozItem

class StackCrawlerSpider(CrawlSpider):
    name = 'stack_crawler'
    # allowed_domains takes bare domain names, not URLs
    allowed_domains = ['www.dmoz.org']
    start_urls = ['http://www.dmoz.org/']

    def parse(self, response):
        # Parse the raw body with lxml and read the values from <meta> tags
        doc = html.fromstring(response.body)
        i = DmozItem()
        i['title'] = doc.xpath('//meta[@property="og:title"]/@content')
        i['link'] = response.url
        i['desc'] = doc.xpath('//meta[@name="description"]/@content')
        yield i
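As a side note, the lxml round-trip is not strictly necessary: Scrapy's built-in response.xpath selectors can pull out the same values. A minimal sketch of an equivalent parse method, assuming the same DmozItem definition as below:

def parse(self, response):
    # Same extraction as above, using Scrapy's own selectors instead of lxml
    i = DmozItem()
    i['title'] = response.xpath('//meta[@property="og:title"]/@content').extract()
    i['link'] = response.url
    i['desc'] = response.xpath('//meta[@name="description"]/@content').extract()
    yield i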

Item code snapshot:

import scrapy

class DmozItem(scrapy.Item):
    title = scrapy.Field()
    link = scrapy.Field()
    desc = scrapy.Field()

This works.
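Note that the fixed spider only parses the pages listed in start_urls, while the original goal was to crawl the whole site and scrape every page. If you still want that, you can keep the CrawlSpider rules, but the callback must not be named parse, because CrawlSpider reserves parse for its own rule handling. Below is a hedged sketch under that assumption; the spider name, the follow-everything rule, and the meta-tag XPaths are illustrative choices, not from the original post:

import scrapy
from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule
from tutorial.items import DmozItem

class DeepCrawlerSpider(CrawlSpider):
    name = 'deep_crawler'  # hypothetical name, not from the original post
    allowed_domains = ['www.dmoz.org']
    start_urls = ['http://www.dmoz.org/']

    # LinkExtractor() with no arguments extracts every link; the offsite
    # middleware then keeps the crawl inside allowed_domains. follow=True
    # makes CrawlSpider keep walking links found on each scraped page.
    rules = (
        Rule(LinkExtractor(), callback='parse_page', follow=True),
    )

    def parse_page(self, response):
        # Runs for every page the rule fetches, including paginated listings
        i = DmozItem()
        i['title'] = response.xpath('//meta[@property="og:title"]/@content').extract()
        i['link'] = response.url
        i['desc'] = response.xpath('//meta[@name="description"]/@content').extract()
        yield i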

That wraps up this post on scraping data from a given URL and writing it to a file with Scrapy. We hope the recommended answer helps you solve the problem.
