My Scrapy script is very slow, extracting only 100 items in 3 minutes

Problem description

I am learning Scrapy because I read that it runs asynchronously and is therefore faster than Selenium. In practice, though, it takes about 3 minutes to scrape just 100 items. I can't figure out why; please help.

import scrapy
from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule
from scrapy.loader import ItemLoader
from batt_data.items import BattDataItem
import urllib.parse
from selenium import webdriver

class BatterySpider(CrawlSpider):
    name = 'battery'
#     allowed_domains = ['web']
    start_urls = ['https://www.made-in-china.com/multi-search/24v%2Bbattery/F1/1.html']
    base_url = ['https://www.made-in-china.com/multi-search/24v%2Bbattery/F1/1.html']
    
    
    # driver = webdriver.Chrome()
    # driver.find_element_by_xpath('//a[contains(@class,"list-switch-btn list-switch-btn-right selected")]').click()

    rules = (
        Rule(LinkExtractor(restrict_xpaths='//*[contains(@class,"nextpage")]'),
             callback='parse_item', follow=True),
    )

    def parse_item(self, response):
        # Page-wide selectors: each list collects one field for every product on the page
        price = response.css('.price::text').extract()
        description = response.xpath('//img[@class="J-firstLazyload"]/@alt').extract()
        chemistry = response.xpath('//li[@class="J-faketitle ellipsis"][1]/span/text()').extract()
        applications = response.xpath('//li[@class="J-faketitle ellipsis"][2]/span/text()').extract()
        discharge_rate = response.xpath('//li[@class="J-faketitle ellipsis"][4]/span/text()').extract()
        shape = response.xpath('//li[@class="J-faketitle ellipsis"][5]/span/text()').extract()

        # zip() pairs the parallel lists back into per-product tuples
        data = zip(description, price, chemistry, applications, discharge_rate, shape)
        for item in data:
            # yield inside the loop so every product is emitted, not just the last one
            yield {
                'description': item[0],
                'price': item[1],
                'chemistry': item[2],
                'applications': item[3],
                'discharge_rate': item[4],
                'shape': item[5],
            }

Solution

I was actually sending too many requests. I fixed it by iterating over the container element that holds all the fields I need, so each listing page is handled in a single request. The updated spider finished the job in under 1 minute.
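
A minimal sketch of that container-based approach is below. The container selector (.prod-content) and the relative field selectors inside it are assumptions for illustration, not the exact ones from my spider; adjust them to the real page markup.

import scrapy
from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule


class BatterySpider(CrawlSpider):
    name = 'battery'
    start_urls = ['https://www.made-in-china.com/multi-search/24v%2Bbattery/F1/1.html']

    # Only follow the "next page" links; each listing page is parsed in one request
    rules = (
        Rule(LinkExtractor(restrict_xpaths='//*[contains(@class,"nextpage")]'),
             callback='parse_item', follow=True),
    )

    def parse_item(self, response):
        # One container element per product keeps all related fields together,
        # so there is no need for extra per-item requests.
        # '.prod-content' and the XPaths below are assumed selectors.
        for product in response.css('.prod-content'):
            yield {
                'description': product.xpath('.//img/@alt').get(),
                'price': product.css('.price::text').get(),
                'chemistry': product.xpath('.//li[1]/span/text()').get(),
                'applications': product.xpath('.//li[2]/span/text()').get(),
                'discharge_rate': product.xpath('.//li[4]/span/text()').get(),
                'shape': product.xpath('.//li[5]/span/text()').get(),
            }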