Scrapy Cloud skips pages in the loop

Problem description

The spider is supposed to iterate over https://lihkg.com/thread/`2479991 - i*10`/page/1, but for some reason it skips pages in the loop.
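
For reference, the first few iterations of the loop in parse (posted below) generate the following URLs:

for i in range(3):
    print("https://lihkg.com/thread/" + str(2479991 - i*10) + "/page/1")

# https://lihkg.com/thread/2479991/page/1
# https://lihkg.com/thread/2479981/page/1
# https://lihkg.com/thread/2479971/page/1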

Looking at the items scraped in Scrapy Cloud, the scraped items had the following URLs:

...
Item 10: https://lihkg.com/thread/2479941/page/1
Item 11: https://lihkg.com/thread/2479981/page/1
Item 12: https://lihkg.com/thread/2479971/page/1
Item 13: https://lihkg.com/thread/2479931/page/1
Item 14: https://lihkg.com/thread/2479751/page/1
Item 15: https://lihkg.com/thread/2479991/page/1
Item 16: https://lihkg.com/thread/1504771/page/1
Item 17: https://lihkg.com/thread/1184871/page/1
Item 18: https://lihkg.com/thread/1115901/page/1
Item 19: https://lihkg.com/thread/1062181/page/1
Item 20: https://lihkg.com/thread/1015801/page/1
Item 21: https://lihkg.com/thread/955001/page/1
Item 22: https://lihkg.com/thread/955011/page/1
Item 23: https://lihkg.com/thread/955021/page/1
Item 24: https://lihkg.com/thread/955041/page/1
...

Around a million pages were skipped.
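
To put a number on it: the jump from thread 2479751 (item 14) down to thread 1504771 (item 16) alone spans almost a million thread IDs, which at the loop's step of 10 is close to a hundred thousand requests:

gap = 2479751 - 1504771       # 974980 thread IDs between two adjacent scraped items
requests_skipped = gap // 10  # 97498 loop iterations, since the loop steps by 10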

The code is as follows:

from lihkg.items import LihkgItem
import scrapy
import time
from scrapy_splash import SplashRequest

class LihkgSpider13(scrapy.Spider):
    name = 'lihkg1-950000'
    http_user = '(my splash api key here)'
    allowed_domains = ['lihkg.com']
    start_urls = ['https://lihkg.com/']

    script1 = """
        function main(splash, args)
            splash.images_enabled = false
            assert(splash:go(args.url))
            assert(splash:wait(2))
            return {
                html = splash:html(),
                png = splash:png(),
                har = splash:har(),
            }
        end
    """

    def parse(self, response):
        # Schedule one request per thread, counting down from 2479991 in steps of 10
        for i in range(152500):
            time.sleep(0)  # no-op delay
            url = "https://lihkg.com/thread/" + str(2479991 - i*10) + "/page/1"
            yield SplashRequest(url=url, callback=self.parse_article, endpoint='execute',
                                args={'html': 1, 'lua_source': self.script1, 'wait': 2})

    def parse_article(self, response):
        # The XPaths below all target the first post of the thread (element id="1")
        item = LihkgItem()
        item['author'] = response.xpath('//*[@id="1"]/div/small/span[2]/a/text()').get()
        item['time'] = response.xpath('//*[@id="1"]/div/small/span[4]/@data-tip').get()
        item['texts'] = response.xpath('//*[@id="1"]/div/div[1]/div/text()').getall()
        item['images'] = response.xpath('//*[@id="1"]/div/div[1]/div/a/@href').getall()
        item['emoji'] = response.xpath('//*[@id="1"]/div/div[1]/div/img/@src').getall()
        item['title'] = response.xpath('//*[@id="app"]/nav/div[2]/div[1]/span/text()').get()
        item['likes'] = response.xpath('//*[@id="1"]/div/div[2]/div/div[1]/div/div[1]/label/text()').get()
        item['dislikes'] = response.xpath('//*[@id="1"]/div/div[2]/div/div[1]/div/div[2]/label/text()').get()
        item['category'] = response.xpath('//*[@id="app"]/nav/div[1]/div[2]/div/span/text()').get()
        item['url'] = response.url

        yield item

I have Crawlera, DeltaFetch and DotScrapy Persistence enabled in the project.
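
For reference, outside the Scrapy Cloud UI these three add-ons are typically wired up in settings.py roughly as below. This is only a sketch under that assumption; the project's actual settings are not shown in the question, and the API key is a placeholder:

# settings.py (sketch; the question does not include the real settings)

# Crawlera smart proxy
DOWNLOADER_MIDDLEWARES = {
    'scrapy_crawlera.CrawleraMiddleware': 610,
}
CRAWLERA_ENABLED = True
CRAWLERA_APIKEY = '<crawlera api key>'  # placeholder

# DeltaFetch: skips requests whose items were already scraped in earlier runs
SPIDER_MIDDLEWARES = {
    'scrapy_deltafetch.DeltaFetch': 100,
}
DELTAFETCH_ENABLED = True

# DotScrapy Persistence: keeps the .scrapy directory (including DeltaFetch's
# database) between Scrapy Cloud runs
EXTENSIONS = {
    'scrapy_dotpersistence.DotScrapyPersistence': 0,
}
DOTSCRAPY_ENABLED = True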

Solution

No effective solution to this problem has been found yet.
