Avoiding Duplicate Crawling with a Custom Scrapy Middleware in Python

This article walks through an example of a custom Scrapy spider middleware, written in Python, that avoids re-crawling item pages which have already been visited. It is shared here for your reference; the details are as follows:

from scrapy import log
from scrapy.http import Request
from scrapy.item import BaseItem
from scrapy.utils.request import request_fingerprint
from myproject.items import MyItem


class IgnoreVisitedItems(object):
    """Spider middleware that ignores requests to item pages which
    have already been visited.

    Requests to be filtered must have the meta key 'filter_visited'
    set. They may optionally define a 'visited_id' in meta to identify
    them; it defaults to the request fingerprint, although using the
    item id (if you already know it beforehand) is more robust.
    """
    FILTER_VISITED = 'filter_visited'
    VISITED_ID = 'visited_id'
    CONTEXT_KEY = 'visited_ids'

    def process_spider_output(self, response, result, spider):
        # The ids of already visited pages are kept in the spider's context
        context = getattr(spider, 'context', {})
        visited_ids = context.setdefault(self.CONTEXT_KEY, {})
        ret = []
        for x in result:
            visited = False
            if isinstance(x, Request):
                # Only filter requests that explicitly opted in via meta
                if self.FILTER_VISITED in x.meta:
                    visit_id = self._visited_id(x)
                    if visit_id in visited_ids:
                        log.msg("Ignoring already visited: %s" % x.url,
                                level=log.INFO, spider=spider)
                        visited = True
            elif isinstance(x, BaseItem):
                # An item came back, so mark its source page as visited
                visit_id = self._visited_id(response.request)
                if visit_id:
                    visited_ids[visit_id] = True
                    x['visit_id'] = visit_id
                    x['visit_status'] = 'new'
            if visited:
                # Emit a placeholder item instead of the duplicate request
                ret.append(MyItem(visit_id=visit_id, visit_status='old'))
            else:
                ret.append(x)
        return ret

    def _visited_id(self, request):
        return request.meta.get(self.VISITED_ID) or request_fingerprint(request)
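
The middleware imports MyItem and fills in its visit_id and visit_status fields, so the item class in myproject/items.py has to declare them. A minimal sketch of what that item definition might look like (any other fields your spider scrapes are up to you; this layout is an assumption, not part of the original example):

from scrapy.item import Item, Field

class MyItem(Item):
    # Fields assigned by IgnoreVisitedItems; add your own scraped fields here
    visit_id = Field()
    visit_status = Field()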

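To actually use the middleware, register it under SPIDER_MIDDLEWARES in settings.py and set the 'filter_visited' meta flag on the requests you want deduplicated ('visited_id' is optional and falls back to the request fingerprint). The module path, spider name and URLs below are placeholders; the sketch uses the same older Scrapy API generation as the middleware above:

# settings.py -- adjust the module path to wherever IgnoreVisitedItems lives
SPIDER_MIDDLEWARES = {
    'myproject.middlewares.IgnoreVisitedItems': 543,
}

# myproject/spiders/example.py -- spider name and URLs are placeholders
from scrapy.spider import BaseSpider
from scrapy.http import Request
from myproject.items import MyItem

class ExampleSpider(BaseSpider):
    name = 'example'
    start_urls = ['http://www.example.com/items']

    def parse(self, response):
        # Tag item-page requests with 'filter_visited' so the middleware
        # can drop any whose id/fingerprint has already been seen
        for url in ['http://www.example.com/item/1',
                    'http://www.example.com/item/2']:
            yield Request(url, callback=self.parse_item,
                          meta={'filter_visited': True})

    def parse_item(self, response):
        item = MyItem()
        # ... populate the item's own fields here ...
        yield item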
Hopefully this article is of some help to you in your Python programming.
