Problem description
Background - TL;DR: I have a memory leak in my project.

I've spent a few days going carefully through the memory-leak documentation but can't find the problem. I'm developing a medium-sized Scrapy project, roughly 40k requests per day.

I host it using Scrapinghub's scheduled runs. On Scrapinghub, for $9 per month, you get one VM with 1GB of RAM to run your crawler.

I developed the crawler locally and uploaded it to Scrapinghub; the only problem is that towards the end of a run I exceed the memory limit.

Locally, setting CONCURRENT_REQUESTS=16 works fine, but on Scrapinghub it leads to memory being exceeded at around 50% of the run. With CONCURRENT_REQUESTS=4 memory is exceeded at around 95%, so reducing it to 2 should fix the problem, but then my crawler becomes too slow.

The other solution is to pay for a second VM to increase the RAM, but I have a feeling that the way I've set up my crawler is causing a memory leak.
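For reference, these are the kind of knobs being discussed; a minimal settings.py sketch, with the concurrency values taken from the description above (the MEMUSAGE_* lines only show how Scrapy's memory extension is normally capped and are not taken from the project or from Scrapinghub):

# settings.py (sketch)
CONCURRENT_REQUESTS = 16      # fine locally, memory exceeded ~50% through a Scrapinghub run
# CONCURRENT_REQUESTS = 4    # memory still exceeded at ~95% of the run
# CONCURRENT_REQUESTS = 2    # would probably fit in 1GB, but the crawl gets too slow

MEMUSAGE_ENABLED = True       # the MemoryUsage extension that reports memusage/max (see logs below)
MEMUSAGE_LIMIT_MB = 950       # illustrative cap just under the 1GB VM limit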
In this example, the project scrapes an online retailer. When run locally with CONCURRENT_REQUESTS=16, my memusage/max is 2.7GB.

I'll now walk through my crawl structure:
- Get the total number of pages to scrape
- Loop through all of those pages using www.example.com/page={page_num}
- On each page, gather information on 48 products
- For each product, go to its page and collect some information
- Using that information, call an API directly for each product
- Save these items with an item pipeline (locally I write to csv, but not on Scrapinghub)
- Pipeline
import json

class Pipeline(object):
    def process_item(self, item, spider):
        # Parse the raw API response text and keep only the 'subProducts' list
        item['stock_jsons'] = json.loads(item['stock_jsons'])['subProducts']
        return item
- Items
import scrapy

class mainItem(scrapy.Item):
    date = scrapy.Field()
    url = scrapy.Field()
    active_col_num = scrapy.Field()
    all_col_nums = scrapy.Field()
    old_price = scrapy.Field()
    current_price = scrapy.Field()
    image_urls_full = scrapy.Field()
    stock_jsons = scrapy.Field()

class URLItem(scrapy.Item):
    urls = scrapy.Field()
- Main spider
import requests
import scrapy
from tqdm import tqdm

class ProductSpider(scrapy.Spider):
    name = 'product'

    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        # 'headers' is defined elsewhere (not shown)
        page = requests.get('www.example.com', headers=headers)
        self.num_pages = # gets the number of pages to search

    def start_requests(self):
        # Step 2: request every listing page
        for page in tqdm(range(1, self.num_pages + 1)):
            url = f'www.example.com/page={page}'
            yield scrapy.Request(url=url, headers=headers, callback=self.prod_url)

    def prod_url(self, response):
        # Step 3: collect the product URLs on each listing page
        urls_item = URLItem()
        extracted_urls = response.xpath(####).extract()  # Gets URLs to follow
        urls_item['urls'] = [# Get a list of urls]
        for url in urls_item['urls']:
            yield scrapy.Request(url=url, callback=self.parse)

    def parse(self, response):
        # Step 4: parse the main product page
        item = mainItem()
        item['date'] = DATETIME_VAR
        item['url'] = response.url
        item['active_col_num'] = XXX
        item['all_col_nums'] = XXX
        item['old_price'] = XXX
        item['current_price'] = XXX
        item['image_urls_full'] = XXX
        try:
            new_url = 'www.exampleAPI.com/' + item['active_col_num']
        except TypeError:
            new_url = 'www.exampleAPI.com/{dummy_number}'
        yield scrapy.Request(new_url, callback=self.parse_attr, meta={'item': item})

    def parse_attr(self, response):
        ## This calls an API, Step 5
        item = response.meta['item']
        item['stock_jsons'] = response.text
        yield item
What have I tried so far?
- psutils, which didn't help much.
- trackref.print_live_refs(), which returns the following at the end of a run:
HtmlResponse 31 oldest: 3s ago
mainItem 18 oldest: 5s ago
ProductSpider 1 oldest: 3321s ago
Request 43 oldest: 105s ago
Selector 16 oldest: 3s ago
- Printing the top 10 global variables over time
- Printing the top 10 item types over time (see the sketch below for how this kind of periodic logging can be wired up)
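A minimal sketch of one way to do that kind of periodic logging as a Scrapy extension, logging process RSS and Scrapy's tracked live objects. It assumes psutil is installed; the class name, the 60-second interval and the log format are illustrative, while the Scrapy/Twisted APIs and scrapy.utils.trackref.live_refs are real:

import logging

import psutil
from scrapy import signals
from scrapy.utils.trackref import live_refs
from twisted.internet import task

logger = logging.getLogger(__name__)

class MemoryDebugExtension:
    """Logs process RSS and tracked live objects every `interval` seconds."""

    def __init__(self, interval=60.0):
        self.interval = interval
        self.task = None

    @classmethod
    def from_crawler(cls, crawler):
        ext = cls()
        crawler.signals.connect(ext.spider_opened, signal=signals.spider_opened)
        crawler.signals.connect(ext.spider_closed, signal=signals.spider_closed)
        return ext

    def spider_opened(self, spider):
        self.task = task.LoopingCall(self.log_usage)
        self.task.start(self.interval)

    def spider_closed(self, spider):
        if self.task and self.task.running:
            self.task.stop()

    def log_usage(self):
        rss_mb = psutil.Process().memory_info().rss / (1024 * 1024)
        counts = {cls_.__name__: len(wdict) for cls_, wdict in live_refs.items()}
        logger.info('RSS %.1f MB, live refs: %s', rss_mb, counts)

It would be enabled through the EXTENSIONS setting, e.g. EXTENSIONS = {'myproject.extensions.MemoryDebugExtension': 500} (the module path here is hypothetical).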
Questions
- How can I find the memory leak?
- Can anyone see where I'm leaking memory?
- Is there a fundamental problem with how my Scrapy project is structured?
Please let me know if any more information is required.
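(For context, the trackref figures above can be dug into further with the helpers the memory-leak docs describe; a minimal sketch, where everything apart from the scrapy.utils.trackref functions and the class names taken from the output above is illustrative:)

# Inspecting the live objects behind the trackref table above,
# e.g. from the telnet console or a signal handler while the spider runs.
from scrapy.utils.trackref import print_live_refs, get_oldest, iter_all

print_live_refs()                         # prints the table shown above
oldest_resp = get_oldest('HtmlResponse')  # oldest HtmlResponse still referenced
if oldest_resp is not None:
    print(oldest_resp.url)                # which page is being kept alive?
for req in iter_all('Request'):           # walk every Request still in memory
    print(req.url)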
Additional information requested
- Please note that the output below is from my local machine, where I have plenty of RAM, so the website being scraped becomes the bottleneck. When running on Scrapinghub, the suspected memory leak becomes the problem because of the 1GB limit.
Please let me know if you need the output from Scrapinghub as well; I expect it to be the same, apart from the memory-exceeded message at the end.
1. Log lines from the start (from INFO: Scrapy xxx through Spider opened).
2020-09-17 11:54:11 [scrapy.utils.log] INFO: Scrapy 2.3.0 started (bot: PLT)
2020-09-17 11:54:11 [scrapy.utils.log] INFO: Versions: lxml 4.5.2.0,libxml2 2.9.10,cssselect 1.1.0,parsel 1.6.0,w3lib 1.22.0,Twisted 20.3.0,Python 3.7.4 (v3.7.4:e09359112e,Jul 8 2019,14:54:52) - [Clang 6.0 (clang-600.0.57)],pyOpenSSL 19.1.0 (OpenSSL 1.1.1g 21 Apr 2020),cryptography 3.1,Platform Darwin-18.7.0-x86_64-i386-64bit
2020-09-17 11:54:11 [scrapy.crawler] INFO: Overridden settings:
{'BOT_NAME': 'PLT','CONCURRENT_REQUESTS': 14,'CONCURRENT_REQUESTS_PER_DOMAIN': 14,'DOWNLOAD_DELAY': 0.05,'LOG_LEVEL': 'INFO','NEWSPIDER_MODULE': 'PLT.spiders','SPIDER_MODULES': ['PLT.spiders']}
2020-09-17 11:54:11 [scrapy.extensions.telnet] INFO: Telnet Password: # blocked
2020-09-17 11:54:11 [scrapy.middleware] INFO: Enabled extensions:
['scrapy.extensions.corestats.CoreStats','scrapy.extensions.telnet.TelnetConsole','scrapy.extensions.memusage.MemoryUsage','scrapy.extensions.logstats.LogStats']
2020-09-17 11:54:12 [scrapy.middleware] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware','scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware','scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware','scrapy.downloadermiddlewares.useragent.UserAgentMiddleware','scrapy.downloadermiddlewares.retry.RetryMiddleware','scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware','scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware','scrapy.downloadermiddlewares.redirect.RedirectMiddleware','scrapy.downloadermiddlewares.cookies.CookiesMiddleware','scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware','scrapy.downloadermiddlewares.stats.DownloaderStats']
2020-09-17 11:54:12 [scrapy.middleware] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware','scrapy.spidermiddlewares.offsite.OffsiteMiddleware','scrapy.spidermiddlewares.referer.RefererMiddleware','scrapy.spidermiddlewares.urllength.UrlLengthMiddleware','scrapy.spidermiddlewares.depth.DepthMiddleware']
=======
17_Sep_2020_11_54_12
=======
2020-09-17 11:54:12 [scrapy.middleware] INFO: Enabled item pipelines:
['PLT.pipelines.PltPipeline']
2020-09-17 11:54:12 [scrapy.core.engine] INFO: Spider opened
2. Log lines from the end (from INFO: Dumping Scrapy stats to the end).
2020-09-17 11:16:43 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 15842233,
 'downloader/request_count': 42031,
 'downloader/request_method_count/GET': 42031,
 'downloader/response_bytes': 1108804016,
 'downloader/response_count': 42031,
 'downloader/response_status_count/200': 41999,
 'downloader/response_status_count/403': 9,
 'downloader/response_status_count/404': 1,
 'downloader/response_status_count/504': 22,
 'dupefilter/filtered': 110,
 'elapsed_time_seconds': 3325.171148,
 'finish_reason': 'finished',
 'finish_time': datetime.datetime(2020, 9, 17, 10, 16, 43, 258108),
 'httperror/response_ignored_count': 10,
 'httperror/response_ignored_status_count/403': 9,
 'httperror/response_ignored_status_count/404': 1,
 'item_scraped_count': 20769,
 'log_count/INFO': 75,
 'memusage/max': 2707484672,
 'memusage/startup': 100196352,
 'request_depth_max': 2,
 'response_received_count': 42009,
 'retry/count': 22,
 'retry/reason_count/504 Gateway Time-out': 22,
 'scheduler/dequeued': 42031,
 'scheduler/dequeued/memory': 42031,
 'scheduler/enqueued': 42031,
 'scheduler/enqueued/memory': 42031,
 'start_time': datetime.datetime(2020,21,18,86960)}
2020-09-17 11:16:43 [scrapy.core.engine] INFO: Spider closed (finished)
- What value does the self.num_pages variable take?
The website I'm scraping has roughly 20k products, shown 48 per page. So the spider goes to the site, sees 20103 products, then divides by 48 (applying math.ceil) to get the number of pages; a small worked example of this calculation is included after the stats below.
- The output from Scrapinghub, added after updating the middlewares:
downloader/request_bytes 2945159
downloader/request_count 16518
downloader/request_method_count/GET 16518
downloader/response_bytes 3366280619
downloader/response_count 16516
downloader/response_status_count/200 16513
downloader/response_status_count/404 3
dupefilter/filtered 7
elapsed_time_seconds 4805.867308
finish_reason memusage_exceeded
finish_time 1600567332341
httperror/response_ignored_count 3
httperror/response_ignored_status_count/404 3
item_scraped_count 8156
log_count/ERROR 1
log_count/INFO 94
memusage/limit_reached 1
memusage/max 1074937856
memusage/startup 109555712
request_depth_max 2
response_received_count 16516
retry/count 2
retry/reason_count/504 Gateway Time-out 2
scheduler/dequeued 16518
scheduler/dequeued/disk 16518
scheduler/enqueued 17280
scheduler/enqueued/disk 17280
start_time 1600562526474
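As a small worked example of the num_pages calculation mentioned above (20103 and 48 come from the description; the variable names are just for illustration):

import math

total_products = 20103        # products the site reports (from the description above)
products_per_page = 48        # products shown per listing page
num_pages = math.ceil(total_products / products_per_page)
print(num_pages)              # 419 listing pages to request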