Python requests/urllib3 - retrying on a read timeout after the 200 and headers have been received

Problem description

I am using requests to download some large files (100-5000 MB), with a Session and urllib3.Retry for automatic retries. It appears such retries only apply up to the point where the HTTP headers have been received and the content starts streaming. Once the 200 has been sent, a network drop is raised as a ReadTimeoutError instead.

See the following example:

import logging
import sys

import requests
from requests.adapters import HTTPAdapter
from urllib3 import Retry


def create_session():
    # Up to 5 retries with exponential backoff
    retries = Retry(total=5, backoff_factor=1)
    s = requests.Session()
    s.mount("http://", HTTPAdapter(max_retries=retries))
    s.mount("https://", HTTPAdapter(max_retries=retries))
    return s


logging.basicConfig(level=logging.DEBUG, stream=sys.stderr)
session = create_session()
response = session.get(url, timeout=(120, 10))  # deliberately short read timeout

This gives the following log output:

DEBUG:urllib3.connectionpool:Starting new HTTPS connection (1): example:443
DEBUG:urllib3.connectionpool:https://example:443 "GET /example.zip HTTP/1.1" 200 1568141974

< UNPLUG NETWORK CABLE FOR 10-15 sec HERE > 

Traceback (most recent call last):
  File "urllib3/response.py", line 438, in _error_catcher
    yield
  File "urllib3/response.py", line 519, in read
    data = self._fp.read(amt) if not fp_closed else b""
  File "/usr/lib/python3.8/http/client.py", line 458, in read
    n = self.readinto(b)
  File "/usr/lib/python3.8/http/client.py", line 502, in readinto
    n = self.fp.readinto(b)
  File "/usr/lib/python3.8/socket.py", line 669, in readinto
    return self._sock.recv_into(b)
  File "/usr/lib/python3.8/ssl.py", line 1241, in recv_into
    return self.read(nbytes, buffer)
  File "/usr/lib/python3.8/ssl.py", line 1099, in read
    return self._sslobj.read(len, buffer)
socket.timeout: The read operation timed out

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "requests/models.py", line 753, in generate
    for chunk in self.raw.stream(chunk_size, decode_content=True):
  File "urllib3/response.py", line 576, in stream
    data = self.read(amt=amt, decode_content=decode_content)
  File "urllib3/response.py", line 541, in read
    raise IncompleteRead(self._fp_bytes_read, self.length_remaining)
  File "/usr/lib/python3.8/contextlib.py", line 131, in __exit__
    self.gen.throw(type, value, traceback)
  File "urllib3/response.py", line 443, in _error_catcher
    raise ReadTimeoutError(self._pool, None, "Read timed out.")
urllib3.exceptions.ReadTimeoutError: HTTPSConnectionPool(host='example', port=443): Read timed out.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "example.py", line 14, in _download
    response = session.get(url, headers=headers, timeout=300)
  File "requests/sessions.py", line 555, in get
    return self.request('GET', url, **kwargs)
  File "requests/sessions.py", line 542, in request
    resp = self.send(prep, **send_kwargs)
  File "requests/sessions.py", line 697, in send
    r.content
  File "requests/models.py", line 831, in content
    self._content = b''.join(self.iter_content(CONTENT_CHUNK_SIZE)) or b''
  File "requests/models.py", line 760, in generate
    raise ConnectionError(e)
requests.exceptions.ConnectionError: HTTPSConnectionPool(host='example', port=443): Read timed out.

I can see why this doesn't work, and it becomes even more obvious when you add the stream=True argument and use response.iter_content(). I assume the rationale is that the read timeout and the TCP layer are supposed to handle this (in my case I deliberately set the read timeout low to provoke it). But we have run into cases where the server restarts/crashes, or a firewall drops the connection mid-stream, and the client's only option is to retry the whole thing.
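For reference, the stream=True shape described above looks roughly like the sketch below. The local stub server and its payload are this example's own stand-ins (the original URL is not available); the point is that with streaming, the body is consumed inside the iter_content() loop, which is exactly where a mid-stream ReadTimeoutError (surfacing from requests as a ConnectionError) would occur, outside the scope of the adapter-level urllib3.Retry.

```python
import http.server
import threading

import requests

# Hypothetical stand-in for the real server: serve a fixed payload locally.
PAYLOAD = b"x" * (1 << 16)


class _Handler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Length", str(len(PAYLOAD)))
        self.end_headers()
        self.wfile.write(PAYLOAD)

    def log_message(self, *args):  # silence per-request logging
        pass


server = http.server.ThreadingHTTPServer(("127.0.0.1", 0), _Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()
url = f"http://127.0.0.1:{server.server_address[1]}/example.zip"

received = b""
with requests.get(url, stream=True, timeout=(5, 5)) as response:
    response.raise_for_status()
    for chunk in response.iter_content(chunk_size=8192):
        # A network drop here raises requests.exceptions.ConnectionError
        # (wrapping urllib3's ReadTimeoutError); Retry does NOT fire.
        received += chunk

server.shutdown()
```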

Is there a simple solution to this, ideally built into requests? You can always wrap the whole thing with tenacity or a manual retry loop, but ideally I'd like to avoid that, since it means adding another layer and having to distinguish network errors from other, genuine errors, and so on.
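For what it's worth, the manual-retry wrapper alluded to above might be sketched like this. The helper name download_with_resume and its parameters are this example's own inventions, not a requests API; it assumes the server honors HTTP Range requests (responding with 206 Partial Content), and falls back to restarting from zero when it does not.

```python
import os
import time

import requests


def download_with_resume(url, path, session=None, max_attempts=5,
                         timeout=(120, 30)):
    """Sketch: retry a large download, resuming via a Range header.

    Hypothetical helper, not part of requests. Catches the mid-stream
    errors that adapter-level urllib3.Retry does not cover.
    """
    session = session or requests.Session()
    for attempt in range(max_attempts):
        # Resume from however many bytes we already have on disk.
        offset = os.path.getsize(path) if os.path.exists(path) else 0
        headers = {"Range": f"bytes={offset}-"} if offset else {}
        try:
            with session.get(url, headers=headers, stream=True,
                             timeout=timeout) as r:
                if offset and r.status_code != 206:
                    offset = 0  # server ignored Range: start over
                r.raise_for_status()
                with open(path, "ab" if offset else "wb") as fh:
                    for chunk in r.iter_content(chunk_size=1 << 20):
                        fh.write(chunk)
            return path
        except (requests.exceptions.ConnectionError,
                requests.exceptions.ChunkedEncodingError):
            time.sleep(2 ** attempt)  # simple exponential backoff
    raise RuntimeError(f"giving up on {url} after {max_attempts} attempts")
```

This still means distinguishing network errors from other failures by hand, which is exactly the extra layer the question hopes to avoid.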
