Problem description
I have the following Python code running in a Jupyter Notebook. It downloads a tar
file from a source location, extracts it, and uploads the contents to Azure Blob Storage.
import os
import tarfile
from azure.storage.blob import BlobClient

def upload_folder(local_path):
    connection_string = "XXX"
    container_name = "mycontainername"
    with tarfile.open(local_path, "r") as file:
        for each in file.getnames():
            print(each)
            file.extract(each)
            blob = BlobClient.from_connection_string(connection_string, container_name=container_name, blob_name=each)
            with open(each, "rb") as f:
                blob.upload_blob(f, overwrite=True)
            os.remove(each)
# MAIN
!wget https://path/to/myarchive.tar.gz
local_path = "myarchive.tar.gz"
upload_folder(local_path)
!rm -rf myarchive.tar.gz
!rm -rf myarchive
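For reference, the extract/open/remove pattern in the loop above could equivalently read each member as a stream without ever writing it to disk, since `tarfile.extractfile()` returns a file-like object that `upload_blob()` can consume directly. A minimal self-contained sketch of that pattern (using a small in-memory archive with made-up names and contents, and reading the bytes where the upload call would go):

```python
import io
import tarfile

# Build a small in-memory archive purely for demonstration
# (names and payloads are illustrative, not from the real data).
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w:gz") as tar:
    for name, payload in [("a.txt", b"hello"), ("b.txt", b"world!")]:
        info = tarfile.TarInfo(name=name)
        info.size = len(payload)
        tar.addfile(info, io.BytesIO(payload))
buf.seek(0)

# Stream each member without extracting to disk: extractfile() returns
# a readable file object, so no extract()/open()/os.remove() round-trip.
sizes = {}
with tarfile.open(fileobj=buf, mode="r:gz") as tar:
    for member in tar:          # iterating yields members one at a time
        if not member.isfile():
            continue
        stream = tar.extractfile(member)
        data = stream.read()    # real code would pass `stream` to upload_blob()
        sizes[member.name] = len(data)

print(sizes)  # {'a.txt': 5, 'b.txt': 6}
```

This only illustrates the streaming alternative to the disk round-trip; whether disk I/O is actually the dominant cost in the 5-6 hour runtime is a separate question.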
myarchive.tar.gz is about 1 GB, corresponding to roughly 4 GB of uncompressed data.
The problem is that even for this relatively small amount of data, running this code takes a very long time: about 5-6 hours.