Problem Description
I can't save a local file to an S3 bucket.
My Django project has a cron job that, after a while, generates a PDF file. I want to save that file to an S3 bucket.
Currently the Django + S3 bucket setup works very well (for example, files I upload through the site are saved to the bucket).
But I don't know how to copy a local file and save it to the S3 bucket.
At the moment I save the file on the local machine like this:
shutil.copyfile('/var/www/local.pdf', 'media/newfileins3bucket.pdf')
But I can't save it directly to the S3 bucket.
Can anyone help me with this?
I am using django-storages, but I can't see a way to save the PDF directly to the S3 bucket: https://django-storages.readthedocs.io/en/latest/backends/amazon-S3.html
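For context, here is a runnable stand-in for the copy above (temp paths instead of /var/www, which is an assumption for illustration). It stays entirely on the local disk, which is exactly the limitation I am describing:

```python
import os
import shutil
import tempfile

# stand-in for the generated report, using a temp directory instead of /var/www
workdir = tempfile.mkdtemp()
src = os.path.join(workdir, 'local.pdf')
with open(src, 'wb') as f:
    f.write(b'%PDF-1.4 demo')

dst = os.path.join(workdir, 'newfileins3bucket.pdf')
shutil.copyfile(src, dst)  # a plain local copy; S3 is never involved
```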
Solution
from copy import deepcopy
import logging
import uuid

import boto3
import botocore
from django.conf import settings
from django.http import HttpResponse
from rest_framework import status as api_status

logger = logging.getLogger(__name__)

s3 = boto3.client(
    's3',
    region_name="",  # put your region here
    aws_access_key_id=aws_access_key_id,  # credentials defined elsewhere, e.g. in settings
    aws_secret_access_key=aws_secret_access_key,
)


def upload_view(request):
    files = request.FILES.getlist('file')  # get all uploaded files
    for file in files:
        deep_file = deepcopy(file)
        status, aws_file_path = upload_to_aws(deep_file)
        if status == api_status.HTTP_200_OK:
            reference_id = [aws_file_path]
            logger.debug("AWS_STORAGE file path {}".format(reference_id))
            message = "Uploaded Successfully"
        else:
            message = "COULD NOT CONNECT AWS"
        status_api = status
    return HttpResponse({}, status=status_api)


def upload_to_aws(file):
    global s3
    try:
        is_bucket = check_is_bucket_present()  # helper defined elsewhere in the project
    except botocore.exceptions.NoCredentialsError:
        logger.debug("Unable to locate credentials for AWS")
        return api_status.HTTP_500_INTERNAL_SERVER_ERROR, {}
    if not is_bucket:
        try:
            bucket = s3.create_bucket(Bucket=settings.AWS_BUCKET)
        except botocore.exceptions.ClientError as e:
            logger.debug("AWS error while creating bucket: {}".format(str(e)))
            return api_status.HTTP_500_INTERNAL_SERVER_ERROR, {}
    file_name_uuid = uuid.uuid4().hex[:20]
    folder_name = ''.join(file_name_uuid)
    try:
        # put_object does not return anything useful, hence the try/except
        file_path = str(folder_name + "/resume/" + str(file.name))
        # TODO: look into uploading via the client, e.g.:
        # GB = 1024 ** 3
        # config = TransferConfig(multipart_threshold=5 * GB)
        # s3.upload_file('result1.csv', bucket_name, 'folder_name/result1.csv', Config=config)
        # upload_file was working with a path but not with an in-memory object
        s3 = boto3.resource(
            's3',
            region_name="us-------",  # put your region here
            aws_secret_access_key=aws_secret_access_key,
        )
        s3.Bucket(settings.AWS_BUCKET).put_object(Key=file_path, Body=file)
        file_path_bucket = settings.AWS_BUCKET + "/" + file_path
        return api_status.HTTP_200_OK, file_path_bucket
    except botocore.exceptions.ClientError as e:
        logger.debug("AWS_STORAGE Error {}".format(str(e)))
        return api_status.HTTP_500_INTERNAL_SERVER_ERROR, {}
    except Exception as e:
        logger.debug("AWS_STORAGE Error {}".format(str(e)))
        return api_status.HTTP_500_INTERNAL_SERVER_ERROR, {}
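Since the question is about a file that already exists on disk (not request.FILES), a simpler variant of the helper above can hand the local path straight to boto3's managed upload_file. This is a minimal sketch, not the answer's original code: build_report_key, upload_local_file, and the "reports/" prefix are illustrative names I made up, and boto3 is imported lazily so the key-building logic works without AWS credentials.

```python
import os
import uuid


def build_report_key(local_path, prefix=None):
    # build a unique key, mirroring the uuid-prefix scheme used above
    prefix = prefix or uuid.uuid4().hex[:20]
    return "{}/reports/{}".format(prefix, os.path.basename(local_path))


def upload_local_file(local_path, bucket, region_name=None):
    # boto3 is imported here so build_report_key stays usable without it
    import boto3
    s3 = boto3.client('s3', region_name=region_name)
    key = build_report_key(local_path)
    # upload_file streams from disk and handles multipart uploads for large files
    s3.upload_file(local_path, bucket, key)
    return bucket + "/" + key
```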
There are several ways to do this, but I think the following will work for you.
Note: I assume (as you mentioned) that your Django Storages is already configured with the S3 backend as the default in your settings.
Uploading with a FileField on a model
If you have a model that keeps a reference to the generated report, you can do the following:
from django.db import models
from django.core.files import File

class Report(models.Model):
    # this links to the S3 bucket if you use the correct Django Storages backend
    report_file = models.FileField()

# in your cron script, when your report has been generated at '/var/www/local.pdf'
local_file = open('/var/www/local.pdf', 'rb')
report = Report()
# this uploads the contents of the file to S3 and also saves the model to the database
report.report_file.save('media/newfileins3bucket.pdf', File(local_file))
Note that you have to wrap the local file in a Django File object.
Calling save() on the file field also automatically saves the model to the database, unless you pass save=False in the call.
See FieldFile.save() for more information.
Direct upload without a model
If you just want to upload the file to S3 without keeping a reference to it on a model, you can do the following:
from django.core.files.storage import default_storage

local_file = open('/var/www/local.pdf', 'rb')
# default_storage will be the S3 storage if you configured Django Storages with the S3 backend in your settings
with default_storage.open('media/newfileins3bucket.pdf', 'wb') as target:
    target.write(local_file.read())
Disclaimer: I have used something similar, but I have not tested the exact code above. It should point you in the right direction, though.
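One caveat with the default_storage approach: local_file.read() pulls the whole PDF into memory at once. For large reports you can copy in fixed-size chunks with the stdlib's shutil.copyfileobj, using the same handles. Sketched here with in-memory buffers so it runs anywhere; in the Django case src would be the open local file and dst the handle from default_storage.open(..., 'wb'):

```python
import io
import shutil


def stream_copy(src, dst, chunk_size=64 * 1024):
    # copy src into dst in fixed-size chunks instead of one big read()
    shutil.copyfileobj(src, dst, length=chunk_size)


# in-memory stand-ins for the local file and the storage target
src = io.BytesIO(b'%PDF-1.4 example bytes' * 1000)
dst = io.BytesIO()
stream_copy(src, dst)
```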