Problem description
EDIT4 29/12/2020:
I thought it might be the DEBUG setting, but that doesn't work either.
I have read many tutorials and the documentation, but I don't understand what is wrong with my code, here for example
EDIT3 29/12/2020:
I tried call_command('flush','--noinput') and send_email, and they work. I don't understand why call_command('dumpdata') or call_command('dbbackup') doesn't?
redis_1 | 1:C 29 Dec 2020 14:08:38.361 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
redis_1 | 1:C 29 Dec 2020 14:08:38.361 # Redis version=6.0.9, bits=64, commit=00000000, modified=0, pid=1, just started
redis_1 | 1:C 29 Dec 2020 14:08:38.361 # Warning: no config file specified, using the default config. In order to specify a config file use redis-server /path/to/redis.conf
redis_1 | 1:M 29 Dec 2020 14:08:38.363 * Running mode=standalone, port=6379.
redis_1 | 1:M 29 Dec 2020 14:08:38.363 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.
redis_1 | 1:M 29 Dec 2020 14:08:38.363 # Server initialized
redis_1 | 1:M 29 Dec 2020 14:08:38.363 # WARNING overcommit_memory is set to 0! Background save may fail under low memory condition. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.
redis_1 | 1:M 29 Dec 2020 14:08:38.363 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo madvise > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled (set to 'madvise' or 'never').
redis_1 | 1:M 29 Dec 2020 14:08:38.369 * Loading RDB produced by version 6.0.9
redis_1 | 1:M 29 Dec 2020 14:08:38.370 * RDB age 47 seconds
redis_1 | 1:M 29 Dec 2020 14:08:38.370 * RDB memory usage when created 0.77 Mb
redis_1 | 1:M 29 Dec 2020 14:08:38.370 * DB loaded from disk: 0.001 seconds
redis_1 | 1:M 29 Dec 2020 14:08:38.370 * Ready to accept connections
EDIT2 29/12/2020:
I tried a backup with dumpdata, but that doesn't work either. There is no error, but I can't find my file, even though I specified a storage folder.
See my tasks.py update below. If I don't redirect, I can see the records in stdout, so I thought I could just handle the redirect, but...
EDIT1:
I installed postgresql-client and the backup seems to work, but I can't find where it goes... even with the folder specified in my settings.py, it is not in my backup folder.
EDIT:
I rebuilt, and now the task runs but fails with [Errno 2] No such file or directory: 'pg_dump'
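For context on that error: django-dbbackup shells out to the pg_dump binary for PostgreSQL, so the client tools have to be installed inside the image that runs the celery worker, not just on the host. A possible fix, assuming a Debian/Ubuntu-based image, is a line like this in that image's Dockerfile:

```dockerfile
# Debian/Ubuntu base image assumed; pg_dump ships with postgresql-client
RUN apt-get update && apt-get install -y --no-install-recommends postgresql-client
```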
I am trying to implement celery and celery-beat to run periodic tasks. My goal is to back up a PostgreSQL database with django-dbbackup.
But only my test task hello actually runs, even though all 3 tasks are registered:
celery_1 | [tasks]
celery_1 | . cafe.tasks.backup
celery_1 | . cafe.tasks.hello
celery_1 | . core.celery.debug_task
celery_1 | [2020-12-28 17:05:00,075: WARNING/ForkPoolWorker-4] Hello there!
celery_1 | [2020-12-28 17:05:00,081: INFO/ForkPoolWorker-4] Task cafe.tasks.hello[5b3e46b5-16bc-4d6a-b608-69ffdf8e5664] succeeded in 0.006272200000239536s: None
tasks.py

import sys

from celery import shared_task
from django.conf import settings
from django.core import management
from django.utils import timezone


@shared_task
def backup():
    print("backup")
    sysout = sys.stdout
    try:
        print("backup done on " + str(timezone.now()))  # timezone.now(), not timezone.Now()
        print(settings.BASE_DIR)
        # management.call_command('dumpdata')
        sys.stdout = open('/usr/src/app/backup/filename.json', 'w')
        management.call_command('dumpdata', 'parameters')
    except Exception:  # avoid a bare except that swallows every error silently
        print("Error during backup on " + str(timezone.now()))
    finally:
        if sys.stdout is not sysout:
            sys.stdout.close()  # flush and close the dump file
            sys.stdout = sysout
settings.py

from celery.schedules import crontab

CELERY_BROKER_URL = 'redis://redis:6379'
CELERY_RESULT_BACKEND = 'redis://redis:6379'
CELERY_ACCEPT_CONTENT = ['application/json']
CELERY_TASK_SERIALIZER = 'json'
CELERY_RESULT_SERIALIZER = 'json'
CELERY_BEAT_SCHEDULE = {
    'hello': {
        'task': 'cafe.tasks.hello',
        'schedule': crontab(),  # execute every minute
    },
    'backup': {
        'task': 'cafe.tasks.backup',
        'schedule': crontab(),  # execute every minute
    },
}
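crontab() with no arguments fires every minute, which is handy while testing; for a real backup a daily schedule is more typical. A settings fragment as a sketch (the time chosen is arbitrary):

```python
from celery.schedules import crontab

CELERY_BEAT_SCHEDULE = {
    'backup': {
        'task': 'cafe.tasks.backup',
        'schedule': crontab(hour=3, minute=0),  # every day at 03:00
    },
}
```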
Solution
OK, I solved my problem. It came from the Docker volume paths for the celery and celery-beat services, which were wrong. Now both dumpdata and dbbackup work and produce files in the expected folder.
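For anyone hitting the same thing: the fix amounts to mounting the same host directory at the same container path in every service that writes or reads the backups; otherwise each container writes into its own volume and the files never show up where you look. A docker-compose sketch (service names and paths are assumptions based on the question):

```yaml
services:
  web:
    volumes:
      - ./backup:/usr/src/app/backup   # same host dir...
  celery:
    volumes:
      - ./backup:/usr/src/app/backup   # ...mounted in each service
  celery-beat:
    volumes:
      - ./backup:/usr/src/app/backup
```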