Problem description
Here is the problem I have been facing recently:
I have some Python scripts that perform data analysis.
The code is deployed in a Celery worker, using beat and Redis as the broker. I am also using Flower to monitor the tasks.
The Redis instance I am using is hosted on AWS.
All three services use Docker images pushed to ECR on AWS.
The problem is that when I try to ping the worker instance from the Flower instance, the ping never reaches the worker. How can I get Flower to communicate with the worker?
When I run everything as Docker containers on my local machine, it all works fine.
Here is how my containers are defined in the task definition on AWS:
{
    "requiresCompatibilities": ["FARGATE"],
    "inferenceAccelerators": [],
    "containerDefinitions": [
        {
            "command": [
                "celery", "-A", "celery_factory:celery", "beat",
                "-S", "redbeat.RedBeatScheduler", "--loglevel=info"
            ],
            "essential": true,
            "image": "spiny-pi-cards-aws:00000",
            "logConfiguration": {
                "logDriver": "awslogs",
                "options": {
                    "awslogs-group": "spiny-pi-cards-logs",
                    "awslogs-region": "us-east-2",
                    "awslogs-stream-prefix": "celery-beat"
                }
            },
            "name": "celery-beat",
            "memory": 256
        },
        {
            "command": ["celery", "worker", "--loglevel=info", "-E"],
            "logConfiguration": {
                "logDriver": "awslogs",
                "options": {
                    "awslogs-group": "spiny-pi-cards-logs",
                    "awslogs-stream-prefix": "celery-worker"
                }
            },
            "name": "celery-worker",
            "memory": 256
        },
        {
            "command": ["./start_flower"],
            "environment": [
                {"name": "FLOWER_PORT", "value": "5556"}
            ],
            "logConfiguration": {
                "logDriver": "awslogs",
                "options": {
                    "awslogs-group": "spiny-pi-cards-logs",
                    "awslogs-region": "us-east-2",
                    "awslogs-stream-prefix": "celery-flower"
                }
            },
            "name": "flower",
            "memory": 256,
            "portMappings": [
                {"containerPort": 5556, "hostPort": 5556}
            ]
        }
    ],
    "volumes": [],
    "networkMode": "awsvpc",
    "memory": "1024",
    "cpu": "256",
    "executionRoleArn": "somesecret_stu",
    "family": "spiny_pi_cards-task-deFinition",
    "taskRoleArn": "",
    "placementConstraints": []
}
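The task definition above reached me with mismatched braces and incomplete log options, so one useful sanity check is to validate it with a short script before registering it with ECS. The sketch below is my own addition (the required-keys list and the excerpt are assumptions, with container names taken from the definition above); it loads the JSON and flags any container whose awslogs options are missing keys.

```python
import json

# Minimal excerpt of the task definition above. Note the worker's
# options block has no "awslogs-region", mirroring the original.
TASK_DEF = """
{
  "containerDefinitions": [
    {"name": "celery-beat",
     "logConfiguration": {"logDriver": "awslogs",
       "options": {"awslogs-group": "spiny-pi-cards-logs",
                   "awslogs-region": "us-east-2",
                   "awslogs-stream-prefix": "celery-beat"}}},
    {"name": "celery-worker",
     "logConfiguration": {"logDriver": "awslogs",
       "options": {"awslogs-group": "spiny-pi-cards-logs",
                   "awslogs-stream-prefix": "celery-worker"}}}
  ]
}
"""

# Keys I assume every awslogs configuration should carry.
REQUIRED_LOG_OPTIONS = {"awslogs-group", "awslogs-region", "awslogs-stream-prefix"}

def incomplete_log_configs(task_def_json: str) -> list:
    """Return names of containers whose awslogs options are missing keys."""
    task_def = json.loads(task_def_json)
    bad = []
    for container in task_def.get("containerDefinitions", []):
        options = container.get("logConfiguration", {}).get("options", {})
        if not REQUIRED_LOG_OPTIONS <= set(options):
            bad.append(container["name"])
    return bad

print(incomplete_log_configs(TASK_DEF))  # → ['celery-worker']
```

Running json.loads also catches the brace mismatches immediately, since it raises a JSONDecodeError instead of silently accepting a malformed definition.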
Here is how I run the ping. This is my ./start_flower.sh:
until timeout 10s celery -A celery_factory:celery inspect -d celery@ip.us-east-2.compute.internal ping; do
    >&2 echo "Celery workers not available"
done
echo 'Starting flower'
celery -A celery_factory:celery flower --loglevel=info -E
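Before blaming the ping loop itself, it may be worth confirming that the Flower container can reach the Redis broker at all; with ElastiCache, security-group rules are a common culprit. The sketch below is an assumption-laden diagnostic of mine, not part of the original setup: it speaks the Redis RESP protocol directly with only the standard library, so it can run inside the container without extra tooling (the endpoint shown is a placeholder).

```python
import socket

def resp_ping_frame() -> bytes:
    """Encode the Redis PING command as a RESP array frame."""
    return b"*1\r\n$4\r\nPING\r\n"

def redis_alive(host: str, port: int = 6379, timeout: float = 3.0) -> bool:
    """Return True if the broker answers +PONG to a raw PING."""
    try:
        with socket.create_connection((host, port), timeout=timeout) as sock:
            sock.sendall(resp_ping_frame())
            return sock.recv(16).startswith(b"+PONG")
    except OSError:  # refused, unreachable, or timed out
        return False

# Replace with your actual ElastiCache endpoint (placeholder shown):
# print(redis_alive("my-cluster.xxxxxx.use2.cache.amazonaws.com"))
```

If this returns False from inside the Flower or worker container, the problem is network reachability to the broker rather than anything Celery-specific.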
When this runs inside the Flower container, it never reaches the worker;
it just keeps reporting that the workers are not available...
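One thing worth checking (a guess on my part, not a confirmed fix): celery inspect ping only succeeds when the -d destination exactly matches the worker's node name, which by default is celery@<hostname>. On Fargate the container hostname is assigned dynamically, so a hard-coded celery@ip.us-east-2.compute.internal may never match. A small sketch of how the default node name is formed:

```python
import socket

def default_node_name(hostname: str = "") -> str:
    """Celery's default worker node name is 'celery@<hostname>'."""
    return f"celery@{hostname or socket.gethostname()}"

# The -d destination in start_flower.sh must equal this string
# for the worker to answer the ping.
print(default_node_name("ip.us-east-2.compute.internal"))
# → celery@ip.us-east-2.compute.internal
```

If the names cannot be made to match, pinning a fixed name on both sides might sidestep the dynamic hostname, e.g. starting the worker with `celery worker -n worker1@tasks` and pinging `-d worker1@tasks`; I have not verified this on Fargate.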
Solution
No effective fix has been found for this problem yet.