Getting a lot of DecisionTaskTimedOut after scaling Uber Cadence's matching service in a docker swarm cluster

Problem description

I am trying to run each Cadence service independently so that I can scale them in and out easily. My team uses docker-swarm and we manage everything through the Portainer UI. So far I have been able to scale the frontend service to two replicas, but if I do the same with the matching service I get a lot of DecisionTaskTimedOut during workflow executions. The executions do eventually complete successfully, but they take a very long time. To give an idea: with two replicas of the matching service an execution takes about 2 minutes, while with a single replica it takes only 7 seconds.

This is a test environment. I am using a dockerized Cassandra DB (due to some budget constraints we cannot use a real database), maybe that is the problem? The Docker images are configured with the following environment variables:

RINGPOP_BOOTSTRAP_MODE=dns
KEYSPACE=cadence
BIND_ON_IP=0.0.0.0
SKIP_SCHEMA_SETUP=false
VISIBILITY_KEYSPACE=cadence_visibility
CASSANDRA_HOSTNAME=soap_cassandra
RINGPOP_SEEDS=soap_cadence_frontend:7933,soap_cadence_history:7934,soap_cadence_worker:7939
CADENCE_HOME=/etc/cadence
SERVICES=matching

You can assume default values for any environment variables not shown above.

RINGPOP_SEEDS contains the service names assigned to each Cadence service; docker-swarm creates a DNS entry for each of them, plus a load balancer when more than one replica is declared. A sketch of the stack file is shown below for context.
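For reference, a minimal sketch of how such a matching service could be declared in a swarm stack file (the stack, service and network names below are assumptions based on the names above, not copied from our actual file):

version: "3.7"
services:
  cadence_matching:
    image: ubercadence/server:0.15.1
    environment:
      - SERVICES=matching
      - RINGPOP_BOOTSTRAP_MODE=dns
      - RINGPOP_SEEDS=soap_cadence_frontend:7933,soap_cadence_history:7934,soap_cadence_worker:7939
      - BIND_ON_IP=0.0.0.0
      - CASSANDRA_HOSTNAME=soap_cassandra
      - KEYSPACE=cadence
      - VISIBILITY_KEYSPACE=cadence_visibility
      - SKIP_SCHEMA_SETUP=false
      - CADENCE_HOME=/etc/cadence
    networks:
      - cadence-net             # shared overlay network, name assumed
    deploy:
      replicas: 2               # scaling this to 2 is what triggers the DecisionTaskTimedOut
networks:
  cadence-net:
    driver: overlay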

The matching service appears to start up correctly, logs:

{"level":"info","ts":"2021-02-18T22:47:36.296Z","msg":"Created RPC dispatcher and listening","service":"cadence-matching","address":"0.0.0.0:7935","logging-call-at":"rpc.go:81"},{"level":"warn","ts":"2021-02-18T22:47:36.321Z","msg":"Failed to fetch key from dynamic config","key":"system.advancedVisibilityWritingMode","error":"unable to find key","logging-call-at":"config.go:68"},{"level":"info","ts":"2021-02-18T22:47:36.336Z","msg":"Add new peers by DNS lookup","address":"0.0.0.0","addresses":"[0.0.0.0:7933]","logging-call-at":"clientBean.go:321"},"msg":"Creating RPC dispatcher outbound","service":"cadence-frontend","address":"0.0.0.0:7933","logging-call-at":"clientBean.go:277"},"ts":"2021-02-18T22:47:36.441Z","msg":"Starting service matching","logging-call-at":"server.go:217"},"key":"matching.throttledLogRPS","address":"127.0.0.1:7933","ts":"2021-02-18T22:47:36.442Z","address":"127.0.0.1","addresses":"[127.0.0.1:7933]","ts":"2021-02-18T22:47:36.713Z","msg":"matching starting","logging-call-at":"service.go:90"},"ts":"2021-02-18T22:47:36.734Z","msg":"RuntimeMetricsReporter started","logging-call-at":"runtime.go:169"},"msg":"PProf not started due to port not set","logging-call-at":"pprof.go:64"},"ts":"2021-02-18T22:47:36.799Z","msg":"Current reachable members","component":"service-resolver","addresses":"[[::]:7935]","logging-call-at":"rpServiceResolver.go:246"},"service":"cadence-worker","addresses":"[[::]:7939]","ts":"2021-02-18T22:47:36.800Z","addresses":"[[::]:7933]","ts":"2021-02-18T22:47:36.814Z","msg":"service started","logging-call-at":"resourceImpl.go:383"},"msg":"matching started","logging-call-at":"service.go:99"}

When a workflow executes, I can see the following errors in the logs:

{"level":"error","ts":"2021-02-18T22:17:07.281Z","msg":"Persistent store operation failure","component":"matching-engine","wf-task-list-name":"ae85d0ac1629:f8102a0f-406a-4fc7-8abf-e4b3fd66a278","wf-task-list-type":0,"store-operation":"create-task","error":"Failed to create task. TaskList: ae85d0ac1629:f8102a0f-406a-4fc7-8abf-e4b3fd66a278,taskListType: 0,rangeID: 14,db rangeID: 15","number":1300001,"next-number":1300001,"logging-call-at":"taskWriter.go:176","stacktrace":"github.com/uber/cadence/common/log/loggerimpl.(*loggerImpl).Error\n\t/cadence/common/log/loggerimpl/logger.go:134\ngithub.com/uber/cadence/service/matching.(*taskWriter).taskWriterLoop\n\t/cadence/service/matching/taskWriter.go:176"},{"level":"error","ts":"2021-02-18T22:52:03.740Z","wf-task-list-name":"8dd84fa9834d:258a1229-bdfd-4ef3-b315-ffbf749221ca","error":"Failed to create task. TaskList: 8dd84fa9834d:258a1229-bdfd-4ef3-b315-ffbf749221ca,rangeID: 16,db rangeID: 17","number":1500002,"next-number":1500002,"ts":"2021-02-18T22:10:10.971Z","wf-task-list-name":"FeaTaskList","wf-task-list-type":1,"error":"Failed to create task. TaskList: FeaTaskList,taskListType: 1,rangeID: 94,db rangeID: 95","number":9300001,"next-number":9300001,"ts":"2021-02-18T22:09:53.345Z","ts":"2021-02-18T22:53:56.145Z",rangeID: 17,db rangeID: 18","number":1600001,"next-number":1600001,"stacktrace":"github.com/uber/cadence/common/log/loggerimpl.(*loggerImpl).Error\n\t/cadence/common/log/loggerimpl/logger.go:134\ngithub.com/uber/cadence/service/matching.(*taskWriter).taskWriterLoop\n\t/cadence/service/matching/taskWriter.go:176"}

The Docker image version I am currently using is: ubercadence/server:0.15.1

Is there any way to solve this problem?

Solution

My best guess is that the problem is BIND_ON_IP=0.0.0.0. Each instance should advertise a unique hostIP:Port as its address. Because they all advertise 0.0.0.0, each service only works when running a single instance; with more than one instance there are conflicts.

However, this is not a problem for the frontend service, because the FE is stateless. Matching and history, on the other hand, do hit this problem:

HostA registers itself with the matching hash ring as 0.0.0.0:7935, and then HostB tries to do the same. This makes the consistent hashing ring unstable, and task list ownership keeps bouncing back and forth between HostA and HostB.

To fix this, you need each instance to use its own host IP, just like using the PodIP in K8s.
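docker swarm has no direct equivalent of the Kubernetes downward API, so one possible workaround is to resolve the container's own IP at startup and export it as BIND_ON_IP before launching the server. A rough sketch only: the /start.sh path and the availability of hostname -i inside ubercadence/server are assumptions and should be checked against the image, and $$ is the compose-file escape for a literal $ (a single $ may be needed depending on how the stack is deployed):

services:
  cadence_matching:
    image: ubercadence/server:0.15.1
    # Advertise this container's own overlay-network IP to the ringpop ring
    # instead of 0.0.0.0. "hostname -i" is assumed to return the container IP,
    # and /start.sh is assumed to be the image's normal start script.
    entrypoint: ["sh", "-c", "export BIND_ON_IP=$$(hostname -i) && exec /start.sh"]
    environment:
      - SERVICES=matching

Note that hostname -i can return several addresses if the container is attached to more than one network; in that case the correct one has to be picked explicitly.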

Once this is fixed, you will see in the FE/history logs that they successfully connect to both matching hosts:

{"level":"info","ts":"2021-02-18T22:47:36.799Z","msg":"Current reachable members","component":"service-resolver","service":"cadence-matching","addresses":"[HostA_IP:7935,HostB_IP:7935]","logging-call-at":"rpServiceResolver.go:246"},

See the example in the Cadence Helm chart for how we do this for K8s: https://github.com/banzaicloud/banzai-charts/blob/87cf2946434c22cb963fea47b662ea85974ecfc0/cadence/templates/server-configmap.yaml#L82
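For reference, the usual Kubernetes pattern there is to expose the pod IP through the downward API and hand it to the server as BIND_ON_IP. A minimal, illustrative container-spec sketch (not copied from the linked chart):

containers:
  - name: cadence-matching
    image: ubercadence/server:0.15.1
    env:
      - name: SERVICES
        value: matching
      # Downward API: each pod advertises its own routable IP, so every
      # matching instance has a unique address in the ringpop hash ring.
      - name: BIND_ON_IP
        valueFrom:
          fieldRef:
            fieldPath: status.podIP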
