Problem description
I'm setting up a local k8s cluster. For testing I use a single-node cluster on a VM, set up via kubeadm. My requirements include running an MQTT cluster (VerneMQ) in k8s, with external access via Ingress (Istio).
Without deploying an ingress, I can connect (mosquitto_sub) through a NodePort or LoadBalancer service.
Istio was installed using istioctl install --set profile=demo
The problem
I'm trying to access the VerneMQ broker from outside the cluster. An ingress (Istio Gateway) seemed like the perfect solution here, but I cannot establish a TCP connection to the broker (neither through the ingress IP, nor directly through the svc/vernemq IP).
So, how do I establish this TCP connection from an external client through the Istio ingress?
What I've tried
I created two namespaces:
- exposed-with-istio – with istio proxy injection
- exposed-with-loadbalancer – without the istio proxy
In the exposed-with-loadbalancer namespace I deployed vernemq with a LoadBalancer Service. It works, which is how I know VerneMQ is reachable (using mosquitto_sub -h <host> -p 1883 -t hello, where host is the ClusterIP or ExternalIP of svc/vernemq). The dashboard is reachable at host:8888/status, and the "Clients online" counter on the dashboard increments.
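For reference, the verification looked like this (a sketch of the commands; <external-ip> stands for whichever address the LoadBalancer assigned):
# subscribe from outside the cluster; on success this hangs and prints incoming messages
mosquitto_sub -h <external-ip> -p 1883 -t hello -d
# the dashboard should then show the subscriber under "Clients online"
curl http://<external-ip>:8888/status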
In exposed-with-istio I deployed vernemq with a ClusterIP Service, an Istio Gateway, and a VirtualService.
With the istio proxy injected, mosquitto_sub cannot subscribe, neither through the svc/vernemq IP nor through the istio ingress (gateway) IP. The command just hangs forever, constantly retrying.
Meanwhile, the vernemq dashboard endpoint is reachable both through the service IP and through the istio gateway.
I guess the istio proxy must be configured for mqtt to work.
Here is the istio-ingressgateway service:
kubectl describe svc/istio-ingressgateway -n istio-system
Name: istio-ingressgateway
Namespace: istio-system
Labels: app=istio-ingressgateway
install.operator.istio.io/owning-resource=installed-state
install.operator.istio.io/owning-resource-namespace=istio-system
istio=ingressgateway
istio.io/rev=default
operator.istio.io/component=IngressGateways
operator.istio.io/managed=Reconcile
operator.istio.io/version=1.7.0
release=istio
Annotations:
Selector: app=istio-ingressgateway,istio=ingressgateway
Type: LoadBalancer
IP: 10.100.213.45
LoadBalancer Ingress: 192.168.100.240
Port: status-port 15021/TCP
TargetPort: 15021/TCP
Port: http2 80/TCP
TargetPort: 8080/TCP
Port: https 443/TCP
TargetPort: 8443/TCP
Port: tcp 31400/TCP
TargetPort: 31400/TCP
Port: tls 15443/TCP
TargetPort: 15443/TCP
Session Affinity: None
External Traffic Policy: Cluster
...
Here are the debug logs from istio-proxy:
kubectl logs svc/vernemq -n test istio-proxy
2020-08-24T07:57:52.294477Z debug envoy filter original_dst: New connection accepted
2020-08-24T07:57:52.294516Z debug envoy filter tls inspector: new connection accepted
2020-08-24T07:57:52.294532Z debug envoy filter http inspector: new connection accepted
2020-08-24T07:57:52.294580Z debug envoy filter [C5645] new tcp proxy session
2020-08-24T07:57:52.294614Z debug envoy filter [C5645] Creating connection to cluster inbound|1883|mqtt|vernemq.test.svc.cluster.local
2020-08-24T07:57:52.294638Z debug envoy pool creating a new connection
2020-08-24T07:57:52.294671Z debug envoy pool [C5646] connecting
2020-08-24T07:57:52.294684Z debug envoy connection [C5646] connecting to 127.0.0.1:1883
2020-08-24T07:57:52.294725Z debug envoy connection [C5646] connection in progress
2020-08-24T07:57:52.294746Z debug envoy pool queueing request due to no available connections
2020-08-24T07:57:52.294750Z debug envoy conn_handler [C5645] new connection
2020-08-24T07:57:52.294768Z debug envoy connection [C5646] delayed connection error: 111
2020-08-24T07:57:52.294772Z debug envoy connection [C5646] closing socket: 0
2020-08-24T07:57:52.294783Z debug envoy pool [C5646] client disconnected
2020-08-24T07:57:52.294790Z debug envoy filter [C5645] Creating connection to cluster inbound|1883|mqtt|vernemq.test.svc.cluster.local
2020-08-24T07:57:52.294794Z debug envoy connection [C5645] closing data_to_write=0 type=1
2020-08-24T07:57:52.294796Z debug envoy connection [C5645] closing socket: 1
2020-08-24T07:57:52.294864Z debug envoy wasm wasm log: [extensions/stats/plugin.cc:609]::report() metricKey cache hit,stat=12
2020-08-24T07:57:52.294882Z debug envoy wasm wasm log: [extensions/stats/plugin.cc:609]::report() metricKey cache hit,stat=16
2020-08-24T07:57:52.294885Z debug envoy wasm wasm log: [extensions/stats/plugin.cc:609]::report() metricKey cache hit,stat=20
2020-08-24T07:57:52.294887Z debug envoy wasm wasm log: [extensions/stats/plugin.cc:609]::report() metricKey cache hit,stat=24
2020-08-24T07:57:52.294891Z debug envoy conn_handler [C5645] adding to cleanup list
2020-08-24T07:57:52.294949Z debug envoy pool [C5646] connection destroyed
Here are the logs from istio-ingressgateway. The IP 10.244.243.205 belongs to the VerneMQ pod, not to the service (probably as expected).
2020-08-24T08:48:31.536593Z debug envoy filter [C13236] new tcp proxy session
2020-08-24T08:48:31.536702Z debug envoy filter [C13236] Creating connection to cluster outbound|1883||vernemq.test.svc.cluster.local
2020-08-24T08:48:31.536728Z debug envoy pool creating a new connection
2020-08-24T08:48:31.536778Z debug envoy pool [C13237] connecting
2020-08-24T08:48:31.536784Z debug envoy connection [C13237] connecting to 10.244.243.205:1883
2020-08-24T08:48:31.537074Z debug envoy connection [C13237] connection in progress
2020-08-24T08:48:31.537116Z debug envoy pool queueing request due to no available connections
2020-08-24T08:48:31.537138Z debug envoy conn_handler [C13236] new connection
2020-08-24T08:48:31.537181Z debug envoy connection [C13237] connected
2020-08-24T08:48:31.537204Z debug envoy pool [C13237] assigning connection
2020-08-24T08:48:31.537221Z debug envoy filter TCP:onUpstreamEvent(),requestedServerName:
2020-08-24T08:48:31.537880Z debug envoy misc Unknown error code 104 details Connection reset by peer
2020-08-24T08:48:31.537907Z debug envoy connection [C13237] remote close
2020-08-24T08:48:31.537913Z debug envoy connection [C13237] closing socket: 0
2020-08-24T08:48:31.537938Z debug envoy pool [C13237] client disconnected
2020-08-24T08:48:31.537953Z debug envoy connection [C13236] closing data_to_write=0 type=0
2020-08-24T08:48:31.537958Z debug envoy connection [C13236] closing socket: 1
2020-08-24T08:48:31.538156Z debug envoy conn_handler [C13236] adding to cleanup list
2020-08-24T08:48:31.538191Z debug envoy pool [C13237] connection destroyed
My configuration
vernemq-istio-ingress.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: exposed-with-istio
  labels:
    istio-injection: enabled
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: vernemq
  namespace: exposed-with-istio
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: endpoint-reader
  namespace: exposed-with-istio
rules:
- apiGroups: ["", "extensions", "apps"]
  resources: ["endpoints", "deployments", "replicasets", "pods"]
  verbs: ["get", "list"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: endpoint-reader
  namespace: exposed-with-istio
subjects:
- kind: ServiceAccount
  name: vernemq
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: endpoint-reader
---
apiVersion: v1
kind: Service
metadata:
  name: vernemq
  namespace: exposed-with-istio
  labels:
    app: vernemq
spec:
  selector:
    app: vernemq
  type: ClusterIP
  ports:
  - port: 4369
    name: empd
  - port: 44053
    name: vmq
  - port: 8888
    name: http-dashboard
  - port: 1883
    name: tcp-mqtt
    targetPort: 1883
  - port: 9001
    name: tcp-mqtt-ws
    targetPort: 9001
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: vernemq
  namespace: exposed-with-istio
spec:
  replicas: 1
  selector:
    matchLabels:
      app: vernemq
  template:
    metadata:
      labels:
        app: vernemq
    spec:
      serviceAccountName: vernemq
      containers:
      - name: vernemq
        image: vernemq/vernemq
        ports:
        - containerPort: 1883
          name: tcp-mqtt
          protocol: TCP
        - containerPort: 8080
          name: tcp-mqtt-ws
        - containerPort: 8888
          name: http-dashboard
        - containerPort: 4369
          name: epmd
        - containerPort: 44053
          name: vmq
        - containerPort: 9100-9109 # shortened
        env:
        - name: DOCKER_VERNEMQ_ACCEPT_EULA
          value: "yes"
        - name: DOCKER_VERNEMQ_ALLOW_ANONYMOUS
          value: "on"
        - name: DOCKER_VERNEMQ_listener__tcp__allowed_protocol_versions
          value: "3,4,5"
        - name: DOCKER_VERNEMQ_allow_register_during_netsplit
          value: "on"
        - name: DOCKER_VERNEMQ_DISCOVERY_KUBERNETES
          value: "1"
        - name: DOCKER_VERNEMQ_KUBERNETES_APP_LABEL
          value: "vernemq"
        - name: DOCKER_VERNEMQ_KUBERNETES_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: MY_POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: DOCKER_VERNEMQ_ERLANG__DISTRIBUTION__PORT_RANGE__MINIMUM
          value: "9100"
        - name: DOCKER_VERNEMQ_ERLANG__DISTRIBUTION__PORT_RANGE__MAXIMUM
          value: "9109"
        - name: DOCKER_VERNEMQ_KUBERNETES_INSECURE
          value: "1"
vernemq-loadbalancer-service.yaml
---
apiVersion: v1
kind: Namespace
metadata:
  name: exposed-with-loadbalancer
---
... the rest is the same except for the namespace and the service type ...
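Written out, the working service presumably looks like this (a sketch derived from the ClusterIP service above, with only the namespace and type changed; not copied from the original files):
apiVersion: v1
kind: Service
metadata:
  name: vernemq
  namespace: exposed-with-loadbalancer
  labels:
    app: vernemq
spec:
  selector:
    app: vernemq
  type: LoadBalancer
  ports:
  - port: 1883
    name: tcp-mqtt
    targetPort: 1883
  - port: 8888
    name: http-dashboard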
istio.yaml
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: vernemq-destination
  namespace: exposed-with-istio
spec:
  host: vernemq.exposed-with-istio.svc.cluster.local
  trafficPolicy:
    tls:
      mode: DISABLE
---
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: vernemq-gateway
  namespace: exposed-with-istio
spec:
  selector:
    istio: ingressgateway # use istio default controller
  servers:
  - port:
      number: 31400
      name: tcp
      protocol: TCP
    hosts:
    - "*"
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: vernemq-virtualservice
  namespace: exposed-with-istio
spec:
  hosts:
  - "*"
  gateways:
  - vernemq-gateway
  http:
  - match:
    - uri:
        prefix: /status
    route:
    - destination:
        host: vernemq.exposed-with-istio.svc.cluster.local
        port:
          number: 8888
  tcp:
  - match:
    - port: 31400
    route:
    - destination:
        host: vernemq.exposed-with-istio.svc.cluster.local
        port:
          number: 1883
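With these resources applied, the external test goes through the gateway's tcp port (a sketch; 192.168.100.240 is the LoadBalancer Ingress address of istio-ingressgateway shown above):
mosquitto_sub -h 192.168.100.240 -p 31400 -t hello -d
curl http://192.168.100.240/status   # dashboard, matched by the http rule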
Doesn't the Kiali screenshot suggest that the Ingressgateway only forwards HTTP traffic to the service and swallows all TCP?
UPD
Following the suggestions:
** But your envoy logs reveal a problem: envoy misc Unknown error code 104 details Connection reset by peer and envoy pool [C5648] client disconnected.
istioctl proxy-config listeners vernemq-c945876f-tvvz7.exposed-with-istio
first with | grep 8888 and | grep 1883:
0.0.0.0 8888 App: HTTP Route: 8888
0.0.0.0 8888 ALL PassthroughCluster
10.107.205.214 1883 ALL Cluster: outbound|1883||vernemq.exposed-with-istio.svc.cluster.local
0.0.0.0 15006 ALL Cluster: inbound|1883|tcp-mqtt|vernemq.exposed-with-istio.svc.cluster.local
... Cluster: outbound|853||istiod.istio-system.svc.cluster.local
10.107.205.214 1883 ALL Cluster: outbound|1883||vernemq.exposed-with-istio.svc.cluster.local
10.108.218.134 3000 App: HTTP Route: grafana.istio-system.svc.cluster.local:3000
10.108.218.134 3000 ALL Cluster: outbound|3000||grafana.istio-system.svc.cluster.local
10.107.205.214 4369 App: HTTP Route: vernemq.exposed-with-istio.svc.cluster.local:4369
10.107.205.214 4369 ALL Cluster: outbound|4369||vernemq.exposed-with-istio.svc.cluster.local
0.0.0.0 8888 App: HTTP Route: 8888
0.0.0.0 8888 ALL PassthroughCluster
10.107.205.214 9001 ALL Cluster: outbound|9001||vernemq.exposed-with-istio.svc.cluster.local
0.0.0.0 9090 App: HTTP Route: 9090
0.0.0.0 9090 ALL PassthroughCluster
10.96.0.10 9153 App: HTTP Route: kube-dns.kube-system.svc.cluster.local:9153
10.96.0.10 9153 ALL Cluster: outbound|9153||kube-dns.kube-system.svc.cluster.local
0.0.0.0 9411 App: HTTP ...
0.0.0.0 15006 Trans: tls; App: TCP TLS; Addr: 0.0.0.0/0 InboundPassthroughClusterIpv4
0.0.0.0 15006 Addr: 0.0.0.0/0 InboundPassthroughClusterIpv4
0.0.0.0 15006 App: TCP TLS Cluster: inbound|9001|tcp-mqtt-ws|vernemq.exposed-with-istio.svc.cluster.local
0.0.0.0 15006 ALL Cluster: inbound|9001|tcp-mqtt-ws|vernemq.exposed-with-istio.svc.cluster.local
0.0.0.0 15006 Trans: tls; App: TCP TLS Cluster: inbound|44053|vmq|vernemq.exposed-with-istio.svc.cluster.local
0.0.0.0 15006 ALL Cluster: inbound|44053|vmq|vernemq.exposed-with-istio.svc.cluster.local
0.0.0.0 15006 Trans: tls Cluster: inbound|44053|vmq|vernemq.exposed-with-istio.svc.cluster.local
0.0.0.0 15006 ALL Cluster: inbound|4369|empd|vernemq.exposed-with-istio.svc.cluster.local
0.0.0.0 15006 Trans: tls; App: TCP TLS Cluster: inbound|4369|empd|vernemq.exposed-with-istio.svc.cluster.local
0.0.0.0 15006 Trans: tls Cluster: inbound|4369|empd|vernemq.exposed-with-istio.svc.cluster.local
0.0.0.0 15006 ALL Cluster: inbound|1883|tcp-mqtt|vernemq.exposed-with-istio.svc.cluster.local
0.0.0.0 15006 App: TCP TLS Cluster: inbound|1883|tcp-mqtt|vernemq.exposed-with-istio.svc.cluster.local
0.0.0.0 15010 App: HTTP Route: 15010
0.0.0.0 15010 ALL PassthroughCluster
10.106.166.154 15012 ALL Cluster: outbound|15012||istiod.istio-system.svc.cluster.local
0.0.0.0 15014 App: HTTP Route: 15014
0.0.0.0 15014 ALL PassthroughCluster
0.0.0.0 15021 ALL Inline Route: /healthz/ready*
10.100.213.45 15021 App: HTTP Route: istio-ingressgateway.istio-system.svc.cluster.local:15021
10.100.213.45 15021 ALL Cluster: outbound|15021||istio-ingressgateway.istio-system.svc.cluster.local
0.0.0.0 15090 ALL Inline Route: /stats/prometheus*
10.100.213.45 15443 ALL Cluster: outbound|15443||istio-ingressgateway.istio-system.svc.cluster.local
10.105.193.108 15443 ALL Cluster: outbound|15443||istio-egressgateway.istio-system.svc.cluster.local
0.0.0.0 20001 App: HTTP Route: 20001
0.0.0.0 20001 ALL PassthroughCluster
10.100.213.45 31400 ALL Cluster: outbound|31400||istio-ingressgateway.istio-system.svc.cluster.local
10.107.205.214 44053 App: HTTP Route: vernemq.exposed-with-istio.svc.cluster.local:44053
10.107.205.214 44053 ALL Cluster: outbound|44053||vernemq.exposed-with-istio.svc.cluster.local
istioctl proxy-config endpoints vernemq-c945876f-tvvz7.exposed-with-istio
first with | grep 1883:
10.244.243.206:1883 HEALTHY OK outbound|1883||vernemq.exposed-with-istio.svc.cluster.local
127.0.0.1:1883 HEALTHY OK inbound|1883|tcp-mqtt|vernemq.exposed-with-istio.svc.cluster.local
ENDPOINT STATUS OUTLIER CHECK CLUSTER
10.101.200.113:9411 HEALTHY OK zipkin
10.106.166.154:15012 HEALTHY OK xds-grpc
10.211.55.14:6443 HEALTHY OK outbound|443||kubernetes.default.svc.cluster.local
10.244.243.193:53 HEALTHY OK outbound|53||kube-dns.kube-system.svc.cluster.local
10.244.243.193:9153 HEALTHY OK outbound|9153||kube-dns.kube-system.svc.cluster.local
10.244.243.195:53 HEALTHY OK outbound|53||kube-dns.kube-system.svc.cluster.local
10.244.243.195:9153 HEALTHY OK outbound|9153||kube-dns.kube-system.svc.cluster.local
10.244.243.197:15010 HEALTHY OK outbound|15010||istiod.istio-system.svc.cluster.local
10.244.243.197:15012 HEALTHY OK outbound|15012||istiod.istio-system.svc.cluster.local
10.244.243.197:15014 HEALTHY OK outbound|15014||istiod.istio-system.svc.cluster.local
10.244.243.197:15017 HEALTHY OK outbound|443||istiod.istio-system.svc.cluster.local
10.244.243.197:15053 HEALTHY OK outbound|853||istiod.istio-system.svc.cluster.local
10.244.243.198:8080 HEALTHY OK outbound|80||istio-egressgateway.istio-system.svc.cluster.local
10.244.243.198:8443 HEALTHY OK outbound|443||istio-egressgateway.istio-system.svc.cluster.local
10.244.243.198:15443 HEALTHY OK outbound|15443||istio-egressgateway.istio-system.svc.cluster.local
10.244.243.199:8080 HEALTHY OK outbound|80||istio-ingressgateway.istio-system.svc.cluster.local
10.244.243.199:8443 HEALTHY OK outbound|443||istio-ingressgateway.istio-system.svc.cluster.local
10.244.243.199:15021 HEALTHY OK outbound|15021||istio-ingressgateway.istio-system.svc.cluster.local
10.244.243.199:15443 HEALTHY OK outbound|15443||istio-ingressgateway.istio-system.svc.cluster.local
10.244.243.199:31400 HEALTHY OK outbound|31400||istio-ingressgateway.istio-system.svc.cluster.local
10.244.243.201:3000 HEALTHY OK outbound|3000||grafana.istio-system.svc.cluster.local
10.244.243.202:9411 HEALTHY OK outbound|9411||zipkin.istio-system.svc.cluster.local
10.244.243.202:16686 HEALTHY OK outbound|80||tracing.istio-system.svc.cluster.local
10.244.243.203:9090 HEALTHY OK outbound|9090||kiali.istio-system.svc.cluster.local
10.244.243.203:20001 HEALTHY OK outbound|20001||kiali.istio-system.svc.cluster.local
10.244.243.204:9090 HEALTHY OK outbound|9090||prometheus.istio-system.svc.cluster.local
10.244.243.206:1883 HEALTHY OK outbound|1883||vernemq.exposed-with-istio.svc.cluster.local
10.244.243.206:4369 HEALTHY OK outbound|4369||vernemq.exposed-with-istio.svc.cluster.local
10.244.243.206:8888 HEALTHY OK outbound|8888||vernemq.exposed-with-istio.svc.cluster.local
10.244.243.206:9001 HEALTHY OK outbound|9001||vernemq.exposed-with-istio.svc.cluster.local
10.244.243.206:44053 HEALTHY OK outbound|44053||vernemq.exposed-with-istio.svc.cluster.local
127.0.0.1:1883 HEALTHY OK inbound|1883|tcp-mqtt|vernemq.exposed-with-istio.svc.cluster.local
127.0.0.1:4369 HEALTHY OK inbound|4369|empd|vernemq.exposed-with-istio.svc.cluster.local
127.0.0.1:8888 HEALTHY OK inbound|8888|http-dashboard|vernemq.exposed-with-istio.svc.cluster.local
127.0.0.1:9001 HEALTHY OK inbound|9001|tcp-mqtt-ws|vernemq.exposed-with-istio.svc.cluster.local
127.0.0.1:15000 HEALTHY OK prometheus_stats
127.0.0.1:15020 HEALTHY OK agent
127.0.0.1:44053 HEALTHY OK inbound|44053|vmq|vernemq.exposed-with-istio.svc.cluster.local
unix://./etc/istio/proxy/SDS HEALTHY OK sds-grpc
istioctl proxy-config routes vernemq-c945876f-tvvz7.exposed-with-istio
NOTE: This output only contains routes loaded via RDS.
NAME DOMAINS MATCH VIRTUAL SERVICE
istio-ingressgateway.istio-system.svc.cluster.local:15021 istio-ingressgateway.istio-system /*
istiod.istio-system.svc.cluster.local:853 istiod.istio-system /*
20001 kiali.istio-system /*
15010 istiod.istio-system /*
15014 istiod.istio-system /*
vernemq.exposed-with-istio.svc.cluster.local:4369 vernemq /*
vernemq.exposed-with-istio.svc.cluster.local:44053 vernemq /*
kube-dns.kube-system.svc.cluster.local:9153 kube-dns.kube-system /*
8888 vernemq /*
80 istio-egressgateway.istio-system /*
80 istio-ingressgateway.istio-system /*
80 tracing.istio-system /*
grafana.istio-system.svc.cluster.local:3000 grafana.istio-system /*
9411 zipkin.istio-system /*
9090 kiali.istio-system /*
9090 prometheus.istio-system /*
inbound|8888|http-dashboard|vernemq.exposed-with-istio.svc.cluster.local * /*
inbound|44053|vmq|vernemq.exposed-with-istio.svc.cluster.local * /*
inbound|8888|http-dashboard|vernemq.exposed-with-istio.svc.cluster.local * /*
inbound|4369|empd|vernemq.exposed-with-istio.svc.cluster.local * /*
inbound|44053|vmq|vernemq.exposed-with-istio.svc.cluster.local * /*
* /stats/prometheus*
InboundPassthroughClusterIpv4 * /*
inbound|4369|empd|vernemq.exposed-with-istio.svc.cluster.local * /*
InboundPassthroughClusterIpv4 * /*
* /healthz/ready*
Solution
First of all, I would recommend enabling envoy logging for that pod:
kubectl exec -it <pod-name> -c istio-proxy -- curl -X POST http://localhost:15000/logging?level=trace
and following the istio sidecar logs with:
kubectl logs <pod-name> -c istio-proxy -f
Update
Since the problem is logged by the envoy proxies on both sides, the traffic is routed correctly but the connection cannot be established.
Regarding port 15006: in istio, all traffic is routed through the envoy proxy (istio-sidecar). To achieve this, istio maps every inbound port to 15006 (meaning all incoming traffic to the sidecar from anywhere) and every outbound port to 15001 (meaning from the sidecar to anywhere). More on this here: https://istio.io/latest/docs/ops/diagnostic-tools/proxy-cmd/#deep-dive-into-envoy-configuration
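To inspect just those two virtual listeners, the listener dump can be filtered (assuming an istioctl version that supports the --port flag):
istioctl proxy-config listeners vernemq-c945876f-tvvz7.exposed-with-istio --port 15006
istioctl proxy-config listeners vernemq-c945876f-tvvz7.exposed-with-istio --port 15001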
The istioctl proxy-config listeners <pod-name> configuration looks fine so far. Let's try to track down the error.
Istio is sometimes very strict about its configuration requirements. To rule that out, could you first adjust the service to type: ClusterIP and add a targetPort for the mqtt port:
  - port: 1883
    name: tcp-mqtt
    targetPort: 1883
Furthermore, please run istioctl proxy-config endpoints <pod-name> and istioctl proxy-config routes <pod-name>.
I ran into the same problem using VerneMQ behind an istio gateway. The issue is that the VerneMQ process resets the TCP connection if listener.tcp.default is left at its default of 127.0.0.1:1883. I fixed it by setting DOCKER_VERNEMQ_LISTENER__TCP__DEFAULT to "0.0.0.0:1883".
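Applied to the Deployment above, that is a single extra entry in the env list (a sketch of the change; the rest of the manifest stays as-is):
        - name: DOCKER_VERNEMQ_LISTENER__TCP__DEFAULT
          value: "0.0.0.0:1883"  # listen on all interfaces, not only one address
Binding to 0.0.0.0 makes the listener reachable both on the pod IP and on loopback, which covers both connection paths seen in the envoy logs above.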
Based on your configuration, I think you will have to use a ServiceEntry to enable communication between pods inside the mesh and pods outside of it.
ServiceEntry enables adding additional entries into Istio's internal service registry, so that auto-discovered services in the mesh can access/route to these manually specified services. A service entry describes the properties of a service (DNS name, VIPs, ports, protocols, endpoints). These services could be external to the mesh (e.g., web APIs) or mesh-internal services that are not part of the platform's service registry (e.g., a set of VMs talking to services in Kubernetes). In addition, the endpoints of a service entry can also be dynamically selected using the workloadSelector field. These endpoints can be VM workloads declared using the WorkloadEntry object, or Kubernetes pods. The ability to select both pods and VMs under a single service allows for migration of services from VMs to Kubernetes without having to change the existing DNS names associated with the service.
You use a service entry to add an entry to the service registry that Istio maintains internally. After you add the service entry, the Envoy proxies can send traffic to the service as if it were a service in your mesh. Configuring service entries allows you to manage traffic for services running outside of the mesh.
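For illustration, a minimal ServiceEntry might look like this (a sketch; the host mqtt.example.com and the resource name are placeholders, not part of the original setup):
apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: external-mqtt
  namespace: exposed-with-istio
spec:
  hosts:
  - mqtt.example.com       # hypothetical broker outside the mesh
  location: MESH_EXTERNAL  # the service lives outside the mesh
  resolution: DNS
  ports:
  - number: 1883
    name: tcp-mqtt
    protocol: TCP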
For more information and further examples, see the istio documentation on service entries here and here.
Let me know if you have any more questions.
Here are my Gateway, VirtualService and Service configurations, which work correctly:
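A minimal sketch of that kind of setup (resource names such as mqtt-gateway are placeholders, and the ports assume the VerneMQ service from the question; this is not the answerer's exact configuration):
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: mqtt-gateway
  namespace: exposed-with-istio
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 31400
      name: tcp-mqtt
      protocol: TCP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: mqtt-virtualservice
  namespace: exposed-with-istio
spec:
  hosts:
  - "*"
  gateways:
  - mqtt-gateway
  tcp:
  - match:
    - port: 31400
    route:
    - destination:
        host: vernemq.exposed-with-istio.svc.cluster.local
        port:
          number: 1883
---
apiVersion: v1
kind: Service
metadata:
  name: vernemq
  namespace: exposed-with-istio
spec:
  selector:
    app: vernemq
  type: ClusterIP
  ports:
  - port: 1883
    name: tcp-mqtt
    targetPort: 1883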