Pods cannot ping each other in a Kubernetes cluster built from nodes on two different subnets

Problem Description

I am trying to bring up an on-prem k8s cluster using kubespray, with 3 master nodes and 5 worker nodes. The node IPs come from 2 different subnets.

The Ansible inventory:

hosts:
  saba-k8-vm-m1:
    ansible_host: 192.168.100.1
    ip: 192.168.100.1
    access_ip: 192.168.100.1
  saba-k8-vm-m2:
    ansible_host: 192.168.100.2
    ip: 192.168.100.2
    access_ip: 192.168.100.2
  saba-k8-vm-m3:
    ansible_host: 192.168.200.1
    ip: 192.168.200.1
    access_ip: 192.168.200.1
  saba-k8-vm-w1:
    ansible_host: 192.168.100.3
    ip: 192.168.100.3
    access_ip: 192.168.100.3
  saba-k8-vm-w2:
    ansible_host: 192.168.100.4
    ip: 192.168.100.4
    access_ip: 192.168.100.4
  saba-k8-vm-w3:
    ansible_host: 192.168.100.5
    ip: 192.168.100.5
    access_ip: 192.168.100.5
  saba-k8-vm-w4:
    ansible_host: 192.168.200.2
    ip: 192.168.200.2
    access_ip: 192.168.200.2
  saba-k8-vm-w5:
    ansible_host: 192.168.200.3
    ip: 192.168.200.3
    access_ip: 192.168.200.3

children:
  kube-master:
    hosts:
      saba-k8-vm-m1:
      saba-k8-vm-m2:
      saba-k8-vm-m3:
  kube-node:
    hosts:
      saba-k8-vm-w1:
      saba-k8-vm-w2:
      saba-k8-vm-w3:
      saba-k8-vm-w4:
      saba-k8-vm-w5:
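
For reference, with an inventory shaped like this the cluster is deployed with the standard kubespray playbook; a typical invocation looks roughly like the following (the inventory path is illustrative and depends on your checkout layout):

# Run from the kubespray checkout; --become is required for system-level changes.
ansible-playbook -i inventory/mycluster/hosts.yaml --become cluster.yml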

Next I spawned dnsutils with kubectl apply -f https://k8s.io/examples/admin/dns/dnsutils.yaml. It landed on worker w1 and was able to look up the svc name (I had created the elasticsearch pod on w2).

root@saba-k8-vm-m1:/opt/bitnami# kubectl get svc -n kube-system
NAME                        TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                  AGE
coredns                     ClusterIP   10.233.0.3      <none>        53/UDP,53/TCP,9153/TCP   6d3h

root@saba-k8-vm-m1:/opt/bitnami# kubectl exec -it dnsutils sh
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl kubectl exec [POD] -- [COMMAND] instead.
/ #
/ # nslookup elasticsearch-elasticsearch-data.lilac-efk.svc.cluster.local. 10.233.0.3
Server:         10.233.0.3
Address:        10.233.0.3#53

Name:   elasticsearch-elasticsearch-data.lilac-efk.svc.cluster.local
Address: 10.233.49.187

Next I spawned the same dnsutils pod on w5 (the .200 subnet). There, nslookup failed:

root@saba-k8-vm-m1:/opt/bitnami# kubectl exec -it dnsutils sh
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl kubectl exec [POD] -- [COMMAND] instead.
/ #
/ # ^C
/ # nslookup elasticsearch-elasticsearch-data.lilac-efk.svc.cluster.local 10.233.0.3
;; connection timed out; no servers could be reached

/ # exit
command terminated with exit code 1
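
To make the second dnsutils pod land on w5 rather than wherever the scheduler puts it, one option is to pin it with spec.nodeName. A minimal sketch following the shape of the upstream dnsutils.yaml (the pod name dnsutils-w5 is illustrative, and the image path may differ by cluster version):

apiVersion: v1
kind: Pod
metadata:
  name: dnsutils-w5              # illustrative name, to avoid clashing with the first pod
spec:
  nodeName: saba-k8-vm-w5        # bypass the scheduler and run on the .200-subnet worker
  containers:
  - name: dnsutils
    image: registry.k8s.io/e2e-test-images/jessie-dnsutils:1.3   # image from the upstream manifest
    command: ["sleep", "infinity"]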

Logs from the nodelocaldns pod running on w5:

[ERROR] plugin/errors: 2 elasticsearch-elasticsearch-data.lilac-efk.lilac-efk.svc.cluster.local. AAAA: dial tcp 10.233.0.3:53: i/o timeout
[ERROR] plugin/errors: 2 elasticsearch-elasticsearch-data.lilac-efk.lilac-efk.svc.cluster.local. A: dial tcp 10.233.0.3:53: i/o timeout
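
These lines can be pulled from the nodelocaldns pod scheduled on w5; the DaemonSet name below is kubespray's usual one (verify with kubectl get ds -n kube-system):

# Locate the nodelocaldns pod running on w5, then read its logs.
kubectl get pods -n kube-system -o wide | grep nodelocaldns | grep saba-k8-vm-w5
kubectl logs -n kube-system <nodelocaldns-pod-from-previous-command>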

From the dnsutils pod, I am unable to reach coredns pod IPs on the other subnet over the overlay network. The cluster was provisioned with Calico.

root@saba-k8-vm-m1:/opt/bitnami# kubectl get pods -n kube-system -o wide | grep coredns
pod/coredns-dff8fc7d-98mbw                        1/1     Running   3          6d2h    10.233.127.4    saba-k8-vm-m2   <none>           <none>
pod/coredns-dff8fc7d-cwbhd                        1/1     Running   7          6d2h    10.233.74.7     saba-k8-vm-m1   <none>           <none>
pod/coredns-dff8fc7d-h4xdd                        1/1     Running   0          2m19s   10.233.82.6     saba-k8-vm-m3   <none>           <none>

root@saba-k8-vm-m1:/opt/bitnami# kubectl exec -it dnsutils sh
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl kubectl exec [POD] -- [COMMAND] instead.
/ # ping 10.233.82.6
PING 10.233.82.6 (10.233.82.6): 56 data bytes
64 bytes from 10.233.82.6: seq=0 ttl=62 time=0.939 ms
64 bytes from 10.233.82.6: seq=1 ttl=62 time=0.693 ms
^C
--- 10.233.82.6 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.693/0.816/0.939 ms
/ # ping 10.233.74.7
PING 10.233.74.7 (10.233.74.7): 56 data bytes
^C
--- 10.233.74.7 ping statistics ---
4 packets transmitted, 0 packets received, 100% packet loss
/ # ping 10.233.127.4
PING 10.233.127.4 (10.233.127.4): 56 data bytes
^C
--- 10.233.127.4 ping statistics ---
2 packets transmitted, 0 packets received, 100% packet loss

The kubespray network CIDRs:

kube_service_addresses: 10.233.0.0/18
kube_pods_subnet: 10.233.64.0/18

Because of this behaviour, fluentd, which runs as a DaemonSet on all 5 workers, is in CrashLoopBackOff because it cannot resolve the elasticsearch svc name.

What am I missing? Any help is appreciated.

Solution

Thanks to @laimison for the pointers.

Posting all my observations here so they may be useful to someone.

On M1:

root@saba-k8-vm-m1:~# ip r | grep tunl
10.233.72.0/24 via 192.168.100.5 dev tunl0 proto bird onlink
10.233.102.0/24 via 192.168.100.4 dev tunl0 proto bird onlink
10.233.110.0/24 via 192.168.100.3 dev tunl0 proto bird onlink
10.233.127.0/24 via 192.168.100.2 dev tunl0 proto bird onlink

root@saba-k8-vm-m1:~# sudo calicoctl.sh node status
Calico process is running.
IPv4 BGP status
+---------------+-------------------+-------+------------+-------------+
| PEER ADDRESS  |     PEER TYPE     | STATE |   SINCE    |    INFO     |
+---------------+-------------------+-------+------------+-------------+
| 192.168.100.2 | node-to-node mesh | up    | 2021-04-06 | Established |
| 192.168.200.1 | node-to-node mesh | start | 2021-04-06 | Passive     |
| 192.168.100.3 | node-to-node mesh | up    | 2021-04-06 | Established |
| 192.168.100.4 | node-to-node mesh | up    | 2021-04-06 | Established |
| 192.168.100.5 | node-to-node mesh | up    | 2021-04-06 | Established |
| 192.168.200.2 | node-to-node mesh | start | 2021-04-06 | Passive     |
| 192.168.200.3 | node-to-node mesh | start | 2021-04-06 | Passive     |
+---------------+-------------------+-------+------------+-------------+
IPv6 BGP status
No IPv6 peers found.
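
Each of those tunl0 routes corresponds to a per-node /24 block that Calico IPAM carved out of kube_pods_subnet. They can be cross-checked against the allocations (assuming calicoctl v3+; using the same kubespray wrapper as above):

# List IPAM utilization per allocation block, one block per node.
sudo calicoctl.sh ipam show --show-blocks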

On M3:

lilac@saba-k8-vm-m3:~$ ip r | grep tunl
10.233.85.0/24 via 192.168.200.3 dev tunl0 proto bird onlink
10.233.98.0/24 via 192.168.200.2 dev tunl0 proto bird onlink

lilac@saba-k8-vm-m3:~$ sudo calicoctl.sh node status
Calico process is running.
IPv4 BGP status
+---------------+-------------------+-------+------------+--------------------------------+
| PEER ADDRESS  |     PEER TYPE     | STATE |   SINCE    |              INFO              |
+---------------+-------------------+-------+------------+--------------------------------+
| 192.168.100.1 | node-to-node mesh | start | 2021-04-06 | Active Socket: Connection      |
|               |                   |       |            | reset by peer                  |
| 192.168.100.2 | node-to-node mesh | start | 2021-04-06 | Active Socket: Connection      |
|               |                   |       |            | closed                         |
| 192.168.100.3 | node-to-node mesh | start | 2021-04-06 | Active Socket: Connection      |
|               |                   |       |            | closed                         |
| 192.168.100.4 | node-to-node mesh | start | 2021-04-06 | Active Socket: Connection      |
|               |                   |       |            | closed                         |
| 192.168.100.5 | node-to-node mesh | start | 2021-04-06 | Active Socket: Connection      |
|               |                   |       |            | closed                         |
| 192.168.200.2 | node-to-node mesh | up    | 2021-04-06 | Established                    |
| 192.168.200.3 | node-to-node mesh | up    | 2021-04-06 | Established                    |
+---------------+-------------------+-------+------------+--------------------------------+
IPv6 BGP status
No IPv6 peers found.

On M1, 192.168.200.2 and 192.168.200.3 sit in Passive (waiting for the peer to connect). On M3, I noticed "Active Socket: Connection ..." for all the .100 IPs, meaning M3 keeps initiating BGP connections that never get through.

I was able to telnet from M3 to the 192.168.100.x nodes on port 179, so plain TCP to the BGP port was not blocked.
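
For example, run from M3 (nc works too if telnet is not installed; 192.168.100.1 stands in for any .100 node):

# Check raw TCP reachability of the BGP port on a .100 node.
telnet 192.168.100.1 179
nc -zv 192.168.100.1 179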

Checking the calico pod logs and the node dump produced by running /usr/local/bin/calicoctl.sh node diags on M1, I could see:

bird: BGP: Unexpected connect from unknown address 10.0.x.x (port 53107)

10.0.x.x is the management IP of the server hosting the .200 VMs. That server was source-NATing the nodes' traffic, so the BGP connections arrived at M1 from an address bird did not recognize as a peer.

I added this rule on that server:

-A POSTROUTING ! -d 192.168.0.0/16 -j SNAT --to-source 10.0.x.x

i.e. only traffic leaving the 192.168.0.0/16 range gets SNATed, so node-to-node traffic keeps its real source IP. That solved the problem.
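For context, the rule above is in iptables-save syntax for the nat table; applied directly it would look roughly like this (10.0.x.x stays elided as in the logs):

# Only SNAT traffic that leaves 192.168.0.0/16, so inter-node BGP
# no longer appears to come from the hypervisor's management IP.
iptables -t nat -A POSTROUTING ! -d 192.168.0.0/16 -j SNAT --to-source 10.0.x.x

With that in place, all peers come up as Established: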

root@saba-k8-vm-m1:/tmp/calico050718821/diagnostics/logs# /usr/local/bin/calicoctl.sh node status
Calico process is running.

IPv4 BGP status
+---------------+-------------------+-------+----------+-------------+
| PEER ADDRESS  |     PEER TYPE     | STATE |  SINCE   |    INFO     |
+---------------+-------------------+-------+----------+-------------+
| 192.168.100.2 | node-to-node mesh | up    | 08:08:38 | Established |
| 192.168.200.1 | node-to-node mesh | up    | 08:09:15 | Established |
| 192.168.100.3 | node-to-node mesh | up    | 08:09:24 | Established |
| 192.168.100.4 | node-to-node mesh | up    | 08:09:02 | Established |
| 192.168.100.5 | node-to-node mesh | up    | 08:09:47 | Established |
| 192.168.200.2 | node-to-node mesh | up    | 08:08:55 | Established |
| 192.168.200.3 | node-to-node mesh | up    | 08:09:37 | Established |
+---------------+-------------------+-------+----------+-------------+

IPv6 BGP status
No IPv6 peers found.

Other things I tried:

I patched ipipMode on the IP pool to CrossSubnet. This does not fix the problem, but it helps performance, since IPIP encapsulation is then only used for traffic that actually crosses a subnet boundary:

sudo /usr/local/bin/calicoctl.sh patch ippool default-pool -p '{"spec":{"ipipMode": "CrossSubnet"}}'
Successfully patched 1 'IPPool' resource
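
The change can be verified on the pool afterwards (same wrapper as above):

sudo /usr/local/bin/calicoctl.sh get ippool default-pool -o yaml | grep -i ipipMode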

Following "calico/node is not ready: BIRD is not ready: BGP not established", I also set interface=ens3, even though that is the only interface on my VMs. Again, this does not fix the problem, but it helps when the calico nodes have multiple interfaces; see the sketch below.
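
With kubespray this is normally set through the Calico IP autodetection variable in group_vars rather than by editing the DaemonSet directly; a sketch (variable name per the kubespray docs, file path may differ in your inventory):

# group_vars/k8s-cluster/k8s-net-calico.yml
calico_ip_auto_method: "interface=ens3"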
