MicroK8s, MetalLB, ingress-nginx: how do I route external traffic?

Question

Kubernetes / Ubuntu newbie here!

I am setting up a k8s cluster on a single Raspberry Pi (hopefully more in the future). I am running microk8s v1.18.8 on Ubuntu Server 20.04.1 LTS (GNU/Linux 5.4.0-1018-raspi aarch64).

I am trying to reach one of my k8s services on port 80, but I cannot get it set up correctly. I have also set up a static IP address for reaching the service, and I route traffic from my router to that IP address.

I would like to know what I am doing wrong, or whether there is a better way to do what I want!

The steps I performed:

  1. I ran microk8s enable dns metallb. I gave MetalLB an IP address range (192.168.0.90-192.168.0.99) that my DHCP server does not hand out.
  2. I installed ingress-nginx by running kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.35.0/deploy/static/provider/baremetal/deploy.yaml. This creates an ingress-nginx-controller service of type NodePort, which MetalLB does not pick up. As described here, I changed the service's spec.type from NodePort to LoadBalancer by running kubectl edit service ingress-nginx-controller -n ingress-nginx. MetalLB then assigned the IP 192.168.0.90 to the service.
  3. Then I applied the following configuration file:
apiVersion: v1
kind: Service
metadata:
  name: wow-ah-api-service
  namespace: develop
spec:
  selector:
    app: wow-ah-api
  ports:
    - protocol: TCP
      port: 80
      targetPort: 3000
---
apiVersion: apps/v1
kind: Deployment
metadata:
  # Unique key of the Deployment instance
  name: wow-ah-api
  namespace: develop
spec:
  # 3 Pods should exist at all times.
  replicas: 3
  selector:
    matchLabels:
      app: wow-ah-api
  template:
    metadata:
      namespace: develop
      labels:
        # Apply this label to pods and default
        # the Deployment label selector to this value
        app: wow-ah-api
    spec:
      imagePullSecrets:
        - name: some-secret
      containers:
        - name: wow-ah-api
          # Run this image
          image: some-image
          imagePullPolicy: Always
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: wow-ah-api-ingress
  namespace: develop
spec:
  backend:
    serviceName: wow-ah-api-service
    servicePort: 3000
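As an aside on step 2 above: instead of interactively editing the service, the type change can also be done with a one-line patch (a sketch; kubectl patch is a standard command, and the service name and namespace are the ones created by the ingress-nginx manifest):

```
# Switch the ingress-nginx controller service from NodePort to LoadBalancer
# so that MetalLB assigns it an address from its pool
microk8s kubectl patch service ingress-nginx-controller \
  -n ingress-nginx \
  -p '{"spec": {"type": "LoadBalancer"}}'
```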

Here is some of the output I am seeing:

microk8s kubectl get all --all-namespaces

NAMESPACE        NAME                                            READY   STATUS      RESTARTS   AGE
develop          pod/wow-ah-api-6c4bff88f9-2x48v                 1/1     Running     4          4h21m
develop          pod/wow-ah-api-6c4bff88f9-ccw9z                 1/1     Running     4          4h21m
develop          pod/wow-ah-api-6c4bff88f9-rd6lp                 1/1     Running     4          4h21m
ingress-nginx    pod/ingress-nginx-admission-create-mnn8g        0/1     Completed   0          4h27m
ingress-nginx    pod/ingress-nginx-admission-patch-x5r6d         0/1     Completed   1          4h27m
ingress-nginx    pod/ingress-nginx-controller-7896b4fbd4-nglsd   1/1     Running     4          4h27m
kube-system      pod/coredns-588fd544bf-576x5                    1/1     Running     4          4h26m
metallb-system   pod/controller-5f98465b6b-hcj9g                 1/1     Running     4          4h23m
metallb-system   pod/speaker-qc9pc                               1/1     Running     4          4h23m

NAMESPACE       NAME                                         TYPE           CLUSTER-IP       EXTERNAL-IP    PORT(S)                      AGE
default         service/kubernetes                           ClusterIP      10.152.183.1     <none>         443/TCP                      21h
develop         service/wow-ah-api-service                   ClusterIP      10.152.183.88    <none>         80/TCP                       4h21m
ingress-nginx   service/ingress-nginx-controller             LoadBalancer   10.152.183.216   192.168.0.90   80:32151/TCP,443:30892/TCP   4h27m
ingress-nginx   service/ingress-nginx-controller-admission   ClusterIP      10.152.183.41    <none>         443/TCP                      4h27m
kube-system     service/kube-dns                             ClusterIP      10.152.183.10    <none>         53/UDP,53/TCP,9153/TCP       4h26m

NAMESPACE        NAME                     DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR                 AGE
metallb-system   daemonset.apps/speaker   1         1         1       1            1           beta.kubernetes.io/os=linux   4h23m

NAMESPACE        NAME                                       READY   UP-TO-DATE   AVAILABLE   AGE
develop          deployment.apps/wow-ah-api                 3/3     3            3           4h21m
ingress-nginx    deployment.apps/ingress-nginx-controller   1/1     1            1           4h27m
kube-system      deployment.apps/coredns                    1/1     1            1           4h26m
metallb-system   deployment.apps/controller                 1/1     1            1           4h23m

NAMESPACE        NAME                                                  DESIRED   CURRENT   READY   AGE
develop          replicaset.apps/wow-ah-api-6c4bff88f9                 3         3         3       4h21m
ingress-nginx    replicaset.apps/ingress-nginx-controller-7896b4fbd4   1         1         1       4h27m
kube-system      replicaset.apps/coredns-588fd544bf                    1         1         1       4h26m
metallb-system   replicaset.apps/controller-5f98465b6b                 1         1         1       4h23m

NAMESPACE       NAME                                       COMPLETIONS   DURATION   AGE
ingress-nginx   job.batch/ingress-nginx-admission-create   1/1           27s        4h27m
ingress-nginx   job.batch/ingress-nginx-admission-patch    1/1           29s        4h27m

microk8s kubectl get ingress --all-namespaces

NAMESPACE   NAME                 CLASS    HOSTS   ADDRESS         PORTS   AGE
develop     wow-ah-api-ingress   <none>   *       192.168.0.236   80      4h23m

I have been wondering whether this might be related to my iptables configuration, but I am not sure how to configure them to work with MicroK8s.

sudo iptables -L

Chain INPUT (policy ACCEPT)
target     prot opt source               destination         
KUBE-SERVICES  all  --  anywhere             anywhere             ctstate NEW /* kubernetes service portals */
KUBE-EXTERNAL-SERVICES  all  --  anywhere             anywhere             ctstate NEW /* kubernetes externally-visible service portals */
KUBE-FIREWALL  all  --  anywhere             anywhere            

Chain FORWARD (policy ACCEPT)
target     prot opt source               destination         
KUBE-FORWARD  all  --  anywhere             anywhere             /* kubernetes forwarding rules */
KUBE-SERVICES  all  --  anywhere             anywhere             ctstate NEW /* kubernetes service portals */
ACCEPT     all  --  10.1.0.0/16          anywhere             /* generated for MicroK8s pods */
ACCEPT     all  --  anywhere             10.1.0.0/16          /* generated for MicroK8s pods */
ACCEPT     all  --  10.1.0.0/16          anywhere            
ACCEPT     all  --  anywhere             10.1.0.0/16         

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination         
KUBE-SERVICES  all  --  anywhere             anywhere             ctstate NEW /* kubernetes service portals */
KUBE-FIREWALL  all  --  anywhere             anywhere            

Chain KUBE-EXTERNAL-SERVICES (1 references)
target     prot opt source               destination         

Chain KUBE-FIREWALL (2 references)
target     prot opt source               destination         
DROP       all  --  anywhere             anywhere             /* kubernetes firewall for dropping marked packets */ mark match 0x8000/0x8000
DROP       all  -- !localhost/8          localhost/8          /* block incoming localnet connections */ ! ctstate RELATED,ESTABLISHED,DNAT

Chain KUBE-FORWARD (1 references)
target     prot opt source               destination         
DROP       all  --  anywhere             anywhere             ctstate INVALID
ACCEPT     all  --  anywhere             anywhere             /* kubernetes forwarding rules */ mark match 0x4000/0x4000
ACCEPT     all  --  anywhere             anywhere             /* kubernetes forwarding conntrack pod source rule */ ctstate RELATED,ESTABLISHED
ACCEPT     all  --  anywhere             anywhere             /* kubernetes forwarding conntrack pod destination rule */ ctstate RELATED,ESTABLISHED

Chain KUBE-KUBELET-CANARY (0 references)
target     prot opt source               destination         

Chain KUBE-PROXY-CANARY (0 references)
target     prot opt source               destination         

Chain KUBE-SERVICES (3 references)
target     prot opt source               destination 

Update #1

Here is the MetalLB ConfigMap (microk8s kubectl edit ConfigMap/config -n metallb-system):

apiVersion: v1
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.0.90-192.168.0.99
kind: ConfigMap
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","data":{"config":"address-pools:\n- name: default\n  protocol: layer2\n  addresses:\n  - 192.168.0.90-192.168.0.99\n"},"kind":"ConfigMap","metadata":{"annotations":{},"name":"config","namespace":"metallb-system"}}
  creationTimestamp: "2020-09-19T21:18:45Z"
  managedFields:
  - apiVersion: v1
    fieldsType: FieldsV1
    fieldsV1:
      f:data:
        .: {}
        f:config: {}
      f:metadata:
        f:annotations:
          .: {}
          f:kubectl.kubernetes.io/last-applied-configuration: {}
    manager: kubectl
    operation: Update
    time: "2020-09-19T21:18:45Z"
  name: config
  namespace: metallb-system
  resourceVersion: "133422"
  selfLink: /api/v1/namespaces/metallb-system/configmaps/config
  uid: 774f6a73-b1e1-4e26-ba73-ef71bc2e1060

Thanks for any help you can give me!

Solution

Short answer:

  1. You only need one (probably) IP address. You must be able to ping it from the MicroK8s machine.
  2. This step is a mistake. Remove it.
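If the upstream ingress-nginx manifest from step 2 has already been applied, it can be removed the same way it was installed (a sketch; the URL is the one from the question, and kubectl delete -f removes every object that kubectl apply -f created):

```
# Undo step 2: remove all the objects the bare-metal deploy.yaml created
microk8s kubectl delete -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.35.0/deploy/static/provider/baremetal/deploy.yaml
```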

Long answer, by example:

Start from a clean MicroK8s. Use only one public IP (or a local machine IP; for your use case I will use 192.168.0.90).

How do you test? For example:

curl -H "Host: blue.nginx.example.com" http://PUBLIC_IP

from outside the machine.

Run the test. It must fail.

Enable MicroK8s dns and ingress:

microk8s.enable dns ingress

Run the test. Does it fail?

If it fails with the same error: you need MetalLB.

  • With a public Internet IP:

    microk8s.enable metallb:$(curl ipinfo.io/ip)-$(curl ipinfo.io/ip)

  • With the LAN IP 192.168.0.90:

    microk8s.enable metallb:192.168.0.90-192.168.0.90

Run the test again.
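To confirm that MetalLB actually handed out an address, you can also check the EXTERNAL-IP column of the services (a sketch; this is the same standard listing command used elsewhere in this post):

```
# A LoadBalancer service should now show an EXTERNAL-IP from the MetalLB pool
microk8s kubectl get services --all-namespaces
```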

If the test does not return a 503 or a 404: you cannot move on to the next step. You probably have a network problem or a firewall filtering the traffic.

The ingress layer

Our test now reaches the MicroK8s ingress controller. It does not know what to do with the request and returns a 404 (sometimes a 503).

That is fine. Keep going!

I will use the example from https://youtu.be/A_PjjCM1eLA?t=984 (16:24):

[Kube 32] Setting up Traefik ingress on a bare-metal Kubernetes cluster

Set up a kubectl alias:

alias kubectl=microk8s.kubectl

Deploy the applications:

kubectl create -f https://raw.githubusercontent.com/justmeandopensource/kubernetes/master/yamls/ingress-demo/nginx-deploy-main.yaml
kubectl create -f https://raw.githubusercontent.com/justmeandopensource/kubernetes/master/yamls/ingress-demo/nginx-deploy-blue.yaml
kubectl create -f https://raw.githubusercontent.com/justmeandopensource/kubernetes/master/yamls/ingress-demo/nginx-deploy-green.yaml

Expose the applications inside the cluster network (type ClusterIP by default):

kubectl expose deploy nginx-deploy-main --port 80
kubectl expose deploy nginx-deploy-blue --port 80
kubectl expose deploy nginx-deploy-green --port 80

Run the test. It still does not work...

Ingress rule example: routing by host name

Configure the hosts nginx.example.com, blue.nginx.example.com and green.nginx.example.com, and distribute the requests to the exposed deployments:

kubectl create -f https://raw.githubusercontent.com/justmeandopensource/kubernetes/master/yamls/ingress-demo/ingress-resource-2.yaml
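For reference, a host-based Ingress of this kind is roughly of the following shape (a sketch using the networking.k8s.io/v1beta1 API that appears elsewhere in this post; the exact manifest behind the link may differ, and the service names assume the ones created by kubectl expose above):

```
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ingress-resource-2
spec:
  rules:
  # Each rule matches on the Host header and forwards
  # to the corresponding ClusterIP service on port 80
  - host: nginx.example.com
    http:
      paths:
      - backend:
          serviceName: nginx-deploy-main
          servicePort: 80
  - host: blue.nginx.example.com
    http:
      paths:
      - backend:
          serviceName: nginx-deploy-blue
          servicePort: 80
  - host: green.nginx.example.com
    http:
      paths:
      - backend:
          serviceName: nginx-deploy-green
          servicePort: 80
```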

Run this test:

curl -H "Host: blue.nginx.example.com" http://PUBLIC_IP

Now you will get a reply like:

<h1>I am <font color=blue>BLUE</font></h1>

You can play with:

curl -H "Host: nginx.example.com" http://PUBLIC_IP
curl -H "Host: blue.nginx.example.com" http://PUBLIC_IP
curl -H "Host: green.nginx.example.com" http://PUBLIC_IP

Conclusion:

  • We have only one IP address and multiple hosts.
  • We serve three different services on the same port.
  • The request distribution is done by the Ingress.

Just getting started with MicroK8s - it seems very promising. After combing through the info site and the docs, I was able to implement a bare-metal demo using the Traefik ingress controller (with custom resource definitions and IngressRoutes), the Linkerd service mesh, and the MetalLB load balancer. This was done on a VirtualBox guest VM running Ubuntu 20.04; the GitHub link below also covers how to expose the external IP that MetalLB gives the Traefik ingress controller on the guest VM. See https://github.com/msb1/microk8s-traefik-linkerd-whoami

I prefer this implementation over the one shown in the YouTube link, because it includes a working service mesh and uses custom resource definitions for ingress (which are unique to Traefik, and one of the reasons to keep using Traefik over other ingress controllers).

Hope this helps someone else - you should be able to build great deployments with MicroK8s following this demo (my current focus).
