k8s 1.2: Installing k8s and Building a Cluster

1. Lab setup: build a k8s cluster with one master host and two worker hosts.

All three machines need virtualization enabled. Give the master host more memory and CPUs: the master needs at least 2 cores, worker nodes at least 1 core. The following steps must be performed identically on all three machines.

2. Configure passwordless SSH between the machines. On each of the three servers, run ssh-keygen, then copy the key to the other hosts:

ssh-copy-id work1

ssh-copy-id master

Under /root/.ssh there is an authorized_keys file; this is what enables passwordless login.
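As a minimal sketch, the full key exchange might look like this on each machine (assuming the hostnames master, work1, and work2 resolve on every host):

ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa   # generate a key pair non-interactively
for host in master work1 work2; do         # push the public key to every host
    ssh-copy-id root@$host                 # prompts for each host's root password once
done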

3. Disable the swap partition to improve performance

# Temporarily disable swap:
swapoff -a

# Permanently disable it: edit /etc/fstab (vim /etc/fstab) and comment out the swap line:

/dev/mapper/centos-swap swap swap defaults 0 0
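If you prefer not to edit /etc/fstab by hand, this one-liner (a sketch; verify the resulting file before rebooting) comments out every active swap entry:

sed -ri '/\sswap\s/s/^#?/#/' /etc/fstab   # prepend # to each uncommented swap line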

 

4. Why disable the swap partition?

Swap is the swap partition: when the machine runs low on memory it spills over to swap, but swap performance is poor, so for performance reasons k8s does not allow swap by default. During initialization, kubeadm checks whether swap is off; if it is not, initialization fails. If you do not want to disable swap, you can pass --ignore-preflight-errors=Swap when installing k8s.
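For example (a sketch only; the full init command with all of its flags is shown in step (6) below):

kubeadm init --ignore-preflight-errors=Swap   # tolerate an active swap partition during preflight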
5. Modify kernel parameters to add bridge filtering and IP forwarding, so the k8s cluster can forward traffic through routing.

[root@master sysctl.d]# cat /etc/sysctl.d/kubernetes.conf 
net.bridge.bridge-nf-call-ip6tables=1
net.bridge.bridge-nf-call-iptables=1
net.ipv4.ip_forward=1
6. Reload the configuration (see the note after the output below: sysctl -p by itself only reads /etc/sysctl.conf)
[root@master sysctl.d]# sysctl -p
vm.max_map_count = 262144
net.ipv4.ip_forward = 1
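Plain sysctl -p reloads /etc/sysctl.conf rather than the files under /etc/sysctl.d/, which is why the output above does not echo the kubernetes.conf keys exactly. Standard sysctl usage to load them explicitly:

sysctl -p /etc/sysctl.d/kubernetes.conf   # load just this file
sysctl --system                           # load every config file, including /etc/sysctl.d/*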

7. Load the bridge netfilter module

[root@master sysctl.d]# modprobe br_netfilter

8. Check that the bridge filter module loaded successfully

[root@master sysctl.d]# lsmod |grep br_netfilter
br_netfilter           22256  0 
bridge                151336  1 br_netfilter

9. Enable IPVS support (used by kube-proxy's ipvs mode): install ipset and ipvsadm

yum install -y ipvsadm ipset

10. Write the modules that need to be loaded into a script file

[root@master yum.repos.d]# cat /etc/sysconfig/modules/ipvs.modules 
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_sh
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- nf_conntrack_ipv4
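Note: on kernels 4.19 and later, nf_conntrack_ipv4 was merged into nf_conntrack (this walkthrough assumes CentOS 7's 3.10 kernel); on newer kernels the last line would instead be:

modprobe -- nf_conntrack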

Add execute permission to the script

[root@master ]# chmod +x  /etc/sysconfig/modules/ipvs.modules

Run the script:   /etc/sysconfig/modules/ipvs.modules

11. Check that the modules loaded successfully

[root@master yum.repos.d]# lsmod | grep ip_vs
ip_vs_wrr 12697 0
ip_vs_rr 12600 0
ip_vs_sh 12688 0
ip_vs 145458 6 ip_vs_rr,ip_vs_sh,ip_vs_wrr
nf_conntrack 139264 7 ip_vs,nf_nat,nf_nat_ipv4,xt_conntrack,nf_nat_masquerade_ipv4,nf_conntrack_netlink,nf_conntrack_ipv4
libcrc32c 12644 4 xfs,ip_vs,nf_nat,nf_conntrack

12. Deploying the Kubernetes cluster

Install docker-ce, kubelet, kubeadm, and kubectl on all three machines.

(1) Add the docker-ce repository

To install Docker, fetch the docker-ce repo file from Aliyun's mirror site, then install docker-ce with yum, as sketched below.
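A minimal sketch using the Aliyun mirror (the repo URL below is Aliyun's publicly documented docker-ce mirror; adjust if you use a different mirror):

yum install -y yum-utils
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum install -y docker-ce
systemctl enable --now docker   # start Docker and enable it at boot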

For kubelet, kubeadm, and kubectl, you can use yum list kubelet to search for installable packages (this requires a kubernetes yum repo; a sample repo file is sketched below).
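A sketch of a kubernetes.repo pointing at the Aliyun mirror (the baseurl is Aliyun's publicly documented Kubernetes mirror for el7; gpgcheck is disabled here for brevity):

cat > /etc/yum.repos.d/kubernetes.repo <<EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
EOF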

[root@master ~]# yum list kubelet
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: mirrors.aliyun.com
 * extras: mirrors.aliyun.com
 * updates: mirrors.aliyun.com
Installed Packages
kubelet.x86_64 1.24.2-0 @kubernetes

(2) Install kubelet, kubeadm, and kubectl (do not start kubelet yet)

yum install -y kubeadm kubelet kubectl

kubeadm: a tool used to initialize the k8s cluster
kubelet: installed on every node in the cluster; it starts and manages Pods
kubectl: used to deploy and manage applications, inspect resources, and create, delete, and update components
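Although kubelet should not be started yet, it is common practice to enable it so kubeadm can bring it up during init (a convention, not something this walkthrough strictly requires):

systemctl enable kubelet   # enable at boot without starting it now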

(3) Check the Docker version

[root@master ~]# docker --version
Docker version 20.10.15, build fd82621

(4) Initialize the k8s cluster with kubeadm

First, list the images required to deploy the cluster; they are shown below:

[root@work1 ~]# kubeadm config images list
k8s.gcr.io/kube-apiserver:v1.24.3
k8s.gcr.io/kube-controller-manager:v1.24.3
k8s.gcr.io/kube-scheduler:v1.24.3
k8s.gcr.io/kube-proxy:v1.24.3
k8s.gcr.io/pause:3.7
k8s.gcr.io/etcd:3.5.3-0
k8s.gcr.io/coredns/coredns:v1.8.6

You can also pull these images by hand, handling each image in turn. Note: kubeadm pulls from k8s.gcr.io by default, but k8s.gcr.io is not reachable (from mainland China), so you need to pull the images from the registry.aliyuncs.com/google_containers repository instead.

[root@work1 ~]# docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.24.3                                     # pull the image from Aliyun

[root@work1 ~]# docker tag  registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.24.3 k8s.gcr.io/kube-scheduler:v1.24.3   # re-tag it as k8s.gcr.io/kube-scheduler:v1.24.3
[root@work1 ~]# docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.24.3                                     # remove the now-unneeded Aliyun-tagged image

Or use a for loop to pull the images, tag them, and delete the originals (coredns is handled separately; see the note after the loop):

[root@master ~]# images=(kube-apiserver:v1.24.3 kube-controller-manager:v1.24.3 kube-scheduler:v1.24.3 kube-proxy:v1.24.3 pause:3.7 etcd:3.5.3-0)
[root@master ~]# for imageName in ${images[@]};do
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName k8s.gcr.io/$imageName
docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName
done
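coredns is the odd one out: kubeadm expects k8s.gcr.io/coredns/coredns:v1.8.6 (note the sub-path), while the Aliyun mirror publishes it without the sub-path, so it needs its own pull and tag (a sketch, assuming the v1.8.6 tag from the image list above):

docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:v1.8.6
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:v1.8.6 k8s.gcr.io/coredns/coredns:v1.8.6
docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:v1.8.6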

(5) Verify that the images were downloaded (the sample output below happens to show a v1.17.4 image set)

[root@work2 ~]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
k8s.gcr.io/kube-proxy v1.17.4 6dec7cfde1e5 2 years ago 116MB
k8s.gcr.io/kube-apiserver v1.17.4 2e1ba57fe95a 2 years ago 171MB
k8s.gcr.io/kube-controller-manager v1.17.4 7f997fcf3e94 2 years ago 161MB
k8s.gcr.io/kube-scheduler v1.17.4 5db16c1c7aff 2 years ago 94.4MB
k8s.gcr.io/coredns 1.6.5 70f311871ae1 2 years ago 41.6MB
k8s.gcr.io/etcd 3.4.3-0 303ce5db0e90 2 years ago 288MB
k8s.gcr.io/pause 3.1 da86e6ba6ca1 4 years ago 742kB

 

(6) Initialize the Kubernetes cluster

Create the cluster on the master node (run this step only on the master host).

You can run kubeadm --help to see the help information.

[root@master ~]# kubeadm init --kubernetes-version=v1.24.3 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12 --apiserver-advertise-address=192.168.213.4 --image-repository registry.aliyuncs.com/google_containers

Flag notes:
--kubernetes-version=v1.24.3                      # the Kubernetes version
--pod-network-cidr=10.244.0.0/16                  # the address range k8s assigns to Pods
--service-cidr=10.96.0.0/12                       # the virtual IP range for Services that clients access
--apiserver-advertise-address=192.168.213.4       # the master's IP
--image-repository registry.aliyuncs.com/google_containers   # pull control-plane images from Aliyun instead of k8s.gcr.io

 

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.213.8:6443 --token 1bl657.59pad6tz14nvhp3j \
    --discovery-token-ca-cert-hash sha256:61fb5e8ca294bea610601f26535cc0f5c991185c665b4d842adcffc909c5d417 

Follow the prompts:

[root@master ~]# mkdir -p $HOME/.kube
[root@master ~]#   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@master ~]#   sudo chown $(id -u):$(id -g) $HOME/.kube/config
You can download a pod network .yaml from https://kubernetes.io/docs/concepts/cluster-administration/addons/, or fetch flannel directly: wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
Apply the file:
[root@master ~]# kubectl apply -f /root/kube-flannel.yml 
namespace/kube-flannel created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created
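To confirm flannel is rolling out (the kube-flannel namespace comes from the apply output above):

kubectl get pods -n kube-flannel -o wide   # one flannel pod per node, via the DaemonSet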

Run the join command on the worker nodes:

[root@work1 ~]# kubeadm join 192.168.213.3:6443 --token eafk7y.fe1nzff9ptjs3tuk \
>     --discovery-token-ca-cert-hash sha256:363741efccddbabf7f93d50bd0914dfd8d059909306a8542b0e06a4172264d8f 
W0720 09:10:26.480050    2844 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.17" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
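If the join token has expired (tokens are valid for 24 hours by default), you can regenerate the full join command on the master:

kubeadm token create --print-join-command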

Check the cluster node status: the nodes show NotReady until the network manifest has been applied.

(7) Install a Kubernetes network plugin: https://kubernetes.io/docs/concepts/cluster-administration/addons/

Kubernetes supports multiple network plugins, such as flannel, calico, and canal (a common interview question: what is the difference between flannel and calico?).

Installing the flannel plugin only on the master node is enough: the plugin uses a DaemonSet controller, which runs a copy on every node.

[root@master ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master NotReady master 3m16s v1.17.4
work1 NotReady <none> 17s v1.17.4
work2 NotReady <none> 32s v1.17.4
work3 NotReady <none> 32s v1.17.4

[root@master ~]# kubectl apply -f kube-flannel.yml 
namespace/kube-flannel unchanged
clusterrole.rbac.authorization.k8s.io/flannel unchanged
clusterrolebinding.rbac.authorization.k8s.io/flannel unchanged
serviceaccount/flannel unchanged
configmap/kube-flannel-cfg unchanged
daemonset.apps/kube-flannel-ds unchanged
[root@master ~]# kubectl get nodes
NAME     STATUS     ROLES    AGE     VERSION
master   Ready      master   7m22s   v1.17.4
work1    Ready      worker   4m23s   v1.17.4
work2    Ready      worker   4m38s   v1.17.4
work3    Ready      worker   4m38s   v1.17.4
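As a final health check (standard kubectl commands, nothing specific to this setup), confirm the control-plane components and flannel pods are all Running:

kubectl get pods -n kube-system    # apiserver, controller-manager, scheduler, etcd, coredns, kube-proxy
kubectl get pods -n kube-flannel   # the flannel DaemonSet pods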

 
