Kubernetes (4): Deploying a Cluster with kubeadm


Contents

  • Kubernetes (4): Deploying a Cluster with kubeadm

  • 1. Pre-deployment Preparation

  • 2. Cluster Initialization

Kubernetes (4): Deploying a Cluster with kubeadm

kubeadm is the cluster-bootstrapping tool that ships with the Kubernetes project. It performs the basic steps needed to build a minimal usable cluster and bring it up. In short, kubeadm is a full-lifecycle management tool for Kubernetes clusters: it can deploy, upgrade/downgrade, and tear down a cluster.
kubeadm integrates subcommands such as kubeadm init and kubeadm join. kubeadm init performs a quick initialization of the cluster; its core job is to deploy the components of the Master node. kubeadm join quickly adds a node to a specified cluster. Together they form the "fast path" of Kubernetes cluster-creation best practices. In addition, kubeadm token manages the authentication tokens used to join the cluster after it has been built, and kubeadm reset removes the files generated during cluster construction, resetting the host back to its initial state. kubeadm also manages bootstrap tokens (Bootstrap Token), which authenticate a new node (via a shared secret) the first time it contacts the API Server. It likewise supports upgrading and downgrading the cluster version.
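The subcommands mentioned above map directly onto the cluster lifecycle. A minimal sketch of how they are typically strung together (the placeholders and flag values here are illustrative, not the exact ones used later in this article):

# On the master: bootstrap a new control plane
kubeadm init
# On each worker: join an existing cluster (token and CA hash come from the `kubeadm init` output)
kubeadm join <master-ip>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>
# Manage bootstrap tokens after the cluster is up
kubeadm token list
kubeadm token create
# Tear a node back down to its pre-kubeadm state
kubeadm reset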
Deploying a cluster with kubeadm has several major advantages:

  • Simple and easy to use: kubeadm can deploy, upgrade, and tear down a cluster, which is very friendly to new users.
  • Broadly applicable: it supports deploying clusters on bare metal, VMware, AWS, Azure, GCE, and many other kinds of hosts, and the procedure is essentially the same everywhere.
  • Flexible: kubeadm 1.11 and later supports phased deployment, so an administrator can run the individual steps independently (see the sketch after this list).
  • Production ready: kubeadm follows best practices for deploying Kubernetes clusters. It enforces RBAC, sets up authentication and secure communication between the Master components and between the API Server and the kubelets, and locks down the kubelet API, among other things.
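In kubeadm 1.13 the phased workflow is exposed under `kubeadm init phase` (in 1.11/1.12 it lived under `kubeadm alpha phase`). A rough sketch, assuming that subcommand layout; the exact phase names can be listed with --help:

# List the individual phases that `kubeadm init` normally runs in order
kubeadm init phase --help
# Run selected phases on their own, e.g. only preflight checks or only certificate generation
kubeadm init phase preflight
kubeadm init phase certs all
kubeadm init phase kubeconfig all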

1. Pre-deployment Preparation

Lab environment

Host      IP          OS                Pod network CIDR   Service CIDR   Memory
master    10.0.0.10   CentOS 7.5 1804   10.244.0.0/16      10.96.0.0/12   2 GB
node01    10.0.0.11   CentOS 7.5 1804   10.244.0.0/16      10.96.0.0/12   2 GB
node02    10.0.0.12   CentOS 7.5 1804   10.244.0.0/16      10.96.0.0/12   2 GB

Kubernetes officially requires at least 2 GB of memory per machine; personally I suggest 4 GB or more.
Run the following on all machines:


#Disable the firewall and SELinux
systemctl stop firewalld && systemctl disable firewalld && setenforce 0
#Turn off the swap partition
[root@master ~]# swapoff -a
#Configure kernel parameters in sysctl.conf
[root@master ~]# cat youhua.sh    
#!/bin/sh
echo "* soft nofile 190000" >> /etc/security/limits.conf
echo "* hard nofile 200000" >> /etc/security/limits.conf
echo "* soft nproc 252144" >> /etc/security/limits.conf
echo "* hard nproc 262144" >> /etc/security/limits.conf

tee /etc/sysctl.conf <<-'EOF'
# System default settings live in /usr/lib/sysctl.d/00-system.conf.
# To override those settings, enter new settings here, or in an /etc/sysctl.d/<name>.conf file
#
# For more information, see sysctl.conf(5) and sysctl.d(5).

net.ipv4.tcp_tw_recycle = 0
net.ipv4.ip_local_port_range = 10000 61000
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_fin_timeout = 30
net.ipv4.ip_forward = 1
net.core.netdev_max_backlog = 2000
net.ipv4.tcp_mem = 131072  262144  524288
net.ipv4.tcp_keepalive_intvl = 30
net.ipv4.tcp_keepalive_probes = 3
net.ipv4.tcp_window_scaling = 1
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_syn_backlog = 2048
net.ipv4.tcp_low_latency = 0
net.core.rmem_default = 256960
net.core.rmem_max = 513920
net.core.wmem_default = 256960
net.core.wmem_max = 513920
net.core.somaxconn = 2048
net.core.optmem_max = 81920
net.ipv4.tcp_mem = 131072  262144  524288
net.ipv4.tcp_rmem = 8760  256960  4088000
net.ipv4.tcp_wmem = 8760  256960  4088000
net.ipv4.tcp_keepalive_time = 1800
net.ipv4.tcp_sack = 1
net.ipv4.tcp_fack = 1
net.ipv4.tcp_timestamps = 1
net.ipv4.tcp_syn_retries = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-arptables = 1
EOF
echo "options nf_conntrack hashsize=819200" >> /etc/modprobe.d/mlx4.conf
modprobe br_netfilter
sysctl -p
[root@master ~]# sh youhua.sh
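Note that swapoff -a and setenforce 0 only last until the next reboot. A minimal sketch of making both changes persistent (assuming the default CentOS 7 /etc/fstab and /etc/selinux/config layouts):

#Comment out the swap entry so it is not activated again after a reboot
sed -ri 's/^([^#].*\sswap\s.*)$/#\1/' /etc/fstab
#Switch SELinux to permissive permanently (takes effect on the next boot)
sed -ri 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config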

Preparation


#Configure host name resolution
[root@master ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
10.0.0.10 master
10.0.0.11 node01
10.0.0.12 node02
#Generate an SSH key pair and copy the public key to node01 and node02
[root@master ~]# ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Created directory '/root/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:2sVd/Xh8vuuUOkoy+PP1VHQvsf08JuzLCWBGyfOHQPA root@master
The key's randomart image is:
+---[RSA 2048]----+
|       ...       |
|        + .    . |
|         E    o +|
|        ..+... B+|
|        S+oo..+ O|
|       o+.. o  ==|
|      ...o o + *+|
|        ..+ =.O o|
|         .oo.*+=.|
+----[SHA256]-----+
[root@master ~]# ssh-copy-id node01
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
The authenticity of host 'node01 (10.0.0.11)' can't be established.
ECDSA key fingerprint is SHA256:0WjkfswyQXRv+zeS03AF9xLANd4uZtFo0YcY7kGiagA.
ECDSA key fingerprint is MD5:32:e0:54:7e:8c:a0:1c:59:17:7b:00:3a:71:89:e1:a4.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@node01's password:

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'node01'"
and check to make sure that only the key(s) you wanted were added.

[root@master ~]# ssh-copy-id node02
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
The authenticity of host 'node02 (10.0.0.12)' can't be established.
ECDSA key fingerprint is SHA256:0WjkfswyQXRv+zeS03AF9xLANd4uZtFo0YcY7kGiagA.
ECDSA key fingerprint is MD5:32:e0:54:7e:8c:a0:1c:59:17:7b:00:3a:71:89:e1:a4.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@node02's password:

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'node02'"
and check to make sure that only the key(s) you wanted were added.
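The /etc/hosts entries above exist only on the master; the kubeadm join output later shows "hostname node01 could not be reached" warnings on the nodes. A small sketch of pushing the same hosts file to both nodes over the SSH trust that was just set up:

[root@master ~]# for n in node01 node02; do scp /etc/hosts ${n}:/etc/hosts; done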

Configure the yum repositories


#Configure the Aliyun mirror of the docker-ce repo
[root@master ~]# wget -O /etc/yum.repos.d/docker-ce.repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
--2019-03-27 14:17:07--  https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
Resolving mirrors.aliyun.com (mirrors.aliyun.com)... 183.60.159.225, 183.60.159.230, 183.60.159.232, ...
Connecting to mirrors.aliyun.com (mirrors.aliyun.com)|183.60.159.225|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 2640 (2.6K) [application/octet-stream]
Saving to: '/etc/yum.repos.d/docker-ce.repo'

100%[==============================================>] 2,640       --.-K/s   in 0s      

2019-03-27 14:17:12 (261 MB/s) - '/etc/yum.repos.d/docker-ce.repo' saved [2640/2640]
#Configure the Aliyun mirror of the Kubernetes repo
[root@master ~]# cat /etc/yum.repos.d/k8s.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
enabled=1
#Copy the repo files to the node machines

[root@master ~]# cd /etc/yum.repos.d/
[root@master yum.repos.d]# scp ./docker-ce.repo ./k8s.repo node01:/etc/yum.repos.d/
docker-ce.repo                                        100% 2640     3.4MB/s   00:00    
k8s.repo                                              100%  202   311.7KB/s   00:00    
[root@master yum.repos.d]# scp ./docker-ce.repo ./k8s.repo node02:/etc/yum.repos.d/
docker-ce.repo                                        100% 2640     2.8MB/s   00:00    
k8s.repo                                              100%  202   252.1KB/s   00:00  
#Rebuild the yum cache
[root@master yum.repos.d]# yum makecache fast
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: mirrors.aliyun.com
 * extras: mirrors.aliyun.com
 * updates: mirrors.aliyun.com
base                                                             | 3.6 kB  00:00:00    
docker-ce-stable                                                 | 3.5 kB  00:00:00    
extras                                                           | 3.4 kB  00:00:00    
kubernetes                                                       | 1.4 kB  00:00:00    
updates                                                          | 3.4 kB  00:00:00    
(1/2): kubernetes/primary                                        |  47 kB  00:00:05    
(2/2): updates/7/x86_64/primary_db                               | 3.4 MB  00:00:08    
kubernetes                                                                      336/336
Metadata Cache Created

Install docker-ce and the packages required by Kubernetes
At the time of writing, the highest docker-ce version officially supported by Kubernetes is 18.06. I have verified that 18.09 also works: kubeadm init prints a warning but no error, and the cluster runs without problems. Here we use kubelet, kubeadm, and kubectl 1.13.1 with docker 18.06.

master   docker-ce 18.06, kubelet 1.13.1, kubeadm 1.13.1, kubectl 1.13.1
node01   docker-ce 18.06, kubelet 1.13.1, kubeadm 1.13.1
node02   docker-ce 18.06, kubelet 1.13.1, kubeadm 1.13.1


#master
[root@master ~]# yum install -y kubelet-1.13.1 kubeadm-1.13.1 kubectl-1.13.1 docker-ce-18.06.3.ce-3.el7
[root@master ~]# systemctl start docker kubelet && systemctl enable docker kubelet
#node01 and node02
[root@node01 ~]# yum install -y kubelet-1.13.1 kubeadm-1.13.1 docker-ce-18.06.3.ce-3.el7
[root@node01 ~]# systemctl start docker kubelet && systemctl enable docker kubelet
#Preparation is now complete
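Before moving on it is worth confirming that every machine ended up with matching versions; a quick check (the output will vary with your hosts):

[root@master ~]# docker --version
[root@master ~]# kubeadm version -o short
[root@master ~]# kubelet --version
[root@master ~]# kubectl version --client --short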

2. Cluster Initialization

Every machine needs a container runtime (such as docker or frakti) and kubelet installed and running, because kubeadm relies on kubelet to start the Master components: kube-apiserver, kube-controller-manager, kube-scheduler, kube-proxy, and so on.
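A quick way to confirm that the runtime and kubelet are present and enabled everywhere (a simple sanity check, not part of the official procedure; the kubelet will keep restarting until kubeadm init runs, so only its enabled state is checked here):

[root@master ~]# systemctl is-active docker && systemctl is-enabled kubelet
[root@master ~]# for n in node01 node02; do ssh $n 'systemctl is-active docker; systemctl is-enabled kubelet'; done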
Initialize the cluster on the master


[root@master ~]# kubeadm init \
--kubernetes-version=v1.13.1 \
--pod-network-cidr=10.244.0.0/16 \
--service-cidr=10.96.0.0/12 \
--image-repository registry.aliyuncs.com/google_containers \
--apiserver-advertise-address=0.0.0.0 \
--ignore-preflight-errors=Swap
[init] Using Kubernetes version: v1.13.1
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.0.0.10]
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [master localhost] and IPs [10.0.0.10 127.0.0.1 ::1]
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [master localhost] and IPs [10.0.0.10 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 21.543677 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.13" in namespace kube-system with the configuration for the kubelets in the cluster
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "master" as an annotation
[mark-control-plane] Marking the node master as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: jffthg.hb83xpoxjcm5vh54
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join 10.0.0.10:6443 --token jffthg.hb83xpoxjcm5vh54 --discovery-token-ca-cert-hash sha256:37c446787d8ff4383a50dc5855afb8e307f4d1fff5e5cdbb9235b8af6e49e28f

The options in the command above determine many characteristics of the cluster environment, and these settings matter for the applications that will later be deployed and run on the cluster.
--kubernetes-version: the version of the Kubernetes components being used; it must match the kubelet version.
--pod-network-cidr: the Pod address range, given as a CIDR network address; with the flannel network plugin the conventional value is 10.244.0.0/16.
--service-cidr: the Service network address range, given as a CIDR network address; the default is 10.96.0.0/12.
--apiserver-advertise-address: the IP address the API Server advertises to the other components; 0.0.0.0 means all usable addresses on the node.
--ignore-preflight-errors=Swap: which preflight errors to ignore; here it ignores the error caused by swap not being disabled.
--image-repository registry.aliyuncs.com/google_containers: the registry from which the images needed for initialization are pulled. The default Kubernetes registry is hosted abroad and pulls from inside China may fail; version 1.13 supports this option, and this value points at the Aliyun mirror.
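The same settings can also be kept in a configuration file and passed with --config, which is easier to put under version control. A minimal sketch, assuming the v1beta1 ClusterConfiguration schema that kubeadm 1.13 accepts; the field values simply mirror the flags used above:

[root@master ~]# cat > kubeadm-config.yaml <<EOF
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
kubernetesVersion: v1.13.1
imageRepository: registry.aliyuncs.com/google_containers
networking:
  podSubnet: 10.244.0.0/16
  serviceSubnet: 10.96.0.0/12
EOF
[root@master ~]# kubeadm init --config kubeadm-config.yaml --ignore-preflight-errors=Swap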

Set up the kubectl configuration file
kubectl is the core tool for managing a Kubernetes cluster. By default it reads its configuration from a file named config in the hidden .kube directory under the current user's home directory, including which cluster to connect to and the certificates or tokens used to authenticate to it. When the cluster is initialized, kubeadm automatically generates a configuration file for this purpose, /etc/kubernetes/admin.conf; copy it to $HOME/.kube/config. Here the root user on the master node is used as an example; in production this should be done as a regular user.


[root@master ~]# mkdir -p $HOME/.kube
[root@master ~]# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
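Optionally, enabling shell completion makes kubectl much more pleasant to use (this assumes the bash-completion package is available from the configured repos):

[root@master ~]# yum install -y bash-completion
[root@master ~]# echo 'source <(kubectl completion bash)' >> ~/.bashrc
[root@master ~]# source ~/.bashrc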

At this point the master node is essentially configured. You can query the API Server to verify that the components are healthy; if necessary, run kubeadm reset and initialize the cluster again.
componentstatus can be abbreviated as cs


[root@master ~]# kubectl get cs
NAME                 STATUS    MESSAGE              ERROR
scheduler            Healthy   ok                  
controller-manager   Healthy   ok                  
etcd-0               Healthy   {"health": "true"}

Join the nodes to the cluster


[root@node01 ~]#  kubeadm join 10.0.0.10:6443 --token jffthg.hb83xpoxjcm5vh54 --discovery-token-ca-cert-hash sha256:37c446787d8ff4383a50dc5855afb8e307f4d1fff5e5cdbb9235b8af6e49e28f
[preflight] Running pre-flight checks
        [WARNING Hostname]: hostname "node01" could not be reached
        [WARNING Hostname]: hostname "node01": lookup node01 on 114.114.114.114:53: no such host
[discovery] Trying to connect to API Server "10.0.0.10:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://10.0.0.10:6443"
[discovery] Requesting info from "https://10.0.0.10:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "10.0.0.10:6443"
[discovery] Successfully established connection with API Server "10.0.0.10:6443"
[join] Reading configuration from the cluster...
[join] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.13" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "node01" as an annotation

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.

Check the cluster status on the master


[root@master ~]# kubectl get nodes
NAME     STATUS     ROLES    AGE    VERSION
master   NotReady   master   10m    v1.13.1
node01   NotReady   <none>   114s   v1.13.1
node02   NotReady   <none>   107s   v1.13.1

The nodes are NotReady at this point because no CNI network exists yet; a network plugin such as flannel or calico must be deployed. Here flannel is used as the example.


[root@master ~]# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
podsecuritypolicy.extensions/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.extensions/kube-flannel-ds-amd64 created
daemonset.extensions/kube-flannel-ds-arm64 created
daemonset.extensions/kube-flannel-ds-arm created
daemonset.extensions/kube-flannel-ds-ppc64le created
daemonset.extensions/kube-flannel-ds-s390x created
#Wait a few minutes
[root@master ~]# kubectl get nodes
NAME     STATUS   ROLES    AGE   VERSION
master   Ready    master   39m   v1.13.1
node01   Ready    <none>   30m   v1.13.1
node02   Ready    <none>   30m   v1.13.1
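If the nodes stay NotReady, it usually helps to check that the flannel and CoreDNS Pods have come up on every node. A quick look (Pod names will differ in your environment):

[root@master ~]# kubectl get pods -n kube-system -o wide
[root@master ~]# kubectl get daemonset -n kube-system kube-flannel-ds-amd64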

The Kubernetes cluster and its add-ons expose a number of services, such as the API Server and kube-dns deployed above. An API client needs to know the API Server's advertised address before it can access the cluster; an administrator can find this information with "kubectl cluster-info":


[root@master ~]# kubectl cluster-info
Kubernetes master is running at https://10.0.0.10:6443
KubeDNS is running at https://10.0.0.10:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

The server and client version information of the Kubernetes cluster can be checked with kubectl version:


[root@master ~]# kubectl version --short=true
Client Version: v1.13.1
Server Version: v1.13.1

Removing a node from the cluster:
If a node needs to be removed from a running cluster, use the following steps:
1) On the master, drain the Pods off the node and then delete the Node object:


~]# kubectl drain node01 --delete-local-data --force --ignore-daemonsets
~]# kubectl delete node node01

2) Then run the following on the node being removed to reset its state and complete the removal:


~]# kubeadm reset
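kubeadm reset does not clean up the iptables or IPVS rules that kube-proxy created; if the node will be reused it is worth clearing them manually, roughly as follows (run only on the node being reset):

~]# iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
~]# ipvsadm --clear    # only if kube-proxy ran in IPVS mode and ipvsadm is installed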

According to the official documentation, a bootstrap token is valid for 24h by default. After enough time has passed, the token created during initialization will have expired; you can inspect the tokens with kubeadm token list:


[root@master ~]# kubeadm token list  
TOKEN                     TTL       EXPIRES                     USAGES                   DESCRIPTION                                                EXTRA GROUPS
jffthg.hb83xpoxjcm5vh54   7h        2019-03-28T16:03:51+08:00   authentication,signing   The default bootstrap token generated by 'kubeadm init'.   system:bootstrappers:kubeadm:default-node-token

In that case a new token must be generated, as follows:


[root@master ~]# kubeadm token create
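kubeadm can also emit a complete, ready-to-run join command, which saves computing the CA hash by hand (this flag exists in kubeadm of this era, to the best of my knowledge):

[root@master ~]# kubeadm token create --print-join-command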

If you do not have the --discovery-token-ca-cert-hash value, it can be obtained by running the following command chain on the master node:


[root@master ~]# openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | \
   openssl dgst -sha256 -hex | sed 's/^.* //'

Now run kubeadm join on the new node to add it to the cluster; the --discovery-token-ca-cert-hash from the original initialization can still be used:


kubeadm join 10.0.0.10:6443 --token 1*** --discovery-token-ca-cert-hash sha256:<ca-cert-hash>

At this point the Kubernetes cluster deployment is complete.
