Playing with Kubernetes (k8s), Part 1: Installing k8s with kubeadm


          This is the first time I have committed to writing a complete column; going forward I will publish new content every week before midnight. I assume anyone reading this article already has some operations experience.

           Back to the topic at hand. I am not going to open with a pile of concepts, which would only leave you tired and confused. On day one I will simply walk you through setting up a k8s environment.

  I. Environment preparation and role planning

                (Role planning: one master node, k8s-master, and two worker nodes, k8s-node1 and k8s-node2.)

1. Disable SELinux.
2. Disable the firewall.
3. Configure /etc/hosts and set the hostname on each machine (a sketch of the commands for steps 1-3 follows after this list).
4. Synchronize the time:

yum -y install ntp
ntptime
timedatectl
systemctl enable ntpd
systemctl restart ntpd.service
ln -sf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
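The exact commands for steps 1-3 are not spelled out above. On CentOS 7 they would look roughly like the sketch below; the node IPs and the hostname used here are placeholders of mine, so adjust them to your own environment.

# Disable SELinux now and keep it disabled after reboot (assumes CentOS 7 defaults)
setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config

# Stop and disable the firewall
systemctl stop firewalld && systemctl disable firewalld

# Set the hostname (run the matching command on each machine)
hostnamectl set-hostname k8s-master

# Map all three hosts in /etc/hosts; the node IPs below are placeholders
cat >> /etc/hosts <<EOF
20.0.40.50  k8s-master
20.0.40.51  k8s-node1
20.0.40.52  k8s-node2
EOF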

II. Installing Docker

    All of the operations in section II must be performed on all three servers.

   1. Install the yum management utilities and their dependencies


yum install -y yum-utils device-mapper-persistent-data lvm2

  2. Add the Docker yum repository


yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo

   3. List the available Docker versions

        We will use docker-ce-18.09.6 here.


yum list docker-ce --showduplicates | sort -r

yum -y install docker-ce-18.09.6

                  

   4. Start the Docker service and enable it at boot

 


systemctl start docker && systemctl enable docker

   5. Configure a domestic registry mirror (accelerator)


echo '{"registry-mirrors":["https://361z9p6b.mirror.aliyuncs.com"]}' > /etc/docker/daemon.json
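Docker only reads daemon.json at startup, so the mirror will not take effect until the daemon is restarted. We restart Docker later anyway (section III, step 7); if you want the accelerator active right away, a restart here works too:

systemctl restart docker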

 

 

III. Installing a Kubernetes cluster with kubeadm

 1. Add the Kubernetes yum repository. Since we cannot download directly from Google, we switch to the Alibaba mirror (note: run on all three VMs).


cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

2. Temporarily disable swap (note: run on all three VMs)

Starting with Kubernetes 1.8, the system swap must be turned off; otherwise, with the default configuration, the kubelet will not start. You can lift this restriction by adding --fail-swap-on=false to the kubelet startup arguments in /etc/sysconfig/kubelet (the configuration file shown in step 4).


swapoff -a
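Note that swapoff -a only lasts until the next reboot. If you also want swap to stay off across reboots, one common approach is to comment out the swap entry in /etc/fstab, for example (check your fstab first; the pattern below assumes the default CentOS layout):

sed -i '/ swap / s/^/#/' /etc/fstab   # comment out the swap line in /etc/fstab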

3. Install and start the packages (we install k8s 1.14.2 here; note: run on all three VMs)


yum install -y kubelet-1.14.2 kubeadm-1.14.2 kubectl-1.14.2 --disableexcludes=kubernetes
# --disableexcludes=kubernetes: disable every repository except this one

systemctl enable kubelet && systemctl start kubelet

4. Change the swap restriction (note: run on all three VMs)


# cat /etc/sysconfig/kubelet
KUBELET_EXTRA_ARGS=--fail-swap-on=false

5. Create an init-config.yaml file to customize the image repository address and the Pod address range (note: create this file only on k8s-master)

     podSubnet: set this IP range to whatever you prefer.


# cat init-config.yaml
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
imageRepository: docker.io/dustise
kubernetesVersion: v1.14.0
networking:
  podSubnet: "192.168.0.0/16"

6. Download the images required by k8s-master and their dependencies (note: only on k8s-master)

   This step takes a while, depending on your network speed.


# kubeadm config images pull --config=init-config.yaml
[config/images] Pulled docker.io/dustise/kube-apiserver:v1.14.0
[config/images] Pulled docker.io/dustise/kube-controller-manager:v1.14.0
[config/images] Pulled docker.io/dustise/kube-scheduler:v1.14.0
[config/images] Pulled docker.io/dustise/kube-proxy:v1.14.0
[config/images] Pulled docker.io/dustise/pause:3.1
[config/images] Pulled docker.io/dustise/etcd:3.3.10
[config/images] Pulled docker.io/dustise/coredns:1.3.1

7. Edit /usr/lib/systemd/system/docker.service so that Docker uses the systemd cgroup driver (note: on all three VMs)


ExecStart=/usr/bin/dockerd --exec-opt native.cgroupdriver=systemd

After modifying the file, reload the systemd unit so the Docker daemon picks it up, then restart Docker:


systemctl daemon-reload
systemctl restart docker

8. Initialize the k8s master (note: only on k8s-master)


kubeadm init --config=init-config.yaml

 

If the following error is reported during execution:


[ERROR FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables contents are not set to 1

The fix is to run the following command:


echo "1" > /proc/sys/net/bridge/bridge-nf-call-iptables
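Writing to /proc only lasts until the next reboot. If you want this kernel parameter to persist, you can also set it through sysctl, for example:

cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
sysctl --system   # reload all sysctl configuration files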

Then run the kubeadm initialization again.

The output looks like this:

     


1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39
40
41
42
43
44
45
46
47
48
49
50
51
52
53
54
55
56
57
58
59
60
61
62
63
64
65
66
67
68
1[root@k8s-1master ~]#  kubeadm init --config=init-config.yaml
2[init] Using Kubernetes version: v1.14.0
3[preflight] Running pre-flight checks
4[preflight] Pulling images required for setting up a Kubernetes cluster
5[preflight] This might take a minute or two, depending on the speed of your internet connection
6[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
7[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
8[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
9[kubelet-start] Activating the kubelet service
10[certs] Using certificateDir folder "/etc/kubernetes/pki"
11[certs] Generating "etcd/ca" certificate and key
12[certs] Generating "apiserver-etcd-client" certificate and key
13[certs] Generating "etcd/server" certificate and key
14[certs] etcd/server serving cert is signed for DNS names [k8s-1master localhost] and IPs [20.0.40.50 127.0.0.1 ::1]
15[certs] Generating "etcd/peer" certificate and key
16[certs] etcd/peer serving cert is signed for DNS names [k8s-1master localhost] and IPs [20.0.40.50 127.0.0.1 ::1]
17[certs] Generating "etcd/healthcheck-client" certificate and key
18[certs] Generating "ca" certificate and key
19[certs] Generating "apiserver" certificate and key
20[certs] apiserver serving cert is signed for DNS names [k8s-1master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 20.0.40.50]
21[certs] Generating "apiserver-kubelet-client" certificate and key
22[certs] Generating "front-proxy-ca" certificate and key
23[certs] Generating "front-proxy-client" certificate and key
24[certs] Generating "sa" key and public key
25[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
26[kubeconfig] Writing "admin.conf" kubeconfig file
27[kubeconfig] Writing "kubelet.conf" kubeconfig file
28[kubeconfig] Writing "controller-manager.conf" kubeconfig file
29[kubeconfig] Writing "scheduler.conf" kubeconfig file
30[control-plane] Using manifest folder "/etc/kubernetes/manifests"
31[control-plane] Creating static Pod manifest for "kube-apiserver"
32[control-plane] Creating static Pod manifest for "kube-controller-manager"
33[control-plane] Creating static Pod manifest for "kube-scheduler"
34[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
35[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
36[apiclient] All control plane components are healthy after 17.001456 seconds
37[upload-config] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
38[kubelet] Creating a ConfigMap "kubelet-config-1.14" in namespace kube-system with the configuration for the kubelets in the cluster
39[upload-certs] Skipping phase. Please see --experimental-upload-certs
40[mark-control-plane] Marking the node k8s-1master as control-plane by adding the label "node-role.kubernetes.io/master=''"
41[mark-control-plane] Marking the node k8s-1master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
42[bootstrap-token] Using token: 3wtp5h.dmuu9sn3bryfg4mj
43[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
44[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
45[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
46[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
47[bootstrap-token] creating the "cluster-info" ConfigMap in the "kube-public" namespace
48[addons] Applied essential addon: CoreDNS
49[addons] Applied essential addon: kube-proxy
50
51Your Kubernetes control-plane has initialized successfully!
52
53To start using your cluster, you need to run the following as a regular user:
54
55  mkdir -p $HOME/.kube
56  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
57  sudo chown $(id -u):$(id -g) $HOME/.kube/config
58
59You should now deploy a pod network to the cluster.
60Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
61  https://kubernetes.io/docs/concepts/cluster-administration/addons/
62
63Then you can join any number of worker nodes by running the following on each as root:
64
65kubeadm join 20.0.40.50:6443 --token 3wtp5h.dmuu9sn3bryfg4mj \
66    --discovery-token-ca-cert-hash sha256:833c7a61fe7500401789395477ce874199986209205a145f6c8052d56fd92fe1
67
68

Following the prompt, copy the configuration file into your home directory:


mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Check the ConfigMaps and the overall initialization status (see the sketch right after this paragraph).
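The screenshots that originally accompanied these checks are missing, but commands along the following lines (my assumption about what was shown) display the same information:

kubectl get configmap -n kube-system    # ConfigMaps created by kubeadm, e.g. kubeadm-config, kubelet-config-1.14
kubectl get pods -n kube-system -o wide # control-plane pods should be Running; CoreDNS stays Pending until a network plugin is installed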

9. Install the Weave network plugin


kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
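Once the manifest is applied, a weave-net pod is started on every node. If you want to watch them come up, something like the following works (the name=weave-net label is an assumption based on the standard Weave manifest):

kubectl get pods -n kube-system -l name=weave-net -w   # press Ctrl+C to stop watching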

10. Join the nodes to the cluster

  On k8s-master, print the command for adding a node:


kubeadm token create --print-join-command

 Run the printed command on both k8s-node1 and k8s-node2.

Check the node status:


kubectl get node

Check the installation progress of the pods:


kubectl get pods --all-namespaces

 

At this point our k8s cluster is essentially deployed.

Thanks for reading. If you run into any problems during deployment, feel free to leave a comment at any time. Thank you!

Tomorrow I will walk you through installing the dashboard, deploying sample workloads, and more.
