Table of Contents
- Kubernetes (19): Resource Metrics and Cluster Monitoring
  - Resource Metrics and Resource Monitoring
  - metrics-server
    - Deploying metrics-server
  - Prometheus
    - Overview
    - Deploying Prometheus
    - Displaying Data with Grafana
Kubernetes (19): Resource Metrics and Cluster Monitoring
Resource Metrics and Resource Monitoring
Managing a cluster is impossible without monitoring. Likewise, Kubernetes needs to collect metric data in order to observe the state of the cluster. These metrics fall broadly into two groups: metrics about the cluster itself and metrics about Pod objects. The health of a cluster is usually measured along these dimensions:
- Node resource status: primarily network bandwidth, disk space, CPU, and memory utilization.
- Node count: knowing the number of available nodes in real time helps users estimate server costs.
- Running Pods: the number of running Pod objects shows whether the available nodes are sufficient and whether the load can be rebalanced when a node fails.
On the other hand, monitoring Pod resource objects involves roughly three categories:
- Kubernetes metrics: the deployment progress, replica count, status, health, and networking of the Pod objects belonging to a specific application.
- Container metrics: a container's resource requests and limits, and its actual usage of CPU, memory, disk space, and network bandwidth.
- Application metrics: the application's own built-in metrics, tied to its business logic.
metrics-server
The current Kubernetes metrics architecture consists of a core metrics pipeline and a monitoring metrics pipeline:
- Core metrics pipeline: composed of the kubelet, metrics-server, and the APIs exposed through the API server. It provides the core metrics needed to understand and operate the cluster's internal components and workloads, including cumulative CPU usage, real-time memory usage, Pod resource usage, and container disk usage. Core metrics were originally collected by Heapster, but Heapster was deprecated in version 1.11 and replaced by the new metrics-server as the aggregator of core metrics. Collecting core metrics is mandatory.
- Monitoring metrics pipeline: collects various metrics from the system and exposes them to end users, storage systems, and the HPA. It covers core metrics as well as many non-core metrics. Because non-core metrics cannot be interpreted by Kubernetes itself, this pipeline depends on a third-party solution chosen by the user.
HPA v2 is a component that can consume both the resource metrics API and the custom metrics API, scaling workloads out and in automatically based on the observed metrics. The mainstream implementation of the resource metrics API today is metrics-server.
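As a sketch of how HPA v2 consumes the resource metrics API, a minimal autoscaler for a hypothetical Deployment might look like the following (the name `myapp`, the replica bounds, and the target utilization are illustrative, not taken from this walkthrough):

```yaml
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: myapp-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: myapp                      # hypothetical deployment to scale
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      targetAverageUtilization: 60   # add replicas when average CPU exceeds 60%
```

The HPA controller resolves the `Resource` metric through the same `/apis/metrics.k8s.io/` endpoint that metrics-server serves, so a manifest like this only works once metrics-server is deployed.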
Since version 1.8, container CPU and memory utilization can be retrieved directly through the client metrics API. Note that the API itself does not store any metric data; it only provides real-time measurements of resource usage.
The resource metrics API is no different from other API metrics: it is accessed through the API server under the URL path /apis/metrics.k8s.io/, and it only becomes usable once metrics-server has been deployed in the cluster.
metrics-server stores data in memory, so all data is lost on restart, and it only retains the most recently collected samples. Users who need historical data must therefore rely on a third-party monitoring system such as Prometheus.
Typically only one metrics-server instance runs per cluster. On startup it automatically initializes connections to every node, so for security reasons it should run on a regular node rather than on a master host. Deploying metrics-server is straightforward using the resource manifests provided by the project itself.
Deploying metrics-server
https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/metrics-server
Download the YAML files:
```shell
[root@master metrics-server]# for n in auth-delegator.yaml auth-reader.yaml metrics-apiservice.yaml metrics-server-deployment.yaml metrics-server-service.yaml resource-reader.yaml;do wget https://raw.githubusercontent.com/kubernetes/kubernetes/master/cluster/addons/metrics-server/$n;done

[root@master metrics-server]# ll
total 24
-rw-r--r-- 1 root root  398 Apr 10 10:31 auth-delegator.yaml
-rw-r--r-- 1 root root  419 Apr 10 10:31 auth-reader.yaml
-rw-r--r-- 1 root root  393 Apr 10 10:32 metrics-apiservice.yaml
-rw-r--r-- 1 root root 3156 Apr 10 10:32 metrics-server-deployment.yaml
-rw-r--r-- 1 root root  336 Apr 10 10:32 metrics-server-service.yaml
-rw-r--r-- 1 root root  801 Apr 10 10:32 resource-reader.yaml
```
Modify the manifests
```yaml
# Because of image availability and a few settings, parts of this file must be changed:
# in the metrics-server container, replace the image and the command field;
# in the metrics-server-nanny container, adjust the cpu and memory values.
[root@master metrics-server]# vim metrics-server-deployment.yaml
......
    spec:
      priorityClassName: system-cluster-critical
      serviceAccountName: metrics-server
      containers:
      - name: metrics-server
        #image: k8s.gcr.io/metrics-server-amd64:v0.3.1
        image: xiaobai20201/metrics-server:v0.3.1
        command:
        - /metrics-server
        - --metric-resolution=30s
        - --kubelet-insecure-tls
        - --kubelet-preferred-address-types=InternalIP,Hostname,InternalDNS,ExternalDNS,ExternalIP
        ports:
        - containerPort: 443
          name: https
          protocol: TCP
      - name: metrics-server-nanny
        #image: k8s.gcr.io/addon-resizer:1.8.4
        image: xiaobai20201/addon-resizer:1.8.4
        resources:
          limits:
            cpu: 100m
            memory: 300Mi
          requests:
            cpu: 5m
            memory: 50Mi
        env:
        - name: MY_POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: MY_POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: metrics-server-config-volume
          mountPath: /etc/config
        command:
        - /pod_nanny
        - --config-dir=/etc/config
        - --cpu=100m
        - --extra-cpu=0.5m
        - --memory=100Mi
        - --extra-memory=50Mi
        - --threshold=5
        - --deployment=metrics-server-v0.3.1
        - --container=metrics-server
        - --poll-period=300000
        - --estimator=exponential
        - --minClusterSize=10

# The metrics-server container also needs permission to fetch node stats,
# so add nodes/stats to the rules in resource-reader.yaml:
[root@master metrics-server]# vim resource-reader.yaml

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: system:metrics-server
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
rules:
- apiGroups:
  - ""
  resources:
  - pods
  - nodes
  - nodes/stats
  - namespaces
  verbs:
  - get
  - list
  - watch
```
Deploy
```shell
[root@master metrics-server]# kubectl apply -f .
clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator created
rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader created
apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io created
serviceaccount/metrics-server created
configmap/metrics-server-config created
deployment.apps/metrics-server-v0.3.1 created
service/metrics-server created
clusterrole.rbac.authorization.k8s.io/system:metrics-server created
clusterrolebinding.rbac.authorization.k8s.io/system:metrics-server created

[root@master metrics-server]# kubectl api-versions |grep metrics
metrics.k8s.io/v1beta1

# Check that the resource metrics API is available
[root@master metrics-server]# kubectl get --raw "/apis/metrics.k8s.io/v1beta1/nodes"
{"kind":"NodeMetricsList","apiVersion":"metrics.k8s.io/v1beta1","metadata":{"selfLink":"/apis/metrics.k8s.io/v1beta1/nodes"},"items":[]}
```
```shell
# Once deployed, kubectl proxy --port=8080 can expose the API server on a local port
[root@master metrics-server]# kubectl proxy --port=8080
Starting to serve on 127.0.0.1:8080

# curl can then query node and pod status from the API
[root@master mainfest]# curl http://localhost:8080/apis/metrics.k8s.io/v1beta1
{
  "kind": "APIResourceList",
  "apiVersion": "v1",
  "groupVersion": "metrics.k8s.io/v1beta1",
  "resources": [
    {
      "name": "nodes",
      "singularName": "",
      "namespaced": false,
      "kind": "NodeMetrics",
      "verbs": [
        "get",
        "list"
      ]
    },
    {
      "name": "pods",
      "singularName": "",
      "namespaced": true,
      "kind": "PodMetrics",
      "verbs": [
        "get",
        "list"
      ]
    }
  ]
}

# This API group mainly serves nodes and pods data

[root@master mainfest]# curl http://localhost:8080/apis/metrics.k8s.io/v1beta1/nodes
{
  "kind": "NodeMetricsList",
  "apiVersion": "metrics.k8s.io/v1beta1",
  "metadata": {
    "selfLink": "/apis/metrics.k8s.io/v1beta1/nodes"
  },
  "items": [
    {
      "metadata": {
        "name": "node02",
        "selfLink": "/apis/metrics.k8s.io/v1beta1/nodes/node02",
        "creationTimestamp": "2019-04-10T02:57:21Z"
      },
      "timestamp": "2019-04-10T02:57:14Z",
      "window": "30s",
      "usage": {
        "cpu": "41332743n",
        "memory": "702124Ki"
      }
    },
    {
      "metadata": {
        "name": "master",
        "selfLink": "/apis/metrics.k8s.io/v1beta1/nodes/master",
        "creationTimestamp": "2019-04-10T02:57:21Z"
      },
      "timestamp": "2019-04-10T02:57:15Z",
      "window": "30s",
      "usage": {
        "cpu": "156316878n",
        "memory": "1209616Ki"
      }
    },
    {
      "metadata": {
        "name": "node01",
        "selfLink": "/apis/metrics.k8s.io/v1beta1/nodes/node01",
        "creationTimestamp": "2019-04-10T02:57:21Z"
      },
      "timestamp": "2019-04-10T02:57:09Z",
      "window": "30s",
      "usage": {
        "cpu": "47843790n",
        "memory": "800144Ki"
      }
    }
  ]
}
```
Next, use kubectl top to view resource usage:
```shell
[root@master metrics-server]# kubectl top nodes
NAME     CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
master   146m         7%     1187Mi          68%
node01   45m          4%     782Mi           45%
node02   36m          3%     683Mi           39%

[root@master mainfest]# kubectl top pods -n kube-system
NAME                                     CPU(cores)   MEMORY(bytes)
canal-nbspn                              21m          52Mi
canal-pj6rx                              13m          43Mi
canal-rgsnp                              12m          43Mi
coredns-78d4cf999f-6cb69                 2m           10Mi
coredns-78d4cf999f-tflpn                 2m           10Mi
etcd-master                              16m          121Mi
kube-apiserver-master                    31m          517Mi
kube-controller-manager-master           39m          82Mi
kube-flannel-ds-amd64-5zrk7              2m           14Mi
kube-flannel-ds-amd64-pql5n              2m           12Mi
kube-flannel-ds-amd64-ssd29              2m           14Mi
kube-proxy-ch4vp                         2m           15Mi
kube-proxy-cz2rf                         2m           23Mi
kube-proxy-kdp7d                         4m           21Mi
kube-scheduler-master                    10m          21Mi
kubernetes-dashboard-6f9998798-klf4t     1m           15Mi
metrics-server-v0.3.1-65bd5d59b9-xvmns   1m           20Mi

[root@master metrics-server]# kubectl top pod -l k8s-app=kube-dns --containers=true -n kube-system
POD                        NAME      CPU(cores)   MEMORY(bytes)
coredns-78d4cf999f-6cb69   coredns   2m           10Mi
coredns-78d4cf999f-tflpn   coredns   2m           10Mi
```
Prometheus
Overview
Beyond the resource metrics above (CPU, memory), users and administrators need many more kinds of data: Kubernetes metrics, container metrics, node resource metrics, application metrics, and so on. The custom metrics API allows arbitrary metrics to be requested, but implementing that API requires a corresponding backend monitoring system. Prometheus was the first monitoring system to develop such an adapter: the Kubernetes Custom Metrics Adapter for Prometheus, provided by the k8s-prometheus-adapter project on GitHub.
Prometheus itself is a monitoring system, also split into a server side and an agent side. The server pulls data from monitored hosts, while each host runs a node_exporter agent that collects and exposes node data. To obtain Pod-level data, or data from applications such as MySQL, the corresponding exporters must be deployed as well. Data can be queried with PromQL, but because Prometheus is a third-party solution, native Kubernetes cannot interpret its custom metrics; k8s-prometheus-adapter is needed to translate these metric query interfaces into standard Kubernetes custom metrics.
Prometheus is an open-source service monitoring system and time-series database that provides a generic data model together with fast interfaces for data collection, storage, and querying. Its core component, the Prometheus server, periodically pulls data from statically configured monitoring targets or from targets discovered automatically through service discovery. When newly pulled data exceeds the configured in-memory buffer, it is persisted to storage.
Each monitored host exposes its metrics through a dedicated exporter program and waits for the Prometheus server to scrape it periodically. If alerting rules exist, the scraped data is evaluated against them; when a rule's condition is met, an alert is generated and sent to Alertmanager, which aggregates and routes the alerts. Targets that need to push data actively can send it to the Pushgateway component, which receives and temporarily stores the data until the Prometheus server scrapes it.
Any monitored target must first be registered with the monitoring system before time-series collection, storage, alerting, and display can take place. Targets can be specified statically through configuration, or managed dynamically by Prometheus through its service-discovery mechanism. The main components break down as follows:
- Monitoring agents such as node_exporter: collect host metrics across many dimensions, e.g. load average, CPU, memory, disk, and network.
- kubelet (cAdvisor): collects container metrics, which are also the core Kubernetes metrics. Per-container metrics include CPU usage and limits, filesystem read/write limits, memory usage and limits, and network packet send, receive, and drop rates.
- API Server: performance metrics for the API server, including control-queue performance, request rates, and latency.
- etcd: metrics for the etcd storage cluster.
- kube-state-metrics: derives a number of Kubernetes-related metrics, mostly counters and metadata about resource types, including object counts per type, resource quotas, container status, and Pod resource labels.
Prometheus can use the Kubernetes API server directly as its service-discovery system, dynamically finding and monitoring every monitorable object in the cluster. Note in particular that Pod resources must carry the following annotations before Prometheus will automatically discover them and scrape their built-in metrics:
- prometheus.io/scrape: whether to scrape this target's metrics; boolean, true or false.
- prometheus.io/path: the URL path used when scraping metrics, usually /metrics.
- prometheus.io/port: the socket port used when scraping, e.g. 8080.
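For example, a Pod that exposes metrics on port 8080 could opt in to scraping with annotations like these (the Pod name and image are illustrative placeholders, not from this walkthrough):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: myapp                        # hypothetical pod
  annotations:
    prometheus.io/scrape: "true"     # allow Prometheus to discover and scrape this Pod
    prometheus.io/path: "/metrics"   # path of the metrics endpoint
    prometheus.io/port: "8080"       # port of the metrics endpoint
spec:
  containers:
  - name: myapp
    image: myapp:latest              # assumed to serve /metrics on 8080
    ports:
    - containerPort: 8080
```

Annotation values are always strings, so booleans and port numbers must be quoted.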
Additionally, if Prometheus is only expected to serve as the backend for custom metrics, deploying just the Prometheus server is sufficient; it does not even need data persistence. To build a fully featured monitoring system, however, administrators also need to deploy node_exporter on every host, other specialized exporters as required, and Alertmanager.
Deploying Prometheus
Official manifests: https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/prometheus
Because the official YAML deployment requires a PVC, this walkthrough uses the learning-oriented manifests provided by the iKubernetes project instead; for production, follow the official recommendations.
```shell
[root@master metrics]# git clone https://github.com/iKubernetes/k8s-prom.git
Cloning into 'k8s-prom'...
remote: Enumerating objects: 49, done.
remote: Total 49 (delta 0), reused 0 (delta 0), pack-reused 49
Unpacking objects: 100% (49/49), done.
```
Create the prom namespace
```shell
[root@master metrics]# cd k8s-prom/
[root@master k8s-prom]# ls
k8s-prometheus-adapter  kube-state-metrics  namespace.yaml  node_exporter  podinfo  prometheus  README.md
[root@master k8s-prom]# kubectl apply -f namespace.yaml
namespace/prom created
```
Deploy node_exporter
```shell
[root@master k8s-prom]# cd node_exporter/
[root@master node_exporter]# kubectl apply -f .
daemonset.apps/prometheus-node-exporter created
service/prometheus-node-exporter created

[root@master node_exporter]# kubectl get ds -n prom
NAME                       DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
prometheus-node-exporter   3         3         3       3            3           <none>          100s
[root@master node_exporter]# kubectl get pods -n prom
NAME                             READY   STATUS    RESTARTS   AGE
prometheus-node-exporter-b2lk5   1/1     Running   0          104s
prometheus-node-exporter-d4l6v   1/1     Running   0          104s
prometheus-node-exporter-swngp   1/1     Running   0          104s
```
Deploy prometheus-server
```shell
[root@master node_exporter]# cd ../prometheus/
[root@master prometheus]# ll
total 24
-rw-r--r-- 1 root root 10132 Apr 10 11:20 prometheus-cfg.yaml
-rw-r--r-- 1 root root  1481 Apr 10 11:20 prometheus-deploy.yaml
-rw-r--r-- 1 root root   716 Apr 10 11:20 prometheus-rbac.yaml
-rw-r--r-- 1 root root   278 Apr 10 11:20 prometheus-svc.yaml
[root@master prometheus]# kubectl apply -f .
configmap/prometheus-config created
deployment.apps/prometheus-server created
clusterrole.rbac.authorization.k8s.io/prometheus created
serviceaccount/prometheus created
clusterrolebinding.rbac.authorization.k8s.io/prometheus created
service/prometheus created

# The deployment YAML sets a 2Gi memory limit; the node VMs here cannot satisfy it,
# so the Pod would stay Pending forever. Comment the limit out:
[root@master prometheus]# vim prometheus-deploy.yaml
        #resources:
        #  limits:
        #    memory: 2Gi
[root@master prometheus]# kubectl apply -f prometheus-deploy.yaml
deployment.apps/prometheus-server configured

[root@master prometheus]# kubectl get pods -n prom -w
NAME                                 READY   STATUS    RESTARTS   AGE
prometheus-node-exporter-b2lk5       1/1     Running   0          9m30s
prometheus-node-exporter-d4l6v       1/1     Running   0          9m30s
prometheus-node-exporter-swngp       1/1     Running   0          9m30s
prometheus-server-556b8896d6-ld7xj   1/1     Running   0          35s
```
Check the logs after deployment:
```shell
[root@master prometheus]# kubectl logs prometheus-server-556b8896d6-ld7xj -n prom
level=info ts=2019-04-10T03:33:57.752158604Z caller=main.go:220 msg="Starting Prometheus" version="(version=2.2.1, branch=HEAD, revision=bc6058c81272a8d938c05e75607371284236aadc)"
level=info ts=2019-04-10T03:33:57.752221598Z caller=main.go:221 build_context="(go=go1.10, user=root@149e5b3f0829, date=20180314-14:15:45)"
level=info ts=2019-04-10T03:33:57.752240032Z caller=main.go:222 host_details="(Linux 3.10.0-862.el7.x86_64 #1 SMP Fri Apr 20 16:44:24 UTC 2018 x86_64 prometheus-server-556b8896d6-ld7xj (none))"
level=info ts=2019-04-10T03:33:57.752255713Z caller=main.go:223 fd_limits="(soft=65536, hard=65536)"
level=info ts=2019-04-10T03:33:57.755420653Z caller=main.go:504 msg="Starting TSDB ..."
level=info ts=2019-04-10T03:33:57.7620657Z caller=web.go:382 component=web msg="Start listening for connections" address=0.0.0.0:9090
level=info ts=2019-04-10T03:33:57.7632425Z caller=main.go:514 msg="TSDB started"
level=info ts=2019-04-10T03:33:57.764611774Z caller=main.go:588 msg="Loading configuration file" filename=/etc/prometheus/prometheus.yml
level=info ts=2019-04-10T03:33:57.765669001Z caller=kubernetes.go:191 component="discovery manager scrape" discovery=k8s msg="Using pod service account via in-cluster config"
level=info ts=2019-04-10T03:33:57.76626263Z caller=kubernetes.go:191 component="discovery manager scrape" discovery=k8s msg="Using pod service account via in-cluster config"
level=info ts=2019-04-10T03:33:57.76668914Z caller=kubernetes.go:191 component="discovery manager scrape" discovery=k8s msg="Using pod service account via in-cluster config"
level=info ts=2019-04-10T03:33:57.767331363Z caller=kubernetes.go:191 component="discovery manager scrape" discovery=k8s msg="Using pod service account via in-cluster config"
level=info ts=2019-04-10T03:33:57.768433541Z caller=kubernetes.go:191 component="discovery manager scrape" discovery=k8s msg="Using pod service account via in-cluster config"
level=info ts=2019-04-10T03:33:57.768948262Z caller=main.go:491 msg="Server is ready to receive web requests."
```
The Prometheus UI can now be reached at NodeIP:30090, where a number of built-in monitoring targets are already visible.
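These built-in targets come from the scrape jobs defined in prometheus-cfg.yaml. A typical job that honors the prometheus.io/* annotations combines Kubernetes service discovery with relabeling, roughly like the sketch below (this is the conventional pattern, not copied verbatim from the bundled config):

```yaml
- job_name: 'kubernetes-pods'
  kubernetes_sd_configs:
  - role: pod                        # discover every Pod through the API server
  relabel_configs:
  # keep only Pods annotated prometheus.io/scrape: "true"
  - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
    action: keep
    regex: true
  # override the metrics path from prometheus.io/path, if set
  - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
    action: replace
    target_label: __metrics_path__
    regex: (.+)
  # rewrite the target address to use the port from prometheus.io/port
  - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
    action: replace
    regex: ([^:]+)(?::\d+)?;(\d+)
    replacement: $1:$2
    target_label: __address__
```

The `__meta_kubernetes_pod_annotation_*` labels are how Prometheus surfaces Pod annotations to relabeling, which is what makes the annotation convention from the previous section work.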
Deploy kube-state-metrics
```shell
[root@master prometheus]# cd ../kube-state-metrics/
# Change the image in kube-state-metrics-deploy.yaml:
        image: xiaobai20201/kube-state-metrics-amd64:v1.3.1

[root@master kube-state-metrics]# kubectl apply -f .
deployment.apps/kube-state-metrics created
serviceaccount/kube-state-metrics created
clusterrole.rbac.authorization.k8s.io/kube-state-metrics created
clusterrolebinding.rbac.authorization.k8s.io/kube-state-metrics created
service/kube-state-metrics created

[root@master kube-state-metrics]# kubectl get pods -n prom
NAME                                 READY   STATUS    RESTARTS   AGE
kube-state-metrics-84c69bb8-87l7n    1/1     Running   0          19s
prometheus-node-exporter-b2lk5       1/1     Running   0          21m
prometheus-node-exporter-d4l6v       1/1     Running   0          21m
prometheus-node-exporter-swngp       1/1     Running   0          21m
prometheus-server-556b8896d6-ld7xj   1/1     Running   0          12m
```
Create the certificate
By default the Kubernetes cluster serves over HTTPS, while k8s-prometheus-adapter serves over HTTP by default. The adapter needs a certificate signed by a CA the cluster recognizes, so a certificate is created here by hand:
```shell
[root@master kube-state-metrics]# cd /etc/kubernetes/pki/
[root@master pki]# (umask 077;openssl genrsa -out serving.key)
Generating RSA private key, 2048 bit long modulus
.........+++
..+++
e is 65537 (0x10001)

# Create a CSR for CN=serving (this produces the serving.csr signed in the next step)
[root@master pki]# openssl req -new -key serving.key -out serving.csr -subj "/CN=serving"

[root@master pki]# openssl x509 -req -in serving.csr -CA ./ca.crt -CAkey ./ca.key -CAcreateserial -out serving.crt -days 3650
Signature ok
subject=/CN=serving
Getting CA Private Key

[root@master pki]# kubectl create secret generic cm-adapter-serving-certs --from-file=serving.crt=./serving.crt --from-file=serving.key -n prom
secret/cm-adapter-serving-certs created
[root@master pki]# kubectl get secret -n prom
NAME                             TYPE                                  DATA   AGE
cm-adapter-serving-certs         Opaque                                2      13s
default-token-r88nt              kubernetes.io/service-account-token   3      37m
kube-state-metrics-token-4rrqw   kubernetes.io/service-account-token   3      14m
prometheus-token-jdm5f           kubernetes.io/service-account-token   3      31m
```
Deploy k8s-prometheus-adapter
The bundled custom-metrics-apiserver-deployment.yaml and custom-metrics-config-map.yaml are slightly broken, so download these two files from the k8s-prometheus-adapter project instead:
```shell
[root@master k8s-prometheus-adapter]# wget https://raw.githubusercontent.com/DirectXMan12/k8s-prometheus-adapter/master/deploy/manifests/custom-metrics-apiserver-deployment.yaml
[root@master k8s-prometheus-adapter]# wget https://raw.githubusercontent.com/DirectXMan12/k8s-prometheus-adapter/master/deploy/manifests/custom-metrics-config-map.yaml
# Change the namespace in the downloaded files to prom
```
Apply
```shell
[root@master k8s-prometheus-adapter]# kubectl apply -f .
clusterrolebinding.rbac.authorization.k8s.io/custom-metrics:system:auth-delegator created
rolebinding.rbac.authorization.k8s.io/custom-metrics-auth-reader created
deployment.apps/custom-metrics-apiserver created
clusterrolebinding.rbac.authorization.k8s.io/custom-metrics-resource-reader created
serviceaccount/custom-metrics-apiserver created
service/custom-metrics-apiserver created
apiservice.apiregistration.k8s.io/v1beta1.custom.metrics.k8s.io created
clusterrole.rbac.authorization.k8s.io/custom-metrics-server-resources created
configmap/adapter-config created
clusterrole.rbac.authorization.k8s.io/custom-metrics-resource-reader created
clusterrolebinding.rbac.authorization.k8s.io/hpa-controller-custom-metrics created

[root@master k8s-prometheus-adapter]# kubectl get pods -n prom
NAME                                      READY   STATUS    RESTARTS   AGE
custom-metrics-apiserver-c86bfc77-dtkcn   1/1     Running   0          58s
kube-state-metrics-84c69bb8-87l7n         1/1     Running   0          140m
prometheus-node-exporter-b2lk5            1/1     Running   0          161m
prometheus-node-exporter-d4l6v            1/1     Running   0          161m
prometheus-node-exporter-swngp            1/1     Running   0          161m
prometheus-server-556b8896d6-ld7xj        1/1     Running   0          152m

[root@master k8s-prometheus-adapter]# kubectl api-versions |grep custom
custom.metrics.k8s.io/v1beta1
```
Displaying Data with Grafana
```yaml
[root@master metrics]# vim grafana.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: monitoring-grafana
  namespace: prom        # change the namespace to prom
spec:
  replicas: 1
  selector:
    matchLabels:
      task: monitoring
      k8s-app: grafana
  template:
    metadata:
      labels:
        task: monitoring
        k8s-app: grafana
    spec:
      containers:
      - name: grafana
        image: registry.cn-hangzhou.aliyuncs.com/google_containers/heapster-grafana-amd64:v5.0.4
        ports:
        - containerPort: 3000
          protocol: TCP
        volumeMounts:
        - mountPath: /etc/ssl/certs
          name: ca-certificates
          readOnly: true
        - mountPath: /var
          name: grafana-storage
        env: # this reuses the old Heapster Grafana config; comment out the InfluxDB variable
        #- name: INFLUXDB_HOST
        #  value: monitoring-influxdb
        - name: GF_SERVER_HTTP_PORT
          value: "3000"
        - name: GF_AUTH_BASIC_ENABLED
          value: "false"
        - name: GF_AUTH_ANONYMOUS_ENABLED
          value: "true"
        - name: GF_AUTH_ANONYMOUS_ORG_ROLE
          value: Admin
        - name: GF_SERVER_ROOT_URL
          value: /
      volumes:
      - name: ca-certificates
        hostPath:
          path: /etc/ssl/certs
      - name: grafana-storage
        emptyDir: {}
---
apiVersion: v1
kind: Service
metadata:
  labels:
    kubernetes.io/cluster-service: 'true'
    kubernetes.io/name: monitoring-grafana
  name: monitoring-grafana
  namespace: prom
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 3000
  selector:
    k8s-app: grafana

[root@master metrics]# kubectl apply -f grafana.yaml
deployment.apps/monitoring-grafana created
service/monitoring-grafana created

[root@master metrics]# kubectl get pods -n prom
NAME                                      READY   STATUS    RESTARTS   AGE
custom-metrics-apiserver-c86bfc77-dtkcn   1/1     Running   0          8m56s
kube-state-metrics-84c69bb8-87l7n         1/1     Running   0          148m
monitoring-grafana-dcf785fd8-f7q4g        1/1     Running   0          2m4s
prometheus-node-exporter-b2lk5            1/1     Running   0          169m
prometheus-node-exporter-d4l6v            1/1     Running   0          169m
prometheus-node-exporter-swngp            1/1     Running   0          169m
prometheus-server-556b8896d6-ld7xj        1/1     Running   0          160m

[root@master metrics]# kubectl get svc -n prom
NAME                       TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
custom-metrics-apiserver   ClusterIP   10.107.119.218   <none>        443/TCP          9m16s
kube-state-metrics         ClusterIP   10.103.206.116   <none>        8080/TCP         149m
monitoring-grafana         NodePort    10.109.0.252     <none>        80:30215/TCP     2m23s
prometheus                 NodePort    10.101.97.208    <none>        9090:30090/TCP   166m
prometheus-node-exporter   ClusterIP   None             <none>        9100/TCP         169m
```
The monitoring-grafana service is exposed on NodePort 30215, so it can be reached in a browser at http://10.0.0.10:30215.
Grafana ships without Kubernetes dashboards by default; ready-made Kubernetes dashboard templates can be downloaded from grafana.com:
https://grafana.com/dashboards