Fixing uneven pod scheduling in Kubernetes
The problem and its causes
Kubernetes places pods through the scheduler (kube-scheduler). During scheduling, several situations can lead to an uneven distribution of pods across nodes, for example:
- node failures
- new nodes being added to the cluster
- under-utilization of node resources
All of these can leave pods unevenly distributed across the cluster. An overloaded node may, for example, trigger OOM kills of its pods and take services down.
Of the causes above, under-utilized node resources are where problems appear most easily: unreasonable requests and limits, or pods with no requests/limits at all, will both skew scheduling.
Solution and analysis
Before going further, we need a metrics component installed; for the installation steps see the earlier post on deploying metrics in k8s.
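If you do not have it yet, one common way is to apply the upstream metrics-server manifest; the URL below is the official release manifest, but verify it matches your cluster version before applying:
# Install metrics-server from the upstream manifest (check compatibility with your cluster)
kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
# Confirm the metrics API is serving
kubectl top nodes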
During scheduling, kube-scheduler goes through a filtering (predicates) phase and a scoring (priorities) phase to pick a usable node and place the pod on it. How does it select a node in these two phases?
The most fundamental criterion is whether the node still has allocatable resources, which we can inspect with kubectl describe node <node-name>. Let's analyze the cluster from that angle.
Check the current resource usage of each node.
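For reference, these are the kinds of commands used here to look at per-node allocations and live usage; the grep pattern is only an illustration of how to jump to the relevant section of the describe output:
# Requests/limits already allocated on a node (scheduling decisions are based on requests)
kubectl describe node <node-name> | grep -A 8 "Allocated resources"
# Live CPU/memory usage reported by metrics-server
kubectl top nodes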
In my cluster the output shows three worker nodes, and the resources are spread across them very unevenly. The key point is that when scheduling, Kubernetes only accounts for the requests values; whatever you set for limits, the scheduler does not care. So as long as the sum of requests on a node has not hit its allocatable capacity, pods can in principle keep landing on that node, and that is exactly how the imbalance appears. What can we do about it?
- Set requests and limits on every pod. If resources allow, make requests equal to limits so the pod gets the Guaranteed QoS class (see the sketch after this list).
- Rebalance: either manually or via a scheduled task, redistribute the current pods across nodes based on several dimensions.
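A minimal sketch of what that looks like in a Deployment spec; the name, image and sizes below are placeholders, the point is only that requests equal limits, which gives the pod the Guaranteed QoS class:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-api              # hypothetical workload
spec:
  replicas: 3
  selector:
    matchLabels:
      app: demo-api
  template:
    metadata:
      labels:
        app: demo-api
    spec:
      containers:
      - name: demo-api
        image: nginx:1.25     # placeholder image
        resources:
          requests:           # what the scheduler actually accounts for
            cpu: 500m
            memory: 512Mi
          limits:             # equal to requests => Guaranteed QoS
            cpu: 500m
            memory: 512Mi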
The rebalancing tool: Descheduler
Overview
Descheduler exists to make up for the fact that Kubernetes scheduling is a one-shot decision. It runs as a scheduled task and, according to the policies it implements, rebalances how pods are distributed across the cluster.
As of now, its implemented strategies and the features planned on its roadmap are as follows.
Implemented strategies
- RemoveDuplicates: remove duplicate pods
- LowNodeUtilization: handle under-utilized nodes
- RemovePodsViolatingInterPodAntiAffinity: remove pods that violate inter-pod anti-affinity
- RemovePodsViolatingNodeAffinity: remove pods that violate node affinity
Features planned on the roadmap
- Strategy to consider taints and tolerations
- Consideration of pod affinity
- Strategy to consider pod life time
- Strategy to consider number of pending pods
- Integration with cluster autoscaler
- Integration with metrics providers for obtaining real load metrics
- Consideration of Kubernetes's scheduler's predicates
Strategy details
- RemoveDuplicates: ensures that at most one pod of each ReplicaSet (RS), ReplicationController (RC), Deployment or Job runs on any given node. Extra copies are evicted to other nodes so the workload is spread more evenly across the cluster.
- LowNodeUtilization: (a) finds under-utilized nodes and, where possible, lets pods that get evicted elsewhere be rescheduled onto them; (b) whether a node counts as under-utilized is decided by a configurable set of thresholds, expressed as percentages of CPU, memory and pod count, and a node is only considered under-utilized when all of the evaluated resources are below their thresholds; (c) a second set, targetThresholds, decides which nodes are over-utilized and should have pods evicted, while any node whose usage falls between thresholds and targetThresholds is considered properly utilized, so no pods are evicted from it or onto it; (d) a related parameter, numberOfNodes, makes the strategy act only once at least that many nodes are under-utilized. A minimal policy sketch follows this list.
- RemovePodsViolatingInterPodAntiAffinity: ensures that pods violating inter-pod anti-affinity are evicted. For example, if podA runs on a node together with podB and podC, and podB and podC carry anti-affinity rules forbidding them from running alongside podA, then podA is evicted so that podB and podC can run normally.
- RemovePodsViolatingNodeAffinity: ensures that pods violating node affinity are evicted. For example, if podA runs on nodeA and nodeA later no longer satisfies podA's node affinity, podA is evicted and, if some nodeB does satisfy the affinity, ends up on nodeB.
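To make the relationship between thresholds and targetThresholds concrete, here is a minimal DeschedulerPolicy sketch; the percentage values are illustrative only, and the full ConfigMap actually used in this article appears in the deployment section below:
apiVersion: "descheduler/v1alpha1"
kind: "DeschedulerPolicy"
strategies:
  "LowNodeUtilization":
    enabled: true
    params:
      nodeResourceUtilizationThresholds:
        thresholds:          # a node below ALL of these is considered under-utilized
          "cpu": 20
          "memory": 20
          "pods": 20
        targetThresholds:    # nodes above these are treated as over-utilized and have pods evicted
          "cpu": 50
          "memory": 50
          "pods": 50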
Eviction rules
When Descheduler decides to evict pods, it follows these rules:
- Critical pods (with the annotation scheduler.alpha.kubernetes.io/critical-pod) are never evicted.
- Pods (static or mirrored pods or standalone pods) not part of an RC, RS, Deployment or Job are never evicted, because these pods would not be recreated.
- Pods associated with DaemonSets are never evicted.
- Pods with local storage are never evicted.
- BestEffort pods are evicted before Burstable and Guaranteed pods.
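For reference, this is roughly what marking an add-on pod as critical looks like; a sketch with placeholder names, and note that newer clusters rely on priorityClassName rather than the legacy annotation mentioned above:
apiVersion: v1
kind: Pod
metadata:
  name: critical-addon-demo                        # hypothetical name
  namespace: kube-system
  annotations:
    scheduler.alpha.kubernetes.io/critical-pod: "" # legacy critical-pod marker
spec:
  priorityClassName: system-cluster-critical       # current way to express criticality
  containers:
  - name: addon
    image: registry.k8s.io/pause:3.9               # placeholder image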
Deployment
Descheduler runs inside a pod as a Job, since a Job can be run repeatedly without human intervention. To avoid evicting itself, Descheduler runs as a critical pod, which means it can only be created in the kube-system namespace. For background on critical pods see: Guaranteed Scheduling For Critical Add-On Pods.
To use Descheduler, we need to compile the tool and build its Docker image, then create a ClusterRole, ServiceAccount, ClusterRoleBinding, ConfigMap and Job.
The yaml files can be found at: https://github.com/kubernetes-sigs/descheduler
git clone https://github.com/kubernetes-sigs/descheduler.git
Run As A Job
kubectl create -f kubernetes/rbac.yaml
kubectl create -f kubernetes/configmap.yaml
kubectl create -f kubernetes/job.yaml
Run As A CronJob
kubectl create -f kubernetes/rbac.yaml
kubectl create -f kubernetes/configmap.yaml
kubectl create -f kubernetes/cronjob.yaml
There are two ways to launch it: as a one-off Job or as a CronJob. I recommend the CronJob form so rebalancing runs on a schedule; a sketch of the relevant spec follows.
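A minimal sketch of the CronJob wrapper, assuming the policy ConfigMap used in this article; the schedule, image tag, service account name and mount paths are illustrative, so compare against the kubernetes/cronjob.yaml shipped in the cloned repository:
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: descheduler-cronjob
  namespace: kube-system
spec:
  schedule: "*/30 * * * *"                  # how often to rebalance (illustrative)
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: descheduler-sa              # created by rbac.yaml
          restartPolicy: Never
          containers:
          - name: descheduler
            image: k8s.gcr.io/descheduler/descheduler:v0.18.0   # pick the tag matching your build
            command: ["/bin/descheduler"]
            args: ["--policy-config-file", "/policy-dir/policy.yaml", "-v", "3"]
            volumeMounts:
            - name: policy-volume
              mountPath: /policy-dir
          volumes:
          - name: policy-volume
            configMap:
              name: descheduler-policy-configmap          # the ConfigMap shown below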
Once it is launched, verify that Descheduler started successfully:
# kubectl get pod -n kube-system | grep descheduler
descheduler-job-6qtk2   1/1   Running   0   158m
Next, verify whether the pods are evenly distributed across nodes.
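A quick way to count pods per node (one possible one-liner; any equivalent works):
kubectl get pods -A -o custom-columns=NODE:.spec.nodeName --no-headers | sort | uniq -c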
The counts show that node02 currently runs 20 pods, still a few fewer than the other nodes. If we only want to rebalance on pod count, we can adjust the Descheduler configuration as follows:
# cat kubernetes/configmap.yaml
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: descheduler-policy-configmap
  namespace: kube-system
data:
  policy.yaml: |
    apiVersion: "descheduler/v1alpha1"
    kind: "DeschedulerPolicy"
    strategies:
      "RemoveDuplicates":
        enabled: true
      "RemovePodsViolatingInterPodAntiAffinity":
        enabled: true
      "LowNodeUtilization":
        enabled: true
        params:
          nodeResourceUtilizationThresholds:
            thresholds:            # thresholds
              # "cpu": 20          # comment out the cpu and memory entries
              # "memory": 20
              "pods": 24           # raise the pod value a bit
            targetThresholds:      # target thresholds
              # "cpu": 50
              # "memory": 50
              "pods": 25
After the changes, simply re-create the ConfigMap and the CronJob:
kubectl delete -f kubernetes/configmap.yaml
kubectl apply -f kubernetes/configmap.yaml
kubectl delete -f kubernetes/cronjob.yaml
kubectl apply -f kubernetes/cronjob.yaml
Then take a look at Descheduler's log:
# kubectl logs -n kube-system descheduler-job-9rc9h
I0729 08:48:45.361655       1 lownodeutilization.go:151] Node "k8s-node02" is under utilized with usage: api.ResourceThresholds{"cpu":44.375, "memory":24.682000160690105, "pods":22.727272727272727}
I0729 08:48:45.361772       1 lownodeutilization.go:154] Node "k8s-node03" is over utilized with usage: api.ResourceThresholds{"cpu":49.375, "memory":27.064916842870552, "pods":24.545454545454547}
I0729 08:48:45.361807       1 lownodeutilization.go:151] Node "k8s-master01" is under utilized with usage: api.ResourceThresholds{"cpu":50, "memory":3.6347778465158265, "pods":8.181818181818182}
I0729 08:48:45.361828       1 lownodeutilization.go:151] Node "k8s-master02" is under utilized with usage: api.ResourceThresholds{"cpu":40, "memory":0, "pods":5.454545454545454}
I0729 08:48:45.361863       1 lownodeutilization.go:151] Node "k8s-master03" is under utilized with usage: api.ResourceThresholds{"cpu":40, "memory":0, "pods":5.454545454545454}
I0729 08:48:45.361977       1 lownodeutilization.go:154] Node "k8s-node01" is over utilized with usage: api.ResourceThresholds{"cpu":46.875, "memory":32.25716687667426, "pods":27.272727272727273}
I0729 08:48:45.361994       1 lownodeutilization.go:66] Criteria for a node under utilization: CPU: 0, Mem: 0, Pods: 23
I0729 08:48:45.362016       1 lownodeutilization.go:73] Total number of underutilized nodes: 4
I0729 08:48:45.362025       1 lownodeutilization.go:90] Criteria for a node above target utilization: CPU: 0, Mem: 0, Pods: 23
I0729 08:48:45.362033       1 lownodeutilization.go:92] Total number of nodes above target utilization: 2
I0729 08:48:45.362051       1 lownodeutilization.go:202] Total capacity to be moved: CPU:0, Mem:0, Pods:55.2
I0729 08:48:45.362059       1 lownodeutilization.go:203] ********Number of pods evicted from each node:***********
I0729 08:48:45.362066       1 lownodeutilization.go:210] evicting pods from node "k8s-node01" with usage: api.ResourceThresholds{"cpu":46.875, "memory":32.25716687667426, "pods":27.272727272727273}
I0729 08:48:45.362236       1 lownodeutilization.go:213] allPods:30, nonRemovablePods:3, bestEffortPods:2, burstablePods:25, guaranteedPods:0
I0729 08:48:45.362246       1 lownodeutilization.go:217] All pods have priority associated with them. Evicting pods based on priority
I0729 08:48:45.381931       1 evictions.go:102] Evicted pod: "flink-taskmanager-7c7557d6bc-ntnp2" in namespace "default"
I0729 08:48:45.381967       1 lownodeutilization.go:270] Evicted pod: "flink-taskmanager-7c7557d6bc-ntnp2"
I0729 08:48:45.381980       1 lownodeutilization.go:283] updated node usage: api.ResourceThresholds{"cpu":46.875, "memory":32.25716687667426, "pods":26.363636363636363}
I0729 08:48:45.382268       1 event.go:278] Event(v1.ObjectReference{Kind:"Pod", Namespace:"default", Name:"flink-taskmanager-7c7557d6bc-ntnp2", UID:"6a5374de-a204-4d2c-a302-ff09c054a43b", APIVersion:"v1", ResourceVersion:"4945574", FieldPath:""}): type: 'Normal' reason: 'Descheduled' pod evicted by sigs.k8s.io/descheduler
I0729 08:48:45.399567       1 evictions.go:102] Evicted pod: "flink-taskmanager-7c7557d6bc-t2htk" in namespace "default"
I0729 08:48:45.399613       1 lownodeutilization.go:270] Evicted pod: "flink-taskmanager-7c7557d6bc-t2htk"
I0729 08:48:45.399626       1 lownodeutilization.go:283] updated node usage: api.ResourceThresholds{"cpu":46.875, "memory":32.25716687667426, "pods":25.454545454545453}
I0729 08:48:45.400503       1 event.go:278] Event(v1.ObjectReference{Kind:"Pod", Namespace:"default", Name:"flink-taskmanager-7c7557d6bc-t2htk", UID:"bd255dbc-bb05-4258-ac0b-e5be3dc4efe8", APIVersion:"v1", ResourceVersion:"4705479", FieldPath:""}): type: 'Normal' reason: 'Descheduled' pod evicted by sigs.k8s.io/descheduler
I0729 08:48:45.450568       1 evictions.go:102] Evicted pod: "oauth-center-tools-api-645d477bcf-hnb8g" in namespace "default"
I0729 08:48:45.450603       1 lownodeutilization.go:270] Evicted pod: "oauth-center-tools-api-645d477bcf-hnb8g"
I0729 08:48:45.450619       1 lownodeutilization.go:283] updated node usage: api.ResourceThresholds{"cpu":45.625, "memory":31.4545002047819, "pods":24.545454545454543}
I0729 08:48:45.451240       1 event.go:278] Event(v1.ObjectReference{Kind:"Pod", Namespace:"default", Name:"oauth-center-tools-api-645d477bcf-hnb8g", UID:"caba0aa8-76de-4e23-b163-c660df0ba54d", APIVersion:"v1", ResourceVersion:"3800151", FieldPath:""}): type: 'Normal' reason: 'Descheduled' pod evicted by sigs.k8s.io/descheduler
I0729 08:48:45.477605       1 evictions.go:102] Evicted pod: "dazzle-core-api-5d4c899b84-xhlkl" in namespace "default"
I0729 08:48:45.477636       1 lownodeutilization.go:270] Evicted pod: "dazzle-core-api-5d4c899b84-xhlkl"
I0729 08:48:45.477649       1 lownodeutilization.go:283] updated node usage: api.ResourceThresholds{"cpu":44.375, "memory":30.65183353288954, "pods":23.636363636363633}
I0729 08:48:45.477992       1 event.go:278] Event(v1.ObjectReference{Kind:"Pod", Namespace:"default", Name:"dazzle-core-api-5d4c899b84-xhlkl", UID:"ce216892-6c50-4c31-b30a-cbe5c708285e", APIVersion:"v1", ResourceVersion:"3800074", FieldPath:""}): type: 'Normal' reason: 'Descheduled' pod evicted by sigs.k8s.io/descheduler
I0729 08:48:45.523774       1 request.go:557] Throttling request took 141.499557ms, request: POST:https://10.96.0.1:443/api/v1/namespaces/default/events
I0729 08:48:45.569073       1 evictions.go:102] Evicted pod: "live-foreignapi-api-7bc679b789-z8jnr" in namespace "default"
I0729 08:48:45.569105       1 lownodeutilization.go:270] Evicted pod: "live-foreignapi-api-7bc679b789-z8jnr"
I0729 08:48:45.569119       1 lownodeutilization.go:283] updated node usage: api.ResourceThresholds{"cpu":43.125, "memory":29.84916686099718, "pods":22.727272727272723}
I0729 08:48:45.569151       1 lownodeutilization.go:236] 6 pods evicted from node "k8s-node01" with usage map[cpu:43.125 memory:29.84916686099718 pods:22.727272727272723]
I0729 08:48:45.569172       1 lownodeutilization.go:210] evicting pods from node "k8s-node03" with usage: api.ResourceThresholds{"cpu":49.375, "memory":27.064916842870552, "pods":24.545454545454547}
I0729 08:48:45.569418       1 lownodeutilization.go:213] allPods:27, nonRemovablePods:2, bestEffortPods:0, burstablePods:25, guaranteedPods:0
I0729 08:48:45.569430       1 lownodeutilization.go:217] All pods have priority associated with them. Evicting pods based on priority
I0729 08:48:45.603962       1 event.go:278] Event(v1.ObjectReference{Kind:"Pod", Namespace:"default", Name:"live-foreignapi-api-7bc679b789-z8jnr", UID:"37c698e3-b63e-4ef1-917b-ac6bc1be05e0", APIVersion:"v1", ResourceVersion:"3800113", FieldPath:""}): type: 'Normal' reason: 'Descheduled' pod evicted by sigs.k8s.io/descheduler
I0729 08:48:45.639483       1 evictions.go:102] Evicted pod: "dazzle-contentlib-api-575f599994-khdn5" in namespace "default"
I0729 08:48:45.639512       1 lownodeutilization.go:270] Evicted pod: "dazzle-contentlib-api-575f599994-khdn5"
I0729 08:48:45.639525       1 lownodeutilization.go:283] updated node usage: api.ResourceThresholds{"cpu":48.125, "memory":26.26225017097819, "pods":23.636363636363637}
I0729 08:48:45.645446       1 event.go:278] Event(v1.ObjectReference{Kind:"Pod", Namespace:"default", Name:"dazzle-contentlib-api-575f599994-khdn5", UID:"068aa2ad-f160-4aaa-b25b-f0a9603f9011", APIVersion:"v1", ResourceVersion:"3674763", FieldPath:""}): type: 'Normal' reason: 'Descheduled' pod evicted by sigs.k8s.io/descheduler
I0729 08:48:45.780324       1 evictions.go:102] Evicted pod: "dazzle-datasync-task-577c46668-lltg4" in namespace "default"
I0729 08:48:45.780544       1 lownodeutilization.go:270] Evicted pod: "dazzle-datasync-task-577c46668-lltg4"
I0729 08:48:45.780565       1 lownodeutilization.go:283] updated node usage: api.ResourceThresholds{"cpu":46.875, "memory":25.45958349908583, "pods":22.727272727272727}
I0729 08:48:45.780600       1 lownodeutilization.go:236] 4 pods evicted from node "k8s-node03" with usage map[cpu:46.875 memory:25.45958349908583 pods:22.727272727272727]
I0729 08:48:45.780620       1 lownodeutilization.go:102] Total number of pods evicted: 11
From this log we can see that Node "k8s-node01" is over utilized, followed by "evicting pods from node k8s-node01": Descheduler is rescheduling pods off the overloaded nodes. Once the run finishes, counting pods per node again shows a noticeably more even distribution.