These are my priority classes:
NAME            VALUE     GLOBAL-DEFAULT   AGE
k8-monitoring   1000000   false            4d7h
k8-system       500000    false            4d7h
k8-user         1000      false            4d7h
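The class manifests themselves are not shown here, but for context, a PriorityClass such as k8-user can be created along these lines (the description text is just illustrative):
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: k8-user
value: 1000
globalDefault: false
description: "Lowest of the three tiers, for regular user workloads."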
I am trying out a test for priorities within the confines of a namespace pod quota. Can someone confirm whether the approach is right? If not, please guide me.
apiVersion: v1
kind: Namespace
metadata:
  name: priority-test
---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: priority-pod-quota
  namespace: priority-test
spec:
  hard:
    pods: "5"
---
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: user-priority
  namespace: priority-test
  labels:
    tier: x3
spec:
  # modify replicas according to your case
  replicas: 3
  selector:
    matchLabels:
      tier: x3
  template:
    metadata:
      labels:
        tier: x3
    spec:
      priorityClassName: k8-user
      containers:
      - name: php-redis
        image: gcr.io/google_samples/gb-frontend:v3
---
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: system-priority
  namespace: priority-test
  labels:
    tier: x2
spec:
  # modify replicas according to your case
  replicas: 3
  selector:
    matchLabels:
      tier: x2
  template:
    metadata:
      labels:
        tier: x2
    spec:
      priorityClassName: k8-system
      containers:
      - name: php-redis
        image: gcr.io/google_samples/gb-frontend:v3
---
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: monitoring-priority
  namespace: priority-test
  labels:
    tier: x1
spec:
  # modify replicas according to your case
  replicas: 3
  selector:
    matchLabels:
      tier: x1
  template:
    metadata:
      labels:
        tier: x1
    spec:
      priorityClassName: k8-monitoring
      containers:
      - name: php-redis
        image: gcr.io/google_samples/gb-frontend:v3
I am running this test on EKS v1.15 but I am not getting the priority behavior as designed. Something tells me I need another pair of eyes on it.
I should not be seeing this; the high-priority pods should be running:
NAME                  DESIRED   CURRENT   READY   AGE
monitoring-priority   3         0         0       17m
system-priority       3         2         2       17m
user-priority         3         3         3       17m
I have also read the excellent solution given by Dawid Kruk in K8s pod priority & outOfPods.
You have defined a ResourceQuota with pods: "5" as a hard requirement. This ResourceQuota is applied at the namespace level to all pods, regardless of their priority class. That's why you see 3 pods as current in user-priority and 2 pods as current in system-priority. The rest of the pods are not able to run because of the limit of 5 pods defined in the ResourceQuota. If you check kubectl get events you should see a 403 FORBIDDEN error related to the resource quota.
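For example, you should see events along these lines (the pod name and counts here are illustrative, not copied from your cluster):
kubectl get events -n priority-test
...
Warning   FailedCreate   replicaset/monitoring-priority   Error creating: pods "monitoring-priority-abcde" is forbidden: exceeded quota: priority-pod-quota, requested: pods=1, used: pods=5, limited: pods=5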
ResourceQuota is enforced by an admission controller that will not let pods get into the scheduling queue at all once the quota is reached, which is what is happening now. So you need to increase the ResourceQuota limit to proceed with testing pod priority and preemption.
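For example, raising the limit comfortably above the 9 replicas you are requesting (the exact number is up to you):
apiVersion: v1
kind: ResourceQuota
metadata:
  name: priority-pod-quota
  namespace: priority-test
spec:
  hard:
    pods: "15"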
The right way to test pod priority and preemption is to deploy enough pods to reach a node's resource capacity and verify that low-priority pods are being evicted to schedule high-priority pods.
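A minimal sketch of such a test, assuming a single node with roughly 2 vCPU allocatable (the names, image, replica count, and CPU requests are placeholders to adjust to your cluster):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: low-priority-filler
  namespace: priority-test
spec:
  # enough replicas to exhaust the node's allocatable CPU
  replicas: 4
  selector:
    matchLabels:
      app: filler
  template:
    metadata:
      labels:
        app: filler
    spec:
      priorityClassName: k8-user
      containers:
      - name: pause
        image: k8s.gcr.io/pause:3.1
        resources:
          requests:
            cpu: "500m"
---
apiVersion: v1
kind: Pod
metadata:
  name: high-priority-pod
  namespace: priority-test
spec:
  priorityClassName: k8-monitoring
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1
    resources:
      requests:
        cpu: "500m"
Once the filler pods have consumed the node's CPU, this high-priority pod cannot fit without preemption, and you should see the scheduler evict one of the k8-user pods to make room for it.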