I have three node pools in total, as follows:
1. database pool - regular node pool
2. content pool - regular node pool
3. content spot pool - spot node pool
Initially, the content pool has a node count of 0 with the autoscaler enabled. I have deployed one nginx Deployment on the content spot pool, which has a minimum node count of 1 and a maximum node count of 3. The Deployment file for nginx is as follows:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
      affinity:
        nodeAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 2
            preference:
              matchExpressions:
              - key: agentpool
                operator: In
                values:
                - contentspotpool
          - weight: 1
            preference:
              matchExpressions:
              - key: agentpool
                operator: In
                values:
                - contentpool
When the content spot pool is evicted, I want the pods on the content spot pool to be shifted to the content pool. But the pods are scheduled on the database pool!
Can anyone tell me how I can achieve this?
Also, how can I set up the database pool in such a way that it refuses all new pods?
AKS version used: 1.18.14
I decided to provide a Community Wiki answer, as there was a similar answer but it was deleted by its author.
In this case, you can use Taints and Tolerations:
Taints and tolerations work together to ensure that pods are not scheduled onto inappropriate nodes. One or more taints are applied to a node; this marks that the node should not accept any pods that do not tolerate the taints.
You may add a taint to nodes from the database pool and specify a toleration that "matches" this taint only for Pods that are allowed to be scheduled on the database pool.
I've created a simple example to illustrate how it may work. I have only one worker node, and I added a specific taint to this node:
$ kubectl get nodes
NAME STATUS ROLES AGE
database-pool Ready <none> 6m9s
$ kubectl taint nodes database-pool type=database:NoSchedule
node/database-pool tainted
$ kubectl describe node database-pool | grep -i taint
Taints: type=database:NoSchedule
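Note that on AKS, a taint applied with kubectl taint to an individual node will not automatically be applied to new nodes the autoscaler brings up. It is more robust to set the taint at the node-pool level so every node in the pool carries it. A sketch using the Azure CLI (the resource group, cluster, and pool names below are placeholders; substitute your own):

```shell
# Hypothetical names - replace myResourceGroup, myAKSCluster, and
# databasepool with the values from your environment.
# A taint set at the pool level is applied to every node in the pool,
# including nodes created later by the cluster autoscaler.
az aks nodepool add \
  --resource-group myResourceGroup \
  --cluster-name myAKSCluster \
  --name databasepool \
  --node-taints "type=database:NoSchedule" \
  --node-count 1
```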
Only Pods with the following toleration will be allowed to be scheduled onto the database-pool node:
tolerations:
- key: "type"
  operator: "Equal"
  value: "database"
  effect: "NoSchedule"
I created two Pods: web (does not tolerate the taint) and web-with-toleration (tolerates the taint):
$ cat web.yml
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: web
  name: web
spec:
  containers:
  - image: nginx
    name: web
$ cat web-with-toleration.yml
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: web
  name: web-with-toleration
spec:
  containers:
  - image: nginx
    name: web
  tolerations:
  - key: "type"
    operator: "Equal"
    value: "database"
    effect: "NoSchedule"
$ kubectl apply -f web.yml
pod/web created
$ kubectl apply -f web-with-toleration.yml
pod/web-with-toleration created
Finally, we can check which Pod has been correctly scheduled:
$ kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE
web 0/1 Pending 0 6m13s <none> <none>
web-with-toleration 1/1 Running 0 6m8s 10.76.0.14 database-pool
It is possible to use Node affinity and Taints at the same time to gain fine-grained control over the placement of Pods on specific nodes.
Node affinity is a property of Pods that attracts them to a set of nodes (either as a preference or a hard requirement). Taints are the opposite -- they allow a node to repel a set of pods.
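Applied to the original question, the two mechanisms combine like this: taint the database pool so pods without a matching toleration are repelled from it, and keep the preferred node affinity so pods prefer the spot pool and fall back to the content pool. One detail worth noting: AKS spot node pools automatically carry the taint kubernetes.azure.com/scalesetpriority=spot:NoSchedule, so the pod also needs a toleration for that taint or it can never land on the spot pool. A sketch (assuming the type=database:NoSchedule taint from the example above is applied to the database pool):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
      # Tolerate the taint AKS adds to spot node pools automatically;
      # without it the pod cannot be scheduled on the spot pool at all.
      tolerations:
      - key: "kubernetes.azure.com/scalesetpriority"
        operator: "Equal"
        value: "spot"
        effect: "NoSchedule"
      # Intentionally no toleration for type=database:NoSchedule, so the
      # tainted database pool repels this pod.
      affinity:
        nodeAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 2
            preference:
              matchExpressions:
              - key: agentpool
                operator: In
                values:
                - contentspotpool
          - weight: 1
            preference:
              matchExpressions:
              - key: agentpool
                operator: In
                values:
                - contentpool
```

With this in place, eviction of the spot pool leaves the content pool as the only untainted pool matching a preference, so the autoscaler scales it up from 0 and the pod is rescheduled there rather than on the database pool.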