I have created multiple stacks (node groups) within my EKS cluster, and each group runs on a different instance type (for example, one group runs on GPU instances). I have added an entry in mapRoles of the aws-auth-cm.yaml file for each of the node groups. Now I would like to deploy some Deployments on one node group and others on another. The deployment files look something like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment-1
spec:
  replicas: 1
  selector:
    matchLabels:
      component: component-1
  template:
    metadata:
      labels:
        component: component-1
    spec:
      containers:
        - name: d1
          image: docker-container
          ports:
            - containerPort: 83
The documentation shows that I can deploy with the standard kubectl apply command. Is there any way to specify the node group? Maybe something like
kubectl apply -f server-deployment.yaml -group node-group-1
nodeSelector should work for this as well, as long as you have labeled your nodes.
Examples and more info here: https://eksworkshop.com/beginner/140_assigning_pods/node_selector/ and here https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector
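For instance, a minimal sketch: label the nodes in the target group first with something like kubectl label nodes <node-name> node-group=node-group-1 (the node-group key/value here is an assumption, not a label EKS sets for you), then reference it in the pod template:

```yaml
spec:
  template:
    spec:
      # Only nodes carrying the (assumed) label node-group=node-group-1
      # will be considered by the scheduler for these pods.
      nodeSelector:
        node-group: node-group-1
```

With managed node groups you can also set labels at group creation time so every node in the group carries them automatically.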
Sadly, the flag you mentioned doesn't exist, but you can read about Affinity and it should solve your problem.
TL;DR you have to add labels or use existing labels on nodes and use these labels to assign pods to correct nodes.
Assuming you have the label beta.kubernetes.io/instance-type=highmem:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment-1
spec:
  replicas: 1
  selector:
    matchLabels:
      component: component-1
  template:
    metadata:
      labels:
        component: component-1
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: beta.kubernetes.io/instance-type
                    operator: In
                    values:
                      - highmem
      containers:
        - name: d1
          image: docker-container
          ports:
            - containerPort: 83
You can use taints and tolerations to ensure that your pods end up on the right nodes. When you have heterogeneous nodes, this is good practice.
For example, in my deployment, we have 2 classes of nodes, ones which have NVMe SSD attached and ones which don't. They're both tainted differently and the deployments that run on top specify tolerations which ensure that they end up only on the nodes that have that particular taint.
For example, the node would have:
spec:
  ...
  taints:
    - effect: NoSchedule
      key: role
      value: gpu-instance
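Rather than editing the node object by hand, the same taint can be applied from the command line (the node name is a placeholder; the key/value match the example above):

```shell
# Taint the node so that only pods tolerating role=gpu-instance can schedule on it
kubectl taint nodes <node-name> role=gpu-instance:NoSchedule
```

Appending a trailing - (i.e. role=gpu-instance:NoSchedule-) removes the taint again.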
and a pod that must schedule on one of those nodes must have:
spec:
  tolerations:
    - effect: NoSchedule
      key: role
      operator: Equal
      value: gpu-instance
Once you have this setup, you can just do a regular kubectl apply
and pods will get targeted onto nodes correctly. Note that this is a more flexible approach than node selectors and labels because it gives you more fine-grained control and configurable eviction behavior.
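One caveat: a toleration only permits a pod to land on a tainted node, it doesn't force it there, so taints are commonly paired with a nodeSelector (or node affinity) to pin the pod to the group. A sketch, reusing the role=gpu-instance taint above and assuming the nodes also carry a matching role=gpu-instance label:

```yaml
spec:
  template:
    spec:
      # Pin pods to the GPU group (assumes nodes are labeled role=gpu-instance)
      nodeSelector:
        role: gpu-instance
      # Allow pods onto the tainted GPU nodes
      tolerations:
        - effect: NoSchedule
          key: role
          operator: Equal
          value: gpu-instance
```

The taint keeps other workloads off the GPU nodes; the selector keeps these pods off everything else.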