I have written a configuration file to run several cron jobs. Each cron job runs in a separate pod, and all the pods land on the same node.
This causes a node out-of-space issue, and as a solution I read about nodeAffinity.
I want to add nodeAffinity to my cron job, but I am struggling to understand the syntax and what should go under the labelSelector.
Here's what I wrote:
apiVersion: argoproj.io/v1alpha1
kind: CronWorkflow
serviceAccount: argo-events-sa
metadata:
  name: nightly-cron
  namespace: argo-events
spec:
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: app
            operator: In
            values:
            - web-store
        topologyKey: "kubernetes.io/hostname"
  schedule: "0 1 * * *"
  concurrencyPolicy: "Replace"
  startingDeadlineSeconds: 0
  workflowSpec:
    ttlStrategy:
      secondsAfterCompletion: 10800 # 3 hours
    workflowTemplateRef:
      name: wf-e2e-test
    arguments:
      parameters:
        - name: test_repos
          value: |
            [
              { "repo": "svc1" },
              { "repo": "svc2" },
              { "repo": "svc3" },
              { "repo": "svc4" },
              { "repo": "svc5" },
              { "repo": "svc6" },
              { "repo": "svc7" }
            ]
        - name: report_name_prefix
          value: "nightly-"
For your case, you can use nodeSelector instead.
Attach a label to the node:
kubectl label nodes <node-name> <label-key>=<label-value>
e.g. kubectl label nodes my-node app=web-store
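You can verify that the label was applied with
kubectl get nodes --show-labels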
Add a nodeSelector field to your pod configuration:
apiVersion: batch/v1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox
            imagePullPolicy: IfNotPresent
            command:
            - /bin/sh
            - -c
            - date; echo Hello from the Kubernetes cluster
          restartPolicy: OnFailure
          nodeSelector:
            app: web-store
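Note that your manifest is an Argo CronWorkflow rather than a native Kubernetes CronJob; there the selector belongs under workflowSpec, since Argo applies a workflow-level nodeSelector to every pod the workflow creates. A minimal sketch, reusing the names from your nightly-cron manifest and assuming the app=web-store node label from the example above:
apiVersion: argoproj.io/v1alpha1
kind: CronWorkflow
metadata:
  name: nightly-cron
  namespace: argo-events
spec:
  schedule: "0 1 * * *"
  workflowSpec:
    # All pods created by this workflow are scheduled onto nodes carrying this label
    nodeSelector:
      app: web-store
    workflowTemplateRef:
      name: wf-e2e-test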
Even though nodeSelector does the same thing more simply, if you still want to use nodeAffinity, you should create and use a node label for it, just as in the nodeSelector example above.
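For reference, a sketch of the equivalent nodeAffinity block, again assuming the same app=web-store node label; in your CronWorkflow it would sit under workflowSpec.affinity, not directly under spec. Unlike the podAntiAffinity snippet you tried, nodeAffinity has no labelSelector: it matches node labels through matchExpressions nested under nodeSelectorTerms.
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
      - matchExpressions:
        - key: app        # a node label, not a pod label
          operator: In
          values:
          - web-store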