I've created a new pod by copy-pasting the configuration (values.yaml, requirements.yaml, subchart) from another working pod (an nginx app) and changing all the names. After deploying, my new pod stays indefinitely in Pending status, and when I describe it I see the following event:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling <unknown> default-scheduler 0/19 nodes are available: 19 node(s) had taints that the pod didn't tolerate.
That doesn't tell me much. How can I get more details to learn exactly why scheduling failed?
deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ template "xyz.fullname" . }}
labels:
app: {{ template "xyz.name" . }}
chart: {{ template "xyz.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
spec:
replicas: {{ .Values.replicaCount }}
selector:
matchLabels:
app: {{ template "xyz.name" . }}
release: {{ .Release.Name }}
template:
metadata:
annotations:
checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}
labels:
app: {{ template "xyz.name" . }}
release: {{ .Release.Name }}
spec:
{{- if .Values.imagePullSecrets }}
imagePullSecrets:
{{ toYaml .Values.imagePullSecrets | indent 8 }}
{{- end }}
volumes:
- name: confd
configMap:
name: {{ template "xyz.fullname" . }}
items:
- key: resolver
path: resolver.conf
- name: nginx-config
configMap:
name: {{ template "xyz.fullname" . }}
items:
- key: nginxConf
path: default
containers:
- name: {{ template "xyz.fullname" . }}
image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
volumeMounts:
- mountPath: /etc/nginx/conf.d
name: confd
- mountPath: /etc/nginx/sites-enabled
name: nginx-config
ports:
- name: http
containerPort: 80
protocol: TCP
livenessProbe:
httpGet:
path: /
port: http
readinessProbe:
httpGet:
path: /
port: http
resources:
{{ toYaml .Values.resources | indent 12 }}
env:
- name: XYZ_API_URL
value: {{ .Release.Name }}-xyz-api
{{- if .Values.environment }}
{{- range $key, $value := .Values.environment }}
- name: {{ toYaml $key }}
value: {{ toYaml $value }}
{{- end }}
{{- end }}
{{- with .Values.nodeSelector }}
nodeSelector:
{{ toYaml . | indent 8 }}
{{- end }}
{{- with .Values.affinity }}
affinity:
{{ toYaml . | indent 8 }}
{{- end }}
{{- with .Values.tolerations }}
tolerations:
{{ toYaml . | indent 8 }}
{{- end }}
kubectl get no --show-labels
NAME STATUS ROLES AGE VERSION LABELS
k8s-app-stg-c1-01 Ready <none> 328d v1.16.1 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,esxcluster=app,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-app-stg-c1-01,kubernetes.io/os=linux,role=preemptible
k8s-app-stg-c1-02 Ready <none> 328d v1.16.1 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,esxcluster=app,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-app-stg-c1-02,kubernetes.io/os=linux,role=preemptible
k8s-app-stg-c1-03 Ready <none> 328d v1.16.1 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,esxcluster=app,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-app-stg-c1-03,kubernetes.io/os=linux,role=preemptible
k8s-app-stg-c1-04 Ready <none> 297d v1.16.1 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,esxcluster=app,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-app-stg-c1-04,kubernetes.io/os=linux,role=preemptible
k8s-app-stg-c1-05 Ready <none> 297d v1.16.1 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,esxcluster=app,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-app-stg-c1-05,kubernetes.io/os=linux,role=preemptible
k8s-app-stg-c1-06 Ready <none> 24d v1.16.1 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,esxcluster=app,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-app-stg-c1-06,kubernetes.io/os=linux,role=preemtible
k8s-bi-stg-c1-01 Ready <none> 212d v1.16.1 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,esxcluster=bi,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-bi-stg-c1-01,kubernetes.io/os=linux
k8s-ci-stg-c1-01 Ready <none> 60d v1.16.1 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,esxcluster=ci,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-ci-stg-c1-01,kubernetes.io/os=linux,role=preemtible
k8s-ci-stg-c1-02 Ready <none> 41d v1.16.1 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,esxcluster=ci,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-ci-stg-c1-02,kubernetes.io/os=linux,role=preemtible
k8s-ci-stg-c1-03 Ready <none> 41d v1.16.1 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,esxcluster=ci,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-ci-stg-c1-03,kubernetes.io/os=linux,role=preemtible
k8s-ci-stg-c1-04 Ready <none> 41d v1.16.1 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,esxcluster=ci,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-ci-stg-c1-04,kubernetes.io/os=linux,role=preemtible
k8s-master-stg-c1-01 Ready master 1y v1.16.1 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-master-stg-c1-01,kubernetes.io/os=linux,node-role.kubernetes.io/master=
k8s-master-stg-c1-02 Ready master 1y v1.16.1 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-master-stg-c1-02,kubernetes.io/os=linux,node-role.kubernetes.io/master=
k8s-master-stg-c1-03 Ready master 1y v1.16.1 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-master-stg-c1-03,kubernetes.io/os=linux,node-role.kubernetes.io/master=
k8s-svc-stg-c1-01 Ready <none> 326d v1.16.1 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,esxcluster=svc,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-svc-stg-c1-01,kubernetes.io/os=linux,role=preemptible
k8s-svc-stg-c1-02 Ready <none> 325d v1.16.1 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,esxcluster=svc,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-svc-stg-c1-02,kubernetes.io/os=linux,role=preemptible
k8s-svc-stg-c1-03 Ready <none> 325d v1.16.1 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,esxcluster=svc,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-svc-stg-c1-03,kubernetes.io/os=linux,role=preemptible
k8s-svc-stg-c1-04 Ready <none> 297d v1.16.1 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,esxcluster=svc,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-svc-stg-c1-04,kubernetes.io/os=linux,role=preemptible
k8s-svc-stg-c1-05 Ready <none> 297d v1.16.1 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,esxcluster=svc,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-svc-stg-c1-05,kubernetes.io/os=linux,role=preemptible
You can use the esxcluster or role=preemptible label to schedule your workload to the appropriate nodes. You will likely need to add a nodeSelector and tolerations to the values.yaml that you pass to the helm install command.
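For example, assuming the application nodes carry a taint such as role=preemptible:NoSchedule (an assumption — read the actual key, value, and effect from your nodes first), the values.yaml you pass to helm install could contain:
nodeSelector:
  esxcluster: app            # or: role: preemptible
tolerations:
  - key: "role"              # hypothetical taint key
    operator: "Equal"
    value: "preemptible"     # hypothetical taint value
    effect: "NoSchedule"     # must match the taint's effect
The deployment template above already renders .Values.nodeSelector and .Values.tolerations, so nothing else in the chart needs to change.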
I think the previous answer by @P-Ekambaram should be elaborated on.
Your pod is in Pending status because the nodes are not accepting it due to taints. Taints allow a node to repel a set of pods; tolerations allow pods to be scheduled onto nodes with matching taints.
The quickest way to see how taints work is a single-node Kubernetes cluster: there you remove the node-role.kubernetes.io/master:NoSchedule taint from the master node, which allows pods to be scheduled on it.
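On a kubeadm-bootstrapped cluster, for example, that is done with the following command (the trailing dash means "remove this taint"):
kubectl taint nodes --all node-role.kubernetes.io/master-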
Taints are also very useful when you want to reserve a set of nodes for a specific purpose, so that only pods which tolerate the taint get scheduled onto them.
Coming back to your error: you have to add tolerations matching those taints under the spec of your pod in order to have it scheduled.
You can list your taints using this command (jq is required):
kubectl get nodes -o json | jq '.items[].spec.taints'
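For each node this prints either null (no taints) or a list of taint objects. With the nodes above, a tainted node would return something along these lines (key, value, and effect here are illustrative, not taken from your cluster):
[
  {
    "key": "role",
    "value": "preemptible",
    "effect": "NoSchedule"
  }
]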
The taint format looks like this: `<key>=<value>:<effect>`, where `<effect>` tells the Kubernetes scheduler what should happen to pods that don't tolerate the taint.
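That is also the format used when you add a taint yourself, for example (node name, key, and value here are placeholders):
kubectl taint nodes k8s-app-stg-c1-01 role=preemptible:NoSchedule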
Here is an example of how those tolerations look in a pod YAML (the effect must match the taint's effect, or be omitted to tolerate every effect of that key):
tolerations:
- key: "key"
  operator: "Equal"
  value: "value"
  effect: "NoSchedule"
The Kubernetes documentation explains taints and tolerations well.
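Once matching tolerations (and, if you want to pin the pod to a node group, a nodeSelector) are in your values.yaml and the chart is redeployed, you can check where the pod landed and re-inspect its events:
kubectl get pods -o wide
kubectl describe pod <pod-name>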