I have a Kubernetes setup in AWS with multiple nodes. While trying to create one of the pods, I get the error below:
Warning FailedScheduling 4m7s (x2 over 4m7s) default-scheduler 0/15 nodes are available: 11 Insufficient cpu, 12 Insufficient memory, 15 node(s) didn't match node selector.
Warning FailedScheduling 50s (x6 over 4m11s) default-scheduler 0/15 nodes are available: 11 Insufficient cpu, 11 Insufficient memory, 15 node(s) didn't match node selector.
My pod YAML is as below:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    app: test-instapi
    suite: test
    log-topic: logs.app.test.instapi
  name: test-instapi
  namespace: test-dev
spec:
  replicas: 1
  selector:
    matchLabels:
      app: test-instapi
      cache-service: hazelcast-instapi
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      annotations:
        prometheus.io/scrape_port: "9999"
        prometheus.io/should_be_scraped: "true"
      labels:
        app: test-instapi
        cache-service: hazelcast-instapi
        log-topic: logs.app.test.instapi
        version: latest
    spec:
      nodeSelector:
        beta.kubernetes.io/instance-type: m5.8xlarge
      containers:
      - image: artifactory.global.standardchartered.com/test/test-fast-api:latest
        imagePullPolicy: Always
        name: test-instapi
        ports:
        - containerPort: 8080
          name: hazel-mancenter
          protocol: TCP
        - containerPort: 9999
          name: jmxexporter
          protocol: TCP
        - containerPort: 9000
          name: rest
          protocol: TCP
        resources:
          limits:
            cpu: "16"
            memory: 96Gi
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /opt/docker/conf/application.conf
          name: config
          subPath: application.conf
        - mountPath: /opt/docker/conf/application.ini
          name: config
          subPath: application.ini
        - mountPath: /opt/docker/conf/cassandra.conf
          name: config
          subPath: cassandra.conf
        - mountPath: /opt/docker/conf/hazelcast.yaml
          name: config
          subPath: hazelcast.yaml
        - mountPath: /opt/docker/conf/logback.xml
          name: config
          subPath: logback.xml
        - mountPath: /opt/docker/conf/streaming.conf
          name: config
          subPath: streaming.conf
        - mountPath: /opt/docker/conf/routes
          name: config
          subPath: routes
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
      volumes:
      - configMap:
          defaultMode: 420
          name: test-instapi
        name: config
The version of my Kubernetes setup is as below:
Client Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.3", GitCommit:"2d3c76f9091b6bec110a5e63777c332469e0cba2", GitTreeState:"clean", BuildDate:"2019-08-19T11:13:54Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.11", GitCommit:"d94a81c724ea8e1ccc9002d89b7fe81d58f89ede", GitTreeState:"clean", BuildDate:"2020-03-12T21:00:06Z", GoVersion:"go1.12.17", Compiler:"gc", Platform:"linux/amd64"}
What am I missing here? How do I make sure that the pod is assigned to one of the 8xlarge machines? Are there any nodeSelector options that can fix this issue?
Starting from Kubernetes version 1.17, beta.kubernetes.io/instance-type is deprecated in favor of node.kubernetes.io/instance-type, so the pod needs to use that as its nodeSelector:
...
spec:
  nodeSelector:
    node.kubernetes.io/instance-type: m5.8xlarge
...
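Once the Deployment is updated, you can verify where the pod actually landed (the namespace here is taken from your manifest above); the NODE column of the wide output should show one of the m5.8xlarge machines:
kubectl -n test-dev get pods -o wide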
The kubelet populates this label with the instance type as reported by the cloud provider, so it is set only if you are running with a cloud provider integration (e.g. EKS). It does not look like you are, so you need to add the labels to the nodes on your own.
You can check the labels on your nodes using:
kubectl get nodes --show-labels
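For example, to list only the nodes that already carry the expected label (same key and value as in the nodeSelector above), you can filter with a label selector:
kubectl get nodes -l node.kubernetes.io/instance-type=m5.8xlarge
If this returns no nodes, the scheduler has nothing to match the pod against.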
You can add the label to a node using:
kubectl label nodes <your-node-name> node.kubernetes.io/instance-type=m5.8xlarge
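Here <your-node-name> is a placeholder for an actual node name from kubectl get nodes. After labeling, you can confirm the label is present; the scheduler will then retry the pending pod on its own:
kubectl get node <your-node-name> --show-labels
Note that a label added this way is lost if the node is replaced (for example by an autoscaling group), so for a permanent setup you would want to apply it as part of your node bootstrap configuration instead.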