I am trying to set up a MongoDB instance on a specific node on GKE. I added a node pool to my existing cluster using:

gcloud container node-pools create mongo --cluster drogo --num-nodes 1 --region us-east1

This created a new node pool named mongo in the cluster. I have the following Deployment, PersistentVolumeClaim, and Service manifests.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongo
spec:
  selector:
    matchLabels:
      app: mongo
  template:
    metadata:
      labels:
        app: mongo
    spec:
      nodeSelector:
        cloud.google.com/gke-nodepool: mongo
      containers:
      - name: mongo
        image: mongo:3.6.17-xenial
        ports:
        - containerPort: 27017
        volumeMounts:
        - name: storage
          mountPath: /data/db
      volumes:
      - name: storage
        persistentVolumeClaim:
          claimName: mongo-pvc
In the manifest above, I set the nodeSelector to cloud.google.com/gke-nodepool: mongo (as mentioned here).
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mongo-pvc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi
apiVersion: v1
kind: Service
metadata:
  name: mongo
spec:
  selector:
    app: mongo
  ports:
  - port: 27017
    targetPort: 27017
When I check the pod, its events show these errors:

Can't schedule this Pod because no nodes match the nodeSelector.
Cannot schedule pods: node(s) had volume node affinity conflict.

What am I doing wrong here? Any help would be appreciated.
This is how I had set up the Kubernetes label on the node pool. I ran gcloud container node-pools describe mongo --cluster=drogo --region us-east1 and the response contains:
autoscaling: {}
config:
  diskSizeGb: 20
  diskType: pd-standard
  imageType: COS_CONTAINERD
  labels:
    mongo: mongo
  machineType: e2-medium
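For reference, a custom label pair like the mongo: mongo shown above can be attached to every node in a pool at creation time with the --node-labels flag. A hedged sketch of that invocation, reusing the pool and cluster names from the question:

```shell
# Create a one-node pool whose nodes all carry the custom
# Kubernetes label mongo=mongo (names reused from the question).
gcloud container node-pools create mongo \
  --cluster drogo \
  --region us-east1 \
  --num-nodes 1 \
  --node-labels=mongo=mongo
```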
There were two issues with the deployment setup.

First, the nodeSelector in the Deployment manifest was using the wrong label:

nodeSelector:
  cloud.google.com/gke-nodepool: mongo

whereas the node actually carried the label pair mongo: mongo. Either changing the node label to cloud.google.com/gke-nodepool: mongo or changing the Deployment's nodeSelector to mongo: mongo works.
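The second option amounts to the following pod-template fragment (a minimal sketch; only the nodeSelector changes relative to the original Deployment manifest):

```yaml
# Deployment pod template fragment: match the custom
# label pair actually present on the node pool's nodes.
spec:
  nodeSelector:
    mongo: mongo
```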
Second, the available PersistentVolume lived in zone us-east1-c, whereas the only matching node was in us-east1-d. The Kubernetes scheduler therefore could not find a node that satisfied both the requested nodeSelector and the PersistentVolume's node affinity within the same zone. The issue was solved by adding a new node with the same configuration in zone us-east1-c.
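An alternative to adding a node in the volume's zone is to delay volume binding until a pod is scheduled, so the disk is provisioned in whatever zone the chosen node lives in. A sketch assuming the GKE Persistent Disk CSI provisioner; the StorageClass name mongo-zonal is made up for illustration, and the PVC would also need storageClassName: mongo-zonal set:

```yaml
# StorageClass that provisions the disk only after a pod using the
# PVC is scheduled, so the PD lands in the selected node's zone.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: mongo-zonal
provisioner: pd.csi.storage.gke.io
volumeBindingMode: WaitForFirstConsumer
```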