MongoDB pod on bare metal Kubernetes cluster is always Pending

5/4/2021

On my bare metal Kubernetes cluster, I installed MongoDB using the Bitnami Helm chart as follows.

helm install mongodb bitnami/mongodb

I immediately get the following output.

vagrant@kmasterNew:~$ helm install mongodb bitnami/mongodb
NAME: mongodb
LAST DEPLOYED: Tue May  4 12:26:58 2021
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
** Please be patient while the chart is being deployed **

MongoDB(R) can be accessed on the following DNS name(s) and ports from within your cluster:

	mongodb.default.svc.cluster.local

To get the root password run:

	export MONGODB_ROOT_PASSWORD=$(kubectl get secret --namespace default mongodb -o jsonpath="{.data.mongodb-root-password}" | base64 --decode)

To connect to your database, create a MongoDB(R) client container:

	kubectl run --namespace default mongodb-client --rm --tty -i --restart='Never' --env="MONGODB_ROOT_PASSWORD=$MONGODB_ROOT_PASSWORD" --image docker.io/bitnami/mongodb:4.4.5-debian-10-r21 --command -- bash

Then, run the following command:
	mongo admin --host "mongodb" --authenticationDatabase admin -u root -p $MONGODB_ROOT_PASSWORD

To connect to your database from outside the cluster execute the following commands:

	kubectl port-forward --namespace default svc/mongodb 27017:27017 &
	mongo --host 127.0.0.1 --authenticationDatabase admin -p $MONGODB_ROOT_PASSWORD
	

Then, upon inspecting the pods, I see that the pod stays Pending no matter how long I wait.

vagrant@kmasterNew:~$ kubectl get pod -o wide
NAME                      READY   STATUS    RESTARTS   AGE     IP       NODE     NOMINATED NODE   READINESS GATES
mongodb-d9b6d589c-zpmb6   0/1     Pending   0          9m21s   <none>   <none>   <none>           <none>

What am I missing?

As indicated in the helm install output, I run the following command to get the secret.

export MONGODB_ROOT_PASSWORD=$(kubectl get secret --namespace default mongodb -o jsonpath="{.data.mongodb-root-password}" | base64 --decode)

It executes successfully, and when I do

echo $MONGODB_ROOT_PASSWORD

I get the root password as

rMjjciN8An
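
For reference, this is simply the base64-decoded value stored in the mongodb Secret; re-encoding it reproduces the string kept in the Secret's data field (the same value shows up in the manifest further below):

echo -n 'rMjjciN8An' | base64
# prints: ck1qamNpTjhBbg==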

As the instructions in the helm install output suggest, I tried to connect to the database by running

kubectl run --namespace default mongodb-client --rm --tty -i --restart='Never' --env="MONGODB_ROOT_PASSWORD=rMjjciN8An" --image docker.io/bitnami/mongodb:4.4.5-debian-10-r21 --command -- bash

mongo admin --host "mongodb" --authenticationDatabase admin -u root -p rMjjciN8An

And I get the following output.

MongoDB shell version v4.4.5
connecting to: mongodb://mongodb:27017/admin?authSource=admin&compressors=disabled&gssapiServiceName=mongodb
Error: couldn't connect to server mongodb:27017, connection attempt failed: SocketException: Error connecting to mongodb:27017 (10.111.99.8:27017) :: caused by :: Connection refused :
connect@src/mongo/shell/mongo.js:374:17
@(connect):2:6
exception: connect failed
exiting with code 1

As you can see, the connection attempt failed. I guess this is because the pod itself is still in the Pending state.
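
This guess can be confirmed: while the pod is Pending, the mongodb Service has no ready endpoints, so connections to its ClusterIP are rejected. A quick check (this command is only an additional verification step, not part of the chart's instructions):

kubectl get endpoints mongodb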

So, to get more info about the pod, I exit the mongodb-client pod (created in the step above) and run the following command.

kubectl get pod -o yaml

And I get the following lengthy output.

apiVersion: v1
items:
- apiVersion: v1
  kind: Pod
  metadata:
    creationTimestamp: "2021-05-04T12:26:59Z"
    generateName: mongodb-d9b6d589c-
    labels:
      app.kubernetes.io/component: mongodb
      app.kubernetes.io/instance: mongodb
      app.kubernetes.io/managed-by: Helm
      app.kubernetes.io/name: mongodb
      helm.sh/chart: mongodb-10.15.0
      pod-template-hash: d9b6d589c
    name: mongodb-d9b6d589c-zpmb6
    namespace: default
    ownerReferences:
    - apiVersion: apps/v1
      blockOwnerDeletion: true
      controller: true
      kind: ReplicaSet
      name: mongodb-d9b6d589c
      uid: c99bfa3e-9a8d-425f-acdc-74d8acaba71b
    resourceVersion: "52012"
    uid: 97f77766-f400-424c-9651-9839a7506721
  spec:
    affinity:
      podAntiAffinity:
        preferredDuringSchedulingIgnoredDuringExecution:
        - podAffinityTerm:
            labelSelector:
              matchLabels:
                app.kubernetes.io/component: mongodb
                app.kubernetes.io/instance: mongodb
                app.kubernetes.io/name: mongodb
            namespaces:
            - default
            topologyKey: kubernetes.io/hostname
          weight: 1
    containers:
    - env:
      - name: BITNAMI_DEBUG
        value: "false"
      - name: MONGODB_ROOT_PASSWORD
        valueFrom:
          secretKeyRef:
            key: mongodb-root-password
            name: mongodb
      - name: ALLOW_EMPTY_PASSWORD
        value: "no"
      - name: MONGODB_SYSTEM_LOG_VERBOSITY
        value: "0"
      - name: MONGODB_DISABLE_SYSTEM_LOG
        value: "no"
      - name: MONGODB_DISABLE_JAVASCRIPT
        value: "no"
      - name: MONGODB_ENABLE_JOURNAL
        value: "yes"
      - name: MONGODB_ENABLE_IPV6
        value: "no"
      - name: MONGODB_ENABLE_DIRECTORY_PER_DB
        value: "no"
      image: docker.io/bitnami/mongodb:4.4.5-debian-10-r21
      imagePullPolicy: IfNotPresent
      livenessProbe:
        exec:
          command:
          - mongo
          - --disableImplicitSessions
          - --eval
          - db.adminCommand('ping')
        failureThreshold: 6
        initialDelaySeconds: 30
        periodSeconds: 10
        successThreshold: 1
        timeoutSeconds: 5
      name: mongodb
      ports:
      - containerPort: 27017
        name: mongodb
        protocol: TCP
      readinessProbe:
        exec:
          command:
          - bash
          - -ec
          - |
            mongo --disableImplicitSessions $TLS_OPTIONS --eval 'db.hello().isWritablePrimary || db.hello().secondary' | grep -q 'true'
        failureThreshold: 6
        initialDelaySeconds: 5
        periodSeconds: 10
        successThreshold: 1
        timeoutSeconds: 5
      resources: {}
      securityContext:
        runAsNonRoot: true
        runAsUser: 1001
      terminationMessagePath: /dev/termination-log
      terminationMessagePolicy: File
      volumeMounts:
      - mountPath: /bitnami/mongodb
        name: datadir
      - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
        name: kube-api-access-g5kx8
        readOnly: true
    dnsPolicy: ClusterFirst
    enableServiceLinks: true
    preemptionPolicy: PreemptLowerPriority
    priority: 0
    restartPolicy: Always
    schedulerName: default-scheduler
    securityContext:
      fsGroup: 1001
    serviceAccount: mongodb
    serviceAccountName: mongodb
    terminationGracePeriodSeconds: 30
    tolerations:
    - effect: NoExecute
      key: node.kubernetes.io/not-ready
      operator: Exists
      tolerationSeconds: 300
    - effect: NoExecute
      key: node.kubernetes.io/unreachable
      operator: Exists
      tolerationSeconds: 300
    volumes:
    - name: datadir
      persistentVolumeClaim:
        claimName: mongodb
    - name: kube-api-access-g5kx8
      projected:
        defaultMode: 420
        sources:
        - serviceAccountToken:
            expirationSeconds: 3607
            path: token
        - configMap:
            items:
            - key: ca.crt
              path: ca.crt
            name: kube-root-ca.crt
        - downwardAPI:
            items:
            - fieldRef:
                apiVersion: v1
                fieldPath: metadata.namespace
              path: namespace
  status:
    conditions:
    - lastProbeTime: null
      lastTransitionTime: "2021-05-04T12:26:59Z"
      message: '0/3 nodes are available: 3 pod has unbound immediate PersistentVolumeClaims.'
      reason: Unschedulable
      status: "False"
      type: PodScheduled
    phase: Pending
    qosClass: BestEffort
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""

But I feel that an important clue is at the end of the output.

message: '0/3 nodes are available: 3 pod has unbound immediate PersistentVolumeClaims.'

It looks like something is wrong with the PVC. So I look at the manifest generated by running

helm get manifest mongodb

I get the manifest as follows.

---
# Source: mongodb/templates/serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: mongodb
  namespace: default
  labels:
	app.kubernetes.io/name: mongodb
	helm.sh/chart: mongodb-10.15.0
	app.kubernetes.io/instance: mongodb
	app.kubernetes.io/managed-by: Helm
secrets:
  - name: mongodb
---
# Source: mongodb/templates/secrets.yaml
apiVersion: v1
kind: Secret
metadata:
  name: mongodb
  namespace: default
  labels:
	app.kubernetes.io/name: mongodb
	helm.sh/chart: mongodb-10.15.0
	app.kubernetes.io/instance: mongodb
	app.kubernetes.io/managed-by: Helm
	app.kubernetes.io/component: mongodb
type: Opaque
data:
  mongodb-root-password: "ck1qamNpTjhBbg=="
---
# Source: mongodb/templates/standalone/pvc.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: mongodb
  namespace: default
  labels:
	app.kubernetes.io/name: mongodb
	helm.sh/chart: mongodb-10.15.0
	app.kubernetes.io/instance: mongodb
	app.kubernetes.io/managed-by: Helm
	app.kubernetes.io/component: mongodb
spec:
  accessModes:
	- "ReadWriteOnce"
  resources:
	requests:
	  storage: "8Gi"
---
# Source: mongodb/templates/standalone/svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: mongodb
  namespace: default
  labels:
	app.kubernetes.io/name: mongodb
	helm.sh/chart: mongodb-10.15.0
	app.kubernetes.io/instance: mongodb
	app.kubernetes.io/managed-by: Helm
	app.kubernetes.io/component: mongodb
spec:
  type: ClusterIP
  ports:
	- name: mongodb
	  port: 27017
	  targetPort: mongodb
	  nodePort: null
  selector:
	app.kubernetes.io/name: mongodb
	app.kubernetes.io/instance: mongodb
	app.kubernetes.io/component: mongodb
---
# Source: mongodb/templates/standalone/dep-sts.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongodb
  namespace: default
  labels:
	app.kubernetes.io/name: mongodb
	helm.sh/chart: mongodb-10.15.0
	app.kubernetes.io/instance: mongodb
	app.kubernetes.io/managed-by: Helm
	app.kubernetes.io/component: mongodb
spec:
  strategy:
	type: RollingUpdate
  selector:
	matchLabels:
	  app.kubernetes.io/name: mongodb
	  app.kubernetes.io/instance: mongodb
	  app.kubernetes.io/component: mongodb
  template:
	metadata:
	  labels:
		app.kubernetes.io/name: mongodb
		helm.sh/chart: mongodb-10.15.0
		app.kubernetes.io/instance: mongodb
		app.kubernetes.io/managed-by: Helm
		app.kubernetes.io/component: mongodb
	spec:

	  serviceAccountName: mongodb
	  affinity:
		podAffinity:

		podAntiAffinity:
		  preferredDuringSchedulingIgnoredDuringExecution:
			- podAffinityTerm:
				labelSelector:
				  matchLabels:
					app.kubernetes.io/name: mongodb
					app.kubernetes.io/instance: mongodb
					app.kubernetes.io/component: mongodb
				namespaces:
				  - "default"
				topologyKey: kubernetes.io/hostname
			  weight: 1
		nodeAffinity:

	  securityContext:
		fsGroup: 1001
		sysctls: []
	  containers:
		- name: mongodb
		  image: docker.io/bitnami/mongodb:4.4.5-debian-10-r21
		  imagePullPolicy: "IfNotPresent"
		  securityContext:
			runAsNonRoot: true
			runAsUser: 1001
		  env:
			- name: BITNAMI_DEBUG
			  value: "false"
			- name: MONGODB_ROOT_PASSWORD
			  valueFrom:
				secretKeyRef:
				  name: mongodb
				  key: mongodb-root-password
			- name: ALLOW_EMPTY_PASSWORD
			  value: "no"
			- name: MONGODB_SYSTEM_LOG_VERBOSITY
			  value: "0"
			- name: MONGODB_DISABLE_SYSTEM_LOG
			  value: "no"
			- name: MONGODB_DISABLE_JAVASCRIPT
			  value: "no"
			- name: MONGODB_ENABLE_JOURNAL
			  value: "yes"
			- name: MONGODB_ENABLE_IPV6
			  value: "no"
			- name: MONGODB_ENABLE_DIRECTORY_PER_DB
			  value: "no"
		  ports:
			- name: mongodb
			  containerPort: 27017
		  livenessProbe:
			exec:
			  command:
				- mongo
				- --disableImplicitSessions
				- --eval
				- "db.adminCommand('ping')"
			initialDelaySeconds: 30
			periodSeconds: 10
			timeoutSeconds: 5
			successThreshold: 1
			failureThreshold: 6
		  readinessProbe:
			exec:
			  command:
				- bash
				- -ec
				- |
				  mongo --disableImplicitSessions $TLS_OPTIONS --eval 'db.hello().isWritablePrimary || db.hello().secondary' | grep -q 'true'
			initialDelaySeconds: 5
			periodSeconds: 10
			timeoutSeconds: 5
			successThreshold: 1
			failureThreshold: 6
		  resources:
			limits: {}
			requests: {}
		  volumeMounts:
			- name: datadir
			  mountPath: /bitnami/mongodb
			  subPath:
	  volumes:
		- name: datadir
		  persistentVolumeClaim:
			claimName: mongodb

To summarize, the above manifest contains the following five kinds of objects.

kind: ServiceAccount
kind: Secret
kind: PersistentVolumeClaim
kind: Service
kind: Deployment

As we can see, there is a PersistentVolumeClaim, but no PersistentVolume.

I think I followed the instructions given here for installing the MongoDB chart on Kubernetes.

MongoDB installation on Kubernetes using Helm

There is nothing there about a PersistentVolume. Am I missing something here? Do I have to somehow create a PersistentVolume myself?

So the questions are:

  1. Why is the pod in the Pending state indefinitely?
  2. Why is there no PersistentVolume object created? (I checked with the command kubectl get pv --all-namespaces.)
  3. Finally, what baffles me is that when I try to get logs, I see nothing!

   vagrant@kmasterNew:~$ kubectl get pods
   NAME                      READY   STATUS    RESTARTS   AGE
   mongodb-d9b6d589c-zpmb6   0/1     Pending   0          60m
   vagrant@kmasterNew:~$ kubectl logs mongodb-d9b6d589c-zpmb6
   vagrant@kmasterNew:~$ 
-- VivekDev
kubernetes
kubernetes-helm
mongodb

1 Answer

5/5/2021

Moving this out of the comments, as I was able to reproduce it on a Kubernetes cluster set up using kubeadm.

1 - It's Pending because there is no PersistentVolume for its claim to bind to. This can be checked with kubectl get pvc, whose output is:

kubectl get pvc
NAME      STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
mongodb   Pending                                                     8s

Then kubectl describe pvc mongodb shows:

Name:          mongodb
Namespace:     default
StorageClass:  
Status:        Pending
Volume:        
Labels:        app.kubernetes.io/component=mongodb
               app.kubernetes.io/instance=mongodb
               app.kubernetes.io/managed-by=Helm
               app.kubernetes.io/name=mongodb
               helm.sh/chart=mongodb-10.15.0
Annotations:   meta.helm.sh/release-name: mongodb
               meta.helm.sh/release-namespace: default
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:      
Access Modes:  
VolumeMode:    Filesystem
Used By:       mongodb-d9b6d589c-7mbf8
Events:
  Type    Reason         Age               From                         Message
  ----    ------         ----              ----                         -------
  Normal  FailedBinding  2s (x8 over 97s)  persistentvolume-controller  no persistent volumes available for this claim and no storage class is set

2 - There are three main prerequisites for using the bitnami/mongodb chart:

  • Kubernetes 1.12+
  • Helm 3.1.0
  • PV provisioner support in the underlying infrastructure

In your case the pod can't start because no PersistentVolume has been created for its claim. This happens because no provisioner is in use. In clouds or on minikube this is handled for you automatically, while on a bare metal cluster you have to take care of it yourself, e.g. by creating PersistentVolumes manually or by deploying a dynamic provisioner (a sketch of the manual approach follows).
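
As a minimal sketch of the manual approach (the PV name, hostPath location and reclaim policy below are illustrative assumptions, not values from the chart), a PersistentVolume that can satisfy the chart's 8Gi ReadWriteOnce claim could look like this:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: mongodb-pv                  # hypothetical name; anything unique works
spec:
  capacity:
    storage: 8Gi                    # must cover the 8Gi the PVC requests
  accessModes:
    - ReadWriteOnce                 # must include the access mode the PVC asks for
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: /data/mongodb             # assumed directory that must exist on the node

Note that hostPath volumes are node-local, so this is only suitable for testing. After kubectl apply -f mongodb-pv.yaml the pending claim should bind (neither this PV nor the chart's PVC sets a storageClassName) and the pod can be scheduled. Alternatively, for a quick test without persistence at all, the chart can be installed with --set persistence.enabled=false, in which case data is lost when the pod restarts.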

You can check whether any storage classes and provisioners are available with:

kubectl get storageclasses
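
On a fresh bare metal cluster this list is typically empty. Once a provisioner and its StorageClass exist, that class can be marked as the default, so that claims like the chart's (which set no storageClassName) get volumes provisioned automatically, assuming the DefaultStorageClass admission plugin is enabled; <storage-class-name> below is a placeholder:

kubectl patch storageclass <storage-class-name> -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'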

3 - You don't see logs because the container never even started. You can always refer to the documentation on troubleshooting pending or crashing pods.
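
For Pending pods, the scheduler's events are the place to look instead of the logs, for example:

kubectl describe pod mongodb-d9b6d589c-zpmb6
kubectl get events --sort-by=.metadata.creationTimestamp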

-- moonkotte
Source: StackOverflow