Kubernetes CrashLoopBackOff With Minikube

11/13/2020

So I am learning Kubernetes from a guide, and I am trying to deploy a MongoDB Pod with 1 replica. This is the deployment config file.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongodb-deployment
  labels:
    app: mongodb
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mongodb
  template:
    metadata:
      labels:
        app: mongodb
    spec:
      containers:
        - name: mongodb
          image: mongo
          ports:
            - containerPort: 27017
          env:
            - name: MONGO_INITDB_ROOT_USERNAME
              valueFrom:
                secretKeyRef:
                  name: mongodb-secret
                  key: mongo-root-username
            - name: MONGO_INITDB_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mongodb-secret
                  key: mongo-root-password
---
apiVersion: v1
kind: Service
metadata:
  name: mongodb-service
spec:
  selector:
    app: mongodb
  ports:
    - protocol: TCP
      port: 27017
      targetPort: 27017

I also tried to deploy a Mongo-Express Pod with an almost identical config file, but I keep getting CrashLoopBackOff for both Pods. From the little understanding I have, this is caused by the container failing and restarting in a cycle. I went through the events with kubectl get events and saw that a warning with the message Back-off restarting failed container keeps occurring. I also did a little digging around and came across a solution that says to add

command: ['sleep']
args: ['infinity']

That fixed the CrashLoopBackOff issue, but when I try to get the logs for the Pod, nothing is displayed in the terminal. I would appreciate some help and an explanation of how the command and args seem to fix it, and of how I can stop this crash from happening to my current and future Pods. Thank you very much.
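For reference, these are roughly the commands I have been using to inspect the Pods (with <pod-name> standing in for the actual name from kubectl get pods):

# restart counts and status
kubectl get pods

# the Back-off warnings show up here
kubectl get events --sort-by=.metadata.creationTimestamp

# last state / exit code of the crashed container
kubectl describe pod <pod-name>

# logs of the current and the previously crashed container
kubectl logs <pod-name>
kubectl logs <pod-name> --previous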

-- thatguy
devops
docker
kubernetes
mongodb
node.js

2 Answers

11/13/2020

My advice is to deploy MongoDB as a StatefulSet on Kubernetes.

With a StatefulSet, each of the N replicas gets a stable, unique identity: Pods are named with an ordinal index (mongodb-0, mongodb-1, ...), so if one instance goes down, the remaining ordinal instances keep their identity and the replica set can continue to operate. See more: mongodb-sts, mongodb-on-kubernetes. Also use a Headless Service to manage the domain of each Pod. With a Headless Service there is no LoadBalancer and no single Service IP proxying traffic to the Pods; clients resolve and reach the Pods directly, which is why the Cluster IP is set to none.

In your case:

apiVersion: v1
kind: Service
metadata:
  name: mongodb
spec:
  clusterIP: None
  selector:
    app: mongodb
  ports:
    - port: 27017
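To quickly check that the headless Service actually resolves to the MongoDB Pod, one option is to run a throwaway Pod and query the Service name (this assumes the default namespace; busybox:1.28 is used here only because its nslookup output is readable):

kubectl run dns-test -it --rm --restart=Never --image=busybox:1.28 -- nslookup mongodb
# with clusterIP: None the lookup returns the Pod IPs directly,
# e.g. <pod-name>.mongodb.default.svc.cluster.local for StatefulSet Pods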

The error:

uncaught exception: Error: couldn't add user: Error preflighting normalization: U_STRINGPREP_PROHIBITED_ERROR _getErrorWithCode@src/mongo/shell/utils.js:25:13

indicates that the Secret may be missing, or that the username/password resolve to an empty or invalid value, so MongoDB cannot create the root user. Take a look: mongodb-initializating.

In your case the Secret should look similar to this:

apiVersion: v1
kind: Secret
metadata:
  name: mongodb-secret
type: Opaque
data:
  mongo-root-username: YWRtaW4=
  mongo-root-password: MWYyZDFlMmU2N2Rm
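The values under data are just base64-encoded strings (YWRtaW4= is admin, MWYyZDFlMmU2N2Rm is 1f2d1e2e67df). You can encode them yourself, or create the whole Secret with kubectl, for example:

echo -n 'admin' | base64          # YWRtaW4=
echo -n '1f2d1e2e67df' | base64   # MWYyZDFlMmU2N2Rm

kubectl create secret generic mongodb-secret \
  --from-literal=mongo-root-username=admin \
  --from-literal=mongo-root-password=1f2d1e2e67df

Note that echo -n matters: a trailing newline in the encoded value is exactly the kind of prohibited character that can trip the normalization check in the error above.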

Remember to also configure a volume for your Pods - follow the tutorials I have linked above.
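A minimal sketch of what that could look like with your Deployment, assuming Minikube's default standard StorageClass (the claim and volume names here are just examples):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mongodb-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi

and then, in the Pod template of the Deployment:

      containers:
        - name: mongodb
          image: mongo
          # ...existing ports and env stay as they are...
          volumeMounts:
            - name: mongodb-data
              mountPath: /data/db
      volumes:
        - name: mongodb-data
          persistentVolumeClaim:
            claimName: mongodb-pvc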

-- Malgorzata
Source: StackOverflow

11/13/2020

Deploy MongoDB as a StatefulSet, not as a Deployment.

Example:

apiVersion: v1
kind: Service
metadata:
  name: mongodb-service
  labels:
    name: mongo
spec:
  ports:
    - port: 27017
      targetPort: 27017
  clusterIP: None
  selector:
    role: mongo
---
apiVersion: apps/v1   # apps/v1beta1 has been removed in current Kubernetes versions
kind: StatefulSet
metadata:
  name: mongod
spec:
  serviceName: mongodb-service
  replicas: 3
  selector:           # required with apps/v1; must match the template labels
    matchLabels:
      role: mongo
  template:
    metadata:
      labels:
        role: mongo
        environment: test
        replicaset: MainRepSet
    spec:
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 100
              podAffinityTerm:
                labelSelector:
                  matchExpressions:
                    - key: replicaset
                      operator: In
                      values:
                        - MainRepSet
                topologyKey: kubernetes.io/hostname
      terminationGracePeriodSeconds: 10
      volumes:
        - name: secrets-volume
          secret:
            secretName: shared-bootstrap-data
            defaultMode: 256
      containers:
        - name: mongod-container
          #image: pkdone/mongo-ent:3.4
          image: mongo
          command:
            - "numactl"
            - "--interleave=all"
            - "mongod"
            - "--wiredTigerCacheSizeGB"
            - "0.1"
            - "--bind_ip"
            - "0.0.0.0"
            - "--replSet"
            - "MainRepSet"
            - "--auth"
            - "--clusterAuthMode"
            - "keyFile"
            - "--keyFile"
            - "/etc/secrets-volume/internal-auth-mongodb-keyfile"
            - "--setParameter"
            - "authenticationMechanisms=SCRAM-SHA-1"
          resources:
            requests:
              cpu: 0.2
              memory: 200Mi
          ports:
            - containerPort: 27017
          volumeMounts:
            - name: secrets-volume
              readOnly: true
              mountPath: /etc/secrets-volume
            - name: mongodb-persistent-storage-claim
              mountPath: /data/db
  volumeClaimTemplates:
    - metadata:
        name: mongodb-persistent-storage-claim
        annotations:
          volume.beta.kubernetes.io/storage-class: "standard"
      spec:
        accessModes:
          - "ReadWriteOnce"
        resources:
          requests:
            storage: 1Gi
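The StatefulSet above expects a Secret named shared-bootstrap-data containing the keyfile used for internal replica-set authentication (mounted at /etc/secrets-volume/internal-auth-mongodb-keyfile). One way such a Secret could be created, for example:

# generate a keyfile for internal cluster authentication
openssl rand -base64 741 > internal-auth-mongodb-keyfile

# store it in the Secret referenced by the StatefulSet
kubectl create secret generic shared-bootstrap-data \
  --from-file=internal-auth-mongodb-keyfile

After the Pods are running you still need to initiate the replica set (rs.initiate() from a mongo shell) and create the root user, since --auth is enabled.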
-- Shashi Kumar
Source: StackOverflow