I keep getting a CrashLoopBackOff error when a pod tries to start. The problem usually occurs in MongoDB pods. However, when I delete the problematic pod, the StatefulSet recreates it and the new pod starts successfully. You can find the relevant kubectl command outputs below. Thanks in advance.
kubectl describe pod game-mongodb-rs-mongodb-replicaset-2 command output:
Name: game-mongodb-rs-mongodb-replicaset-2
Namespace: xxxxxxxxxx-database
Priority: 0
Node: xxxxxxxxxx-sandbox-worker02/aaa.bbb.ccc.ddd
Start Time: Fri, 24 Jul 2020 04:19:10 +0000
Labels: app=mongodb-replicaset
controller-revision-hash=game-mongodb-rs-mongodb-replicaset-5cdf769c8
release=game-mongodb-rs
statefulset.kubernetes.io/pod-name=game-mongodb-rs-mongodb-replicaset-2
Annotations: checksum/config: 305b3f0fc0746c5b648686d14caa985a818d739ea09e6a4c399b31f502c87d9f
cni.projectcalico.org/podIP: aaa.bbb.ccc.ddd/32
cni.projectcalico.org/podIPs: aaa.bbb.ccc.ddd/32
Status: Running
IP: aaa.bbb.ccc.ddd
IPs:
IP: aaa.bbb.ccc.ddd
Controlled By: StatefulSet/game-mongodb-rs-mongodb-replicaset
Init Containers:
copy-config:
Container ID: docker://60f1d94ccff87526f488346628577177c460927601b311b2d096e0f9c2b680e0
Image: ranchercharts/busybox:1.29.3
Image ID: docker-pullable://busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796
Port: <none>
Host Port: <none>
Command:
sh
Args:
-c
set -e
set -x
cp /configdb-readonly/mongod.conf /data/configdb/mongod.conf
State: Terminated
Reason: Completed
Exit Code: 0
Started: Fri, 24 Jul 2020 04:19:40 +0000
Finished: Fri, 24 Jul 2020 04:19:40 +0000
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/configdb-readonly from config (rw)
/data/configdb from configdir (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-42mq5 (ro)
/work-dir from workdir (rw)
install:
Container ID: docker://1b54cf209cb756a2c8dd5bf2f7f9541833e854262e25cdb6fe7eb42c12cbcd17
Image: ranchercharts/unguiculus-mongodb-install:0.7
Image ID: docker-pullable://ranchercharts/unguiculus-mongodb-install@sha256:a3a0154bf476b5a46864a09934457eeea98c4e7f240c8e71044fce91dc4dbb8b
Port: <none>
Host Port: <none>
Args:
--work-dir=/work-dir
State: Terminated
Reason: Completed
Exit Code: 0
Started: Fri, 24 Jul 2020 04:19:42 +0000
Finished: Fri, 24 Jul 2020 04:19:42 +0000
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-42mq5 (ro)
/work-dir from workdir (rw)
bootstrap:
Container ID: docker://2a667087d1591a91d42c2acdaa2c331bf83c5bdddb1eb74bcbef1d32cb4bdb27
Image: ranchercharts/mongo:3.6
Image ID: docker-pullable://ranchercharts/mongo@sha256:1459c57632dbe16aa3f58ab989b8862e7fe2659b2b8730c65d30c31d27d0066d
Port: <none>
Host Port: <none>
Command:
/work-dir/peer-finder
Args:
-on-start=/init/on-start.sh
-service=game-mongodb-rs-mongodb-replicaset
State: Terminated
Reason: Completed
Exit Code: 0
Started: Fri, 24 Jul 2020 04:19:44 +0000
Finished: Fri, 24 Jul 2020 04:19:51 +0000
Ready: True
Restart Count: 0
Environment:
POD_NAMESPACE: xxxxxxxxxx-database (v1:metadata.namespace)
REPLICA_SET: rs0
TIMEOUT: 900
Mounts:
/data/configdb from configdir (rw)
/data/db from datadir (rw)
/init from init (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-42mq5 (ro)
/work-dir from workdir (rw)
Containers:
mongodb-replicaset:
Container ID: docker://f67fbdb1e2d155d87ec9d9c55f575654202d674ea10a0238d0e6738bbe1d077f
Image: ranchercharts/mongo:3.6
Image ID: docker-pullable://ranchercharts/mongo@sha256:1459c57632dbe16aa3f58ab989b8862e7fe2659b2b8730c65d30c31d27d0066d
Port: 27017/TCP
Host Port: 0/TCP
Command:
mongod
Args:
--config=/data/configdb/mongod.conf
--dbpath=/data/db
--replSet=rs0
--port=27017
--bind_ip=0.0.0.0
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 100
Started: Wed, 29 Jul 2020 03:07:31 +0000
Finished: Wed, 29 Jul 2020 03:07:31 +0000
Ready: False
Restart Count: 1083
Liveness: exec [mongo --eval db.adminCommand('ping')] delay=30s timeout=5s period=10s #success=1 #failure=3
Readiness: exec [mongo --eval db.adminCommand('ping')] delay=5s timeout=1s period=10s #success=1 #failure=3
Environment: <none>
Mounts:
/data/configdb from configdir (rw)
/data/db from datadir (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-42mq5 (ro)
/work-dir from workdir (rw)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
datadir:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: datadir-game-mongodb-rs-mongodb-replicaset-2
ReadOnly: false
config:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: game-mongodb-rs-mongodb-replicaset-mongodb
Optional: false
init:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: game-mongodb-rs-mongodb-replicaset-init
Optional: false
workdir:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
configdir:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
default-token-42mq5:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-42mq5
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Pulled 33m (x1077 over 4d22h) kubelet, xxxxxxxxxx-sandbox-worker02 Container image "ranchercharts/mongo:3.6" already present on machine
Warning BackOff 3m7s (x26140 over 3d19h) kubelet, xxxxxxxxxx-sandbox-worker02 Back-off restarting failed container
kubectl logs game-mongodb-rs-mongodb-replicaset-2 command output:
2020-07-29T03:07:31.381+0000 I CONTROL [initandlisten] MongoDB starting : pid=1 port=27017 dbpath=/data/db 64-bit host=game-mongodb-rs-mongodb-replicaset-2
2020-07-29T03:07:31.381+0000 I CONTROL [initandlisten] db version v3.6.14
2020-07-29T03:07:31.381+0000 I CONTROL [initandlisten] git version: cbef87692475857c7ee6e764c8f5104b39c342a1
2020-07-29T03:07:31.381+0000 I CONTROL [initandlisten] OpenSSL version: OpenSSL 1.0.2g 1 Mar 2016
2020-07-29T03:07:31.381+0000 I CONTROL [initandlisten] allocator: tcmalloc
2020-07-29T03:07:31.381+0000 I CONTROL [initandlisten] modules: none
2020-07-29T03:07:31.381+0000 I CONTROL [initandlisten] build environment:
2020-07-29T03:07:31.381+0000 I CONTROL [initandlisten] distmod: ubuntu1604
2020-07-29T03:07:31.381+0000 I CONTROL [initandlisten] distarch: x86_64
2020-07-29T03:07:31.381+0000 I CONTROL [initandlisten] target_arch: x86_64
2020-07-29T03:07:31.381+0000 I CONTROL [initandlisten] options: { config: "/data/configdb/mongod.conf", net: { bindIp: "0.0.0.0", port: 27017 }, replication: { replSet: "rs0" }, storage: { dbPath: "/data/db" } }
2020-07-29T03:07:31.381+0000 I STORAGE [initandlisten] exception in initAndListen: DBPathInUse: Unable to create/open the lock file: /data/db/mongod.lock (Read-only file system). Ensure the user executing mongod is the owner of the lock file and has the appropriate permissions. Also make sure that another mongod instance is not already running on the /data/db directory, terminating
2020-07-29T03:07:31.381+0000 I CONTROL [initandlisten] now exiting
2020-07-29T03:07:31.381+0000 I CONTROL [initandlisten] shutting down with code:100
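In case it is useful, this is what I intend to check on the worker node the next time the container is stuck in this state, to see whether the Longhorn volume really got remounted read-only (just a sketch of the checks, assuming shell access to xxxxxxxxxx-sandbox-worker02 and an ext4-formatted Longhorn volume):
# check the mount flags (rw vs ro) of the volume backing /data/db
mount | grep pvc-74f88284-94ca-4a8a-98a8-afc8276aaa9e
# look for filesystem or I/O errors that could have triggered a read-only remount
dmesg | grep -i -E 'ext4|i/o error|read-only'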
kubectl describe pvc datadir-game-mongodb-rs-mongodb-replicaset-2 command output:
Name: datadir-game-mongodb-rs-mongodb-replicaset-2
Namespace: xxxxxxxxxx-database
StorageClass: longhorn
Status: Bound
Volume: pvc-74f88284-94ca-4a8a-98a8-afc8276aaa9e
Labels: app=mongodb-replicaset
release=game-mongodb-rs
Annotations: pv.kubernetes.io/bind-completed: yes
pv.kubernetes.io/bound-by-controller: yes
volume.beta.kubernetes.io/storage-provisioner: driver.longhorn.io
Finalizers: [kubernetes.io/pvc-protection]
Capacity: 2Gi
Access Modes: RWO
VolumeMode: Filesystem
Mounted By: game-mongodb-rs-mongodb-replicaset-2
Events: <none>
kubectl get pvc command output:
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
account-mongodb Bound pvc-3efd0ea0-0a78-44f9-8f14-c18863884ee0 2Gi RWO longhorn 64d
admin-mysql Bound pvc-5df04bd0-52fd-4e20-add7-d35f00e7d13a 2Gi RWO longhorn 64d
cms-mongodb Bound pvc-8668df66-24f7-4c2c-8344-34bed512db25 2Gi RWO longhorn 64d
currency-influxdb-data-currency-influxdb-0 Bound pvc-bfec5613-b3bb-40ad-a381-c19c52b7a0a0 8Gi RWO longhorn 35d
currency-influxdb2 Bound pvc-3127b79d-7842-4ae5-acff-f55038db6079 8Gi RWO longhorn 35d
data-admin-mariadb-cluster-master-0 Bound pvc-2c43c9b4-0fd7-4f57-a4bb-41bab38d6535 2Gi RWO longhorn 12d
data-admin-mariadb-cluster-slave-0 Bound pvc-5b88e0ee-ebba-4036-bee4-66e7732d9a47 2Gi RWO longhorn 12d
data-admin-mariadb-cluster-slave-1 Bound pvc-2579bc42-c4b0-4271-8cd5-3138678b5be7 2Gi RWO longhorn 12d
data-admin-mariadb-master-0 Bound pvc-2fd8ac2b-1ec4-4567-9f5c-4bcdbbd77b73 1Gi RWO longhorn 13d
data-admin-mariadb-rs-master-0 Bound pvc-6c8d323b-ed01-4f08-b8de-7e1583dc568a 2Gi RWO longhorn 13d
data-admin-mariadb-rs-slave-0 Bound pvc-37de9d2f-9ae1-4bf9-8ade-9e7a53a63dbd 2Gi RWO longhorn 13d
data-admin-mariadb-rs-slave-1 Bound pvc-ae899713-6ec6-4a50-addd-26c5a2d6c52f 2Gi RWO longhorn 13d
data-admin-mariadb-slave-0 Bound pvc-8b0213dd-c893-499d-a849-0a4aa9c2e7c6 1Gi RWO longhorn 13d
data-admin-mariadb-slave-1 Bound pvc-682a3d47-9c5e-450f-a1c2-4bfeeb88ea18 1Gi RWO longhorn 13d
data-user-mariadb-cluster-master-0 Bound pvc-2df955ab-873b-4802-b3f8-92699b0ed70f 8Gi RWO longhorn 12d
data-user-mariadb-cluster-slave-0 Bound pvc-fc0fd146-41bf-4fce-8241-31471a3ace24 8Gi RWO longhorn 12d
data-user-mariadb-cluster-slave-1 Bound pvc-bbae306b-c8eb-4436-89da-b278001e208d 8Gi RWO longhorn 12d
data-user-mariadb-master-0 Bound pvc-37a368e5-4cce-473c-984d-01cfa4d2823a 8Gi RWO longhorn 13d
data-user-mariadb-slave-0 Bound pvc-2f6d963f-35eb-4f32-a9e5-723a2353dbf8 8Gi RWO longhorn 13d
data-user-mariadb-slave-1 Bound pvc-af3a5572-563e-49a4-8cd5-9cd4306b3b9d 8Gi RWO longhorn 13d
datadir-account-mongodb-rs-mongodb-replicaset-0 Bound pvc-6cf2aaaf-a024-413d-9f01-8190bf92a255 2Gi RWO longhorn 13d
datadir-account-mongodb-rs-mongodb-replicaset-1 Bound pvc-04e57232-6438-4e1c-9c84-534c8b5c3d1e 2Gi RWO longhorn 13d
datadir-account-mongodb-rs-mongodb-replicaset-2 Bound pvc-9d5e431e-6461-494c-a41b-bcb14e5b0196 2Gi RWO longhorn 13d
datadir-cms-mongodb-rs-mongodb-replicaset-0 Bound pvc-5859c00f-5d36-4d0f-88c2-871ba4687015 2Gi RWO longhorn 13d
datadir-cms-mongodb-rs-mongodb-replicaset-1 Bound pvc-ce289ed3-e1bd-413c-845e-efa24a2d7325 2Gi RWO longhorn 13d
datadir-cms-mongodb-rs-mongodb-replicaset-2 Bound pvc-8b2b040d-ee21-4fc8-afb8-733b553d1a47 2Gi RWO longhorn 13d
datadir-game-mongodb-rs-mongodb-replicaset-0 Bound pvc-f9e45fe2-f0d5-4653-bdec-c51160224c13 2Gi RWO longhorn 19d
datadir-game-mongodb-rs-mongodb-replicaset-1 Bound pvc-bb918533-5aa6-450e-93de-9e9952213fc4 2Gi RWO longhorn 19d
datadir-game-mongodb-rs-mongodb-replicaset-2 Bound pvc-74f88284-94ca-4a8a-98a8-afc8276aaa9e 2Gi RWO longhorn 19d
datadir-game-provider-mongodb-rs-mongodb-replicaset-0 Bound pvc-3ba540d5-a399-4e35-9d09-16e1e09b9d31 2Gi RWO longhorn 13d
datadir-game-provider-mongodb-rs-mongodb-replicaset-1 Bound pvc-24a0c6e6-dde4-4b01-b6f3-ec287f92961f 2Gi RWO longhorn 13d
datadir-game-provider-mongodb-rs-mongodb-replicaset-2 Bound pvc-c0645909-4cb2-4730-9143-655711e3adaf 2Gi RWO longhorn 13d
datadir-logger-mongodb-mongodb-replicaset-0 Bound pvc-cc6aa705-2e84-4a2a-9c14-53525d5b0299 4Gi RWO longhorn 55d
datadir-logger-mongodb-mongodb-replicaset-1 Bound pvc-eab44c76-0f8d-4059-afa7-0694399fe094 4Gi RWO longhorn 55d
datadir-logger-mongodb-mongodb-replicaset-2 Bound pvc-dd92f594-250a-4aa4-a2aa-2286196b63a7 4Gi RWO longhorn 55d
datadir-payment-mongodb-rs-mongodb-replicaset-0 Bound pvc-c8f0d4d2-396a-49fe-8a39-2f23f771c17b 2Gi RWO longhorn 13d
datadir-payment-mongodb-rs-mongodb-replicaset-1 Bound pvc-d48518fa-fab8-40aa-91cf-ec782c125458 2Gi RWO longhorn 13d
datadir-payment-mongodb-rs-mongodb-replicaset-2 Bound pvc-f0860cf0-2ba9-4ea3-8b79-ea5ac6dd051a 2Gi RWO longhorn 13d
datadir-payment-provider-mongodb-rs-mongodb-replicaset-0 Bound pvc-87a15e18-969c-43eb-bac5-3a9e515db0bc 2Gi RWO longhorn 13d
datadir-payment-provider-mongodb-rs-mongodb-replicaset-1 Bound pvc-74a0b6f5-929e-4894-901c-8792b424823e 2Gi RWO longhorn 13d
datadir-payment-provider-mongodb-rs-mongodb-replicaset-2 Bound pvc-53e6b2e0-86be-4e53-a7c7-b2d315623651 2Gi RWO longhorn 13d
datadir-report-mongodb-mongodb-replicaset-0 Bound pvc-32ced54a-b9c0-4d08-97e7-023c3ad3ed9d 4Gi RWO longhorn 35d
datadir-report-mongodb-mongodb-replicaset-1 Bound pvc-33f04787-c208-4c61-8701-dc289abdc59b 4Gi RWO longhorn 35d
datadir-report-mongodb-mongodb-replicaset-2 Bound pvc-77603bd5-f2d6-4578-9c42-159da8899767 4Gi RWO longhorn 35d
datadir-transaction-mongodb-mongodb-replicaset-0 Bound pvc-dfba2184-c312-4c09-be36-8c56b2e23a43 4Gi RWO longhorn 62d
datadir-transaction-mongodb-mongodb-replicaset-1 Bound pvc-eca84f3f-0a32-4146-9cdf-d38ce23f13c5 4Gi RWO longhorn 62d
datadir-transaction-mongodb-mongodb-replicaset-2 Bound pvc-1cddf3be-e784-4e31-bb91-a99c784e0a75 4Gi RWO longhorn 62d
game-mongodb Bound pvc-e843ce38-8f5c-4b65-bfe3-0722ff7c8c15 2Gi RWO longhorn 33d
game-provider-mongodb Bound pvc-b45e4bf4-591b-4a32-8653-a7dc702135fa 2Gi RWO longhorn 62d
mysql-data-admin-mysql-rs-pxc-0 Bound pvc-894cb5a9-9efe-4929-b4bf-f74e9f222c09 2Gi RWO longhorn 13d
mysql-data-admin-percona-rs-pxc-0 Bound pvc-d12126b5-74ef-45da-8619-fdfbdfa10624 2Gi RWO longhorn 13d
mysql-data-xxxxxxxxxx-admin-rs-pxc-0 Bound pvc-ec3aa8a3-7c7f-44d0-b8ad-69ec33c05d7e 2Gi RWO longhorn 13d
mysql-data-xxxxxxxxxx-admin-rs-pxc-1 Bound pvc-75000123-9665-4691-8b6d-73998e15b232 2Gi RWO longhorn 13d
mysql-data-xxxxxxxxxx-admin-rs-pxc-2 Bound pvc-72526ddc-ad4d-4b2b-88c0-86c3031dd87a 2Gi RWO longhorn 13d
payment-mongodb Bound pvc-cdca0a65-3a8b-40d3-80d7-89019eef8a04 2Gi RWO longhorn 62d
payment-provider-mongodb Bound pvc-a8b89ed6-34a1-4932-bad5-ed5608407497 2Gi RWO longhorn 62d
kubectl describe pv pvc-9d5e431e-6461-494c-a41b-bcb14e5b0196 command output:
Name: pvc-9d5e431e-6461-494c-a41b-bcb14e5b0196
Labels: <none>
Annotations: pv.kubernetes.io/provisioned-by: driver.longhorn.io
Finalizers: [kubernetes.io/pv-protection external-attacher/driver-longhorn-io]
StorageClass: longhorn
Status: Bound
Claim: xxxxxxxxxx-database/datadir-account-mongodb-rs-mongodb-replicaset-2
Reclaim Policy: Delete
Access Modes: RWO
VolumeMode: Filesystem
Capacity: 2Gi
Node Affinity: <none>
Message:
Source:
Type: CSI (a Container Storage Interface (CSI) volume source)
Driver: driver.longhorn.io
VolumeHandle: pvc-9d5e431e-6461-494c-a41b-bcb14e5b0196
ReadOnly: false
VolumeAttributes: baseImage=
fromBackup=
numberOfReplicas=2
staleReplicaTimeout=30
storage.kubernetes.io/csiProvisionerIdentity=1594625809090-8081-driver.longhorn.io
Events: <none>
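For completeness, this is the workaround I apply each time the pod gets stuck (just a sketch of the commands I run; the namespace is the one shown in the outputs above):
# delete the stuck pod; the StatefulSet controller recreates it and the new pod starts cleanly
kubectl delete pod game-mongodb-rs-mongodb-replicaset-2 -n xxxxxxxxxx-database
# watch the replacement pod come back up
kubectl get pods -n xxxxxxxxxx-database -w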