I used the Helm stable charts to install MongoDB in my AWS Kubernetes cluster. The first time I ran helm install mongodb there were no issues: all pods were running and I was able to access the database.
However, when I ran helm install mongodb a second time with a new release name, the pod logs show MongoDB starting successfully, but the pod status says otherwise:
request-form-mongo-mongodb-7f8478854-t2g8z 1/1 Running 0 3m
scheduled-task-mongo-mongodb-8689677f67-tzhr9 0/1 CrashLoopBackOff 4 2m
When I checked the kubectl describe output for the failing pod, everything looked fine except the last events, which show these warnings:
Normal Created 7m (x4 over 8m) kubelet, ip-172-20-38-19.us-west-2.compute.internal Created container
Normal Started 7m (x4 over 8m) kubelet, ip-172-20-38-19.us-west-2.compute.internal Started container
Warning FailedSync 7m (x6 over 8m) kubelet, ip-172-20-38-19.us-west-2.compute.internal Error syncing pod
Warning BackOff 2m (x26 over 8m) kubelet, ip-172-20-38-19.us-west-2.compute.internal Back-off restarting failed container
What could be the problem, and how can I resolve it?
Yes, we can deploy multiple instances of MongoDB on the same cluster using the Helm chart.
The issue above was caused by not allocating enough storage for my PV (persistent volume). It was resolved once I created a PV with a minimum of 1Gi of storage and a corresponding PVC for the new release.
After allocating enough storage, I installed MongoDB with Helm successfully.
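As a rough sketch, a minimal PV/PVC pair like the following gives the second release the 1Gi it needs to bind (the names, hostPath, and access mode here are illustrative assumptions, not values taken from the chart):

```yaml
# Hypothetical PV/PVC pair for a second MongoDB release.
# metadata names and the hostPath are placeholders; adjust to your cluster.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mongodb-pv-2
spec:
  capacity:
    storage: 1Gi          # at least 1Gi, as noted above
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /data/mongodb-2
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mongodb-pvc-2
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi       # must fit within the PV's capacity
```

If you are on a cloud provider with dynamic provisioning, you can usually skip the manual PV and just size the claim via the chart's persistence settings instead.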