How can I debug why the pod's status is CrashLoopBackOff?
I am not using minikube; I am working on a Kubernetes cluster running on AWS instances.
I followed this tutorial: https://github.com/mkjelland/spring-boot-postgres-on-k8s-sample
When I do
kubectl create -f specs/spring-boot-app.yml
and check the status with
kubectl get pods
it gives
spring-boot-postgres-sample-67f9cbc8c-qnkzg 0/1 CrashLoopBackOff 14 50m
The command below
kubectl describe pods spring-boot-postgres-sample-67f9cbc8c-qnkzg
gives
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning BackOff 3m18s (x350 over 78m) kubelet, ip-172-31-11-87 Back-off restarting failed container
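The crashed container's exit code and reason can also be read straight from the pod status:
# Last terminated state of the first container in the pod (exit code, reason, finish time)
kubectl get pod spring-boot-postgres-sample-67f9cbc8c-qnkzg -o jsonpath='{.status.containerStatuses[0].lastState.terminated}'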
The command kubectl get pods --all-namespaces gives
NAMESPACE NAME READY STATUS RESTARTS AGE
default constraintpod 1/1 Running 1 88d
default postgres-78f78bfbfc-72bgf 1/1 Running 0 109m
default rcsise-krbxg 1/1 Running 1 87d
default spring-boot-postgres-sample-667f87cf4c-858rx 0/1 CrashLoopBackOff 4 110s
default twocontainers 2/2 Running 479 89d
kube-system coredns-86c58d9df4-kr4zj 1/1 Running 1 89d
kube-system coredns-86c58d9df4-qqq2p 1/1 Running 1 89d
kube-system etcd-ip-172-31-6-149 1/1 Running 8 89d
kube-system kube-apiserver-ip-172-31-6-149 1/1 Running 1 89d
kube-system kube-controller-manager-ip-172-31-6-149 1/1 Running 1 89d
kube-system kube-flannel-ds-amd64-4h4x7 1/1 Running 1 89d
kube-system kube-flannel-ds-amd64-fcvf2 1/1 Running 1 89d
kube-system kube-proxy-5sgjb 1/1 Running 1 89d
kube-system kube-proxy-hd7tr 1/1 Running 1 89d
kube-system kube-scheduler-ip-172-31-6-149 1/1 Running 1 89d
The command kubectl logs spring-boot-postgres-sample-667f87cf4c-858rx doesn't print anything.
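Note: when a container keeps restarting, the logs of the previous instance can sometimes still be retrieved with the --previous flag:
# Output of the previous (crashed) container, if it printed anything before dying
kubectl logs spring-boot-postgres-sample-667f87cf4c-858rx --previous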
I was able to reproduce the scenario. There seems to be a connectivity issue between the app and the Postgres DB, so the app fails to start. Please find the logs below; they might help you. A connectivity-check sketch follows the stack trace.
$ kubectl get po
NAME READY STATUS RESTARTS AGE
spring-boot-postgres-sample-5d7c85d98b-qwvjr 0/1 CrashLoopBackOff 19 1h
org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'entityManagerFactory' defined in class path resource [org/springframework/boot/autoconfigure/orm/jpa/HibernateJpaAutoConfiguration.class]: Invocation of init method failed; nested exception is org.hibernate.service.spi.ServiceException: Unable to create requested service [org.hibernate.engine.jdbc.env.spi.JdbcEnvironment]
2019-05-23 10:53:01.889 ERROR 1 --- [ main] o.a.tomcat.jdbc.pool.ConnectionPool : Unable to create initial connections of pool.
org.postgresql.util.PSQLException: Connection to :5432 refused. Check that the hostname and port are correct and that the postmaster is accepting TCP/IP connections.
at org.postgresql.core.v3.ConnectionFactoryImpl.openConnectionImpl(ConnectionFactoryImpl.java:262) ~[postgresql-9.4.1212.jre7.jar!/:9.4.1212.jre7]
at org.postgresql.core.ConnectionFactory.openConnection(ConnectionFactory.java:51) ~[postgresql-9.4.1212.jre7.jar!/:9.4.1212.jre7]
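One way to confirm whether the Postgres Service is reachable from inside the cluster is a throwaway pod running pg_isready (a sketch; it assumes the Service is named postgres in the default namespace and that the cluster can pull the postgres image):
# Run pg_isready against the Service and clean the pod up afterwards
kubectl run pg-debug -it --rm --restart=Never --image=postgres -- pg_isready -h postgres -p 5432
# It should report "accepting connections" when the DB is reachable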
Why don't you:
run a dummy container (run an endless sleep command),
kubectl exec -it <pod-name> -- bash into it, and
run the program directly to look at its logs.
It's an easier form of debugging on K8s; a sketch of this approach follows.
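A minimal sketch of that approach (the jar path inside the image is an assumption; adjust names to your deployment):
# Temporarily override the container command so the pod stays up instead of crash-looping
kubectl patch deployment spring-boot-postgres-sample --type=json \
  -p='[{"op":"add","path":"/spec/template/spec/containers/0/command","value":["sleep","infinity"]}]'
# Shell into the new pod and start the app by hand to see its output
kubectl exec -it <pod-name> -- /bin/bash
java -jar /app.jar   # assumed jar location; use whatever path is baked into your image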
First of all, I fixed my postgres deployment: it was failing with "pod has unbound PersistentVolumeClaims", and I fixed that error by following this post: pod has unbound PersistentVolumeClaims.
So now my postgres deployment is running.
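For reference, whether the claim is actually bound can be verified with:
# postgres-pv-claim should show STATUS "Bound"; describe shows binding events if it is still Pending
kubectl get pvc
kubectl describe pvc postgres-pv-claim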
kubectl logs spring-boot-postgres-sample-67f9cbc8c-qnkzg doesn't print anything, which suggests something is wrong in the config file. kubectl describe pod spring-boot-postgres-sample-67f9cbc8c-qnkzg states that the container is terminated and the reason is Completed. I worked around it by keeping the container running indefinitely, adding
# Just sleep forever
command: [ "sleep" ]
args: [ "infinity" ]
So now my deployment is running. Then I exposed my service with
kubectl expose deployment spring-boot-postgres-sample --type=LoadBalancer --port=8080
but I couldn't get an External-IP, so I ran
kubectl patch svc <svc-name> -n <namespace> -p '{"spec": {"type": "LoadBalancer", "externalIPs":["172.31.71.218"]}}'
So I got my External-IP as "172.31.71.218".
But now the problem is that curl http://172.31.71.218:8080/ times out.
Did I do anything wrong?
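For reference, two checks that narrow down whether the Service has a working backend (the pod name is a placeholder; the /dev/tcp probe assumes bash is available in the image, which it is here since the container command uses /bin/bash):
# Does the Service have any endpoints behind it?
kubectl get endpoints spring-boot-postgres-sample
# Is anything actually listening on 8080 inside the pod? (bash /dev/tcp probe, no extra tools needed)
kubectl exec -it <pod-name> -- bash -c '(echo > /dev/tcp/localhost/8080) && echo "8080 open" || echo "nothing listening on 8080"'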
Here is my deployment.yml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: spring-boot-postgres-sample
  namespace: default
spec:
  replicas: 1
  template:
    metadata:
      name: spring-boot-postgres-sample
      labels:
        app: spring-boot-postgres-sample
    spec:
      containers:
      - name: spring-boot-postgres-sample
        command: [ "/bin/bash", "-ce", "tail -f /dev/null" ]
        env:
        - name: POSTGRES_USER
          valueFrom:
            configMapKeyRef:
              name: postgres-config
              key: postgres_user
        - name: POSTGRES_PASSWORD
          valueFrom:
            configMapKeyRef:
              name: postgres-config
              key: postgres_password
        - name: POSTGRES_HOST
          valueFrom:
            configMapKeyRef:
              name: hostname-config
              key: postgres_host
        image: <mydockerHUbaccount>/spring-boot-postgres-on-k8s:v1
Here is my postgres.yml
apiVersion: v1
kind: ConfigMap
metadata:
  name: postgres-config
  namespace: default
data:
  postgres_user: postgresuser
  postgres_password: password
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: postgres
spec:
  template:
    metadata:
      labels:
        app: postgres
    spec:
      volumes:
      - name: postgres-storage
        persistentVolumeClaim:
          claimName: postgres-pv-claim
      containers:
      - image: postgres
        name: postgres
        env:
        - name: POSTGRES_USER
          valueFrom:
            configMapKeyRef:
              name: postgres-config
              key: postgres_user
        - name: POSTGRES_PASSWORD
          valueFrom:
            configMapKeyRef:
              name: postgres-config
              key: postgres_password
        - name: PGDATA
          value: /var/lib/postgresql/data/pgdata
        ports:
        - containerPort: 5432
          name: postgres
        volumeMounts:
        - name: postgres-storage
          mountPath: /var/lib/postgresql/data
---
apiVersion: v1
kind: Service
metadata:
  name: postgres
spec:
  type: ClusterIP
  ports:
  - port: 5432
  selector:
    app: postgres
Here is how I created the hostname-config ConfigMap:
kubectl create configmap hostname-config --from-literal=postgres_host=$(kubectl get svc postgres -o jsonpath="{.spec.clusterIP}")
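To double-check the value that ended up in that ConfigMap:
# Print the stored postgres_host (should be the ClusterIP of the postgres Service)
kubectl get configmap hostname-config -o jsonpath='{.data.postgres_host}'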