I am trying to deploy an app to a Kubernetes cluster via a Helm chart. Every time I deploy the app I get:
"Liveness probe failed: Get http://172.17.0.7:80/: dial tcp 172.17.0.7:80: connect: connection refused" and "Readiness probe failed: Get http://172.17.0.7:80/: dial tcp 172.17.0.7:80: connect: connection refused".
This is my deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "mychart.fullname" . }}
  labels:
    {{- include "mychart.labels" . | nindent 4 }}
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      {{- include "mychart.selectorLabels" . | nindent 6 }}
  template:
    metadata:
      labels:
        {{- include "mychart.selectorLabels" . | nindent 8 }}
    spec:
      {{- with .Values.imagePullSecrets }}
      imagePullSecrets:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      serviceAccountName: {{ include "mychart.serviceAccountName" . }}
      securityContext:
        {{- toYaml .Values.podSecurityContext | nindent 8 }}
      containers:
        - name: {{ .Chart.Name }}
          securityContext:
            {{- toYaml .Values.securityContext | nindent 12 }}
          image: nikovlyubomir/docker-spring-boot:latest
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          ports:
            - name: http
              containerPort: 80
              protocol: TCP
          livenessProbe:
            initialDelaySeconds: 200
            httpGet:
              path: /
              port: 80
          readinessProbe:
            initialDelaySeconds: 200
            httpGet:
              path: /
              port: http
          resources:
            {{- toYaml .Values.resources | nindent 12 }}
      {{- with .Values.nodeSelector }}
      nodeSelector:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with .Values.affinity }}
      affinity:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with .Values.tolerations }}
      tolerations:
        {{- toYaml . | nindent 8 }}
      {{- end }}
I read that a possible solution might be to increase initialDelaySeconds in both probes, but that did not resolve my issue.
Any opinion?
Connection refused means the container is not listening on port 80. Also, when you set up an HTTP readiness or liveness probe like the one below:
apiVersion: v1
kind: Pod
metadata:
  labels:
    test: liveness
  name: liveness-http
spec:
  containers:
    - name: liveness
      image: k8s.gcr.io/liveness
      args:
        - /server
      livenessProbe:
        httpGet:
          path: /
          port: 80
        initialDelaySeconds: 3
        periodSeconds: 3
To perform a probe, the kubelet sends an HTTP GET request to the server that is running in the container and listening on port 80. If the handler for the server's / path returns a success code, the kubelet considers the container alive and healthy. If the handler returns a failure code, the kubelet kills the container and restarts it.
So your application does not have a handler that returns a success code for the path /. Since it is a Spring Boot app, and assuming you have the Spring Boot Actuator dependency in your pom.xml, you can change the probe path to /actuator/health, which should solve the issue.
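For reference, a minimal sketch of how the probe blocks in deployment.yaml might look with that change (this assumes spring-boot-starter-actuator is actually on the classpath and that the probes target the port the app really listens on):

          livenessProbe:
            httpGet:
              path: /actuator/health   # exposed by Spring Boot Actuator
              port: http               # the named container port defined under ports
            initialDelaySeconds: 60    # tune to your app's startup time
          readinessProbe:
            httpGet:
              path: /actuator/health
              port: http
            initialDelaySeconds: 60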
Since I can pull the image, I gave it a try:
$ docker run -d nikovlyubomir/docker-spring-boot:latest
9ac42a1228a610ae424217f9a2b93cabfe1d3141fe49e0665cc71cb8b2e3e0fd
I checked the logs:
$ docker logs 9ac
...
2020-03-08 02:02:30.552 INFO 1 --- [ main] o.s.b.w.embedded.tomcat.TomcatWebServer : Tomcat started on port(s): 1993 (http) with context path ''
It seems the application starts on port 1993, not 80.
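As an aside, if you want to keep the probes and containerPort on 80, you could force Spring Boot onto that port through the environment. A minimal sketch for the container spec in deployment.yaml, assuming the image does not hard-code server.port in its own configuration:

          env:
            - name: SERVER_PORT   # Spring Boot's relaxed binding maps this to server.port
              value: "80"

Keep in mind that binding to a port below 1024 requires the container process to run as root or have the NET_BIND_SERVICE capability.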
Then I checked the port and connection inside the container:
$ docker exec -ti 9ac bash
root@9ac42a1228a6:/# curl localhost:1993
{"timestamp":"2020-03-08T02:03:12.104+0000","status":404,"error":"Not Found","message":"No message available","path":"/"}
root@9ac42a1228a6:/# curl localhost:1993/actuator/health
{"timestamp":"2020-03-08T02:04:01.348+0000","status":404,"error":"Not Found","message":"No message available","path":"/actuator/health"}
root@9ac42a1228a6:/# curl localhost:80
curl: (7) Failed to connect to localhost port 80: Connection refused
root@9ac42a1228a6:/# curl localhost:80/actuator/health
curl: (7) Failed to connect to localhost port 80: Connection refused
So make sure the probe path (/ or whatever you configure) actually returns a success code, and that the probe targets a port the application is really listening on (1993 here, not 80).
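For example, a sketch of the relevant part of the container spec with the port corrected to 1993 and TCP probes instead of HTTP ones (a TCP check only requires the port to accept connections, which avoids depending on / returning a success code, since it returns 404 in this image; the delay values are placeholders to tune):

          ports:
            - name: http
              containerPort: 1993   # the port Tomcat actually binds, per the logs above
              protocol: TCP
          livenessProbe:
            tcpSocket:
              port: http            # succeeds once the port accepts TCP connections
            initialDelaySeconds: 30
          readinessProbe:
            tcpSocket:
              port: http
            initialDelaySeconds: 30

If you add the Actuator dependency to the image instead, switching these back to httpGet probes on /actuator/health would give a more meaningful health check.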