I am adding a RollingUpdate strategy and readiness/liveness probes to a Django deployment. I created a /healthz endpoint that simply returns "OK" with a 200 response code.
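For reference, the view behind it is trivial. This is a minimal sketch of what it looks like (the module layout and names here are assumptions; the real code is equivalent), mounted under the project's /v1/ prefix so it resolves to /v1/healthz/:

# views.py -- plain health check: no database, cache, or external calls
from django.http import HttpResponse

def healthz(request):
    # Returns the body "OK" with an HTTP 200 status code.
    return HttpResponse("OK", status=200)

# urls.py -- included under the project's /v1/ prefix
from django.urls import path

from .views import healthz

urlpatterns = [
    path("healthz/", healthz),
]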
The endpoint works as expected when I query it manually. However, when Kubernetes tries to reach it, the request periodically times out:
Readiness probe failed: Get http://10.40.2.14:8080/v1/healthz/: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
But the Django access log clearly shows that Kubernetes is indeed querying the endpoint periodically, and that it returns data with a 200 response:
[api-prod-64bdff8d4-lcbtf api-prod] [pid: 14|app: 0|req: 34/86] 10.40.2.1 () {30 vars in 368 bytes} [Wed Jul 3 12:10:18 2019] GET /v1/healthz/ => generated 15 bytes in 3 msecs (HTTP/1.1 200) 5 headers in 149 bytes (1 switches on core 0)
[api-prod-64bdff8d4-lcbtf api-prod] [pid: 13|app: 0|req: 11/87] 10.40.2.1 () {30 vars in 368 bytes} [Wed Jul 3 12:10:52 2019] GET /v1/healthz/ => generated 15 bytes in 2 msecs (HTTP/1.1 200) 5 headers in 149 bytes (1 switches on core 0)
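As far as I understand, the kubelet sends the probe straight to the pod IP on the containerPort (8080 here), not through the Service, and httpGet probes default to timeoutSeconds: 1. A rough way to reproduce what the probe does is to run something like the following from another pod in the cluster (just a sketch: requests is an assumed dependency, and the IP is the one from the error message):

# probe_check.py -- approximates the kubelet's HTTP GET readiness probe
import requests

POD_IP = "10.40.2.14"  # pod IP taken from the probe failure above

try:
    # timeout=1 mirrors the probe's default timeoutSeconds of 1
    resp = requests.get(f"http://{POD_IP}:8080/v1/healthz/", timeout=1)
    print(resp.status_code, resp.text)
except requests.exceptions.Timeout:
    print("timed out waiting for headers, same as the probe")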
This is my YAML file:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: api-prod
  labels:
    app: api-prod
spec:
  replicas: 1
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 25%
      maxSurge: 1
  template:
    metadata:
      labels:
        app: api-prod
    spec:
      # Minikube Local Pull Secrets
      imagePullSecrets:
        - name: gcr-json-key
      containers:
        - name: api-prod
          image: gcr.io/example/api-prod
          imagePullPolicy: IfNotPresent
          readinessProbe:
            httpGet:
              path: /v1/healthz/
              port: 8080
            initialDelaySeconds: 10
            periodSeconds: 60
            successThreshold: 1
          livenessProbe:
            httpGet:
              path: /v1/healthz/
              port: 8080
            initialDelaySeconds: 10
            periodSeconds: 60
            successThreshold: 1
          env:
            # [START cloudsql_secrets]
            - name: DATABASE_USER
              valueFrom:
                secretKeyRef:
                  name: cloudsql-prod
                  key: username
            - name: DATABASE_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: cloudsql-prod
                  key: password
            # [END cloudsql_secrets]
          ports:
            - containerPort: 8080
        # [START proxy_container]
        - image: gcr.io/cloudsql-docker/gce-proxy:1.05
          name: cloudsql-proxy
          command: ["/cloud_sql_proxy", "--dir=/cloudsql",
                    "-instances=redacted:somewhere:somedb=tcp:5432",
                    "-credential_file=/secrets/cloudsql/credentials.json"]
          volumeMounts:
            - name: cloudsql-oauth-credentials
              mountPath: /secrets/cloudsql
              readOnly: true
            - name: ssl-certs
              mountPath: /etc/ssl/certs
            - name: cloudsql
              mountPath: /cloudsql
        # [END proxy_container]
      # [START volumes]
      volumes:
        - name: cloudsql-oauth-credentials
          secret:
            secretName: cloudsql-oauth-credentials
        - name: ssl-certs
          hostPath:
            path: /etc/ssl/certs
        - name: cloudsql
          emptyDir: {}
      # [END volumes]
# [END kubernetes_deployment]
---
apiVersion: v1
kind: Service
metadata:
  name: api-prod
  labels:
    app: api-prod
spec:
  type: ClusterIP
  ports:
    - port: 80
      targetPort: 8080
  selector:
    app: api-prod
# [END service]
Where did the request go?