I have an application running in Kubernetes as a StatefulSet
that starts 2 pods. It has configured a liveness probe and a readiness probe.
The liveness probe calls a simple /health endpoint that responds once the server is done loading.
The readiness probe waits for a start-up job to complete. The job can take several minutes in some cases, and only when it finishes is the API of the application ready to start accepting requests.
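For reference, the probes look roughly like this (the port and the start-up check below are placeholders, not my exact config):

livenessProbe:
  httpGet:
    path: /health   # responds once the server is done loading
    port: 8080      # placeholder port
  periodSeconds: 10
readinessProbe:
  exec:
    # placeholder check: succeeds only once the start-up job has completed
    command: ["/bin/sh", "-c", "test -f /tmp/startup-job-done"]
  periodSeconds: 15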
Even while the API is not available, my app also runs side jobs that don't depend on it, and I expect those to keep running while the startup is happening too.
Is it possible to force the Kubernetes deployment to complete and deploy 2 pods, even when the readiness probe is still not passing?
From the docs I get that the only effect of a readiness probe not passing is that the current pod won't be included as available in the load balancer service (which is actually the only effect that I want):
If the readiness probe fails, the endpoints controller removes the Pod's IP address from the endpoints of all Services that match the Pod.
However, I am also seeing that the deployment never finishes, since the first pod's readiness probe is not passing and the second pod is never created.
kubectl rollout restart statefulset/pod
kubectl get pods
NAME    READY   STATUS    RESTARTS   AGE
pod-0   1/2     Running   0          28m
If a readiness probe failure always prevents the deployment from completing, is there another way to selectively expose only ready pods in the load balancer, while not marking them as unready during the deployment?
Thanks in advance!
Is it possible to force the Kubernetes deployment to complete and deploy 2 pods, even when the readiness probe is still not passing?
Assuming a StatefulSet is meant here rather than a Deployment, the answer is no, it's not possible by design; the most important part is the second point:
When the nginx example above is created, three Pods will be deployed in the order web-0, web-1, web-2. web-1 will not be deployed before web-0 is Running and Ready, and web-2 will not be deployed until web-1 is Running and Ready
StatefulSets - Deployment and Scaling Guarantees
If a readiness probe failure always prevents the deployment from completing, is there another way to selectively expose only ready pods in the load balancer, while not marking them as unready during the deployment?
This is by design; pods are added to service endpoints only once they are in the ready state.
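You can observe this by listing a service's endpoints; only pods whose readiness probe is passing appear as addresses (shown here with the nginx service from the example below):

kubectl get endpoints nginx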
A potential workaround can be used; at least in a simple example it does work. However, you should try it and evaluate whether this approach suits your case. It is fine to use for an initial deployment.
The StatefulSet can be started without the readiness probe included; this way the StatefulSet will start the pods one by one, each as soon as the previous one is running. The liveness probe may need initialDelaySeconds set up so Kubernetes won't restart a pod thinking it's unhealthy. Once the StatefulSet is fully up and running, you can add the readiness probe back to the StatefulSet.
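For instance, the liveness probe could be delayed roughly like this while the readiness probe is commented out (the /health path, the port and the 300-second delay are placeholders; use a delay that covers your usual start-up time):

livenessProbe:
  httpGet:
    path: /health            # placeholder endpoint
    port: 8080               # placeholder port
  initialDelaySeconds: 300   # don't start liveness checks until start-up has had time to finish
  periodSeconds: 10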
When the readiness probe is added back, Kubernetes will recreate all pods again, starting from the last one, and your application will need to start again.
The idea is to start all pods together so they are able to serve requests at roughly the same time, whereas with the readiness probe applied, only one pod would start in, say, 5 minutes, the next pod would take 5 minutes more, and so on.
Here is a simple example to see what's going on, based on an nginx webserver and a sleep 30 command which, once the readiness probe is set up, makes Kubernetes consider the pod not ready.
1. Apply the headless service.
2. Comment out the readiness probe in the StatefulSet and apply the manifest.
3. Observe that each pod is created right after the previous pod is running and ready (without a readiness probe, a pod counts as ready as soon as its containers are running).
4. Uncomment the readiness probe and apply the manifest again.
5. Kubernetes will recreate all pods starting from the last one, this time waiting for the readiness probe to pass before flagging a pod as running and ready.
It is very convenient to use this command to watch the progress:
watch -n1 kubectl get pods -o wide
nginx-headless-svc.yaml:

apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  ports:
  - port: 80
    name: web
  clusterIP: None
  selector:
    app: nginx
nginx-statefulset.yaml:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  serviceName: "nginx"
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
          name: web
        command: ["/bin/bash", "-c"]
        args: ["sleep 30 ; echo sleep completed ; nginx -g \"daemon off;\""]
        readinessProbe:
          tcpSocket:
            port: 80
          initialDelaySeconds: 1
          periodSeconds: 5
Thanks to @jesantana for this much easier solution: if all pods have to be scheduled at once and it's not necessary to wait for pod readiness, .spec.podManagementPolicy can be set to Parallel. See Pod Management Policies.
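For completeness, a minimal sketch of how that would look in the StatefulSet manifest above (only the relevant part of the spec is shown; everything else stays the same):

spec:
  podManagementPolicy: Parallel   # launch and terminate pods in parallel instead of one by one

Note that this policy only affects how pods are launched and terminated (scaling operations); rolling updates still proceed one pod at a time.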