I don't know what to test, or how to test, to make sure that my application doesn't go into an unrecoverable state while deployments are being upgraded (new versions rolled out). I understand that a Kubernetes Deployment provides rolling upgrades, which means an old pod will not be killed until the new pod is ready. But I would still assume that some gRPC requests are lost during the transition. Is there any way that I could test this?
To make your deployment image upgrades foolproof and achieve zero downtime, you need two things in your deployment file: a readiness probe and a RollingUpdate strategy.
A readiness probe is a check that Kubernetes performs to make sure your pod is ready to receive traffic. Until the pod is ready, Kubernetes will not send traffic to it. Easy! In our case, it looks like this:
readinessProbe:
  httpGet:
    path: /
    port: 80
  initialDelaySeconds: 5
  periodSeconds: 5
  successThreshold: 1
We are basically telling Kubernetes to send an HTTP GET request to the path / every five seconds, and if it succeeds, to mark the pod ready and start sending traffic to it. (Since your application speaks gRPC, make sure the container actually answers plain HTTP on port 80; if it only serves gRPC you may need an exec probe or, on newer Kubernetes versions, a native grpc probe instead.)
The other thing you should know about is the RollingUpdate strategy, which looks like this:
strategy:
  type: RollingUpdate
  rollingUpdate:
    maxUnavailable: 0
    maxSurge: 1
It basically tells Kubernetes that there should be zero unavailable pods while deploying (maxUnavailable: 0) and that at most one extra pod is created at a time (maxSurge: 1).
So your deployment YAML should look like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-deployment
spec:
  selector:
    matchLabels:
      app: myapp
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0
      maxSurge: 1
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp-container
        image: prafull/myapp:1
        imagePullPolicy: Always
        ports:
        - containerPort: 80
          protocol: TCP
        readinessProbe:
          httpGet:
            path: /
            port: 80
          initialDelaySeconds: 5
          periodSeconds: 5
          successThreshold: 1
If you now upgrade your image using kubectl apply -f deployment.yaml, there will be no downtime for your requests.
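As for actually testing this (your original question): keep hammering the service with gRPC calls from a client while the rollout is in progress and count the failures. Below is a minimal sketch in Python using the standard gRPC health-checking service; it assumes your server registers that service and that it is reachable at myapp-service:80 (both are assumptions, substitute your own stub, method, and address). Run it, trigger the rollout with kubectl apply -f deployment.yaml (you can watch the rollout with kubectl rollout status deployment/myapp-deployment), and check that the failure counter stays at zero.

# Sketch of a zero-downtime test: keep calling the service during the rollout
# and count how many RPCs fail. Requires the grpcio and grpcio-health-checking
# packages, and assumes the server registers the standard gRPC health service.
import time

import grpc
from grpc_health.v1 import health_pb2, health_pb2_grpc

TARGET = "myapp-service:80"  # hypothetical address of your Service; replace it

def main() -> None:
    ok = failed = 0
    channel = grpc.insecure_channel(TARGET)
    stub = health_pb2_grpc.HealthStub(channel)

    deadline = time.time() + 120  # keep sending requests for two minutes
    while time.time() < deadline:
        try:
            stub.Check(health_pb2.HealthCheckRequest(service=""), timeout=1)
            ok += 1
        except grpc.RpcError:
            failed += 1  # a failed RPC means the rollout dropped a request
        time.sleep(0.05)

    print(f"successful: {ok}, failed: {failed}")

if __name__ == "__main__":
    main()

If failed stays at zero across a few rollouts, your readiness probe and RollingUpdate settings are doing their job; if you do see failures, they usually point at pods being marked ready too early or at old pods being killed while requests are still in flight.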