I have a webapp running on Kubernetes with 2 pods.

I edit my deployment with a new image version, from webapp:v1 to webapp:v2.

I run into an issue during the rollout:

- podA is v2
- podB is still v1
- the html is served from podA, with a <link> to styles.css
- styles.css is served from podB, with v1 styles

=> html v2 + css v1 = broken page
How can I guarantee that all subsequent requests will be served from the same pod, or from a pod running the same version as the one that served the html?
Even if you do this, you will still have problems. Especially if your app is a single-page application. Consider this:

- The user requests index.html and gets the v1 index.html, which references the v1 styles.css.
- You deploy v2 while the page is still open in the user's browser.
- The browser requests styles.css. The user gets styles.css v2. Boom, you're mixing versions, fail.

I've run into this issue in production, and it's a pain to solve. In my experience, the best solution is:

- Tag all your resources with a version suffix (styles.css -> styles-v1.css), or a hash of the file contents (styles-39cf1a0b.css). Many tools such as webpack, gulp, etc. can do this automatically.
- index.html is not tagged, but it does reference the other resources with the right tag.
- Keep the older tagged resources around when deploying, so an old index.html can still get them successfully.

With this, the above scenario now works fine!

- The user requests index.html and gets the v1 index.html.
- You deploy v2, which replaces index.html, but leaves all the js/css in place, adding new files with the new version suffix.
- The user's v1 index.html requests styles-v1.css, which loads successfully and matches the index.html version. No version mixing = good!
- A new user requests index.html and gets v2, which points to the new styles-v2.css, etc. Still no version mixing!

Doing this with Kubernetes is a bit tricky: you need to make your image build process take the files from a few older images and include them inside the new image, which is a bit strange.
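A minimal shell sketch of the content-hash tagging described above (this is what webpack, gulp, etc. automate for you); the filenames here are just illustrative:

```shell
# Create a sample stylesheet, then copy it to a content-hashed name,
# e.g. styles-39cf1a0b.css, using the first 8 hex chars of its md5 hash.
printf 'body { color: red }\n' > styles.css
hash=$(md5sum styles.css | cut -c1-8)
cp styles.css "styles-$hash.css"
echo "styles-$hash.css"
```

Because the hash changes whenever the content changes, an old index.html keeps pointing at the old hashed file, and the new index.html points at the new one.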
Another solution is to stop serving your html/css/js from a pod, and serve it from blob storage instead. (Amazon S3, Google Cloud Storage, etc). This way, a deployment is just copying all the files, which get merged with the old files, giving you the desired behavior.
Feels like your issue is related to labels and selectors... The behavior you described is unlikely (unless the selector itself is not accurate for your needs).
Let's take this flow as an example (the ingress is optional in this explanation; it's just there to picture the access to the app):
Ingress -> Service -> [endpoints] -> Pods
At item 2 (the Service), I think you are using a selector for a label that exists on both versions, for instance app: webapp. If you simply add a new label to your pods containing the version, then you can change your Service to select only the pods of the specified version (version: v1 or version: v2); this way you will no longer have the inconsistency you reported.
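A sketch of such a Service, assuming the pods carry an added version label (the names and ports are placeholders):

```yaml
# Hypothetical Service selecting only v1 pods via an extra "version" label.
apiVersion: v1
kind: Service
metadata:
  name: webapp
spec:
  selector:
    app: webapp
    version: v1   # change to v2 once the new pods are ready
  ports:
    - port: 80
      targetPort: 8080
```

The pod template in each Deployment would carry the matching labels (app: webapp plus version: v1 or version: v2).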
This doesn't look like material for a rolling upgrade. It cannot be solved by Kubernetes itself (assuming its purest, minimal form).
That said, if you use, for example, the nginx ingress controller, you could look at https://github.com/kubernetes/ingress-nginx/blob/master/docs/user-guide/nginx-configuration/annotations.md#session-affinity to keep the user on the same upstream as much as possible.
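For illustration, cookie-based session affinity via the annotations in that doc might look like this (the hostname and service name are placeholders):

```yaml
# Sketch of an Ingress with nginx session-affinity annotations, so one
# client keeps hitting the same pod via a sticky cookie.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: webapp
  annotations:
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/session-cookie-name: "route"
    nginx.ingress.kubernetes.io/session-cookie-max-age: "3600"
spec:
  rules:
    - host: webapp.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: webapp
                port:
                  number: 80
```

Note this only makes mixing less likely; a pod can still be terminated mid-session during a rollout.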
Here is a nice article about Deployment update strategies; you can consider using a Blue/Green deployment instead of Ramped.

Ramped is a slow rollout: after you patch the deployment with the new image, it creates a new ReplicaSet and slowly terminates the old ReplicaSet's pods until the new one reaches the desired replica count, so it is normal to see this versioning trouble during the rolling update.

Blue/Green, unlike the ramped strategy, runs the new version alongside the old one, and the Service is only switched over once the new version is confirmed healthy. Here you can find an example deployment for this strategy.
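A sketch of the "green" side, assuming version labels (names are illustrative): deploy webapp:v2 as a separate Deployment, and once it is healthy, repoint the Service selector from version: v1 to version: v2 so all traffic switches at once.

```yaml
# Hypothetical "green" Deployment running v2 alongside the existing v1.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp-v2
spec:
  replicas: 2
  selector:
    matchLabels:
      app: webapp
      version: v2
  template:
    metadata:
      labels:
        app: webapp
        version: v2
    spec:
      containers:
        - name: webapp
          image: webapp:v2
```

Because the switch is a single selector change rather than a gradual pod replacement, clients never see v1 and v2 pods behind the Service at the same time.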
Hope it helps!