k8s ramped deployment -- css from same pod

6/4/2019

I have a webapp running on Kubernetes on 2 pods.

I edit my deployment with a new image version, from webapp:v1 to webapp:v2.

I hit an issue during the rollout...

podA is v2
podB is still v1

html is served from podA
with a <link> to styles.css

styles.css is served from podB
with v1 styles

=> HTML v2 + CSS v1 = broken styling

How can I guarantee that all subsequent requests will be served from the same pod, or from a pod running the same version as the one that served the HTML?

-- abernier
continuous-deployment
kubernetes

4 Answers

6/9/2019

How can I guarantee that all subsequent requests will be served from the same pod, or from a pod running the same version as the one that served the HTML?

Even if you do this, you will still have problems. Especially if your app is a single-page application. Consider this:

  • User enters your website, gets index.html v1
  • You release webapp:v2. After a few minutes, all the pods are running v2.
  • The user still has the webapp opened, with index.html v1
  • The user navigates in the app. This needs to load styles.css. The user gets styles.css v2. Boom, you're mixing versions, fail.

I've run into this issue in production, and it's a pain to solve. In my experience, the best solution is:

  • Tag all the resources (CSS, JS, images, etc.) with a version suffix (e.g. styles.css -> styles-v1.css, or a hash of the file contents: styles-39cf1a0b.css). Many tools such as webpack and gulp can do this automatically.
  • index.html is not tagged, but it references the other resources with the right tag.
  • When deploying, do not delete the resources for older versions; merge them with the newest ones, so that clients that still have an old index.html can fetch them successfully.
  • Delete old resources after a few versions, or better, after a period of time passes (maybe 1 week?).
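The content-hash tagging in the first step can be sketched like this (a minimal Python sketch of what webpack/gulp do for you; the function name and file names are assumptions, not part of any tool's API):

```python
import hashlib
from pathlib import Path

def fingerprint(path: Path) -> Path:
    """Rename a file so its name includes a short hash of its contents,
    e.g. styles.css -> styles-39cf1a0b.css. Unchanged content keeps the
    same name; any edit produces a brand-new name, so old and new
    versions can coexist side by side."""
    digest = hashlib.sha256(path.read_bytes()).hexdigest()[:8]
    new_path = path.with_name(f"{path.stem}-{digest}{path.suffix}")
    path.rename(new_path)
    return new_path
```

index.html would then be rewritten at build time to reference the fingerprinted names; in practice the bundler handles that rewriting too.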

With this, the above scenario now works fine!

  • User enters your website, gets index.html v1
  • You release webapp:v2. This replaces index.html, but leaves all the js/css in place, adding new ones with the new version suffix.
  • The user still has the webapp opened, with index.html v1
  • The user navigates in the app. This needs to load styles-v1.css, which loads successfully and matches the index.html version. No version mixing = good!
  • Next time the user reloads the page, they get index.html v2, which points to the new styles-v2.css, etc. Still no version mixing!

Doing this with Kubernetes is a bit tricky: you need your image build process to take the files from a few older images and include them in the new image, which is a bit awkward.

Another solution is to stop serving your HTML/CSS/JS from a pod, and serve it from blob storage instead (Amazon S3, Google Cloud Storage, etc.). This way, a deployment is just copying all the files, which get merged with the old ones, giving you the desired behavior.
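The "deployment is just a copy that merges" idea looks roughly like this (a local-filesystem sketch; with S3 or GCS you would get the same effect by copying/syncing without any delete step — directory names here are assumptions):

```python
import shutil
from pathlib import Path

def deploy(build_dir: Path, bucket_dir: Path) -> None:
    """Copy a new build into the 'bucket', overwriting index.html
    but leaving older fingerprinted assets in place (nothing is
    ever deleted here, so old clients keep working)."""
    bucket_dir.mkdir(parents=True, exist_ok=True)
    for src in build_dir.rglob("*"):
        if src.is_file():
            dest = bucket_dir / src.relative_to(build_dir)
            dest.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(src, dest)  # merge: overwrite same names, keep the rest
```

A separate cleanup job would prune assets older than your retention window (the "delete after a week" step above).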

-- Dirbaio
Source: StackOverflow

6/11/2019

It feels like your issue is related to labels and selectors... The behavior you describe is unlikely (unless the selector itself is not accurate for your needs).

Let's take this flow as an example (the Ingress is optional in this explanation; it's just there to picture access to the app):

Ingress -> Service -> [endpoints] -> Pods

  1. The Ingress routes to the defined Service;
  2. The Service has a type and a selector, which is the rule used to generate the endpoints, based on the labels of the pods you want to route requests to;
  3. The endpoints then represent the internal IP addresses of your pods.

Regarding item 2: I think you are using a selector for a label that exists on both versions, for instance app: webapp. If you simply add a label to your pods containing the version, you can change your Service to select only pods of a specific version (version: v1 or version: v2). That way you will no longer see the inconsistency you reported.
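As a sketch, such a version-pinned Service could look like this (names, labels, and ports are assumptions for illustration):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: webapp
spec:
  selector:
    app: webapp
    version: v2   # only pods labeled version: v2 receive traffic
  ports:
    - port: 80
      targetPort: 8080
```

The pod template in each Deployment would carry the matching `version: v1` or `version: v2` label.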

-- gonzalesraul
Source: StackOverflow

6/5/2019

This doesn't look like a good fit for a rolling upgrade. It cannot be solved by Kubernetes itself (at least in its purest, minimal form).

That said, if you use, for example, the NGINX ingress controller, you could look at https://github.com/kubernetes/ingress-nginx/blob/master/docs/user-guide/nginx-configuration/annotations.md#session-affinity to keep a user on the same upstream as much as possible.
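A minimal sketch of cookie-based session affinity with ingress-nginx (the Ingress name, host, and service are assumptions; the annotations are the ones documented at the link above):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: webapp
  annotations:
    # pin each client to one upstream pod via a routing cookie
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/session-cookie-name: "route"
spec:
  rules:
    - host: webapp.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: webapp
                port:
                  number: 80
```

Note this only makes same-pod routing likely, not guaranteed: if the pinned pod is terminated during the rollout, the client is re-routed to a surviving pod, possibly of the other version.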

-- Radek 'Goblin' Pieczonka
Source: StackOverflow

6/5/2019

There is a nice article about Deployment update strategies; you may want to consider using a Blue/Green deployment rather than Ramped.

Ramped is a slow rollout: after the Deployment is patched with the new image, it creates a new ReplicaSet and, until that reaches the desired replica count, slowly terminates the old ReplicaSet's pods. So it is normal to hit this versioning trouble in the middle of a rolling update.

Blue/Green, unlike the Ramped strategy, only switches the Service over to the new version once the new version is confirmed healthy. There you can also find an example deployment for this strategy.
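A Blue/Green setup can be sketched as a second Deployment running alongside the first (names, labels, and replica counts are assumptions):

```yaml
# "Green" deployment: runs v2 alongside the existing v1 Deployment.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp-v2
spec:
  replicas: 2
  selector:
    matchLabels:
      app: webapp
      version: v2
  template:
    metadata:
      labels:
        app: webapp
        version: v2
    spec:
      containers:
        - name: webapp
          image: webapp:v2
# Once webapp-v2 reports healthy, change the Service selector from
# version: v1 to version: v2 so all traffic cuts over at once,
# then scale down or delete the v1 Deployment.
```

Because traffic moves in one step rather than pod by pod, there is no window where HTML and CSS are served by different versions.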

Hope it helps!

-- coolinuxoid
Source: StackOverflow