Frontend app shows a blank page if I scale up the Kubernetes deployment to 3 replicas

2/2/2019

I have a frontend application that works perfectly fine when I have just one instance of it running in a Kubernetes cluster. But when I scale the deployment up to 3 replicas, it shows a blank page on the first load, and only after a refresh does the page load. As soon as I scale the app back down to 1, it starts loading fine again. Here is what the browser console prints.

hub.xxxxx.me/:1 Refused to execute script from 'https://hub.xxxxxx.me/static/js/main.5a4e61df.js' because its MIME type ('text/html') is not executable, and strict MIME type checking is enabled.

Adding the screenshot as well. Any ideas what might be the cause? I suspect it is an infrastructure issue, since it happens only when I scale the application.

One thing I noticed is that 2 pods have a different JS file than the other pod.

2 pods have this file - build/static/js/main.b6aff941.js

The other pod has this file - build/static/js/main.5a4e61df.js

I think the mismatch is causing the problem. Any idea how to fix this mismatch issue so that the pods always have the same build?

-- Anshul Tripathi
kops
kubernetes
kubernetes-ingress
reactjs

1 Answer

2/3/2019

I think the mismatch is causing the problem. Any idea how to fix this mismatch issue so that the pods always have the same build?

Yes, and this is actually pretty common with builds that emit content-hashed resource names like that. You won't want to use the traditional rolling-update mechanism, because your deployment is closer to a blue-green one: only one "family" of Pods should be in service at a time, otherwise the HTML is served from Pod 1 but the subsequent request for the JavaScript lands on Pod 2 and returns a 404.
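
For context, here is a minimal sketch of the default behaviour that produces that mix, assuming a Deployment named frontend (the name and image tag are illustrative, not taken from the question): the default RollingUpdate strategy keeps Pods from the old and the new ReplicaSet behind the same Service at the same time, so the HTML and the hashed bundle can come from different builds.

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: frontend                  # assumed name
    spec:
      replicas: 3
      strategy:
        type: RollingUpdate           # the default: old and new Pods serve traffic together
        rollingUpdate:
          maxSurge: 25%
          maxUnavailable: 25%
      selector:
        matchLabels:
          app: frontend
      template:
        metadata:
          labels:
            app: frontend
        spec:
          containers:
          - name: frontend
            image: registry.example.com/frontend:v2   # illustrative image
            ports:
            - containerPort: 80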

There is also the very real risk of a browser holding a cached copy of the HTML, but Kubernetes can't help you with that by itself.
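
As an aside, the usual mitigation for that cached-HTML risk lives in the web server rather than in Kubernetes. A rough sketch, assuming nginx serves the React build inside the container (the paths are illustrative): never let the browser cache the HTML shell, and cache the content-hashed assets aggressively.

    # never cache the HTML shell, so it always references the current bundle
    location = /index.html {
        add_header Cache-Control "no-cache, must-revalidate";
    }

    # the hashed filenames under /static/ are safe to cache long-term
    location /static/ {
        add_header Cache-Control "public, max-age=31536000, immutable";
    }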

One pretty reasonable solution is to scale the Deployment down to one replica, do the image patch, wait for the rollout to come up healthy, then scale back up, so there is only one source of truth for the application running in the cluster at a time. A rollback would look very similar: scale to 1, roll back the Deployment, scale back up.
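
A sketch of what that might look like with kubectl, assuming the Deployment is named frontend and normally runs 3 replicas (the name and image tag are illustrative):

    # scale down so there is a single source of truth
    kubectl scale deployment/frontend --replicas=1

    # patch in the new image and wait for the rollout to finish
    kubectl set image deployment/frontend frontend=registry.example.com/frontend:v2
    kubectl rollout status deployment/frontend

    # once the single replica is healthy, scale back up
    kubectl scale deployment/frontend --replicas=3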

An alternative mechanism would be to use label patching, to atomically switch the Service (and presumably thus the Ingress) over to the new Pods all at once, but that would require having multiple copies of the application in the cluster at the same time, which for a front-end app is likely more trouble than it's worth.
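
For completeness, a rough sketch of that label-patching approach, assuming a Service named frontend whose selector includes a hypothetical track label, with the old Pods labelled track: blue and the new ones track: green:

    # assume the Service currently selects app=frontend,track=blue (the old Pods)
    # once the new Pods (labelled track=green) are Ready, flip the selector atomically:
    kubectl patch service frontend -p '{"spec":{"selector":{"app":"frontend","track":"green"}}}'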

-- mdaniel
Source: StackOverflow