I'm experiencing an issue where an image I'm running as part of a Kubernetes deployment is behaving differently from the expected and consistent behavior of the same image run with `docker run <...>`. My understanding of the main purpose of containerizing a project is that it will always run the same way, regardless of the host environment (ignoring the influence of the user and of outside data). Is this wrong?
Without going into too much detail about my specific problem (since I feel the solution is likely far too specific to be of help to anyone else on SO, and because I've already detailed it here), I'm curious whether someone can detail possible reasons why an image might run differently in a Kubernetes environment than it does locally through Docker.
The general answer to why they're different is resources, but the real answer is that they should behave identically given identical resources.
Kubernetes uses `docker` for its container runtime, at least in most cases I've seen. There are some other runtimes (`cri-o` and `rkt`) that are less widely adopted, so using those may also contribute to variance in how things work.
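If you want to confirm which runtime your cluster is actually using, one option is the official `kubernetes` Python client. A minimal sketch (assumes the package is installed and a working kubeconfig; on some clusters you'd use `load_incluster_config()` instead):

```python
from kubernetes import client, config

# Load credentials from ~/.kube/config (local use).
config.load_kube_config()

v1 = client.CoreV1Api()
for node in v1.list_node().items:
    # containerRuntimeVersion looks like "docker://19.3.x" or "cri-o://1.x"
    info = node.status.node_info
    print(node.metadata.name, info.container_runtime_version)
```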
On your local `docker` it's pretty easy to mount things like directories (volumes) into the image, and you can populate the directory with some content. Doing the same thing on `k8s` is more difficult, and probably involves more complicated mappings, persistent volumes, or an init container.
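One way this bites in practice: if the app reads configuration from a mounted path, an empty or unpopulated mount on `k8s` silently changes behavior. A minimal sketch of that failure mode (the `/config` path and `settings.json` name are just illustrative, not from your setup):

```python
import json
import os

CONFIG_PATH = "/config/settings.json"  # hypothetical mount point

# Defaults used when the volume isn't mounted or wasn't populated.
DEFAULTS = {"workers": 1, "debug": False}

def load_settings():
    # Under `docker run -v ./config:/config` this file usually exists;
    # on k8s an emptyDir or unpopulated persistent volume leaves it
    # missing, so the app quietly falls back to different defaults.
    if os.path.exists(CONFIG_PATH):
        with open(CONFIG_PATH) as f:
            return {**DEFAULTS, **json.load(f)}
    return dict(DEFAULTS)

print(load_settings())
```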
Running `docker` on your laptop and `k8s` on a server somewhere may give you different hardware resources, for example:

- different amounts of available memory
- different numbers of CPU cores
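You can see the mismatch directly from inside each container. A minimal sketch (assumes cgroup v1, which is what docker-era nodes typically use; the paths differ under cgroup v2):

```python
import os

# os.cpu_count() reports the host's core count, even when the
# container is CPU-limited via cgroups -- which is why the same
# image can "see" different values on your laptop vs. a k8s node.
print("visible cores:", os.cpu_count())

# The cgroup v1 CPU quota, if one is set (-1 means unlimited).
try:
    with open("/sys/fs/cgroup/cpu/cpu.cfs_quota_us") as f:
        quota = int(f.read())
    with open("/sys/fs/cgroup/cpu/cpu.cfs_period_us") as f:
        period = int(f.read())
    if quota > 0:
        print("cgroup CPU limit:", quota / period, "cores")
    else:
        print("no cgroup CPU quota set")
except FileNotFoundError:
    print("cgroup v1 CPU files not found (maybe cgroup v2?)")
```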
The last one is most likely what you're seeing: `flask` is probably looking up the core count on each system, seeing two different values, and so running two different thread / worker counts.
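If that's the cause, the usual fix is to stop deriving the count from the environment and pin it explicitly so `docker` and `k8s` agree. A sketch of that idea (the `WEB_CONCURRENCY` variable is a common convention in gunicorn-style setups, not necessarily something your app reads today):

```python
import multiprocessing
import os

# Derive a worker count, but let an env var override it so both
# environments can be forced to an identical value.
default_workers = multiprocessing.cpu_count() * 2 + 1  # common gunicorn-style heuristic
workers = int(os.environ.get("WEB_CONCURRENCY", default_workers))
print("workers:", workers)
```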