Kubernetes service seems to go to multiple containers, despite only one running container

3/23/2020

I have built a small, single-user, internal service that stores its data in a single JSON blob on disk (it uses tinydb), so the service is not designed to run on multiple nodes without risking data consistency. Unfortunately, when I send API requests I get back inconsistent results: it appears the API is writing to different on-disk files, so calling the API twice for a list of objects returns one of two different versions.

I deployed the service to Google Cloud: I packaged it into a container and pushed the image to gcr.io. I created a cluster with a single node and deployed the Docker image to the cluster. I then created a service to expose port 80. (I followed the tutorial here: https://cloud.google.com/kubernetes-engine/docs/tutorials/hello-app)
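Roughly, the steps from that tutorial look like this. The project ID, image name, and cluster name below are placeholders, and this assumes the app listens on port 8080 inside the container:

# Build and push the image (substitute your own project and image names)
docker build -t gcr.io/PROJECT_ID/my-app:v1 .
docker push gcr.io/PROJECT_ID/my-app:v1

# Create a one-node cluster and deploy the image to it
gcloud container clusters create cluster-1 --num-nodes=1
kubectl create deployment my-app --image=gcr.io/PROJECT_ID/my-app:v1

# Expose the deployment on port 80
kubectl expose deployment my-app --type=LoadBalancer --port=80 --target-port=8080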

I confirmed that only a single node and a single pod were running:

kubectl get pods
NAME                        READY   STATUS    RESTARTS   AGE
XXXXX-2-69db8f8765-8cdkd   1/1     Running   0          28m
kubectl get nodes
NAME                                       STATUS   ROLES    AGE   VERSION
gke-cluster-1-default-pool-4f369c90-XXXX   Ready    <none>   28m   v1.14.10-gke.24

I also checked whether multiple containers might be running in the pod, but only one container of my app seems to be running (my app is the first one listed, with the XXXXX):

kubectl get pods --all-namespaces
NAMESPACE     NAME                                                        READY   STATUS    RESTARTS   AGE
default       XXXXX-69db8f8765-8cdkd                                   1/1     Running   0          31m
kube-system   event-exporter-v0.2.5-7df89f4b8f-x6v9p                      2/2     Running   0          31m
kube-system   fluentd-gcp-scaler-54ccb89d5-p9qgl                          1/1     Running   0          31m
kube-system   fluentd-gcp-v3.1.1-bmxnh                                    2/2     Running   0          31m
kube-system   heapster-gke-6f86bf7b75-pvf45                               3/3     Running   0          29m
kube-system   kube-dns-5877696fb4-sqnw6                                   4/4     Running   0          31m
kube-system   kube-dns-autoscaler-8687c64fc-nm4mz                         1/1     Running   0          31m
kube-system   kube-proxy-gke-cluster-1-default-pool-4f369c90-7g2h         1/1     Running   0          31m
kube-system   l7-default-backend-8f479dd9-9jsqr                           1/1     Running   0          31m
kube-system   metrics-server-v0.3.1-5c6fbf777-vqw5b                       2/2     Running   0          31m
kube-system   prometheus-to-sd-6rgsm                                      2/2     Running   0          31m
kube-system   stackdriver-metadata-agent-cluster-level-7bd5779685-nbj5n   2/2     Running   0          30m
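For reference, two other ways to dig into this (the pod and service names here are placeholders): listing the containers declared in the pod's spec, and checking which pod IPs the Service is actually routing traffic to.

# List the containers inside a specific pod
kubectl get pod XXXXX-69db8f8765-8cdkd -o jsonpath='{.spec.containers[*].name}'

# Show the endpoints (pod IPs) behind the service
kubectl get endpoints my-app

If the Service were actually load-balancing across more than one backend, the endpoints list would show more than one pod IP.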

Any thoughts on how to fix this? I know "use a real database" is a simple answer, but the app is pretty lightweight and does not need that complexity. Our company uses GCloud + Kubernetes, so I want to stick with this infrastructure.

-- user491880
docker
google-cloud-platform
google-kubernetes-engine
kubernetes

1 Answer

3/23/2020

Files written inside the container (i.e. not to a persistent volume of some kind) will disappear when the container is restarted for any reason. In fact, you should really have the file permissions set up to prevent writing to files in the image, except maybe /tmp or similar. You should use a GCE persistent disk volume and it will probably work better :)
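As a minimal sketch of what that could look like: the names my-app and my-app-data, the image, and the /data mount path are all hypothetical, and this assumes GKE's default StorageClass, which dynamically provisions a GCE persistent disk for the claim. Point tinydb at a path under the mount, e.g. /data/db.json.

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-app-data
spec:
  accessModes: ["ReadWriteOnce"]   # a GCE persistent disk attaches to one node at a time
  resources:
    requests:
      storage: 1Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 1
  strategy:
    type: Recreate   # delete the old pod before starting the new one during rollouts
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: gcr.io/PROJECT_ID/my-app:v1   # hypothetical image
          volumeMounts:
            - name: data
              mountPath: /data   # have the app write its tinydb file here
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: my-app-data
EOF

The Recreate strategy helps avoid two pods trying to use the ReadWriteOnce disk at the same time during a rollout.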

-- coderanger
Source: Stack Overflow