Kind Kubernetes cluster doesn't have container logs

8/26/2021

I have installed a Kubernetes cluster using kind, as it was easier to set up and run in my local VM. I also installed Docker separately. I then created a Docker image for a Spring Boot application I built that prints messages to stdout. The image was then loaded into the kind cluster's local image store. Using this newly created local image, I created a deployment in the Kubernetes cluster with the kubectl apply -f config.yaml CLI command. Using a similar method I also deployed Fluentd, hoping to collect logs from /var/log/containers, which would be mounted into the Fluentd container.
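For reference, the steps looked roughly like this (spring-logger is an illustrative image name, not my actual one):

# build the Spring Boot app image locally
docker build -t spring-logger:latest .
# load the locally built image into the kind cluster so pods can use it
kind load docker-image spring-logger:latest
# create the deployment from the manifest
kubectl apply -f config.yaml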

I noticed the /var/log/containers/ symlink directory doesn't exist. However, /var/lib/docker/containers/ does exist, and it has folders for some containers that were created in the past. None of the new container IDs seem to exist in /var/lib/docker/containers/ either.

I can see logs in the console when I run kubectl logs pod-name, even though I'm unable to find the logs on local storage.

Following the answer given by a Stack Overflow member in another thread, I was able to get some information, but not all of it.

I have confirmed Docker is configured with the json-file logging driver by running the following command: docker info | grep -i logging
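The relevant line of output on my machine:

docker info | grep -i logging
Logging Driver: json-file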

When I run the following command (found in the thread mentioned above) I can get the container ID: kubectl get pod pod-name -o jsonpath='{.status.containerStatuses[0].containerID}'
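The output has the form below (a placeholder is shown instead of the real ID); notably, the prefix is containerd:// rather than docker://:

kubectl get pod pod-name -o jsonpath='{.status.containerStatuses[0].containerID}'
containerd://<container-id>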

However I cannot use it with docker inspect, as Docker is not aware of any such container - which I assume is because it is managed by the kind control plane.

I'd appreciate it if the experts in the forum could help identify where the logs are written, and how to recreate the /var/log/containers symbolic link so I can access the container logs.

-- Jason Nanay
containers
fluentd
kind
kubernetes
logging

1 Answer

8/26/2021

It's absolutely normal that your locally installed Docker doesn't have the containers running in pods created by kind Kubernetes. Let me explain why.

First, we need to figure out why kind Kubernetes actually needs Docker. It doesn't need it for running the containers inside pods; it needs Docker to create a container which will act as the Kubernetes node - and on this container you will have the pods which hold the containers you are looking for.

kind is a tool for running local Kubernetes clusters using Docker container “nodes”.

So basically the layers are: your VM -> a container hosted on your VM's Docker which is acting as the Kubernetes node -> on this container there are pods -> in those pods are the containers.

In the kind quickstart section you can find more detailed information about the image used by kind:

This will bootstrap a Kubernetes cluster using a pre-built node image. Prebuilt images are hosted at kindest/node, but to find images suitable for a given release currently you should check the release notes for your given kind version (check with kind version) where you'll find a complete listing of images created for a kind release.

Back to your question - let's find the missing containers!

On my local VM, I set up kind Kubernetes and installed the kubectl tool. Then, I created an example nginx-deployment. By running kubectl get pods I can confirm the pods are working.
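For reproducibility, these are roughly the commands I used (the manifest is the standard nginx example from the Kubernetes docs):

# create an example nginx deployment
kubectl apply -f https://k8s.io/examples/application/deployment.yaml
# confirm the pods are up
kubectl get pods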

Let's find the container which is acting as the node by running docker ps -a:

CONTAINER ID   IMAGE                  COMMAND                  CREATED          STATUS          PORTS                        NAMES
1d2892110866   kindest/node:v1.21.1   "/usr/local/bin/entr…"   50 minutes ago   Up 49 minutes   127.0.0.1:43207->6443/tcp   kind-control-plane

Okay, now we can exec into it and find the containers. Note that the kindest/node image does not use Docker as the container runtime - it uses containerd, whose containers you can inspect with the crictl CLI.

Let's exec into the node: docker exec -it 1d2892110866 sh:

# ls
bin  boot  dev  etc  home  kind  lib  lib32  lib64  libx32  media  mnt  opt  proc  root  run  sbin  srv  sys  tmp  usr  var
# 

Now we are in the node - time to check if the containers are here:

# crictl ps -a
CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
135c7ad17d096       295c7be079025       47 minutes ago      Running             nginx                     0                   4e5092cab08f6
ac3b725061e12       295c7be079025       47 minutes ago      Running             nginx                     0                   6ecda41b665da
a416c226aea6b       295c7be079025       47 minutes ago      Running             nginx                     0                   17aa5c42f3512
455c69da57446       296a6d5035e2d       57 minutes ago      Running             coredns                   0                   4ff408658e04a
d511d62e5294d       e422121c9c5f9       57 minutes ago      Running             local-path-provisioner    0                   86b8fcba9a3bf
116b22b4f1dcc       296a6d5035e2d       57 minutes ago      Running             coredns                   0                   9da6d9932c9e4
2ebb6d302014c       6de166512aa22       57 minutes ago      Running             kindnet-cni               0                   6ef310d8e199a
2a5e0a2fbf2cc       0e124fb3c695b       57 minutes ago      Running             kube-proxy                0                   54342daebcad8
1b141f55ce4b2       0369cf4303ffd       57 minutes ago      Running             etcd                      0                   32a405fa89f61
28c779bb79092       96a295389d472       57 minutes ago      Running             kube-controller-manager   0                   2b1b556aeac42
852feaa08fcc3       94ffe308aeff9       57 minutes ago      Running             kube-apiserver            0                   487e06bb5863a
36771dbacc50f       1248d2d503d37       58 minutes ago      Running             kube-scheduler            0                   85ec6e38087b7

Here they are. You can also notice that there are other containers which are acting as Kubernetes components.
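This is also the direct answer to where the logs can be read: still inside the node, crictl can print a container's logs. For example, for the first nginx container in the listing above:

# crictl logs 135c7ad17d096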

For further debugging of containers, I would suggest reading the documentation about debugging Kubernetes nodes with crictl.

Please also note that on your local VM there is a file, ~/.kube/config, which has the information kubectl needs to communicate between your VM and the Kubernetes cluster (in the case of kind Kubernetes, the Docker container running locally).
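As a quick sanity check (output trimmed to the relevant line), the API server endpoint kubectl uses matches the port Docker published for the node container - 43207 in my docker ps output above:

kubectl cluster-info
Kubernetes control plane is running at https://127.0.0.1:43207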

Hope it helps. Feel free to ask any questions.

EDIT - ADDED INFO ON HOW TO SET UP MOUNT POINTS

Answering the question from the comments about mounting a directory from the node to the local VM: we need to set up "Extra Mounts". Let's create the cluster definition needed by kind:

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  # add a mount from /tmp/logs on the host to /var/log/pods on the node
  extraMounts:
  - hostPath: /tmp/logs/
    containerPath: /var/log/pods
    # optional: if set, the mount is read-only.
    # default false
    readOnly: false
    # optional: if set, the mount needs SELinux relabeling.
    # default false
    selinuxRelabel: false
    # optional: set propagation mode (None, HostToContainer or Bidirectional)
    # see https://kubernetes.io/docs/concepts/storage/volumes/#mount-propagation
    # default None
    propagation: Bidirectional

Note that I'm using /var/log/pods instead of /var/log/containers/ - on a cluster created by kind, the containers directory holds only symlinks to the logs in the pods directory.
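You can confirm this yourself from the VM by listing the directory inside the node container - each entry is a symlink pointing into /var/log/pods:

docker exec -it 1d2892110866 ls -l /var/log/containers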

Save this YAML, for example as /tmp/cluster-with-extra-mount.yaml, then create a cluster using it (create the /tmp/logs directory before running this command!):

kind create cluster --config=/tmp/cluster-with-extra-mount.yaml

Then all container logs will be in /tmp/logs on your VM.
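Once the cluster is up and pods are running, a quick check from the VM shows the logs arriving; the directory names follow the <namespace>_<pod-name>_<pod-uid> pattern used by /var/log/pods:

ls /tmp/logs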

-- Mikolaj S.
Source: StackOverflow