Why am I not seeing the worker node in my cluster?

7/12/2021

I'm running a cluster with kind - one worker node.

However, when I run kubectl get nodes I can't see the worker node; instead I see 'kind-control-plane', which makes no sense to me, the control plane is a node??

The worker node must be running, because I can do kubectl exec --stdin --tty <name of the pod> -- /bin/sh and get a shell inside the container that's running my app.

Is this some weird WSL2 interaction, or am I simply doing something wrong?

-- LemonadeJoe
kind
kubernetes

1 Answer

7/12/2021

control-plane is just a node name. If you just run kind create cluster, the default is to create a single-node cluster; that one node is named kind-control-plane, and it runs both the control-plane components and your workloads. From your description, everything is working properly.
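
If you actually want a separate worker node, you have to ask kind for it with a config file. A minimal sketch (the file name kind-config.yaml is my own choice, not anything kind requires):

    # kind-config.yaml: one control-plane node plus one dedicated worker
    kind: Cluster
    apiVersion: kind.x-k8s.io/v1alpha4
    nodes:
    - role: control-plane
    - role: worker

    kind create cluster --config kind-config.yaml

With the default cluster name, kubectl get nodes should then list two nodes, kind-control-plane and kind-worker.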

One of kind's core features is the ability to run a "multi-node" cluster, but all locally in containers. If you want to test your application's behavior when, for example, its pods are drained from a node, you can run a kind cluster with one control-plane node (running etcd, the API server, and other core Kubernetes processes) and three worker nodes; let the application start up, then kubectl drain one of the workers and watch what happens. The documentation also notes that this is useful if you're developing Kubernetes itself and need a "multi-node" control plane to test HA support.
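
A sketch of that drain scenario, assuming the default cluster name kind (which is where the kind-worker... node names below come from) and a made-up config file name:

    # kind-multi.yaml: one control-plane node and three workers
    kind: Cluster
    apiVersion: kind.x-k8s.io/v1alpha4
    nodes:
    - role: control-plane
    - role: worker
    - role: worker
    - role: worker

    kind create cluster --config kind-multi.yaml
    # nodes come up as kind-control-plane, kind-worker, kind-worker2, kind-worker3
    kubectl drain kind-worker2 --ignore-daemonsets
    # ...watch where the application's pods get rescheduled, then bring the node back
    kubectl uncordon kind-worker2

The --ignore-daemonsets flag matters here because kind runs kindnet and kube-proxy as DaemonSets on every node; depending on your workload, kubectl drain may ask for other flags as well.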

-- David Maze
Source: StackOverflow