I've built 3 nodes on Linux Academy. On the control plane I can see all 3 nodes running. On either of the two worker nodes I try to run `kubectl get nodes`. Initially I was prompted that `KUBERNETES_MASTER` was not set.

On the worker nodes, I've tried setting this to the server value found in `/kube/config` on the master. So on a worker node: `export KUBERNETES_MASTER=https://1.2.3.4:6443`. When I set this and run `kubectl get nodes` again, I get:

`Unable to connect to the server: x509: certificate signed by unknown authority`

I've also tried `export KUBERNETES_MASTER=https://kubernetes.default.svc` on the worker nodes. With that set, `kubectl get nodes` gives:

`Unable to connect to the server: dial tcp: lookup kubernetes.default.svc on 127.0.0.53:53: no such host`

Any idea what I'm doing wrong?
The configuration for `kubectl` is usually saved to a file at `~/.kube/config`, which contains both the API server endpoint and the certificate data. You can simply copy it from the master.
Also, the FQDN of `kubernetes.default.svc` is `kubernetes.default.svc.cluster.local`, assuming your cluster domain is `cluster.local`. This domain name is meant for workloads deployed inside the cluster that need to reach the API server, so it is designed to be resolvable only in-cluster, by `kube-dns`.
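You can see the difference by running the same lookup inside a pod and on the node itself (a sketch; `busybox` is just a convenient image with `nslookup`, adjust to your cluster):

```shell
# From a machine with a working kubectl: resolve the name inside a
# throwaway pod, where the cluster DNS service is configured
kubectl run dns-test --rm -it --image=busybox --restart=Never -- \
  nslookup kubernetes.default.svc

# The same lookup directly on a node fails, because the node's
# resolver (systemd-resolved on 127.0.0.53) knows nothing about
# the cluster domain -- exactly the "no such host" error you saw
nslookup kubernetes.default.svc
```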
For processes outside the cluster, the API server's hostname or IP address/VIP is typically used as the endpoint instead.
You can only use cluster DNS names from inside pods, not from the nodes directly. As for the certificate issue: your kubeconfig file generally includes the CA certificate used to sign the API server's TLS certificate, which is why copying it from the master fixes the x509 error.
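For reference, here is roughly what that file looks like (the field names are the standard kubeconfig schema; the values are placeholders):

```yaml
# ~/.kube/config (abridged)
apiVersion: v1
kind: Config
clusters:
- name: kubernetes
  cluster:
    server: https://1.2.3.4:6443
    # Base64-encoded CA certificate; kubectl uses this to verify
    # the API server's TLS certificate, avoiding the x509 error
    certificate-authority-data: <base64-ca-cert>
contexts:
- name: kubernetes-admin@kubernetes
  context:
    cluster: kubernetes
    user: kubernetes-admin
current-context: kubernetes-admin@kubernetes
users:
- name: kubernetes-admin
  user:
    client-certificate-data: <base64-client-cert>
    client-key-data: <base64-client-key>
```

The `certificate-authority-data` field is the piece that was missing when you pointed `KUBERNETES_MASTER` at the server URL alone.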