I'm using two virtual machines running CentOS 8:
master-node: kubeadm init
node-1: kubeadm join
node-1 joined successfully, and the join output told me to run kubectl get nodes. But running kubectl get nodes gives the response "The connection to the server localhost:8080 was refused - did you specify the right host or port?"
I've checked my config with the command kubectl config view and got this result:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null
I've also run ls /etc/kubernetes/ and it shows only kubelet.conf.
From what I see, you are trying to use kubectl on the worker node after a successful kubeadm join.
kubeadm init generates the admin credentials/config files that are used to connect to the cluster, and you were expecting that kubeadm join would create similar credentials so you can run kubectl commands from the worker node. The kubeadm join command does not place any admin credentials on worker nodes (where the applications run), for security reasons.
If you want them on the worker, you need to copy them from the master manually (or create new ones), for example as sketched below.
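A rough sketch of that copy, assuming the master is reachable over SSH as master-node and you want the file in kubectl's default location on the worker (hostname, user, and paths are assumptions; adjust to your setup):

# run on node-1: fetch the admin kubeconfig from the master
mkdir -p $HOME/.kube
scp root@master-node:/etc/kubernetes/admin.conf $HOME/.kube/config
kubectl get nodes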
Based on what you've described: once kubeadm init has completed, the master node is initialized and its components are set up. Running kubeadm join on the worker node joins that node to the master.
If, after this step, you run kubectl get nodes on the master and hit the error above, it is because kubectl is missing its cluster config.
The default config is /etc/kubernetes/admin.conf, which can be handed to kubectl via the KUBECONFIG environment variable.
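For example, on the master node something along these lines should work:

export KUBECONFIG=/etc/kubernetes/admin.conf
kubectl get nodes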
Or, the simplest way is to copy this file into the .kube folder:
cp -f /etc/kubernetes/admin.conf ~/.kube/config
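If you are running as a non-root user and ~/.kube does not exist yet, the usual sequence (this mirrors the commands kubeadm init prints at the end of a successful run) is:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config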