I am having trouble bringing up my pods on my local Kubernetes cluster. It is installed on Ubuntu 18.04 (1 master VM, 1 worker VM).
Kubernetes-Master:~$ kubectl version
Client Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.0", GitCommit:"70132b0f130acc0bed193d9ba59dd186f0e634cf", GitTreeState:"clean", BuildDate:"2019-12-07T21:20:10Z", GoVersion:"go1.13.4", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.2", GitCommit:"c97fe5036ef3df2967d086711e6c0c405941e14b", GitTreeState:"clean", BuildDate:"2019-10-15T19:09:08Z", GoVersion:"go1.12.10", Compiler:"gc", Platform:"linux/amd64"}
Kubernetes-Slave:/var/lib/kubelet/pki$ kubectl version
Client Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.0", GitCommit:"70132b0f130acc0bed193d9ba59dd186f0e634cf", GitTreeState:"clean", BuildDate:"2019-12-07T21:20:10Z", GoVersion:"go1.13.4", Compiler:"gc", Platform:"linux/amd64"}
The connection to the server localhost:8080 was refused - did you specify the right host or port?
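(Side note: the localhost:8080 refusal on the worker just means kubectl there has no kubeconfig and falls back to the insecure default port. If you want kubectl on the worker, point it at a config; the path below is an assumption based on kubeadm defaults, and the node credentials only have limited permissions:

Kubernetes-Slave:~$ export KUBECONFIG=/etc/kubernetes/kubelet.conf

This is unrelated to the actual problem below.)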
I noticed the following (slave = worker node):
Kubernetes-Master:~$ kubectl get nodes
NAME                STATUS     ROLES    AGE   VERSION
kubernetes-master   NotReady   master   62d   v1.17.0
kubernetes-slave    NotReady   <none>   62d   v1.17.0
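To see why the nodes were NotReady, describing a node shows its conditions and recent events (node name from my setup):

Kubernetes-Master:~$ kubectl describe node kubernetes-slave
# check the Conditions block and the Events at the bottom
# (a dead kubelet typically shows up as "Kubelet stopped posting node status")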
Running the kubelet directly on each node shows the error:
Kubernetes-Master:~$ kubelet
F1223 10:25:38.045551 20431 server.go:253] error reading /var/lib/kubelet/pki/kubelet.key, certificate and key must be supplied as a pair
Kubernetes-Slave:/var/lib/kubelet/pki$ kubelet
F1223 10:20:14.651684 3558 server.go:253] error reading /var/lib/kubelet/pki/kubelet.key, certificate and key must be supplied as a pair
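The error says the serving cert/key pair under /var/lib/kubelet/pki is incomplete (the .key is missing or unreadable). One way to check, and to let the kubelet regenerate its self-signed pair on restart; kubeadm-default paths assumed, and back the files up rather than deleting them:

Kubernetes-Slave:~$ ls -l /var/lib/kubelet/pki/
Kubernetes-Slave:~$ sudo mv /var/lib/kubelet/pki/kubelet.crt /var/lib/kubelet/pki/kubelet.crt.bak
Kubernetes-Slave:~$ sudo mv /var/lib/kubelet/pki/kubelet.key /var/lib/kubelet/pki/kubelet.key.bak
Kubernetes-Slave:~$ sudo systemctl restart kubelet    # recreates kubelet.crt/kubelet.key if absent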
Both VMs had been down for a few days. After booting them, one pod didn't start; after another restart, all pods stayed down:
Kubernetes-Master:~$ kubectl get all -o wide -n gitbucket
NAME                    TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE   SELECTOR
service/gitbucket-svc   ClusterIP   10.97.69.199   <none>        8080/TCP   67m   app=gitbucket

NAME                        READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS   IMAGES                       SELECTOR
deployment.apps/gitbucket   0/1     0            0           67m   gitbucket    gitbucket/gitbucket:latest   app=gitbucket

NAME                                   DESIRED   CURRENT   READY   AGE   CONTAINERS   IMAGES                       SELECTOR
replicaset.apps/gitbucket-67cc5686df   1         0         0       67m   gitbucket    gitbucket/gitbucket:latest   app=gitbucket,pod-template-hash=67cc5686df
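The list output alone doesn't say why the ReplicaSet can't create its pod; the events usually do (namespace and names from my setup):

Kubernetes-Master:~$ kubectl -n gitbucket describe replicaset gitbucket-67cc5686df
Kubernetes-Master:~$ kubectl -n gitbucket get events --sort-by=.metadata.creationTimestamp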
Any idea what's going on?
I think I found the issue. It is related to a change to CSINode when moving from Kubernetes 1.16 to 1.17. A scheduled patch run (Ubuntu Landscape) after a memory upgrade had migrated my nodes from 1.16 to 1.17. Details can be found here: Worker start to fail CSINodeInfo: error updating CSINode annotation
The upgrade procedure is documented here (it works): https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/
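Roughly the sequence from that page, pinned to my target version (the exact package versions are assumed; the node steps need to be run on the worker as well):

Kubernetes-Master:~$ sudo apt-get update && sudo apt-get install -y kubeadm=1.17.0-00
Kubernetes-Master:~$ sudo kubeadm upgrade plan
Kubernetes-Master:~$ sudo kubeadm upgrade apply v1.17.0
Kubernetes-Master:~$ sudo apt-get install -y kubelet=1.17.0-00 kubectl=1.17.0-00
Kubernetes-Master:~$ sudo systemctl daemon-reload && sudo systemctl restart kubelet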
If you use Istio:
Istio (1.3.3 in my case) will block the upgrade. If you want to execute the upgrade to Kubernetes 1.17, the easiest way to proceed is to uninstall Istio and re-install it after the upgrade is completed (roughly as sketched below). I could not find a documented migration path from Istio (only bug or feature discussions).
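A minimal sketch, assuming Istio 1.3.x was installed from the demo manifest in the release archive; adjust if you installed via Helm or another method:

# from the unpacked istio-1.3.3 release directory
Kubernetes-Master:~$ kubectl delete -f install/kubernetes/istio-demo.yaml
Kubernetes-Master:~$ kubectl delete namespace istio-system
# ... perform the Kubernetes upgrade ...
Kubernetes-Master:~$ kubectl apply -f install/kubernetes/istio-demo.yaml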
Keep in mind: you may run into problems with node authorization. The Node authorizer is what allows the kubelet to perform its API operations.
Any request that is successfully authenticated (including an anonymous request) is then authorized. The default kubelet authorization mode is AlwaysAllow, which permits all requests (see kubelet authorization).
There are many possible reasons to subdivide access to the kubelet API. To subdivide access, delegate authorization to the API server: start the kubelet with the --authorization-mode=Webhook and --kubeconfig flags, and the kubelet will call the SubjectAccessReview API on the configured API server to determine whether each request is authorized. More information can be found here: pki-kubernetes.
Authentication in Kubernetes: auth-kubernetes.
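For reference, a minimal sketch of what that delegation looks like on the kubelet command line; the file paths are kubeadm defaults and assumed for this setup, with other flags elided:

# authenticate clients against the cluster CA, then delegate authorization to the API server
kubelet \
  --authorization-mode=Webhook \
  --authentication-token-webhook=true \
  --client-ca-file=/etc/kubernetes/pki/ca.crt \
  --kubeconfig=/etc/kubernetes/kubelet.conf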