restore kubeconfig from host system

1/2/2020

I set up a k8s cluster and managed to break my kubeconfig, apparently unrecoverably :( I do still have access to the nodes, though, and thereby to the containers running on the control plane and etcd nodes. Is there any way to retrieve a working kubeconfig from within the cluster?

I used Rancher to set up this cluster. Unfortunately, Rancher broke pretty badly when the IP of the host system changed and the Let's Encrypt certs expired.

All deployments are actually still running perfectly well; I just can't get access to the cluster anymore :(

This is my kubeconfig:

apiVersion: v1
clusters:
- cluster:
    server: https://[broken-rancher-server]/k8s/clusters/c-7l92q
  name: my-cluster
- cluster:
    certificate-authority-data: UlZMG1VR3VLYUVMT...
    server: https://1.1.1.1:6443
  name: my-cluster-prod-cp-etcd-1
- cluster:
    certificate-authority-data: UlZMG1VR3VLYUVMT...
    server: https://1.1.1.2:6443
  name: my-cluster-prod-cp-etcd-2
contexts:
- context:
    cluster: my-cluster-prod-cp-etcd-1
    user: u-jv5hx
  name: my-cluster-prod-cp-etcd-1
- context:
    cluster: my-cluster-prod-cp-etcd-2
    user: u-jv5hx
  name: my-cluster-prod-cp-etcd-2
current-context: my-cluster
kind: Config
preferences: {}
users:
- name: u-jv5hx
  user:
    token: kubeconfig-u-jv5hx.c-7l92q:z2jjt5wx7xxxxxxxxxxxxxxxxxx7nxhxn6n4q
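
For what it's worth, the first cluster entry still points at the broken Rancher URL. Switching to one of the direct control-plane contexts is easy enough, but the kubeconfig-u-jv5hx token was issued by Rancher, so I assume it still depends on the (now broken) Rancher instance to validate it:

kubectl config use-context my-cluster-prod-cp-etcd-1
kubectl get nodes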

If I get access to this cluster again, I can simply set up a new Rancher instance and import the cluster, but for that I need access first.
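
For reference, the import itself should then just be a kubectl apply of the cluster registration manifest that a new Rancher instance generates; the host and token below are placeholders:

kubectl apply -f https://<new-rancher-host>/v3/import/<registration-token>.yaml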

Any hint is greatly appreciated, since I've pretty much run out of ideas by now.

-- Peter
kubeconfig
kubernetes
rancher

1 Answer

1/3/2020

RKE v0.1.x or Rancher v2.0.x/v2.1.x custom cluster (controlplane node)

One-liner (RKE and Rancher custom cluster)

If you know what you are doing (requires docker and kubectl on the node; the command below also uses base64 and sed).

kubectl --kubeconfig "$(docker inspect kubelet --format '{{ range .Mounts }}{{ if eq .Destination "/etc/kubernetes" }}{{ .Source }}{{ end }}{{ end }}')/ssl/kubecfg-kube-node.yaml" \
  get secret -n kube-system kube-admin -o jsonpath='{.data.Config}' \
  | base64 -d \
  | sed -e "/^[[:space:]]*server:/ s_:.*_: \"https://127.0.0.1:6443\"_" \
  > kubeconfig_admin.yaml
kubectl --kubeconfig kubeconfig_admin.yaml get nodes
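
In plain terms: docker inspect kubelet resolves the directory on the host that the kubelet container has mounted at /etc/kubernetes; the node kubeconfig ssl/kubecfg-kube-node.yaml found there has enough permissions to read the kube-admin secret in kube-system, whose Config key holds a base64-encoded admin kubeconfig; the sed then rewrites its server line so the config talks to the local apiserver at https://127.0.0.1:6443.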

Docker run commands (Rancher custom cluster)

To be executed on a node with the controlplane role; this uses the rancher/rancher-agent image to retrieve the kubeconfig.

1. Get the kubeconfig

docker run --rm --net=host \
  -v "$(docker inspect kubelet --format '{{ range .Mounts }}{{ if eq .Destination "/etc/kubernetes" }}{{ .Source }}{{ end }}{{ end }}')/ssl:/etc/kubernetes/ssl:ro" \
  --entrypoint bash \
  "$(docker inspect $(docker images -q --filter=label=io.cattle.agent=true) --format='{{index .RepoTags 0}}' | tail -1)" \
  -c 'kubectl --kubeconfig /etc/kubernetes/ssl/kubecfg-kube-node.yaml get secret -n kube-system kube-admin -o jsonpath={.data.Config} | base64 -d | sed -e "/^[[:space:]]*server:/ s_:.*_: \"https://127.0.0.1:6443\"_"' \
  > kubeconfig_admin.yaml

2. Run kubectl get nodes

docker run --rm --net=host \
  -v "$PWD/kubeconfig_admin.yaml:/root/.kube/config" \
  --entrypoint bash \
  "$(docker inspect $(docker images -q --filter=label=io.cattle.agent=true) --format='{{index .RepoTags 0}}' | tail -1)" \
  -c 'kubectl get nodes'
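
To use the recovered kubeconfig from a workstation instead of on the node, a minimal sketch (assuming port 6443 on the control-plane node is reachable from outside and the apiserver certificate is valid for that address; 1.1.1.1 is one of the control-plane IPs from the question):

sed -e "s_https://127.0.0.1:6443_https://1.1.1.1:6443_" kubeconfig_admin.yaml > kubeconfig_remote.yaml
kubectl --kubeconfig kubeconfig_remote.yaml get nodes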

More details for other versions of Rancher, along with the full recovery script, can be found here.

-- Arghya Sadhu
Source: StackOverflow