Kubernetes upgrade from 1.8.7 to 1.13.0

12/19/2018

Context

We currently have 3 stable clusters on Kubernetes (v1.8.7). These clusters were created by an external team that is no longer available, and we have limited documentation. We are trying to upgrade to a higher stable version (v1.13.0). We're aware that we need to upgrade one minor version at a time, so 1.8 -> 1.9 -> 1.10 and so on.

Solved Questions

  1. Any pointers on how to upgrade from 1.8 to 1.9?
  2. We tried to install kubeadm v1.8.7 and run kubeadm upgrade plan, but it failed with this output -

    [preflight] Running pre-flight checks couldn't create a Kubernetes client from file "/etc/kubernetes/admin.conf": failed to load admin kubeconfig [open /etc/kubernetes/admin.conf: no such file or directory]
    We cannot find the file admin.conf. Any suggestions on how we can regenerate it, or what information it would need?

New Question

Since we now have the admin.conf file, we installed kubectl, kubeadm and kubelet v1.9.0 -
apt-get install kubelet=1.9.0-00 kubeadm=1.9.0-00 kubectl=1.9.0-00
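Because this is a multi-hop upgrade (1.8 -> 1.9 -> 1.10 -> ...), unattended apt upgrades can silently move these packages ahead of the cluster mid-way. A common precaution, sketched here for Debian/Ubuntu with the official Kubernetes apt repository, is to hold the packages at the pinned version:

```shell
# Sketch: install matched 1.9.0 packages and hold them so apt cannot
# move them while the step-by-step upgrade is still in progress.
apt-get update
apt-get install -y kubelet=1.9.0-00 kubeadm=1.9.0-00 kubectl=1.9.0-00
apt-mark hold kubelet kubeadm kubectl

# Later, before starting the 1.9 -> 1.10 hop:
# apt-mark unhold kubelet kubeadm kubectl
```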

When I run kubeadm upgrade plan v1.9.0, I get:

root@k8s-master-dev-0:/home/azureuser# kubeadm upgrade plan v1.9.0
[preflight] Running pre-flight checks.
[upgrade] Making sure the cluster is healthy:
[upgrade/health] FATAL: [preflight] Some fatal errors occurred:
        [ERROR APIServerHealth]: the API Server is unhealthy; /healthz didn't return "ok"
        [ERROR MasterNodesReady]: couldn't list masters in cluster: Get https://<k8s-master-dev-0 ip>:6443/api/v1/nodes?labelSelector=node-role.kubernetes.io%2Fmaster%3D: dial tcp <k8s-master-dev-0 ip>:6443: getsockopt: connection refused
[preflight] If you know what you are doing, you can make a check non-fatal with --ignore-preflight-errors=...  
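The "connection refused" on port 6443 means nothing was answering on the API server port at that moment; note that kube-apiserver-k8s-master-dev-0 in the pod listing below shows 100 restarts, which suggests it may be crash-looping. A hedged diagnostic sketch, assuming a Docker-based 1.8 control plane running as static pods:

```shell
# Run on the failing master (k8s-master-dev-0).
systemctl status kubelet              # the kubelet must be up to run static pods
docker ps -a | grep kube-apiserver    # is the apiserver container up or restarting?

# Recent apiserver logs usually show why it is crashing:
docker logs --tail 50 "$(docker ps -aq --filter name=k8s_kube-apiserver | head -n 1)"

# Probe the same health endpoint the preflight check uses:
curl -k https://localhost:6443/healthz
```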

root@k8s-master-dev-0:/home/azureuser# kubectl get pods -n kube-system
NAME                                       READY     STATUS    RESTARTS   AGE
heapster-75f8df9884-nxn2z                  2/2       Running   0          42d
kube-addon-manager-k8s-master-dev-0        1/1       Running   2          1d
kube-addon-manager-k8s-master-dev-1        1/1       Running   4          123d
kube-addon-manager-k8s-master-dev-2        1/1       Running   2          169d
kube-apiserver-k8s-master-dev-0            1/1       Running   100        1d
kube-apiserver-k8s-master-dev-1            1/1       Running   4          123d
kube-apiserver-k8s-master-dev-2            1/1       Running   2          169d
kube-controller-manager-k8s-master-dev-0   1/1       Running   3          1d
kube-controller-manager-k8s-master-dev-1   1/1       Running   4          123d
kube-controller-manager-k8s-master-dev-2   1/1       Running   4          169d
kube-dns-v20-5d9fdc7448-smf9s              3/3       Running   0          42d
kube-dns-v20-5d9fdc7448-vtjh4              3/3       Running   0          42d
kube-proxy-cklcx                           1/1       Running   1          123d
kube-proxy-dldnd                           1/1       Running   4          169d
kube-proxy-gg89s                           1/1       Running   0          169d
kube-proxy-mrkqf                           1/1       Running   4          149d
kube-proxy-s95mm                           1/1       Running   10         169d
kube-proxy-zxnb7                           1/1       Running   2          169d
kube-scheduler-k8s-master-dev-0            1/1       Running   2          1d
kube-scheduler-k8s-master-dev-1            1/1       Running   6          123d
kube-scheduler-k8s-master-dev-2            1/1       Running   4          169d
kubernetes-dashboard-8555bd85db-4txtm      1/1       Running   0          42d
tiller-deploy-6677dc8d46-5n5cp             1/1       Running   0          42d
-- Kshitij Karandikar
kubeadm
kubernetes
upgrade

2 Answers

12/19/2018

Any pointers on how to upgrade from 1.8 to 1.9?

Definitely kubeadm.

We tried to install kubeadm v1.8.7 & run kubeadm upgrade plan, but it fails with output -

[preflight] Running pre-flight checks couldn't create a Kubernetes client from file "/etc/kubernetes/admin.conf": failed to load admin kubeconfig [open /etc/kubernetes/admin.conf: no such file or directory] We cannot find the file admin.conf. Any suggestions on how we can regenerate it, or what information it would need?

kubeadm requires a couple of things:

  1. An in-cluster ConfigMap
  2. An authentication / credentials file

Firstly, I'd check the kube-system namespace for a kubeadm-config ConfigMap. If that exists, you should be able to continue relatively painlessly.
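A quick way to perform that check, assuming kubectl can now reach the cluster through admin.conf:

```shell
# Look for the ConfigMap in which kubeadm stores its cluster configuration:
kubectl --kubeconfig /etc/kubernetes/admin.conf -n kube-system \
  get configmap kubeadm-config -o yaml
```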

If this doesn't exist, you will need to go ahead and create it.

kubeadm config upload from-flags would be a good starting point. You can specify the kubelet flags from your systemd unit file, and it should get you in good shape.

https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-config/#cmd-config-from-flags
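For illustration only: the flag values below are hypothetical and must be replaced with whatever your control plane actually runs with (check the API server static pod manifest and the kubelet unit file). Note also that this subcommand may only exist in newer kubeadm releases than the one your cluster started on.

```shell
# Upload a kubeadm ConfigMap reconstructed from init-style flags.
# Every value here is a placeholder, not taken from the cluster above.
kubeadm config upload from-flags \
  --kubernetes-version v1.8.7 \
  --pod-network-cidr 10.244.0.0/16 \
  --service-cidr 10.0.0.0/16
```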

Secondly, kubeadm needs a conf file with credentials. I'd imagine there's one of these somewhere in your /etc/kubernetes directory, so poke around.

This file will look like your local kubeconfigs, starting with:

apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: 
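For reference, a complete file of this kind follows the standard kubeconfig shape. Here is a placeholder skeleton (every value is hypothetical) written to a temp file, so you can compare it against whatever you find under /etc/kubernetes:

```shell
# Write a skeleton kubeconfig with placeholder values for comparison.
cat > /tmp/admin.conf.example <<'EOF'
apiVersion: v1
kind: Config
clusters:
- cluster:
    certificate-authority-data: <base64-encoded-CA-cert>
    server: https://<master-ip>:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubernetes-admin
  name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
users:
- name: kubernetes-admin
  user:
    client-certificate-data: <base64-encoded-client-cert>
    client-key-data: <base64-encoded-client-key>
EOF

grep -c 'kind: Config' /tmp/admin.conf.example   # sanity-check the skeleton
```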
-- Rawkode
Source: StackOverflow

12/19/2018

Let's go step by step and first generate the admin.conf file in your cluster. You can generate it using the following command:

kubeadm alpha phase kubeconfig admin --cert-dir /etc/kubernetes/pki --kubeconfig-dir /etc/kubernetes/
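After regenerating the file, it's worth confirming the credentials actually work before retrying the upgrade. A minimal check:

```shell
# Verify the regenerated kubeconfig can authenticate against the API server:
kubectl --kubeconfig /etc/kubernetes/admin.conf cluster-info
kubectl --kubeconfig /etc/kubernetes/admin.conf get nodes
```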

Now, you can check out my following answer on how to upgrade a Kubernetes cluster using kubeadm. (The answer is for 1.10.0 to 1.10.11, but it applies to 1.8 to 1.9 as well; you just need to change the version of the packages you download.)

how to upgrade kubernetes from v1.10.0 to v1.10.11

Hope this helps.

-- Prafull Ladha
Source: StackOverflow