Upgrade discrepancy for kubeadm

12/21/2018

I upgraded my cluster to 1.13.1, as seen here:

[gms@thalia2 ~]$ kubectl get nodes
NAME                  STATUS    ROLES     AGE       VERSION
thalia0               Ready     master    56d       v1.13.1
thalia1               Ready     <none>    18d       v1.13.1
thalia2               Ready     <none>    36m       v1.13.1
thalia3               Ready     <none>    56d       v1.13.1
thalia4               Ready     <none>    17d       v1.13.1

However, when I run kubeadm version on thalia2, I get:

[gms@thalia2 ~]$ kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.2", GitCommit:"bb9ffb1654d4a729bb4cec18ff088eacc153c239", GitTreeState:"clean", BuildDate:"2018-08-07T23:14:39Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"}
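
As I understand it, the VERSION column above reflects the kubelet each node registers with, while kubeadm is a separate binary, so the two can legitimately disagree. A quick way to query exactly what a node registered (just a jsonpath lookup against the node object):

$ kubectl get node thalia2 -o jsonpath='{.status.nodeInfo.kubeletVersion}'
# prints v1.13.1, matching the node list above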

The upgrade on this node did not go smoothly. When I followed the steps in Upgrading kubeadm, I got this error:

[gms@thalia2 ~]$ sudo kubeadm upgrade node config --kubelet-version $(kubelet --version | cut -d ' ' -f 2)
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.11" ConfigMap in the kube-system namespace
configmaps "kubelet-config-1.11" is forbidden: User "system:node:thalia2" cannot get resource "configmaps" in API group "" in the namespace "kube-system": no path found to object

To circumvent this, I did a kubeadm reset, reinstalled kubectl and kubeadm, and then rejoined the cluster, but 1.11.2 still shows up as the version when I do a kubeadm version.
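
One thing worth ruling out is a leftover kubeadm binary shadowing the reinstalled one on the PATH. A rough diagnostic (these commands assume a RHEL/CentOS-style system with bash):

$ which -a kubeadm           # every kubeadm on the PATH, in resolution order
$ rpm -qf $(which kubeadm)   # the package that owns the binary actually being run
$ hash -r                    # clear bash's cached command locations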

If I do a kubectl get cm -n kube-system, I get:

NAME                                 DATA      AGE
calico-config                        2         56d
coredns                              1         6d5h
extension-apiserver-authentication   6         56d
kube-proxy                           2         56d
kubeadm-config                       2         56d
kubelet-config-1.12                  1         56d
kubelet-config-1.13                  1         4h5m
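
Note there is no kubelet-config-1.11 ConfigMap in that list. If I understand kubeadm's naming correctly, it derives the ConfigMap name from its own minor version, so the stale v1.11.2 binary was asking for an object the cluster no longer has. Checking which name a given binary would look for (the -o short output flag is assumed here):

$ kubeadm version -o short   # v1.11.2 here, hence the "kubelet-config-1.11" lookup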

And, if I list installed packages on said node, I get:

[gms@thalia2 ~]$ sudo yum list installed kube*
Loaded plugins: enabled_repos_upload, package_upload, priorities, product-id, search-disabled-repos, subscription-manager
Installed Packages
kubeadm.x86_64                                                                                       1.13.1-0                                                                                 @kubernetes
kubectl.x86_64                                                                                       1.13.1-0                                                                                 @kubernetes
kubelet.x86_64                                                                                       1.13.1-0                                                                                 @kubernetes
kubernetes-cni.x86_64                                                                                0.6.0-0                                                                                  @kubernetes
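
Given that the package list shows 1.13.1, invoking the packaged file directly would confirm whether the stale output comes from a different copy on the PATH (assuming the usual /usr/bin install location, which rpm -ql can verify):

$ rpm -ql kubeadm | grep bin   # where the package actually put the binary
$ /usr/bin/kubeadm version     # run that file directly, bypassing PATH lookup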

* EDIT 1 * NB: the whole cluster had initially been upgraded from 1.11 to 1.12.

This time, I went the 1.12 to 1.13 route, and that is when I got the error noted above on this single node. That is why I instead tried a fresh install on thalia2. However, kubeadm version still reports the wrong version, even though the node registers with the right one when I list the nodes.

My cluster works, so I am not sure what is causing the version discrepancy.

-- horcle_buzz
kubeadm
kubernetes

1 Answer

12/24/2018

According to the Kubernetes (kubeadm) documentation:

Every upgrade process might be a bit different, so we’ve documented each minor upgrade process individually. For more version-specific upgrade guidance, see the following resources:

You can upgrade only from one minor version to the next minor version. That is, you cannot skip versions when you upgrade. For example, you can upgrade only from 1.10 to 1.11, not from 1.9 to 1.11.
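
For reference, a sketch of what the one-minor-version-at-a-time flow looks like on the control-plane node (v1.12.3 is only an example 1.12 patch release; kubeadm upgrade plan lists the real candidates):

$ yum install kubeadm-1.12.3-0 --disableexcludes=kubernetes
$ kubeadm upgrade plan           # shows the versions this kubeadm can move the cluster to
$ kubeadm upgrade apply v1.12.3  # 1.11 -> 1.12 first
# then install the 1.13 kubeadm and repeat
$ kubeadm upgrade apply v1.13.1  # 1.12 -> 1.13 second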

If you have followed the instructions, could you add more details to the question about the steps you took to upgrade and the intermediate results?

UPDATE:

Probably some of the Kubernetes components weren't updated properly.
This workaround helps you update the components to a specific version:

# Run the following commands where you have kubectl configured
# Evict scheduled pods from the worker node and cordon it
# (--ignore-daemonsets is required because DaemonSet pods such as
# calico and kube-proxy cannot be evicted)
$ kubectl drain thalia2 --ignore-daemonsets

# Run the following commands on the worker node (thalia2)
# Upgrade/downgrade the Kubernetes components
# Suitable for Ubuntu
$ apt-get install -y kubectl=1.13.1-00 kubeadm=1.13.1-00 kubelet=1.13.1-00

# Suitable for CentOS
$ yum install kubelet-1.13.1-0 kubeadm-1.13.1-0 kubectl-1.13.1-0 --disableexcludes=kubernetes

$ kubeadm upgrade node config --kubelet-version $(kubelet --version | cut -d ' ' -f 2)
$ systemctl restart kubelet

# Run the following commands where you have kubectl configured
# Allow pods to be scheduled on the worker node again
$ kubectl uncordon thalia2
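
Once the node is uncordoned, it is worth verifying that the binary and the version the node registers finally agree; a minimal check (the -o short flag is assumed):

$ kubeadm version -o short   # should now print v1.13.1
$ kubectl get node thalia2 -o jsonpath='{.status.nodeInfo.kubeletVersion}'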
-- VAS
Source: StackOverflow