Upgrade 1.18.20 to 1.19.12: On Remaining Control Plane: unsupported or unknown Kubernetes version

7/5/2021

I am performing a kubeadm upgrade from 1.18.20 to 1.19.12 and am now stuck at the step of running sudo kubeadm upgrade node on the remaining control plane nodes. I need help/suggestions on what to do next.

NOTE: Prior to this upgrade, I upgraded from 1.17.16 to 1.18.20 without hitting this issue.

On the first control plane node, the upgrade looks okay:

[root@tncp-stg-master01 ~]# sudo kubeadm upgrade apply v1.19.12-0
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[preflight] Running pre-flight checks.
[upgrade] Running cluster health checks
[upgrade/version] You have chosen to change the cluster version to "v1.19.12-0"
[upgrade/versions] Cluster version: v1.18.20
[upgrade/versions] kubeadm version: v1.19.12
[upgrade/version] FATAL: the --version argument is invalid due to these errors:

        - Specified version to upgrade to "v1.19.12-0" is an unstable version and such upgrades weren't allowed via setting the --allow-*-upgrades flags

Can be bypassed if you pass the --force flag
To see the stack trace of this error execute with --v=5 or higher
[root@tncp-stg-master01 ~]# sudo kubeadm upgrade apply v1.19.12
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[preflight] Running pre-flight checks.
[upgrade] Running cluster health checks
[upgrade/version] You have chosen to change the cluster version to "v1.19.12"
[upgrade/versions] Cluster version: v1.18.20
[upgrade/versions] kubeadm version: v1.19.12
[upgrade/confirm] Are you sure you want to proceed with the upgrade? [y/N]: y
[upgrade/prepull] Pulling images required for setting up a Kubernetes cluster
[upgrade/prepull] This might take a minute or two, depending on the speed of your internet connection
[upgrade/prepull] You can also perform this action in beforehand using 'kubeadm config images pull'
[upgrade/apply] Upgrading your Static Pod-hosted control plane to version "v1.19.12"...
Static pod: kube-apiserver-tncp-stg-master01.time.com.my hash: 42d728ab72328991b6af3c5238a4708c
Static pod: kube-controller-manager-tncp-stg-master01.time.com.my hash: c5f63b2a4bb9814c91aa387d432bbeef
Static pod: kube-scheduler-tncp-stg-master01.time.com.my hash: eda45837f8d54b8750b297583fe7441a
[upgrade/etcd] Upgrading to TLS for etcd
Static pod: etcd-tncp-stg-master01.time.com.my hash: 27681be8329e7b28f532e45960cfc289
[upgrade/staticpods] Preparing for "etcd" upgrade
[upgrade/staticpods] Renewing etcd-server certificate
[upgrade/staticpods] Renewing etcd-peer certificate
[upgrade/staticpods] Renewing etcd-healthcheck-client certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/etcd.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2021-07-05-14-41-52/etcd.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: etcd-tncp-stg-master01.time.com.my hash: 27681be8329e7b28f532e45960cfc289
[... the same line repeats while kubeadm polls for the etcd static pod hash to change ...]
Static pod: etcd-tncp-stg-master01.time.com.my hash: 739364b92b99a8c6e8c092c9385fa5a0
[apiclient] Found 3 Pods for label selector component=etcd
[upgrade/staticpods] Component "etcd" upgraded successfully!
[upgrade/etcd] Waiting for etcd to become available
[upgrade/staticpods] Writing new Static Pod manifests to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests583076120"
[upgrade/staticpods] Preparing for "kube-apiserver" upgrade
[upgrade/staticpods] Renewing apiserver certificate
[upgrade/staticpods] Renewing apiserver-kubelet-client certificate
[upgrade/staticpods] Renewing front-proxy-client certificate
[upgrade/staticpods] Renewing apiserver-etcd-client certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-apiserver.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2021-07-05-14-41-52/kube-apiserver.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-apiserver-tncp-stg-master01.time.com.my hash: 42d728ab72328991b6af3c5238a4708c
[... the same line repeats while kubeadm polls for the kube-apiserver static pod hash to change ...]
Static pod: kube-apiserver-tncp-stg-master01.time.com.my hash: 709f3d0d7fb7a8c5685693801ff110cb
[apiclient] Found 3 Pods for label selector component=kube-apiserver
[upgrade/staticpods] Component "kube-apiserver" upgraded successfully!
[upgrade/staticpods] Preparing for "kube-controller-manager" upgrade
[upgrade/staticpods] Renewing controller-manager.conf certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-controller-manager.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2021-07-05-14-41-52/kube-controller-manager.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-controller-manager-tncp-stg-master01.time.com.my hash: c5f63b2a4bb9814c91aa387d432bbeef
[... the same line repeats while kubeadm polls for the kube-controller-manager static pod hash to change ...]
Static pod: kube-controller-manager-tncp-stg-master01.time.com.my hash: 039553ac73de7e2aebd99a9d9e7d4b1d
[apiclient] Found 3 Pods for label selector component=kube-controller-manager
[upgrade/staticpods] Component "kube-controller-manager" upgraded successfully!
[upgrade/staticpods] Preparing for "kube-scheduler" upgrade
[upgrade/staticpods] Renewing scheduler.conf certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-scheduler.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2021-07-05-14-41-52/kube-scheduler.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-scheduler-tncp-stg-master01.time.com.my hash: eda45837f8d54b8750b297583fe7441a
[... the same line repeats while kubeadm polls for the kube-scheduler static pod hash to change ...]
Static pod: kube-scheduler-tncp-stg-master01.time.com.my hash: b719a6c7edf46f2cbff4eef358c5f633
[apiclient] Found 3 Pods for label selector component=kube-scheduler
[upgrade/staticpods] Component "kube-scheduler" upgraded successfully!
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.19" in namespace kube-system with the configuration for the kubelets in the cluster
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

[upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.19.12". Enjoy!

But when I went to perform the same step on the remaining control plane nodes, I hit the following error:

[root@tncp-stg-master02 pki]# sudo kubeadm upgrade node
[upgrade] Reading configuration from the cluster...
[upgrade] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[upgrade] Upgrading your Static Pod-hosted control plane instance to version "v1.19.12"...
Static pod: kube-apiserver-tncp-stg-master02.time.com.my hash: 2977ce343053936f861ccbdc9fbcabce
Static pod: kube-controller-manager-tncp-stg-master02.time.com.my hash: 251b3efdde07f767bb4a8380c1dc04bb
Static pod: kube-scheduler-tncp-stg-master02.time.com.my hash: eda45837f8d54b8750b297583fe7441a
[upgrade/etcd] Upgrading to TLS for etcd
error execution phase control-plane: couldn't complete the static pod upgrade: failed to retrieve an etcd version for the target Kubernetes version: unsupported or unknown Kubernetes version(1.19.12)
To see the stack trace of this error execute with --v=5 or higher
[root@tncp-stg-master02 pki]#

If run with --v=5:

[root@tncp-stg-master02 pki]# sudo kubeadm upgrade node --v=5
[upgrade] Reading configuration from the cluster...
[upgrade] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[upgrade] Upgrading your Static Pod-hosted control plane instance to version "v1.19.12"...
Static pod: kube-apiserver-tncp-stg-master02.time.com.my hash: 2977ce343053936f861ccbdc9fbcabce
Static pod: kube-controller-manager-tncp-stg-master02.time.com.my hash: 251b3efdde07f767bb4a8380c1dc04bb
Static pod: kube-scheduler-tncp-stg-master02.time.com.my hash: eda45837f8d54b8750b297583fe7441a
I0705 16:59:21.578428   17158 etcd.go:108] etcd endpoints read from pods: https://10.210.117.31:2379,https://10.210.117.32:2379,https://10.210.117.33:2379
I0705 16:59:21.596763   17158 etcd.go:167] etcd endpoints read from etcd: https://10.210.117.32:2379,https://10.210.117.33:2379,https://10.210.117.31:2379
I0705 16:59:21.596880   17158 etcd.go:126] update etcd endpoints: https://10.210.117.32:2379,https://10.210.117.33:2379,https://10.210.117.31:2379
[upgrade/etcd] Upgrading to TLS for etcd
unsupported or unknown Kubernetes version(1.19.12)
k8s.io/kubernetes/cmd/kubeadm/app/constants.EtcdSupportedVersion
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/constants/constants.go:452
k8s.io/kubernetes/cmd/kubeadm/app/phases/upgrade.performEtcdStaticPodUpgrade
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/phases/upgrade/staticpods.go:284
k8s.io/kubernetes/cmd/kubeadm/app/phases/upgrade.StaticPodControlPlane
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/phases/upgrade/staticpods.go:455
k8s.io/kubernetes/cmd/kubeadm/app/phases/upgrade.PerformStaticPodUpgrade
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/phases/upgrade/staticpods.go:606
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/upgrade/node.runControlPlane.func1
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/upgrade/node/controlplane.go:77
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run.func1
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:234
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).visitAll
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:422
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:207
k8s.io/kubernetes/cmd/kubeadm/app/cmd/upgrade.NewCmdNode.func1
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/upgrade/node.go:72
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).execute
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:826
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).ExecuteC
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:914
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).Execute
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:864
k8s.io/kubernetes/cmd/kubeadm/app.Run
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/kubeadm.go:50
main.main
        _output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/kubeadm.go:25
runtime.main
        /usr/local/go/src/runtime/proc.go:203
runtime.goexit
        /usr/local/go/src/runtime/asm_amd64.s:1357
failed to retrieve an etcd version for the target Kubernetes version
k8s.io/kubernetes/cmd/kubeadm/app/phases/upgrade.performEtcdStaticPodUpgrade
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/phases/upgrade/staticpods.go:286
k8s.io/kubernetes/cmd/kubeadm/app/phases/upgrade.StaticPodControlPlane
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/phases/upgrade/staticpods.go:455
k8s.io/kubernetes/cmd/kubeadm/app/phases/upgrade.PerformStaticPodUpgrade
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/phases/upgrade/staticpods.go:606
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/upgrade/node.runControlPlane.func1
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/upgrade/node/controlplane.go:77
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run.func1
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:234
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).visitAll
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:422
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:207
k8s.io/kubernetes/cmd/kubeadm/app/cmd/upgrade.NewCmdNode.func1
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/upgrade/node.go:72
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).execute
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:826
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).ExecuteC
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:914
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).Execute
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:864
k8s.io/kubernetes/cmd/kubeadm/app.Run
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/kubeadm.go:50
main.main
        _output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/kubeadm.go:25
runtime.main
        /usr/local/go/src/runtime/proc.go:203
runtime.goexit
        /usr/local/go/src/runtime/asm_amd64.s:1357
couldn't complete the static pod upgrade
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/upgrade/node.runControlPlane.func1
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/upgrade/node/controlplane.go:78
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run.func1
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:234
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).visitAll
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:422
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:207
k8s.io/kubernetes/cmd/kubeadm/app/cmd/upgrade.NewCmdNode.func1
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/upgrade/node.go:72
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).execute
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:826
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).ExecuteC
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:914
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).Execute
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:864
k8s.io/kubernetes/cmd/kubeadm/app.Run
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/kubeadm.go:50
main.main
        _output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/kubeadm.go:25
runtime.main
        /usr/local/go/src/runtime/proc.go:203
runtime.goexit
        /usr/local/go/src/runtime/asm_amd64.s:1357
error execution phase control-plane
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run.func1
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:235
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).visitAll
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:422
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:207
k8s.io/kubernetes/cmd/kubeadm/app/cmd/upgrade.NewCmdNode.func1
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/upgrade/node.go:72
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).execute
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:826
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).ExecuteC
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:914
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).Execute
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:864
k8s.io/kubernetes/cmd/kubeadm/app.Run
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/kubeadm.go:50
main.main
        _output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/kubeadm.go:25
runtime.main
        /usr/local/go/src/runtime/proc.go:203
runtime.goexit
        /usr/local/go/src/runtime/asm_amd64.s:1357
-- Yee Fang Toh
kubeadm
kubernetes

1 Answer

7/5/2021

Please ensure you have the same version of kubeadm installed on all nodes before you run kubeadm upgrade node.
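A minimal way to check this, assuming an RPM-based install (the -0 package suffix in your first apply attempt suggests yum packages, so the exact package command below is an assumption):

# On each remaining control plane node, check the installed kubeadm version
kubeadm version -o short

# The node that worked reports v1.19.12; if this node still reports v1.18.20,
# upgrade the kubeadm package first, then retry the node upgrade
sudo yum install -y kubeadm-1.19.12-0 --disableexcludes=kubernetes
sudo kubeadm upgrade node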

The unsupported or unknown Kubernetes version error happens when kubeadm is not able to map a Kubernetes version to an etcd version; the lookup is done by EtcdSupportedVersion in cmd/kubeadm/app/constants/constants.go, the function at the top of the stack trace above.

Every kubeadm version only supports a certain range of Kubernetes minor releases. For example, at the time of writing, kubeadm master only supports releases from 1.13 to 1.23 (see the same constants file).
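As a quick sanity check (a hedged diagnostic, not part of the original answer), a kubeadm binary reveals which etcd version it pairs with a target release via its image list:

# Print the images kubeadm would use for the target version, including etcd
kubeadm config images list --kubernetes-version v1.19.12 | grep etcd
# e.g. k8s.gcr.io/etcd:3.4.13-0 (the exact tag depends on the kubeadm build)

The kubeadm binary on the second master, presumably still v1.18.20, has no entry for 1.19 in its lookup table, which is why the etcd phase of kubeadm upgrade node fails with this error.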

If you run the same kubeadm version as on the node that worked, the error should not happen.

-- whites11
Source: StackOverflow