kubectl version returns error

8/24/2017

I am trying to install a Kubernetes cluster on CentOS 7.3 servers. After some progress I got stuck on installing the CNI plugin. To install the plugin I need to pass a parameter that is extracted from the output of "kubectl version". However, the command returns an error instead of the Server Version:

[root@bigdev1 ~]# kubectl version
Client Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.0", GitCommit:"fff5156092b56e6bd60fff75aad4dc9de6b6ef37", GitTreeState:"clean", BuildDate:"2017-03-28T16:36:33Z", GoVersion:"go1.7.5", Compiler:"gc", Platform:"linux/amd64"}
Error from server (NotFound): the server could not find the requested resource
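For reference, the parameter is embedded in the CNI plugin's install URL; with Weave Net (which uses exactly this pipeline in its guide, and is what I assume here) the install command looks like:

kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"

so without a working "kubectl version" I cannot apply the plugin manifest.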

Originally I followed the official documentation (https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/) with kubeadm 1.7.3 (and Docker 17), but got stuck at this step:

[root@bigdev1 ~]# kubeadm init --pod-network-cidr=10.244.0.0/16
[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.
[init] Using Kubernetes version: v1.7.4
[init] Using Authorization modes: [Node RBAC]
[preflight] Running pre-flight checks
[preflight] WARNING: docker version is greater than the most recently validated version. Docker version: 17.03.1-ce. Max validated version: 1.12
[preflight] Starting the kubelet service
[kubeadm] WARNING: starting in 1.8, tokens expire after 24 hours by default (if you require a non-expiring token use --token-ttl 0)
[certificates] Generated CA certificate and key.
[certificates] Generated API server certificate and key.
[certificates] API Server serving cert is signed for DNS names [bigdev1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.0.109.20]
[certificates] Generated API server kubelet client certificate and key.
[certificates] Generated service account token signing key and public key.
[certificates] Generated front-proxy CA certificate and key.
[certificates] Generated front-proxy client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[apiclient] Created API client, waiting for the control plane to become ready

(waits here forever)

I then downgraded Docker to 1.12.6 and Kubernetes to 1.6.0 after modifying the kubeadm config, and also stopped passing the --pod-network-cidr parameter to kubeadm init.
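In case it helps diagnose the hang, these are the checks I can run on the master while kubeadm init is waiting (generic kubelet/Docker troubleshooting, not from any specific guide); let me know if any of their output would be useful:

# watch the kubelet for image-pull, certificate or cgroup-driver errors
journalctl -u kubelet -f

# see whether the control-plane containers (apiserver, etcd, ...) ever start
docker ps

# the Docker cgroup driver must match what the kubelet expects; a mismatch is a common cause of this hang
docker info | grep -i cgroup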

I would be glad for any suggestions on how to resolve this issue, or if you could share the output of the command below from a working cluster:

kubectl version | base64 | tr -d '\n'
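(As a guess on my part, not something from the docs: if the plugin endpoint only needs a version string while the API server is unreachable, the client-only form at least produces output:

kubectl version --client | base64 | tr -d '\n'

though I do not know whether the plugin's manifest service accepts it.)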

Thanks in advance.

-- Sedat Kestepe
centos7
kubectl
kubernetes
linux

1 Answer

8/24/2017

Not sure which document you're following. I would recommend using kubeadm to configure the cluster:

https://kubernetes.io/docs/setup/independent/install-kubeadm/
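On CentOS 7 the setup from that page boils down to roughly the following (check the linked doc for the current repo URL and package versions):

# add the Kubernetes yum repo (URL as published in the linked guide)
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF

# SELinux in enforcing mode interferes with the kubelet at this version
setenforce 0

yum install -y kubelet kubeadm kubectl
systemctl enable kubelet && systemctl start kubelet

Then run kubeadm init and apply the pod network add-on as described in the create-cluster guide.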

-- sfgroups
Source: StackOverflow