We have set up Kubernetes 1.10.1 on CoreOS with three nodes, and the setup is successful:
NAME STATUS ROLES AGE VERSION
node1.example.com Ready master 19h v1.10.1+coreos.0
node2.example.com Ready node 19h v1.10.1+coreos.0
node3.example.com Ready node 19h v1.10.1+coreos.0
NAMESPACE NAME READY STATUS RESTARTS AGE
default pod-nginx2-689b9cdffb-qrpjn 1/1 Running 0 16h
kube-system calico-kube-controllers-568dfff588-zxqjj 1/1 Running 0 18h
kube-system calico-node-2wwcg 2/2 Running 0 18h
kube-system calico-node-78nzn 2/2 Running 0 18h
kube-system calico-node-gbvkn 2/2 Running 0 18h
kube-system calico-policy-controller-6d568cc5f7-fx6bv 1/1 Running 0 18h
kube-system kube-apiserver-x66dh 1/1 Running 4 18h
kube-system kube-controller-manager-787f887b67-q6gts 1/1 Running 0 18h
kube-system kube-dns-79ccb5d8df-b9skr 3/3 Running 0 18h
kube-system kube-proxy-gb2wj 1/1 Running 0 18h
kube-system kube-proxy-qtxgv 1/1 Running 0 18h
kube-system kube-proxy-v7wnf 1/1 Running 0 18h
kube-system kube-scheduler-68d5b648c-54925 1/1 Running 0 18h
kube-system pod-checkpointer-vpvg5 1/1 Running 0 18h
But when I try to see the logs of any pod, kubectl gives the following error:
kubectl logs -f pod-nginx2-689b9cdffb-qrpjn
error: You must be logged in to the server (the server has asked for the client to provide credentials ( pods/log pod-nginx2-689b9cdffb-qrpjn))
Trying to get inside a pod (using kubectl exec) also gives an error:
kubectl exec -ti pod-nginx2-689b9cdffb-qrpjn bash
error: unable to upgrade connection: Unauthorized
Kubelet service file:
[Unit]
Description=Kubelet via Hyperkube ACI
[Service]
EnvironmentFile=/etc/kubernetes/kubelet.env
Environment="RKT_RUN_ARGS=--uuid-file-save=/var/run/kubelet-pod.uuid \
--volume=resolv,kind=host,source=/etc/resolv.conf \
--mount volume=resolv,target=/etc/resolv.conf \
--volume var-lib-cni,kind=host,source=/var/lib/cni \
--mount volume=var-lib-cni,target=/var/lib/cni \
--volume var-log,kind=host,source=/var/log \
--mount volume=var-log,target=/var/log"
ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
ExecStartPre=/bin/mkdir -p /etc/kubernetes/cni/net.d
ExecStartPre=/bin/mkdir -p /etc/kubernetes/checkpoint-secrets
ExecStartPre=/bin/mkdir -p /etc/kubernetes/inactive-manifests
ExecStartPre=/bin/mkdir -p /var/lib/cni
ExecStartPre=/usr/bin/bash -c "grep 'certificate-authority-data' /etc/kubernetes/kubeconfig | awk '{print $2}' | base64 -d > /etc/kubernetes/ca.crt"
ExecStartPre=-/usr/bin/rkt rm --uuid-file=/var/run/kubelet-pod.uuid
ExecStart=/usr/lib/coreos/kubelet-wrapper \
--kubeconfig=/etc/kubernetes/kubeconfig \
--config=/etc/kubernetes/config \
--cni-conf-dir=/etc/kubernetes/cni/net.d \
--network-plugin=cni \
--allow-privileged \
--lock-file=/var/run/lock/kubelet.lock \
--exit-on-lock-contention \
--hostname-override=node1.example.com \
--node-labels=node-role.kubernetes.io/master \
--register-with-taints=node-role.kubernetes.io/master=:NoSchedule
ExecStop=-/usr/bin/rkt stop --uuid-file=/var/run/kubelet-pod.uuid
Restart=always
RestartSec=10
[Install]
WantedBy=multi-user.target
KubeletConfiguration file:
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
staticPodPath: "/etc/kubernetes/manifests"
clusterDomain: "cluster.local"
clusterDNS: [ "10.3.0.10" ]
nodeStatusUpdateFrequency: "5s"
clientCAFile: "/etc/kubernetes/ca.crt"
We have also specified the --kubelet-client-certificate and --kubelet-client-key flags in kube-apiserver.yaml:
- --kubelet-client-certificate=/etc/kubernetes/secrets/apiserver.crt
- --kubelet-client-key=/etc/kubernetes/secrets/apiserver.key
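An "Unauthorized" response from the kubelet usually means the client certificate the API server presents does not chain to the CA the kubelet trusts. One sanity check is to confirm that apiserver.crt was actually signed by the CA in clientCAFile. The snippet below demonstrates the check with throw-away certificates (all names here are made up for the demo); against the real files, the same single `openssl verify` command applies.

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"

# Create a throw-away CA (stand-in for /etc/kubernetes/ca.crt).
openssl req -x509 -newkey rsa:2048 -nodes -keyout ca.key -out ca.crt \
  -subj "/CN=test-ca" -days 1

# Create a client key and CSR (stand-in for the apiserver's kubelet-client pair).
openssl req -newkey rsa:2048 -nodes -keyout client.key -out client.csr \
  -subj "/CN=kube-apiserver"

# Sign the CSR with the CA.
openssl x509 -req -in client.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
  -out client.crt -days 1

# The actual check. Against the real files it would be:
#   openssl verify -CAfile /etc/kubernetes/ca.crt /etc/kubernetes/secrets/apiserver.crt
openssl verify -CAfile ca.crt client.crt   # prints "client.crt: OK"
```

If the real verification does not print "OK", the kubelet will reject the API server's requests, which produces exactly the logs/exec errors above.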
So what are we missing here? Thanks in advance :)
In general, many different .kube/config file errors will trigger this error message. In my case I had simply specified the wrong cluster name in my config file (and spent MANY hours trying to debug it).
When I specified the wrong cluster name, I received two requests for MFA token codes, followed by the error: You must be logged in to the server (the server has asked for the client to provide credentials) message.
Example:
# kubectl create -f scripts/aws-auth-cm.yaml
Assume Role MFA token code: 123456
Assume Role MFA token code: 123456
could not get token: AccessDenied: MultiFactorAuthentication failed with invalid MFA one time pass code.
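A mismatch like this can be spotted without touching the cluster at all, since the kubeconfig is just a local file. The plain-shell sketch below (file contents and all names are invented for illustration) shows the failure mode: a context referencing a cluster name that is never defined.

```shell
# Demo kubeconfig with a typo: the context points at "prod-clustre",
# but only "prod-cluster" is defined.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
apiVersion: v1
kind: Config
clusters:
- name: prod-cluster
  cluster: {server: https://10.0.0.1:6443}
contexts:
- name: default
  context: {cluster: prod-clustre, user: admin}
current-context: default
EOF

# Extract the cluster name the context points at ...
ctx_cluster=$(sed -n 's/.*{cluster: \([^,]*\),.*/\1/p' "$cfg")
# ... and check it is actually defined under "clusters:".
if grep -q "^- name: $ctx_cluster$" "$cfg"; then
  echo "cluster name matches"
else
  echo "context references undefined cluster: $ctx_cluster"
  # prints: context references undefined cluster: prod-clustre
fi
```

With a real kubeconfig, `kubectl config current-context` and `kubectl config view` give the same information directly, and neither needs a reachable cluster.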
Looks like you misconfigured the kubelet: you missed the --client-ca-file flag in your kubelet service file. That's why you can get some general information from the master but can't get access to the nodes. This flag points the kubelet at the CA certificate used to verify incoming client certificates; without it, the API server's requests to the kubelet cannot be authenticated, so you cannot get access to the nodes.
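Concretely, the suggested fix is one more flag in the ExecStart of the kubelet unit; the ca.crt path below assumes the file that the asker's ExecStartPre already extracts from the kubeconfig:

```ini
ExecStart=/usr/lib/coreos/kubelet-wrapper \
  --kubeconfig=/etc/kubernetes/kubeconfig \
  --config=/etc/kubernetes/config \
  --client-ca-file=/etc/kubernetes/ca.crt \
  ...
```

(Note that clientCAFile in a KubeletConfiguration file is the equivalent setting, so it is also worth confirming that the file passed via --config is really the one shown in the question.)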