Kubernetes HA setup with kubeadm - failing scheduler and controller

11/8/2018

I am attempting to build an HA cluster with kubeadm; here is my configuration:

kind: MasterConfiguration
kubernetesVersion: v1.11.4
apiServerCertSANs:
- "aaa.xxx.yyy.zzz"
api:
    controlPlaneEndpoint: "my.domain.de:6443"
    apiServerExtraArgs:
      apiserver-count: 3
etcd:
  local:
    image: quay.io/coreos/etcd:v3.3.10
    extraArgs:
      listen-client-urls: "https://127.0.0.1:2379,https://$(curl -s http://169.254.169.254/latest/meta-data/local-ipv4):2379"
      advertise-client-urls: "https://$(curl -s http://169.254.169.254/latest/meta-data/local-ipv4):2379"
      listen-peer-urls: "https://$(curl -s http://169.254.169.254/latest/meta-data/local-ipv4):2380"
      initial-advertise-peer-urls: "https://$(curl -s http://169.254.169.254/latest/meta-data/local-ipv4):2380"
      initial-cluster-state: "new"
      initial-cluster-token: "kubernetes-cluster"
      initial-cluster: ${CLUSTER}
      name: $(hostname -s)
  localEtcd:
    serverCertSANs:
      - "$(hostname -s)"
      - "$(curl -s http://169.254.169.254/latest/meta-data/local-ipv4)"
    peerCertSANs:
      - "$(hostname -s)"
      - "$(curl -s http://169.254.169.254/latest/meta-data/local-ipv4)"
networking:
    podSubnet: "${POD_SUBNET}/${POD_SUBNETMASK}"
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: foobar.fedcba9876543210
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
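Note that kubeadm performs no shell expansion inside this YAML, so the `$(...)` and `${...}` expressions have to be expanded by the shell that writes the file before `kubeadm init --config` ever sees it. A minimal sketch of how I render it (hypothetical `/tmp` path, config trimmed to a few fields; on a non-EC2 host the metadata `curl` falls back to a placeholder address):

```shell
# Expand the substitutions up front -- kubeadm reads the YAML literally.
PRIVATE_IP=$(curl -s --max-time 2 http://169.254.169.254/latest/meta-data/local-ipv4 2>/dev/null || echo "127.0.0.1")
HOST=$(hostname -s)

# Unquoted EOF so the shell expands ${PRIVATE_IP} and ${HOST} into the file.
cat > /tmp/kubeadm-config.yaml <<EOF
kind: MasterConfiguration
kubernetesVersion: v1.11.4
etcd:
  local:
    extraArgs:
      advertise-client-urls: "https://${PRIVATE_IP}:2379"
      name: "${HOST}"
EOF

# No unexpanded substitutions should survive rendering:
! grep -q '\$(' /tmp/kubeadm-config.yaml && echo "config rendered"
```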

I run this on all three masters, and the nodes start. After applying Calico and joining the nodes, everything seems fine; I even added one worker successfully:

ubuntu@master-2-test2:~$ kubectl get nodes
NAME             STATUS    ROLES     AGE       VERSION
master-1-test2   Ready     master    1h        v1.11.4
master-2-test2   Ready     master    1h        v1.11.4
master-3-test2   Ready     master    1h        v1.11.4
node-1-test2     Ready     <none>    1h        v1.11.4

Looking at the control plane, everything looks fine.

curl https://192.168.0.125:6443/api/v1/nodes works from both the masters and the worker node. All pods are running:

ubuntu@master-2-test2:~$ sudo kubectl get pods -n kube-system
NAME                                     READY     STATUS    RESTARTS   AGE
calico-node-9lnk8                        2/2       Running   0          1h
calico-node-f7dkk                        2/2       Running   1          1h
calico-node-k7hw5                        2/2       Running   17         1h
calico-node-rtrvb                        2/2       Running   3          1h
coredns-78fcdf6894-6xgqc                 1/1       Running   0          1h
coredns-78fcdf6894-kcm4f                 1/1       Running   0          1h
etcd-master-1-test2                      1/1       Running   0          1h
etcd-master-2-test2                      1/1       Running   1          1h
etcd-master-3-test2                      1/1       Running   0          1h
kube-apiserver-master-1-test2            1/1       Running   0          40m
kube-apiserver-master-2-test2            1/1       Running   0          58m
kube-apiserver-master-3-test2            1/1       Running   0          36m
kube-controller-manager-master-1-test2   1/1       Running   0          17m
kube-controller-manager-master-2-test2   1/1       Running   1          17m
kube-controller-manager-master-3-test2   1/1       Running   0          17m
kube-proxy-5clt4                         1/1       Running   0          1h
kube-proxy-d2tpz                         1/1       Running   0          1h
kube-proxy-q6kjw                         1/1       Running   0          1h
kube-proxy-vn6l7                         1/1       Running   0          1h
kube-scheduler-master-1-test2            1/1       Running   1          24m
kube-scheduler-master-2-test2            1/1       Running   0          24m
kube-scheduler-master-3-test2            1/1       Running   0          24m

But when I try to start a pod, nothing happens:

~$ kubectl get deployments
NAME      DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
nginx     1         0         0            0           32m
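When a Deployment sits at DESIRED=1 / CURRENT=0 like this, the controller manager never created the backing ReplicaSet. A few diagnostic commands I can run (hypothetical resource name `nginx` from above; guarded so the snippet is a no-op on a machine without kubectl):

```shell
if command -v kubectl >/dev/null 2>&1; then
    kubectl describe deployment nginx   # check the Events section at the bottom
    kubectl get replicasets             # empty if the controller manager is locked out
    kubectl get events --sort-by=.metadata.creationTimestamp | tail -n 20
else
    echo "kubectl not installed; skipping"
fi
```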

I then looked into the scheduler and controller manager, and to my dismay there are a lot of errors. The controller manager floods its log with:

E1108 00:40:36.638832       1 reflector.go:205] k8s.io/kubernetes/pkg/controller/garbagecollector/graph_builder.go:124: Failed to list <nil>: Unauthorized
E1108 00:40:36.639161       1 reflector.go:205] k8s.io/kubernetes/pkg/controller/garbagecollector/graph_builder.go:124: Failed to list <nil>: Unauthorized

and sometimes with:

 garbagecollector.go:649] failed to discover preferred resources: Unauthorized

E1108 00:40:36.639356       1 reflector.go:205] k8s.io/kubernetes/pkg/controller/garbagecollector/graph_builder.go:124: Failed to list <nil>: Unauthorized
E1108 00:40:36.640568       1 reflector.go:205] k8s.io/kubernetes/pkg/controller/garbagecollector/graph_builder.go:124: Failed to list <nil>: Unauthorized
E1108 00:40:36.642129       1 reflector.go:205] k8s.io/kubernetes/pkg/controller/garbagecollector/graph_builder.go:124: Failed to list <nil>: Unauthorized

And the scheduler has similar errors:

E1107 23:25:43.026465       1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:130: Failed to list *v1beta1.ReplicaSet: Get https://mydomain.de:6443/apis/extensions/v1beta1/replicasets?limit=500&resourceVersion=0: EOF
E1107 23:25:43.026614       1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:130: Failed to list *v1.Node: Get https://mydomain.de:6443/api/v1/nodes?limit=500&resourceVersion=0: EOF

So far, I have no clue how to correct these errors. Any help would be appreciated.
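Since `Unauthorized` points at client credentials, one check I can run on each master is to print the issuer and expiry of the client certificate embedded in the controller-manager and scheduler kubeconfigs; the issuer must match the CA that every kube-apiserver was started with. A hedged sketch (paths assume kubeadm's defaults):

```shell
# Decode the embedded client cert from each kubeconfig and show who
# signed it and when it expires. The issuer line must be identical on
# all masters, and must name the CA the apiservers trust.
for conf in /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf; do
    if [ -f "$conf" ]; then
        echo "== $conf"
        awk '/client-certificate-data:/ {print $2}' "$conf" \
            | base64 -d \
            | openssl x509 -noout -issuer -enddate
    else
        echo "$conf: not found on this machine"
    fi
done
```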

More information:

The kubeconfig for kube-proxy is:

----
apiVersion: v1
kind: Config
clusters:
- cluster:
    certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
    server: https://my.domain.de:6443
  name: default
contexts:
- context:
    cluster: default
    namespace: default
    user: default
  name: default
current-context: default
users:
- name: default
  user:
    tokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
-- Oz123
kubeadm
kubernetes
kubernetes-ha

2 Answers

11/8/2018

There's an authentication (certificate) issue while talking to the active kube-apiserver at this endpoint: https://mydomain.de:6443/apis/extensions/v1beta1/replicasets?limit=500&resourceVersion=0.

Some pointers:

Is the load balancer for your kube-apiserver pointing to the right one? Are you using an L4 (TCP) load balancer and not an L7 (HTTP) load balancer?
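A quick way to probe this (hypothetical `LB` value taken from the endpoint in the question): with an L4 (TCP pass-through) load balancer you see the kube-apiserver's own serving certificate; if an L7/TLS-terminating load balancer answers, its certificate shows up instead, and client-certificate auth breaks.

```shell
# Fetch whatever certificate the endpoint presents and print who it
# claims to be. Compare the subject against the apiserver's cert SANs.
LB="${LB:-my.domain.de:6443}"
echo | timeout 5 openssl s_client -connect "$LB" -servername "${LB%%:*}" 2>/dev/null \
    | openssl x509 -noout -subject -issuer 2>/dev/null \
    || echo "could not retrieve a certificate from $LB"
```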

Did you copy the same certs to every master and make sure that they are identical?

USER=ubuntu # customizable
CONTROL_PLANE_IPS="10.0.0.7 10.0.0.8"
for host in ${CONTROL_PLANE_IPS}; do
    scp /etc/kubernetes/pki/ca.crt "${USER}"@$host:
    scp /etc/kubernetes/pki/ca.key "${USER}"@$host:
    scp /etc/kubernetes/pki/sa.key "${USER}"@$host:
    scp /etc/kubernetes/pki/sa.pub "${USER}"@$host:
    scp /etc/kubernetes/pki/front-proxy-ca.crt "${USER}"@$host:
    scp /etc/kubernetes/pki/front-proxy-ca.key "${USER}"@$host:
    scp /etc/kubernetes/pki/etcd/ca.crt "${USER}"@$host:etcd-ca.crt
    scp /etc/kubernetes/pki/etcd/ca.key "${USER}"@$host:etcd-ca.key
    scp /etc/kubernetes/admin.conf "${USER}"@$host:
done
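After copying, it is worth confirming the shared CA material really is byte-identical on every master. A hedged sketch (the `PKI_DIR` default assumes kubeadm's standard layout): run it on each control-plane node and compare the output; every hash must match across all three machines.

```shell
# Fingerprint the certs/keys that must be shared by all masters.
PKI_DIR="${PKI_DIR:-/etc/kubernetes/pki}"
for f in ca.crt ca.key sa.key sa.pub front-proxy-ca.crt etcd/ca.crt; do
    if [ -f "${PKI_DIR}/${f}" ]; then
        printf '%-22s %s\n' "$f" "$(sha256sum "${PKI_DIR}/${f}" | awk '{print $1}')"
    else
        printf '%-22s MISSING\n' "$f"
    fi
done
```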

Did you check that the kube-apiserver and the kube-controller-manager configurations are equivalent under /etc/kubernetes/manifests?

-- Rico
Source: StackOverflow

11/8/2018

Looks good to me. However, can you describe the worker node and check the resources available for pods? Also describe the pod and see what error it shows.

-- B Bhaskar
Source: StackOverflow