I'm trying to set up a highly available Kubernetes cluster using kubeadm as the installer and an F5 appliance as the load balancer (I cannot use HAProxy). I'm running into issues with the F5 configuration.
I'm using self-signed certificates and have uploaded apiserver.crt and apiserver.key to the load balancer.
For some reason kubeadm init fails with the following error:
[apiclient] All control plane components are healthy after 33.083159 seconds
I0805 10:09:11.335063 1875 uploadconfig.go:109] [upload-config] Uploading the kubeadm ClusterConfiguration to a ConfigMap
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I0805 10:09:11.340266 1875 request.go:947] Request Body: {"kind":"ConfigMap","apiVersion":"v1","metadata":{"name":"kubeadm-config","namespace":"kube-system","creationTimestamp":null},"data":{"ClusterConfiguration":"apiServer:\n certSANs:\n - $F5_LOAD_BALANCER_VIP\n extraArgs:\n authorization-mode: Node,RBAC\n timeoutForControlPlane: 4m0s\napiVersion: kubeadm.k8s.io/v1beta2\ncertificatesDir: /etc/kubernetes/pki\nclusterName: kubernetes\ncontrolPlaneEndpoint: $F5_LOAD_BALANCER_VIP:6443\ncontrollerManager: {}\ndns:\n type: CoreDNS\netcd:\n local:\n dataDir: /var/lib/etcd\nimageRepository: k8s.gcr.io\nkind: ClusterConfiguration\nkubernetesVersion: v1.15.1\nnetworking:\n dnsDomain: cluster.local\n podSubnet: 192.168.0.0/16\n serviceSubnet: 10.96.0.0/12\nscheduler: {}\n","ClusterStatus":"apiEndpoints:\n lnxkbmaster02:\n advertiseAddress: $MASTER01_IP\n bindPort: 6443\napiVersion: kubeadm.k8s.io/v1beta2\nkind: ClusterStatus\n"}}
I0805 10:09:11.340459 1875 round_trippers.go:419] curl -k -v -XPOST -H "Accept: application/json, */*" -H "Content-Type: application/json" -H "User-Agent: kubeadm/v1.15.1 (linux/amd64) kubernetes/4485c6f" 'https://$F5_LOAD_BALANCER_VIP:6443/api/v1/namespaces/kube-system/configmaps'
I0805 10:09:11.342399 1875 round_trippers.go:438] POST https://$F5_LOAD_BALANCER_VIP:6443/api/v1/namespaces/kube-system/configmaps 403 Forbidden in 1 milliseconds
I0805 10:09:11.342449 1875 round_trippers.go:444] Response Headers:
I0805 10:09:11.342479 1875 round_trippers.go:447] Content-Type: application/json
I0805 10:09:11.342507 1875 round_trippers.go:447] X-Content-Type-Options: nosniff
I0805 10:09:11.342535 1875 round_trippers.go:447] Date: Mon, 05 Aug 2019 08:09:11 GMT
I0805 10:09:11.342562 1875 round_trippers.go:447] Content-Length: 285
I0805 10:09:11.342672 1875 request.go:947] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"configmaps is forbidden: User \"system:anonymous\" cannot create resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"","reason":"Forbidden","details":{"kind":"configmaps"},"code":403}
error execution phase upload-config/kubeadm: error uploading the kubeadm ClusterConfiguration: unable to create ConfigMap: configmaps is forbidden: User "system:anonymous" cannot create resource "configmaps" in API group "" in the namespace "kube-system"
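The 403 as "system:anonymous" suggests the client certificate that kubeadm presents never reaches the API server, which typically happens when the load balancer terminates TLS instead of passing it through. A quick way to check, as a sketch ($F5_LOAD_BALANCER_VIP and $MASTER01_IP are placeholders for your VIP and first master):

```shell
# Compare the certificate served through the F5 VIP with the one served
# directly by the API server. If they differ, the F5 is terminating TLS,
# so the kubeadm client certificate is stripped and requests arrive at
# the apiserver as system:anonymous.
echo | openssl s_client -connect "$F5_LOAD_BALANCER_VIP:6443" 2>/dev/null \
  | openssl x509 -noout -subject -issuer
echo | openssl s_client -connect "$MASTER01_IP:6443" 2>/dev/null \
  | openssl x509 -noout -subject -issuer
```

If the two subjects/issuers match, TLS is being passed through end to end; if the VIP presents the certificate you uploaded to the F5, the F5 is offloading TLS.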
The init command is really basic:
kubeadm init --config=kubeadm-config.yaml --upload-certs
Here's the kubeadm-config.yaml:
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: stable
controlPlaneEndpoint: "$F5_LOAD_BALANCER_VIP:6443"
networking:
  podSubnet: "192.168.0.0/16"
If I set up the cluster using HAProxy instead, the init runs smoothly:
#---------------------------------------------------------------------
# kubernetes
#---------------------------------------------------------------------
frontend kubernetes
    bind $HAPROXY_LOAD_BALANCER_IP:6443
    option tcplog
    mode tcp
    default_backend kubernetes-master-nodes

backend kubernetes-master-nodes
    mode tcp
    balance roundrobin
    option tcp-check
    server master01.my-domain $MASTER_01_IP:6443 check fall 3 rise 2
    server master02.my-domain $MASTER_02_IP:6443 check fall 3 rise 2
    server master03.my-domain $MASTER_03_IP:6443 check fall 3 rise 2
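The HAProxy setup works because `mode tcp` forwards the TLS stream untouched, so the client certificate reaches the apiserver. The failing call from the log can be reproduced by hand through the load balancer, as a sketch (assumes a standard kubeadm control-plane node, where admin.conf points at the controlPlaneEndpoint, i.e. the VIP):

```shell
# With TCP passthrough this succeeds; with TLS termination at the LB it
# fails with the same "system:anonymous ... forbidden" error as kubeadm,
# because the embedded admin client certificate is stripped in transit.
kubectl --kubeconfig /etc/kubernetes/admin.conf \
  get configmap kubeadm-config -n kube-system
```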
My solution was to deploy the cluster without the F5 in front, using the following configuration:
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: stable
controlPlaneEndpoint: "$MASTER_1_IP:6443"
networking:
  podSubnet: "192.168.0.0/16"
Afterwards I deployed the F5 BIG-IP Controller for Kubernetes on the cluster so the F5 device can be managed from Kubernetes. A detailed guide can be found here:
https://clouddocs.f5.com/products/connectors/k8s-bigip-ctlr/v1.10/
Be aware that it requires an additional F5 license and admin privileges on the BIG-IP.
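As a first step from the F5 guide linked above, the controller needs the BIG-IP credentials stored as a Kubernetes Secret that its Deployment references. A hedged sketch (the secret name and namespace follow the F5 docs, but verify them against the guide version you are using):

```shell
# Store BIG-IP admin credentials for the k8s-bigip-ctlr to consume.
# 'bigip-login' and the kube-system namespace are the names used in the
# F5 documentation; 'REPLACE_ME' is a placeholder for your password.
kubectl create secret generic bigip-login \
  --namespace kube-system \
  --from-literal=username=admin \
  --from-literal=password='REPLACE_ME'
```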