I am deploying an HA Kubernetes control plane (stacked etcd) with kubeadm, following the instructions on the official website: https://kubernetes.io/docs/setup/independent/high-availability/
Four nodes are planned for my cluster for now.
I deployed haproxy with the following configuration:
global
    daemon
    maxconn 256

defaults
    mode http
    timeout connect 5000ms
    timeout client 50000ms
    timeout server 50000ms

frontend haproxy_kube
    bind *:6443
    mode tcp
    option tcplog
    timeout client 10800s
    default_backend masters

backend masters
    mode tcp
    option tcplog
    balance leastconn
    timeout server 10800s
    server master01 <master01-ip>:6443 check
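For a genuinely highly available backend, every control-plane node would eventually be listed in the `masters` backend. A sketch, where master02/master03 and their IPs are hypothetical placeholders for the additional masters:

```
backend masters
    mode tcp
    option tcplog
    balance leastconn
    timeout server 10800s
    server master01 <master01-ip>:6443 check
    server master02 <master02-ip>:6443 check
    server master03 <master03-ip>:6443 check
```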
My kubeadm-config.yaml looks like this:
apiVersion: kubeadm.k8s.io/v1beta1
kind: InitConfiguration
nodeRegistration:
  name: "master01"
---
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
apiServer:
  certSANs:
  - "<haproxyserver-dns>"
controlPlaneEndpoint: "<haproxyserver-dns>:6443"
networking:
  serviceSubnet: "172.24.0.0/16"
  podSubnet: "172.16.0.0/16"
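For reference, once init succeeds on master01, the remaining control-plane nodes would be joined through the load balancer endpoint. With kubeadm v1.13 that is roughly the following, where the token and hash are placeholders printed by `kubeadm init`:

```
kubeadm join <haproxyserver-dns>:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash> \
    --experimental-control-plane
```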
My init command is:
kubeadm init --config=kubeadm-config.yaml -v 11
But after running the command above on master01, it kept logging the following information:
I0122 11:43:44.039849 17489 manifests.go:113] [control-plane] wrote static Pod manifest for component "kube-scheduler" to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0122 11:43:44.041038 17489 local.go:57] [etcd] wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
I0122 11:43:44.041068 17489 waitcontrolplane.go:89] [wait-control-plane] Waiting for the API server to be healthy
I0122 11:43:44.042665 17489 loader.go:359] Config loaded from file /etc/kubernetes/admin.conf
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
I0122 11:43:44.044971 17489 round_trippers.go:419] curl -k -v -XGET -H "Accept: application/json, */*" -H "User-Agent: kubeadm/v1.13.2 (linux/amd64) kubernetes/cff46ab" 'https://<haproxyserver-dns>:6443/healthz?timeout=32s'
I0122 11:43:44.120973 17489 round_trippers.go:438] GET https://<haproxyserver-dns>:6443/healthz?timeout=32s in 75 milliseconds
I0122 11:43:44.120988 17489 round_trippers.go:444] Response Headers:
I0122 11:43:44.621201 17489 round_trippers.go:419] curl -k -v -XGET -H "Accept: application/json, */*" -H "User-Agent: kubeadm/v1.13.2 (linux/amd64) kubernetes/cff46ab" 'https://<haproxyserver-dns>:6443/healthz?timeout=32s'
I0122 11:43:44.703556 17489 round_trippers.go:438] GET https://<haproxyserver-dns>:6443/healthz?timeout=32s in 82 milliseconds
I0122 11:43:44.703577 17489 round_trippers.go:444] Response Headers:
I0122 11:43:45.121311 17489 round_trippers.go:419] curl -k -v -XGET -H "Accept: application/json, */*" -H "User-Agent: kubeadm/v1.13.2 (linux/amd64) kubernetes/cff46ab" 'https://<haproxyserver-dns>:6443/healthz?timeout=32s'
I0122 11:43:45.200493 17489 round_trippers.go:438] GET https://<haproxyserver-dns>:6443/healthz?timeout=32s in 79 milliseconds
I0122 11:43:45.200514 17489 round_trippers.go:444] Response Headers:
I0122 11:43:45.621338 17489 round_trippers.go:419] curl -k -v -XGET -H "Accept: application/json, */*" -H "User-Agent: kubeadm/v1.13.2 (linux/amd64) kubernetes/cff46ab" 'https://<haproxyserver-dns>:6443/healthz?timeout=32s'
I0122 11:43:45.698633 17489 round_trippers.go:438] GET https://<haproxyserver-dns>:6443/healthz?timeout=32s in 77 milliseconds
I0122 11:43:45.698652 17489 round_trippers.go:444] Response Headers:
I0122 11:43:46.121323 17489 round_trippers.go:419] curl -k -v -XGET -H "Accept: application/json, */*" -H "User-Agent: kubeadm/v1.13.2 (linux/amd64) kubernetes/cff46ab" 'https://<haproxyserver-dns>:6443/healthz?timeout=32s'
I0122 11:43:46.199641 17489 round_trippers.go:438] GET https://<haproxyserver-dns>:6443/healthz?timeout=32s in 78 milliseconds
I0122 11:43:46.199660 17489 round_trippers.go:444] Response Headers:
After quitting the loop with Ctrl-C, I ran the curl command manually, and everything seemed OK:
curl -k -v -XGET -H "Accept: application/json, */*" -H "User-Agent: kubeadm/v1.13.2 (linux/amd64) kubernetes/cff46ab" 'https://<haproxyserver-dns>:6443/healthz?timeout=32s'
* About to connect() to <haproxyserver-dns> port 6443 (#0)
* Trying <haproxyserver-ip>...
* Connected to <haproxyserver-dns> (10.135.64.223) port 6443 (#0)
* Initializing NSS with certpath: sql:/etc/pki/nssdb
* skipping SSL peer certificate verification
* NSS: client certificate not found (nickname not specified)
* SSL connection using TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
* Server certificate:
* subject: CN=kube-apiserver
* start date: Jan 22 03:43:38 2019 GMT
* expire date: Jan 22 03:43:38 2020 GMT
* common name: kube-apiserver
* issuer: CN=kubernetes
> GET /healthz?timeout=32s HTTP/1.1
> Host: <haproxyserver-dns>:6443
> Accept: application/json, */*
> User-Agent: kubeadm/v1.13.2 (linux/amd64) kubernetes/cff46ab
>
< HTTP/1.1 200 OK
< Date: Tue, 22 Jan 2019 04:09:03 GMT
< Content-Length: 2
< Content-Type: text/plain; charset=utf-8
<
* Connection #0 to host <haproxyserver-dns> left intact
ok
I don't know how to find the root cause of this issue; I hope someone familiar with this can give me some suggestions. Thanks!
After several days of searching and trying, I was able to solve this problem myself. It turns out the problem came from a rather rare situation:
I had set a proxy on the master node in both /etc/profile and docker.service.d, which kept requests to haproxy from working properly.
I don't know which setting caused the problem, but after adding a no-proxy rule, the problem was solved and kubeadm successfully initialized a master behind the haproxy load balancer. Here are my proxy settings:
/etc/profile:
...
export http_proxy=http://<my-proxy-server-dns:port>/
export no_proxy=<my-k8s-master-loadbalance-server-dns>,<my-proxy-server-dns>,localhost
/etc/systemd/system/docker.service.d/http-proxy.conf:
[Service]
Environment="HTTP_PROXY=http://<my-proxy-server-dns:port>/" "NO_PROXY=<my-k8s-master-loadbalance-server-dns>,<my-proxy-server-dns>,localhost,127.0.0.0/8,10.0.0.0/8,172.16.0.0/12,192.168.0.0/16"