I'm new to k8s and trying to run a 3-node (master + 2 workers) cluster (v1.9.6) in Vagrant (Ubuntu 16.04) from scratch, without any automation. I believe this is the right way to get hands-on experience for a beginner like me. To be honest, I've already spent more than a week on this and feel desperate.
My problem is that the coredns pod (same with kube-dns) can't reach kube-apiserver via the ClusterIP. It looks like this:
vagrant@master-0:~$ kubectl get svc --all-namespaces
NAMESPACE     NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)         AGE
default       kubernetes   ClusterIP   10.0.0.1     <none>        443/TCP         2d
kube-system   kube-dns     ClusterIP   10.0.30.1    <none>        53/UDP,53/TCP   2h
vagrant@master-0:~$ kubectl logs coredns-5c6d9fdb86-mffzk -n kube-system
E0330 15:40:45.476465 1 reflector.go:205] github.com/coredns/coredns/plugin/kubernetes/controller.go:319: Failed to list *v1.Namespace: Get https://10.0.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.0.0.1:443: i/o timeout
E0330 15:40:45.478241 1 reflector.go:205] github.com/coredns/coredns/plugin/kubernetes/controller.go:312: Failed to list *v1.Service: Get https://10.0.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.0.0.1:443: i/o timeout
E0330 15:40:45.478289 1 reflector.go:205] github.com/coredns/coredns/plugin/kubernetes/controller.go:314: Failed to list *v1.Endpoints: Get https://10.0.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.0.0.1:443: i/o timeout
At the same time, I can ping 10.0.0.1 from any machine and from inside the pods (I used busybox to test), but curl doesn't work.
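For reference, this is roughly how I test it (the pod name below is just what I happened to use):
vagrant@master-0:~$ kubectl run -it --rm bbox --image=busybox --restart=Never -- sh
/ # ping -c 2 10.0.0.1          # replies come back
/ # nc -w 5 10.0.0.1 443        # times out, just like curl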
Master
interfaces
br-e468013fba9d Link encap:Ethernet HWaddr 02:42:8f:da:d3:35
inet addr:172.18.0.1 Bcast:172.18.255.255 Mask:255.255.0.0
UP BROADCAST MULTICAST MTU:1500 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
docker0 Link encap:Ethernet HWaddr 02:42:d7:91:fd:9b
inet addr:172.17.0.1 Bcast:172.17.255.255 Mask:255.255.0.0
UP BROADCAST MULTICAST MTU:1500 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
enp0s3 Link encap:Ethernet HWaddr 02:74:f2:80:ad:a4
inet addr:10.0.2.15 Bcast:10.0.2.255 Mask:255.255.255.0
inet6 addr: fe80::74:f2ff:fe80:ada4/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:3521 errors:0 dropped:0 overruns:0 frame:0
TX packets:2116 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:784841 (784.8 KB) TX bytes:221888 (221.8 KB)
enp0s8 Link encap:Ethernet HWaddr 08:00:27:45:ed:ec
inet addr:192.168.0.1 Bcast:192.168.0.255 Mask:255.255.255.0
inet6 addr: fe80::a00:27ff:fe45:edec/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:322839 errors:0 dropped:0 overruns:0 frame:0
TX packets:329938 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:45879993 (45.8 MB) TX bytes:89279972 (89.2 MB)
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:249239 errors:0 dropped:0 overruns:0 frame:0
TX packets:249239 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1
RX bytes:75677355 (75.6 MB) TX bytes:75677355 (75.6 MB)
iptables
-P INPUT ACCEPT
-P FORWARD ACCEPT
-P OUTPUT ACCEPT
-N DOCKER
-N DOCKER-ISOLATION
-N DOCKER-USER
-A FORWARD -j DOCKER-USER
-A FORWARD -j DOCKER-ISOLATION
-A FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -o docker0 -j DOCKER
-A FORWARD -i docker0 ! -o docker0 -j ACCEPT
-A FORWARD -i docker0 -o docker0 -j ACCEPT
-A FORWARD -o br-e468013fba9d -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -o br-e468013fba9d -j DOCKER
-A FORWARD -i br-e468013fba9d ! -o br-e468013fba9d -j ACCEPT
-A FORWARD -i br-e468013fba9d -o br-e468013fba9d -j ACCEPT
-A DOCKER-ISOLATION -i br-e468013fba9d -o docker0 -j DROP
-A DOCKER-ISOLATION -i docker0 -o br-e468013fba9d -j DROP
-A DOCKER-ISOLATION -j RETURN
-A DOCKER-USER -j RETURN
routes
Kernel IP routing table
Destination     Gateway     Genmask         Flags Metric Ref    Use Iface
0.0.0.0         10.0.2.2    0.0.0.0         UG    0      0        0 enp0s3
10.0.2.0        0.0.0.0     255.255.255.0   U     0      0        0 enp0s3
172.17.0.0      0.0.0.0     255.255.0.0     U     0      0        0 docker0
172.18.0.0      0.0.0.0     255.255.0.0     U     0      0        0 br-e468013fba9d
192.168.0.0     0.0.0.0     255.255.255.0   U     0      0        0 enp0s8
kube-apiserver (docker-compose)
version: '3'
services:
  kube_apiserver:
    image: gcr.io/google-containers/hyperkube:v1.9.6
    restart: always
    network_mode: host
    container_name: kube-apiserver
    ports:
      - "8080"
    volumes:
      - "/var/lib/kubernetes/ca-key.pem:/var/lib/kubernetes/ca-key.pem"
      - "/var/lib/kubernetes/ca.pem:/var/lib/kubernetes/ca.pem"
      - "/var/lib/kubernetes/kubernetes.pem:/var/lib/kubernetes/kubernetes.pem"
      - "/var/lib/kubernetes/kubernetes-key.pem:/var/lib/kubernetes/kubernetes-key.pem"
    command: ["/usr/local/bin/kube-apiserver",
              "--admission-control", "Initializers,NamespaceLifecycle,NodeRestriction,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota",
              "--advertise-address", "192.168.0.1",
              "--etcd-servers", "http://192.168.0.1:2379,http://192.168.0.2:2379,http://192.168.0.3:2379",
              "--insecure-bind-address", "127.0.0.1",
              "--insecure-port", "8080",
              "--kubelet-https", "true",
              "--service-cluster-ip-range", "10.0.0.0/16",
              "--allow-privileged", "true",
              "--runtime-config", "api/all",
              "--service-account-key-file", "/var/lib/kubernetes/ca-key.pem",
              "--client-ca-file", "/var/lib/kubernetes/ca.pem",
              "--tls-ca-file", "/var/lib/kubernetes/ca.pem",
              "--tls-cert-file", "/var/lib/kubernetes/kubernetes.pem",
              "--tls-private-key-file", "/var/lib/kubernetes/kubernetes-key.pem",
              "--kubelet-certificate-authority", "/var/lib/kubernetes/ca.pem",
              "--kubelet-client-certificate", "/var/lib/kubernetes/kubernetes.pem",
              "--kubelet-client-key", "/var/lib/kubernetes/kubernetes-key.pem"]
kube-controller-manager (docker-compose)
version: '3'
services:
  kube_controller_manager:
    image: gcr.io/google-containers/hyperkube:v1.9.6
    restart: always
    network_mode: host
    container_name: kube-controller-manager
    ports:
      - "10252"
    volumes:
      - "/var/lib/kubernetes/ca-key.pem:/var/lib/kubernetes/ca-key.pem"
      - "/var/lib/kubernetes/ca.pem:/var/lib/kubernetes/ca.pem"
    command: ["/usr/local/bin/kube-controller-manager",
              "--allocate-node-cidrs", "true",
              "--cluster-cidr", "10.10.0.0/16",
              "--master", "http://127.0.0.1:8080",
              "--port", "10252",
              "--service-cluster-ip-range", "10.0.0.0/16",
              "--leader-elect", "false",
              "--service-account-private-key-file", "/var/lib/kubernetes/ca-key.pem",
              "--root-ca-file", "/var/lib/kubernetes/ca.pem"]
kube-scheduler (docker-compose)
version: '3'
services:
  kube_scheduler:
    image: gcr.io/google-containers/hyperkube:v1.9.6
    restart: always
    network_mode: host
    container_name: kube-scheduler
    ports:
      - "10252"
    command: ["/usr/local/bin/kube-scheduler",
              "--master", "http://127.0.0.1:8080",
              "--port", "10251"]
Worker0
interfaces
br-c5e101440189 Link encap:Ethernet HWaddr 02:42:60:ba:c9:81
inet addr:172.18.0.1 Bcast:172.18.255.255 Mask:255.255.0.0
UP BROADCAST MULTICAST MTU:1500 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
cbr0 Link encap:Ethernet HWaddr ae:48:89:15:60:fd
inet addr:10.10.0.1 Bcast:10.10.0.255 Mask:255.255.255.0
inet6 addr: fe80::a406:b0ff:fe1d:1d85/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:1149 errors:0 dropped:0 overruns:0 frame:0
TX packets:409 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:72487 (72.4 KB) TX bytes:35650 (35.6 KB)
enp0s3 Link encap:Ethernet HWaddr 02:74:f2:80:ad:a4
inet addr:10.0.2.15 Bcast:10.0.2.255 Mask:255.255.255.0
inet6 addr: fe80::74:f2ff:fe80:ada4/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:3330 errors:0 dropped:0 overruns:0 frame:0
TX packets:2269 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:770147 (770.1 KB) TX bytes:246770 (246.7 KB)
enp0s8 Link encap:Ethernet HWaddr 08:00:27:07:69:06
inet addr:192.168.0.2 Bcast:192.168.0.255 Mask:255.255.255.0
inet6 addr: fe80::a00:27ff:fe07:6906/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:268762 errors:0 dropped:0 overruns:0 frame:0
TX packets:258080 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:48488207 (48.4 MB) TX bytes:25791040 (25.7 MB)
flannel.1 Link encap:Ethernet HWaddr 86:8e:2f:c4:98:82
inet addr:10.10.0.0 Bcast:0.0.0.0 Mask:255.255.255.255
inet6 addr: fe80::848e:2fff:fec4:9882/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1450 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:8 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:2955 errors:0 dropped:0 overruns:0 frame:0
TX packets:2955 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1
RX bytes:218772 (218.7 KB) TX bytes:218772 (218.7 KB)
vethe5d2604 Link encap:Ethernet HWaddr ae:48:89:15:60:fd
inet6 addr: fe80::ac48:89ff:fe15:60fd/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:10 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 B) TX bytes:828 (828.0 B)
iptables
-P INPUT ACCEPT
-P FORWARD ACCEPT
-P OUTPUT ACCEPT
-N DOCKER
-N DOCKER-ISOLATION
-N DOCKER-USER
-N KUBE-FIREWALL
-N KUBE-FORWARD
-N KUBE-SERVICES
-A INPUT -j KUBE-FIREWALL
-A INPUT -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A FORWARD -j DOCKER-USER
-A FORWARD -m comment --comment "kubernetes forward rules" -j KUBE-FORWARD
-A FORWARD -s 10.0.0.0/16 -j ACCEPT
-A FORWARD -d 10.0.0.0/16 -j ACCEPT
-A OUTPUT -j KUBE-FIREWALL
-A OUTPUT -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A DOCKER-USER -j RETURN
-A KUBE-FIREWALL -m comment --comment "kubernetes firewall for dropping marked packets" -m mark --mark 0x8000/0x8000 -j DROP
-A KUBE-FORWARD -m comment --comment "kubernetes forwarding rules" -m mark --mark 0x4000/0x4000 -j ACCEPT
-A KUBE-FORWARD -s 10.10.0.0/16 -m comment --comment "kubernetes forwarding conntrack pod source rule" -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A KUBE-FORWARD -d 10.10.0.0/16 -m comment --comment "kubernetes forwarding conntrack pod destination rule" -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
routes
Kernel IP routing table
Destination     Gateway     Genmask         Flags Metric Ref    Use Iface
0.0.0.0         10.0.2.2    0.0.0.0         UG    0      0        0 enp0s3
10.0.2.0        0.0.0.0     255.255.255.0   U     0      0        0 enp0s3
10.10.0.0       0.0.0.0     255.255.255.0   U     0      0        0 cbr0
10.10.1.0       10.10.1.0   255.255.255.0   UG    0      0        0 flannel.1
172.18.0.0      0.0.0.0     255.255.0.0     U     0      0        0 br-c5e101440189
192.168.0.0     0.0.0.0     255.255.255.0   U     0      0        0 enp0s8
kubelet (systemd-service)
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/kubernetes/kubernetes
#After=docker.service
#Requires=docker.service

[Service]
ExecStart=/usr/local/bin/kubelet \
  --allow-privileged=true \
  --anonymous-auth=false \
  --authorization-mode=AlwaysAllow \
  --cloud-provider= \
  --cluster-dns=10.0.30.1 \
  --cluster-domain=cluster.local \
  --node-ip=192.168.0.2 \
  --pod-cidr=10.10.0.0/24 \
  --kubeconfig=/var/lib/kubelet/kubeconfig \
  --runtime-request-timeout=15m \
  --hostname-override=worker0 \
  # --read-only-port=10255 \
  --client-ca-file=/var/lib/kubernetes/ca.pem \
  --tls-cert-file=/var/lib/kubelet/worker0.pem \
  --tls-private-key-file=/var/lib/kubelet/worker0-key.pem
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
kube-proxy (systemd-service)
[Unit]
Description=Kubernetes Kube Proxy
Documentation=https://github.com/kubernetes/kubernetes
#After=docker.service
#Requires=docker.service

[Service]
ExecStart=/usr/local/bin/kube-proxy \
  --cluster-cidr=10.10.0.0/16 \
  --kubeconfig=/var/lib/kube-proxy/kubeconfig \
  --v=5
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
Worker1's configuration is pretty similar to worker0's.
If any additional info is required, please let me know.
Please make sure the host your apiserver pod sits on has iptables rules that accept the CIDR range of your pods, such as:
-A INPUT -s 10.32.0.0/12 -j ACCEPT
I think this has something to do with the fact that when a service is accessed from the same host, iptables does not use the translated address as the source address.
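In your cluster the pod network is 10.10.0.0/16 (your --cluster-cidr), so on the master the rule would be something like this (a sketch, adjust to your setup):
# accept traffic whose source is the pod network
sudo iptables -I INPUT -s 10.10.0.0/16 -j ACCEPT
# confirm the rule is now first in the chain
sudo iptables -S INPUT | head -n 3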
Change from:
--service-cluster-ip-range", "10.0.0.0/16
To:
--service-cluster-ip-range", "10.10.0.0/16
That way, the --service-cluster-ip-range value matches the flannel CIDR.
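To double-check which ranges are currently in effect, you can inspect the running containers; this is just a sketch using the container names from your compose files:
# service CIDR passed to the apiserver
docker inspect kube-apiserver --format '{{.Config.Cmd}}' | tr ' ' '\n' | grep -A1 service-cluster-ip-range
# pod CIDR passed to the controller manager
docker inspect kube-controller-manager --format '{{.Config.Cmd}}' | tr ' ' '\n' | grep -A1 cluster-cidr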
According to the kube-apiserver documentation:
--bind-address ip The IP address on which to listen for the --secure-port port. The associated interface(s) must be reachable by the rest of the cluster, and by CLI/web clients. If blank, all interfaces will be used (0.0.0.0 for all IPv4 interfaces and :: for all IPv6 interfaces). (default 0.0.0.0)
--secure-port int The port on which to serve HTTPS with authentication and authorization. If 0, don't serve HTTPS at all. (default 6443)
As far as I can see, the flags --bind-address and --secure-port weren't defined in your kube-apiserver configuration, so by default kube-apiserver listens for HTTPS connections on 0.0.0.0:6443.
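You can verify this on the master, since the container runs with network_mode: host (output shortened, it should look something like this):
vagrant@master-0:~$ sudo ss -tlnp | grep kube-apiserver
LISTEN 0 128 :::6443 :::* users:(("kube-apiserver",pid=...,fd=...))
LISTEN 0 128 127.0.0.1:8080 0.0.0.0:* users:(("kube-apiserver",pid=...,fd=...))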
So, in order to solve your issue, just add the --secure-port flag to the kube-apiserver configuration:
"--secure-port", "443",