I have an issue setting up my Kubernetes cluster on my Ubuntu 16.04 machines. I have correctly set up two nodes (a master and a worker) into a cluster with this information:
NAME                                        READY   STATUS    RESTARTS   AGE
pod/coredns-86c58d9df4-78lnp                1/1     Running   0          80m
pod/coredns-86c58d9df4-lw7vl                1/1     Running   0          80m
pod/etcd-di-linux-host                      1/1     Running   0          111m
pod/kube-apiserver-di-linux-host            1/1     Running   0          110m
pod/kube-controller-manager-di-linux-host   1/1     Running   0          111m
pod/kube-flannel-ds-amd64-6wvkh             1/1     Running   0          109m
pod/kube-flannel-ds-amd64-p7ftb             1/1     Running   0          110m
pod/kube-proxy-rbfvz                        1/1     Running   0          109m
pod/kube-proxy-zwr7b                        1/1     Running   0          111m
pod/kube-scheduler-di-linux-host            1/1     Running   0          111m
pod/kubernetes-dashboard-79ff88449c-9f8qw   1/1     Running   0          89m

NAME                           TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)         AGE
service/kube-dns               ClusterIP   10.96.0.10      <none>        53/UDP,53/TCP   111m
service/kubernetes-dashboard   ClusterIP   10.98.188.215   <none>        443/TCP         89m

NAME                                      DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR                     AGE
daemonset.apps/kube-flannel-ds-amd64      2         2         2       2            2           beta.kubernetes.io/arch=amd64     110m
daemonset.apps/kube-flannel-ds-arm        0         0         0       0            0           beta.kubernetes.io/arch=arm       110m
daemonset.apps/kube-flannel-ds-arm64      0         0         0       0            0           beta.kubernetes.io/arch=arm64     110m
daemonset.apps/kube-flannel-ds-ppc64le    0         0         0       0            0           beta.kubernetes.io/arch=ppc64le   110m
daemonset.apps/kube-flannel-ds-s390x      0         0         0       0            0           beta.kubernetes.io/arch=s390x     110m
daemonset.apps/kube-proxy                 2         2         2       2            2           <none>                            111m

NAME                                   READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/coredns                2/2     2            2           111m
deployment.apps/kubernetes-dashboard   1/1     1            1           89m

NAME                                              DESIRED   CURRENT   READY   AGE
replicaset.apps/coredns-86c58d9df4                2         2         2       111m
replicaset.apps/kubernetes-dashboard-79ff88449c   1         1         1       89m
My cluster information:
Kubernetes master is running at https://10.10.1.122:6443
KubeDNS is running at https://10.10.1.122:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
My Pods:
NAME                     READY   STATUS    RESTARTS   AGE
guids-68898f7dc9-c65nv   1/1     Running   0          102m
Name:               guids-68898f7dc9-c65nv
Namespace:          default
Priority:           0
PriorityClassName:  <none>
Node:               gmf.com/10.10.1.38
Start Time:         Sun, 16 Dec 2018 15:43:41 +0200
Labels:             pod-template-hash=68898f7dc9
                    run=guids
Annotations:        <none>
Status:             Running
IP:                 10.244.1.15
Controlled By:      ReplicaSet/guids-68898f7dc9
Containers:
  guids:
    Container ID:   docker://125ceccad4e572b514538292aaeaa55e22050c5e9129f834de8e01dfd452c6c1
    Image:          alexellis2/guid-service:latest
    Image ID:       docker-pullable://alexellis2/guid-service@sha256:17207f799760ccdccd0fa1e7f37838af5df915a33f9f27e97951a6eeee8c3a6f
    Port:           9000/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Sun, 16 Dec 2018 15:43:46 +0200
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-hnwtc (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  default-token-hnwtc:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-hnwtc
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:          <none>
The issue I am facing is that whenever I curl the service or the pod IP from the master node, it never connects, while curling the same service/pod from the worker node works fine. I am a newbie to Kubernetes and can't find any lead on how to diagnose this issue; any help would be much appreciated.
These are my services, and this is the result I get from the master when I try to curl the exposed service:
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
guids        ClusterIP   10.97.160.160   <none>        9000/TCP   92m
kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP    139m
ubuntu@master:/$ curl http://10.97.160.160:9000/guid
curl: (7) Failed to connect to 10.97.160.160 port 9000: Connection timed out
The pod IP is accessible from other nodes, and the ClusterIP is accessible from pods inside the Kubernetes cluster.
The ClusterIP of a service is not the IP address of a pod; it is a virtual address that maps to the pods' IP addresses based on the rules defined in the service, and it is managed by the kube-proxy DaemonSet in the cluster.
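You can see this mapping in practice by inspecting the NAT rules kube-proxy programs on a node (this assumes kube-proxy runs in its default iptables mode; 10.97.160.160 is the ClusterIP of your guids service):

sudo iptables -t nat -L KUBE-SERVICES -n | grep 10.97.160.160

If kube-proxy is healthy on the master, a rule matching that destination should show up there as well.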
A ClusterIP is intended specifically for communication inside the cluster: it lets clients reach the pods behind a service without caring how many replicas exist or which node each pod runs on, and unlike a pod's IP it is static.
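You can also list the actual pod endpoints behind the service; given your describe output, this should show 10.244.1.15:9000:

kubectl get endpoints guids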
You can read about how service IPs work in the official documentation.
I would suggest the following debugging steps:
You can check that your service name resolves to the ClusterIP from inside a pod using:
kubectl exec -it <pod_name> -- bash
nslookup <svc_name>.<namespace>.svc.cluster.local
The above command will give you the ClusterIP of your service.
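If DNS resolves but the connection still times out, you can check reachability of both the ClusterIP and the pod IP from inside the cluster with a throwaway pod (a sketch; busybox is just a convenient image, and the IPs below are taken from your outputs):

kubectl run -it --rm debug --image=busybox --restart=Never -- sh
# inside the busybox shell:
wget -qO- http://10.97.160.160:9000/guid   # service ClusterIP
wget -qO- http://10.244.1.15:9000/guid     # pod IP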
Check that your worker node is pingable from your master node; if it is not, you have an issue with your overlay network, which in your case is flannel.
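For flannel, a few quick checks on the master (a sketch; flannel.1 assumes flannel's default VXLAN backend, and the names/IPs come from your outputs):

ping 10.10.1.38                  # worker node IP
ip addr show flannel.1           # flannel interface should exist on every node
ip route | grep 10.244.          # routes to the pod network via flannel
kubectl logs -n kube-system kube-flannel-ds-amd64-p7ftb

If the flannel interface or the 10.244.x.x routes are missing on the master, traffic to pod and service IPs from that node has nowhere to go, which would match the timeout you are seeing.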