I'm setting up a sample Kubernetes cluster on my laptop using VirtualBox VMs, with flannel as the overlay network. I have successfully created a master and a node. When I spin up a pod on the node to deploy a MongoDB container, the pod and container are deployed successfully, with endpoints. The service is also created successfully.
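For reference, the pod and service were created from definitions roughly like the following (a minimal sketch reconstructed from the outputs below; the exact files aren't shown, so the layout and image are assumptions):

# Hypothetical reconstruction; the labels/selector (name=mongodb) and port
# 27017 are taken from the kubectl output below, the mongo image from docker ps.
kubectl create -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: mongodb
  labels:
    name: mongodb
spec:
  containers:
  - name: mongodb
    image: mongo
    ports:
    - containerPort: 27017
---
apiVersion: v1
kind: Service
metadata:
  name: mongoservice
  labels:
    name: mongoservice
spec:
  selector:
    name: mongodb
  ports:
  - port: 27017
EOF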
On the master
[osboxes@kubemaster pods]$ kubectl get pods -o wide
NAME      READY     STATUS    RESTARTS   AGE   IP            NODE
busybox   1/1       Running   0          3m    172.17.60.3   192.168.6.103
mongodb   1/1       Running   0          21m   172.17.60.2   192.168.6.103
[osboxes@kubemaster pods]$ kubectl get services -o wide
NAME           CLUSTER-IP       EXTERNAL-IP   PORT(S)     AGE   SELECTOR
kubernetes     10.254.0.1       <none>        443/TCP     17d   <none>
mongoservice   10.254.244.175   <none>        27017/TCP   47m   name=mongodb
[osboxes@kubemaster pods]$ kubectl describe svc mongoservice
Name: mongoservice
Namespace: default
Labels: name=mongoservice
Selector: name=mongodb
Type: ClusterIP
IP: 10.254.244.175
Port: <unset> 27017/TCP
Endpoints: 172.17.60.2:27017
Session Affinity: None
On the node, docker ps shows:
8707a465771b busybox "sh" 9 minutes ago Up 9 minutes k8s_busybox.53e510d6_busybox_default_c4892314-cde3-11e6-8a53-08002700df07_492b1a89
bea9de4e05cf registry.access.redhat.com/rhel7/pod-infrastructure:latest "/pod" 9 minutes ago Up 9 minutes k8s_POD.ae8ee9ac_busybox_default_c4892314-cde3-11e6-8a53-08002700df07_2bffae46
eaff8dc1a360 mongo "/entrypoint.sh mongo" 28 minutes ago Up 28 minutes k8s_mongodb.d1eca71a_mongodb_default_240cd411-cde1-11e6-8a53-08002700df07_ef5a8bbe
6a90b06cd434 registry.access.redhat.com/rhel7/pod-infrastructure:latest "/pod" 28 minutes ago Up 28 minutes k8s_POD.7ce0ec0_mongodb_default_240cd411-cde1-11e6-8a53-08002700df07_11074b20
The only way I can connect to the MongoDB service is by hitting http://172.17.60.2:27017/ from the node itself, which displays "It looks like you are trying to access MongoDB over HTTP on the native driver port."
The issue is that I am not able to access the mongodb endpoint from the master or from any of the other nodes, at least not by hitting the same URL as above. I have another Java webapp container that will run as a pod on another node, and I need to make it interact with MongoDB, but that is the next step. I plan to use environment variables for inter-pod communication, and I can see that the environment variables are created properly in the container on the node.
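For example, since the busybox pod was started after the service existed, the standard service discovery variables should be visible inside it (MONGOSERVICE_SERVICE_HOST/PORT is the naming Kubernetes generates from the service name; the values below are this cluster's):

kubectl exec busybox -- env | grep MONGOSERVICE
# expected, among others:
#   MONGOSERVICE_SERVICE_HOST=10.254.244.175
#   MONGOSERVICE_SERVICE_PORT=27017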
I went through http://kubernetes.io/docs/user-guide/debugging-services/ and followed the process, but I was not able to get any of the steps working.
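For anyone retracing it, the checks from that guide boil down to something like this (the nslookup step assumes kube-dns is deployed, which may not be the case here):

# 1. Does the service have endpoints?
kubectl get endpoints mongoservice
# 2. Can a pod resolve the service name? (requires kube-dns)
kubectl exec busybox -- nslookup mongoservice
# 3. Can a pod reach the service IP directly? (busybox wget, 5s timeout)
kubectl exec busybox -- wget -qO- -T 5 http://10.254.244.175:27017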
[osboxes@kubemaster pods]$ sudo kubectl run -i --tty busybox --image=busybox --generator="run-pod/v1"
Waiting for pod default/busybox to be running, status is Pending, pod ready: false
Waiting for pod default/busybox to be running, status is Pending, pod ready: false
If you don't see a command prompt, try pressing enter.
Error attaching, falling back to logs: dial tcp 192.168.6.103:10250: getsockopt: connection refused
Error from server: Get https://192.168.6.103:10250/containerLogs/default/busybox/busybox: dial tcp 192.168.6.103:10250: getsockopt: connection refused
[osboxes@kubemaster pods]$ curl 172.17.60.2:27017
curl: (7) Failed connect to 172.17.60.2:27017; No route to host
[osboxes@kubemaster pods]$ curl 10.254.244.175:27017
curl: (7) Failed connect to 10.254.244.175:27017; Connection timed out
[osboxes@kubemaster pods]$ kubectl exec -ti mongodb -c k8s_mongodb.d1eca71a_mongodb_default_240cd411-cde1-11e6-8a53-08002700df07_ef5a8bbe sh
Error from server: container k8s_mongodb.d1eca71a_mongodb_default_240cd411-cde1-11e6-8a53-08002700df07_ef5a8bbe is not valid for pod mongodb
[osboxes@kubemaster pods]$ kubectl exec -ti mongodb -c k8s_POD.7ce0ec0_mongodb_default_240cd411-cde1-11e6-8a53-08002700df07_11074b20 sh
Error from server: container k8s_POD.7ce0ec0_mongodb_default_240cd411-cde1-11e6-8a53-08002700df07_11074b20 is not valid for pod mongodb
[osboxes@kubemaster pods]$ kubectl exec -ti mongodb sh
Error from server: dial tcp 192.168.6.103:10250: getsockopt: connection refused
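(A note for anyone retracing this: -c expects the container name from the pod spec, not the Docker container name, which is why the first two exec attempts were rejected with "is not valid for pod"; the last attempt failed only because the kubelet was unreachable. Assuming the container is named mongodb in the pod spec:)

# -c takes the container name as written in the pod spec:
kubectl exec -ti mongodb -c mongodb sh
# for a single-container pod, -c can be omitted entirely:
kubectl exec -ti mongodb sh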
I suspect that there is some underlying network issue despite the pods and services being created, but I'm not a networking person, so I can't figure out what exactly the issue is. Kindly help.
Found out the issue: in /etc/kubernetes/kubelet, the kubelet address was somehow set to 127.0.0.1; changing it to 0.0.0.0 fixed it.
#KUBELET_ADDRESS="--address=127.0.0.1"
KUBELET_ADDRESS="--address=0.0.0.0"
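The kubelet has to be restarted for the change to take effect (assuming a systemd-managed kubelet, as is typical for this /etc/kubernetes/kubelet layout):

sudo systemctl restart kubelet
sudo ss -tlnp | grep 10250   # should now show 10250 bound to all interfaces, not 127.0.0.1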
sudo kubectl logs mongodb
The above command now works fine from the master.
But this command still works only on the node where mongodb is running:
curl 10.254.39.156:27017
It looks like you are trying to access MongoDB over HTTP on the native driver port.
I'm not able to access this from the kube master or from the other nodes, so I still have to figure that out. If anyone out there could provide me some debugging tips, that would be great.
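In the meantime, these are the things I plan to check, since kube-proxy is what makes the service IP reachable on every node (service names assume a systemd setup like the kubelet above):

# Is kube-proxy running on the node that cannot reach the service IP?
sudo systemctl status kube-proxy
# Did kube-proxy install rules for the service?
sudo iptables-save | grep mongoservice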
Update:
Route table on the kube master
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         192.168.6.1     0.0.0.0         UG    100    0        0 enp0s3
172.17.0.0      0.0.0.0         255.255.0.0     U     0      0        0 docker0
192.168.6.0     0.0.0.0         255.255.255.0   U     100    0        0 enp0s3
192.168.122.0   0.0.0.0         255.255.255.0   U     0      0        0 virbr0
Node1
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         192.168.6.1     0.0.0.0         UG    100    0        0 enp0s3
172.17.0.0      0.0.0.0         255.255.0.0     U     0      0        0 flannel0
172.17.100.0    0.0.0.0         255.255.255.0   U     0      0        0 docker0
192.168.6.0     0.0.0.0         255.255.255.0   U     100    0        0 enp0s3
192.168.122.0   0.0.0.0         255.255.255.0   U     0      0        0 virbr0
Node2
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         192.168.6.1     0.0.0.0         UG    100    0        0 enp0s3
0.0.0.0         10.0.3.2        0.0.0.0         UG    101    0        0 enp0s8
10.0.3.0        0.0.0.0         255.255.255.0   U     100    0        0 enp0s8
172.17.0.0      0.0.0.0         255.255.0.0     U     0      0        0 flannel0
172.17.95.0     0.0.0.0         255.255.255.0   U     0      0        0 docker0
192.168.6.0     0.0.0.0         255.255.255.0   U     100    0        0 enp0s3
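One thing that stands out to me: on the master 172.17.0.0/16 points at docker0 and there is no flannel0 at all, while on both nodes it points at flannel0, so the master may simply have no route into the pod overlay. I'd verify flannel on the master with something like this (the subnet.env path is an assumption; it varies between setups):

# Is flanneld running on the master, and did it claim a subnet?
sudo systemctl status flanneld
cat /run/flannel/subnet.env   # path assumed; some installs use /var/run/flannel
ip route show | grep flannel  # empty output means no flannel route exists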
TL;DR: you cannot connect from the master to the service as it is; you have to create a NodePort service and then use the NodePort given to connect.
Look at the service description:
[osboxes@kubemaster pods]$ kubectl describe svc mongoservice
Name: mongoservice
Namespace: default
Labels: name=mongoservice
Selector: name=mongodb
Type: ClusterIP
IP: 10.254.244.175
Port: <unset> 27017/TCP
Endpoints: 172.17.60.2:27017
Session Affinity: None
The service endpoint is Endpoints: 172.17.60.2:27017, which is, as you can see, a flannel IP: the IP of the container that runs Mongo. This IP is part of your overlay network and is not accessible from outside the virtual (flannel) network.
That's why when you do this:
[osboxes@kubemaster pods]$ curl 172.17.60.2:27017
You get this error, which is expected:
curl: (7) Failed connect to 172.17.60.2:27017; No route to host
Because you're trying to access the container IP from the master node.
You have two options: either expose your mongoservice as a NodePort service (which binds the container port to a port on the node, the way you do in Docker with -p 8000:8080), or jump inside the cluster and try to connect to the service from a pod (that's what you tried, and it failed).
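For the second option, a quick way to test from inside the cluster (the service IP is the one from the question; wget here is busybox's built-in):

kubectl run -i --tty shell --image=busybox --generator="run-pod/v1"
# then, at the prompt inside the pod:
wget -qO- http://10.254.244.175:27017
# should print the "trying to access MongoDB over HTTP" message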
To expose the service as a NodePort (the equivalent of LoadBalancer when you do not run in the cloud):
kubectl expose service mongoservice --port=27017 --type=NodePort
You might have to delete the mongoservice first if it already exists. Let's check the service now:
Name: mongo-4025718836
Namespace: default
Labels: pod-template-hash=4025718836
run=mongo
Selector: pod-template-hash=4025718836,run=mongo
Type: NodePort
IP: 10.0.0.11
Port: <unset> 27017/TCP
NodePort: <unset> 32076/TCP
Endpoints: 172.17.0.9:27017
Session Affinity: None
Note that you still get the same kind of endpoint (pod IP and mongo port: 172.17.0.9:27017), but now NodePort has a value: the port on the node where mongodb is exposed.
curl NODE:32076
It looks like you are trying to access MongoDB over HTTP on the native driver port.
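Here NODE is any node's IP, e.g. 192.168.6.103 from the question (the NodePort value will differ per cluster; substitute whatever kubectl reports):

curl 192.168.6.103:32076   # example; use your own node IP and NodePort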