Kubernetes service communication issue - kube-dns

2/9/2017

I have two pods mapped to two services, up and running on VirtualBox VMs on my laptop, and kube-dns is working. One pod is a web service and the other is a MongoDB instance.

The spec of the webapp pod is below:

spec:
  containers:
    - resources:
        limits:
          cpu: 0.5
          .
          .
      name: wsemp
      ports:
        - containerPort: 8080
          # name: wsemp
      # command: ["java","-Dspring.data.mongodb.uri=mongodb://192.168.6.103:30061/microservices", "-Djava.security.egd=file:/dev/./urandom","-jar","/app.jar"]
      command: ["java","-Dspring.data.mongodb.uri=mongodb://mongoservice/microservices", "-Djava.security.egd=file:/dev/./urandom","-jar","/app.jar"]

The spec of the corresponding service:

apiVersion: v1
kind: Service
metadata:
  labels:
    name: webappservice
  name: webappservice
spec:
  ports:
   - port: 8080
     nodePort: 30062
     targetPort: 8080
     protocol: TCP
  type: NodePort
  selector:
    name: webapp

MongoDB pod spec:

apiVersion: v1
kind: Pod
metadata:
  name: mongodb
  labels:
    name: mongodb
spec:
  containers:
    - name: mongodb
      .
      .
      ports:
        - containerPort: 27017

MongoDB service spec:

apiVersion: v1
kind: Service
metadata:
  labels:
    name: mongodb
  name: mongoservice
spec:
  ports:
   - port: 27017
     nodePort: 30061
     targetPort: 27017
     protocol: TCP
  type: NodePort
  selector:
    name: mongodb

UPDATE: the targetPort fields in the services above were added after the comment below.

Issue

When the webapp starts, it is not able to connect to the mongoservice port and logs this error:

Exception in monitor thread while connecting to server mongoservice:27017
com.mongodb.MongoSocketOpenException: Exception opening socket
at com.mongodb.connection.SocketStream.open(SocketStream.java:63) ~[mongodb-driver-core-3.2.2.jar!/:na]
at com.mongodb.connection.InternalStreamConnection.open(InternalStreamConnection.java:114) ~[mongodb-driver-core-3.2.2.jar!/:na]
at com.mongodb.connection.DefaultServerMonitor$ServerMonitorRunnable.run(DefaultServerMonitor.java:128) ~[mongodb-driver-core-3.2.2.jar!/:na]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_111]
Caused by: java.net.ConnectException: Connection refused (Connection refused)
at java.net.PlainSocketImpl.socketConnect(Native Method) ~[na:1.8.0_111]

describe svc

kubectl describe svc mongoservice
Name:           mongoservice
Namespace:      default
Labels:         name=mongodb
Selector:       name=mongodb
Type:           NodePort
IP:         10.254.146.189
Port:           <unset> 27017/TCP
NodePort:       <unset> 30061/TCP
Endpoints:      172.17.99.2:27017
Session Affinity:   None
No events.

kubectl describe svc webappservice 
Name:           webappservice
Namespace:      default
Labels:         name=webappservice
Selector:       name=webapp
Type:           NodePort
IP:         10.254.112.121
Port:           <unset> 8080/TCP
NodePort:       <unset> 30062/TCP
Endpoints:      172.17.99.3:8080
Session Affinity:   None
No events.

Debugging

root@webapp:/# nslookup mongoservice
Server:     10.254.0.2
Address:    10.254.0.2#53

Non-authoritative answer:
Name:   mongoservice.default.svc.cluster.local
Address: 10.254.146.189

root@webapp:/# curl 10.254.146.189:27017
curl: (7) Failed to connect to 10.254.146.189 port 27017: Connection refused
root@webapp:/# curl mongoservice:27017
curl: (7) Failed to connect to mongoservice port 27017: Connection refused
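
To narrow down whether it is the service VIP or pod-to-pod networking that fails, I also want to try the mongodb pod IP directly from inside the webapp pod (a rough sketch, using the endpoint IP 172.17.99.2 from the describe output above):

# from inside the webapp pod, bypassing the service VIP entirely
curl 172.17.99.2:27017     # mongodb pod IP taken from the mongoservice endpoints
# if this also fails here but works from the node, the problem is pod-to-pod
# networking rather than kube-proxy or DNS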


sudo iptables-save | grep webapp

-A KUBE-NODEPORTS -p tcp -m comment --comment "default/webappservice:" -m tcp --dport 30062 -j KUBE-MARK-MASQ
-A KUBE-NODEPORTS -p tcp -m comment --comment "default/webappservice:" -m tcp --dport 30062 -j KUBE-SVC-NQBDRRKQULANV7O3
-A KUBE-SEP-IE7EBTQCN7T6HXC4 -s 172.17.99.3/32 -m comment --comment "default/webappservice:" -j KUBE-MARK-MASQ
-A KUBE-SEP-IE7EBTQCN7T6HXC4 -p tcp -m comment --comment "default/webappservice:" -m tcp -j DNAT --to-destination 172.17.99.3:8080
-A KUBE-SERVICES -d 10.254.217.24/32 -p tcp -m comment --comment "default/webappservice: cluster IP" -m tcp --dport 8080 -j KUBE-SVC-NQBDRRKQULANV7O3
-A KUBE-SVC-NQBDRRKQULANV7O3 -m comment --comment "default/webappservice:" -j KUBE-SEP-IE7EBTQCN7T6HXC4
$ curl 10.254.217.24:8080
{"timestamp":1486678423757,"status":404,"error":"Not Found","message":"No message available","path":"/"}[osboxes@kube-node1 ~]$ 


sudo iptables-save | grep mongodb
[osboxes@osboxes ~]$ sudo iptables-save | grep mongo
-A KUBE-NODEPORTS -p tcp -m comment --comment "default/mongoservice:" -m tcp --dport 30061 -j KUBE-MARK-MASQ
-A KUBE-NODEPORTS -p tcp -m comment --comment "default/mongoservice:" -m tcp --dport 30061 -j KUBE-SVC-2HQWGC3WSIBZF7CN
-A KUBE-SEP-FVWOWAWXXVAVIQ5O -s 172.17.99.2/32 -m comment --comment "default/mongoservice:" -j KUBE-MARK-MASQ
-A KUBE-SEP-FVWOWAWXXVAVIQ5O -p tcp -m comment --comment "default/mongoservice:" -m tcp -j DNAT --to-destination 172.17.99.2:27017
-A KUBE-SERVICES -d 10.254.146.189/32 -p tcp -m comment --comment "default/mongoservice: cluster IP" -m tcp --dport 27017 -j KUBE-SVC-2HQWGC3WSIBZF7CN
-A KUBE-SVC-2HQWGC3WSIBZF7CN -m comment --comment "default/mongoservice:" -j KUBE-SEP-FVWOWAWXXVAVIQ5O
[osboxes@osboxes ~]$ sudo curl  10.254.146.189:8080
^C[osboxes@osboxes ~]$ sudo curl  10.254.146.189:27017

It looks like you are trying to access MongoDB over HTTP on the native driver port.


root@mongodb:/# netstat -an
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address           Foreign Address         State      
tcp        0      0 0.0.0.0:27017           0.0.0.0:*               LISTEN     
tcp        0      0 172.17.99.2:60724       151.101.128.204:80      TIME_WAIT  
tcp        0      0 172.17.99.2:60728       151.101.128.204:80      TIME_WAIT  

The mongodb container shows no errors on startup.

I am trying to follow the steps in https://kubernetes.io/docs/user-guide/debugging-services/#iptables, but I am stuck at the part that says "try restarting kube-proxy with the -V flag set to 4", since I don't know how to do that.
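
From what I could gather, it might look roughly like the sketch below (assuming kube-proxy runs as a systemd service on the node, which is how these VMs are set up; the config file path is a guess on my part):

# add verbosity to the kube-proxy arguments (exact file location varies by installer)
sudo vi /etc/kubernetes/proxy          # e.g. KUBE_PROXY_ARGS="--v=4"
sudo systemctl restart kube-proxy
sudo journalctl -u kube-proxy -f       # watch the verbose logs while hitting the service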

I'm not a networking person, so I don't know what needs to be analyzed here or how. Any debugging tips would be of great help.

Thanks.

-- Vikram
kube-dns
kubernetes

3 Answers

2/14/2017

Thanks. I got a clue on this: since I was using the flannel network, the problem was the communication between pods over the flannel network.

In particular, setting FLANNEL_OPTIONS="--iface=eth1", as mentioned in http://jayunit100.blogspot.com/2015/06/flannel-and-vagrant-heads-up.html, fixed it.
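
For anyone on a similar VirtualBox/Vagrant style setup, the change was roughly the following (a sketch; the file location and the interface name depend on your distribution and how the VMs are wired up):

# /etc/sysconfig/flanneld on every node: point flannel at the host-only (VM-to-VM)
# interface instead of the NAT interface that VirtualBox puts first
FLANNEL_OPTIONS="--iface=eth1"

# then restart flannel and docker so containers pick up the corrected network
sudo systemctl restart flanneld
sudo systemctl restart docker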

Thanks.

-- Vikram
Source: StackOverflow

2/14/2017

The iptables rules look OK, but it is not clear which network solution (flannel/Calico) is used in your Kubernetes cluster. You may check whether you can access the kube-dns pod IP from your web pod.
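
For example, a rough sketch of that check (the pod name webapp and the kube-system namespace are assumptions based on a typical kube-dns deployment; substitute the IPs you actually get back, and note this requires nc/curl to be present in the webapp image):

# find the kube-dns pod IP
kubectl get pods -n kube-system -o wide

# from the webapp pod, try to reach that pod IP and the mongodb pod IP directly
kubectl exec -it webapp -- nc -zv <kube-dns-pod-ip> 53
kubectl exec -it webapp -- curl <mongodb-pod-ip>:27017    # 172.17.99.2 in the output above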

-- Jian Qiu
Source: StackOverflow

2/10/2017

:)

As a side note, keep in mind that curl performs HTTP requests by default, but port 27017 on the host you are trying to reach is not bound to an application that speaks that protocol. Typically, what you would use in these scenarios is netcat:

nc -zv mongoservice 27017

This reports whether port 27017 on that host is open or not.

  • nc = netcat
  • -z scan for listening daemons without sending data
  • -v adds verbosity
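
If netcat is not installed on the machine you are working from, the same check can be run from inside the webapp pod via kubectl (a sketch; this assumes the pod is named webapp and that its image ships nc):

kubectl exec -it webapp -- nc -zv mongoservice 27017
# a "succeeded" / "open" message means the service and port are reachable from the pod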

Regarding your MongoDB service file, you must remember to set the targetPort field. As explained in the Kubernetes docs regarding targetPort:

This specification will create a Service which targets TCP port 80 on any Pod with the run: my-nginx label, and expose it on an abstracted Service port (targetPort: is the port the container accepts traffic on, port: is the abstracted Service port, which can be any port other pods use to access the Service). View service API object to see the list of supported fields in service definition.

Therefore, just set it to 27017 for consistency.
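
Once targetPort is set, a quick way to confirm the Service really points at the container port is to look at its endpoints (a sketch; the IP will be whatever your MongoDB pod was assigned, e.g. 172.17.99.2 in the describe output above):

kubectl get endpoints mongoservice
# should list <mongodb-pod-ip>:27017, matching the containerPort of the MongoDB pod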

You should not run into issues after following this advice. Keep up the good work and learn as much as you can!

-- David González Ruiz
Source: StackOverflow