In K8s I cannot telnet to the port via the cluster IP from a pod selected by the service

11/7/2018

I have been searching the net for a long time, but to no avail. Please help or give me some ideas on how to solve this.

Service definition:

{
  "kind": "Service",
  "apiVersion": "v1",
  "metadata": {
    "name": "eureka1",
    "namespace": "default",
    "selfLink": "/api/v1/namespaces/default/services/eureka1",
    "uid": "aed393f1-d127-11e8-8f19-fa163e4dc428",
    "resourceVersion": "7432445",
    "creationTimestamp": "2018-10-16T09:41:40Z",
    "labels": {
      "k8s-app": "eureka1"
    }
  },
  "spec": {
    "ports": [
      {
      "name": "tcp-38761-8761-6fjms",
      "protocol": "TCP",
      "port": 80,
      "targetPort": 80,
      "nodePort": 8761
    }
  ],
  "selector": {
    "k8s-app": "eureka1"
  },
  "clusterIP": "10.254.65.233",
  "type": "NodePort",
  "sessionAffinity": "None",
  "externalTrafficPolicy": "Cluster"
  },
  "status": {
    "loadBalancer": {}
  }
}
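
For context, this is the JSON form of the Service as returned by the API, e.g. via:

kubectl get service eureka1 -o json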

kubectl describe service eureka1:

Name:                     eureka1
Namespace:                default
Labels:                   k8s-app=eureka1
Annotations:              <none>
Selector:                 k8s-app=eureka1
Type:                     NodePort
IP:                       10.254.65.233
Port:                     tcp-38761-8761-6fjms  80/TCP
TargetPort:               80/TCP
NodePort:                 tcp-38761-8761-6fjms  8761/TCP
Endpoints:                172.101.51.8:80
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>

kubectl get ep:

NAME                         ENDPOINTS
eureka1                      172.101.51.8:80
eureka2                      172.101.52.8:80

If, from inside the eureka1 pod, I telnet to 10.254.65.233 80:

Trying 10.254.65.233...
telnet: connect to address 10.254.65.233: Connection timed out

However, I can ping 10.254.65.233.

If I telnet to the cluster IP of another service, one whose selector does not match this pod, the connection succeeds.
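
For reference, both checks can be reproduced with kubectl exec; the pod name here is a placeholder, and <other-cluster-ip> stands for the cluster IP of a service that does not select this pod:

# telnet to the pod's own service VIP (times out)
kubectl exec -it <eureka1-pod> -- telnet 10.254.65.233 80

# telnet to the VIP of a service that does not select this pod (connects)
kubectl exec -it <eureka1-pod> -- telnet <other-cluster-ip> 80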

The kube-proxy mode is ipvs.
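
If useful, the active proxy mode can be confirmed on a node; this assumes kube-proxy's default metrics port 10249 and that ipvsadm is installed:

# kube-proxy reports its running mode on its metrics port
curl http://localhost:10249/proxyMode

# in ipvs mode the service VIP should appear as an IPVS virtual server
ipvsadm -Ln | grep -A1 10.254.65.233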

Thanks

-- ming_v5
docker
kube-proxy
kubernetes

1 Answer

11/20/2018

This can happen when the network is not properly configured for "hairpin" traffic, usually when kube-proxy is running in iptables mode and Pods are attached to a bridge network. The kubelet exposes a hairpin-mode flag that lets the endpoints of a Service load-balance back to themselves if they try to access their own Service VIP. The hairpin-mode flag must be set to either hairpin-veth or promiscuous-bridge.
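
As an illustration (not from the answer itself), the flag can be set directly on the kubelet command line or, on clusters using a kubelet config file, via the hairpinMode field; the config file path varies by distribution, /var/lib/kubelet/config.yaml being a common default:

# as a kubelet command-line flag
kubelet --hairpin-mode=hairpin-veth ...

# or in the KubeletConfiguration file, followed by a kubelet restart
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
hairpinMode: hairpin-veth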

-- ming_v5
Source: StackOverflow