Can't telnet to K8S ClusterIP service from all nodes

4/30/2018

I'm struggling with K8S networking while trying to expose a service within a cluster. In particular, I need to deploy a private container registry (through K8S) and expose it as a ClusterIP service.

In order to do this I've followed this article.

For the moment I don't need any particular volume; I just want to expose the service inside the cluster.

This is the Pod YAML file:

apiVersion: v1
kind: Pod
metadata:
  name: registry
  labels:
    app: registry
  namespace: default
spec:
  containers:
  - name: registry
    image: registry:2
    imagePullPolicy: Always
    ports:
      - containerPort: 5000

And this is my Service YAML file:

---
kind: Service
apiVersion: v1
metadata:
  name: registry
  namespace: default
spec:
  selector:
    app: registry
  ports:
    - port: 5000
      targetPort: 5000

Both objects are created through kubectl create -f <FILE_NAME>.
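For reference, a quick way to double-check where the pod landed and that the Service picked it up as an endpoint (a minimal sketch, using the names from the manifests above):

# Show the registry pod with its pod IP and the node it is scheduled on
kubectl get pod registry -o wide

# Confirm the Service resolves to the pod's IP and port
kubectl get endpoints registry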

These are my exposed services:

NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
kubernetes   ClusterIP   10.43.0.1       <none>        443/TCP    55m
registry     ClusterIP   10.43.198.164   <none>        5000/TCP   10m

And this is the output of kubectl describe services:

Name:              kubernetes
Namespace:         default
Labels:            component=apiserver
                   provider=kubernetes
Annotations:       <none>
Selector:          <none>
Type:              ClusterIP
IP:                10.43.0.1
Port:              https  443/TCP
TargetPort:        6443/TCP
Endpoints:         172.31.5.173:6443
Session Affinity:  ClientIP
Events:            <none>


Name:              registry
Namespace:         default
Labels:            <none>
Annotations:       <none>
Selector:          app=registry
Type:              ClusterIP
IP:                10.43.198.164
Port:              <unset>  5000/TCP
TargetPort:        5000/TCP
Endpoints:         10.42.1.4:5000
Session Affinity:  None
Events:            <none>

When I run telnet 10.43.198.164 5000 on the same node where the pod is deployed, everything works fine, while if I launch it from the other node (it's a 2-node cluster), the command just hangs.
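To narrow down whether the problem sits in kube-proxy (the Service VIP) or in the pod network itself, the same test can be repeated against the pod IP directly. A minimal sketch, using the IPs from the outputs above:

# From the node where telnet to the ClusterIP hangs:

# 1. Test through the Service VIP (the case that hangs)
telnet 10.43.198.164 5000

# 2. Test the pod IP directly, bypassing kube-proxy
telnet 10.42.1.4 5000

# If (2) also hangs, cross-node pod-to-pod traffic is broken (CNI/overlay);
# if (2) works while (1) hangs, look at kube-proxy / iptables on that node.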

Nodes are AWS EC2 instances running CentOS 7.

Kubernetes is at version 1.8.3, deployed through Rancher RKE.

I found several issues describing this problem, but nothing that helped me investigate it further.

Here you can find the RKE config file used to instantiate the cluster:

#{{ ansible_managed }}

nodes:
  - address: node1
    user: user
    role: [controlplane,worker,etcd]
    ssh_key_path: path
  - address: node2
    user: user
    role: [worker]
    ssh_key_path: path

ignore_docker_version: false

kubernetes_version: v1.10.1
network:
  plugin: flannel
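
Since the cluster uses flannel, a quick sanity check of the overlay on each node may also be relevant. A minimal sketch; the flannel.1 interface assumes the default VXLAN backend, and the kube-system pod names depend on how RKE deploys flannel:

# On each node: the VXLAN device should exist and there should be a route
# to the other node's pod subnet (10.42.x.0/24)
ip addr show flannel.1
ip route | grep 10.42

# The flannel pods in kube-system should all be Running
kubectl -n kube-system get pods | grep -i flannel

# Note: the VXLAN backend uses UDP port 8472 by default, so that port
# must be open between the nodes (e.g. in the AWS security group).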

Any help? Thanks.

-- luke035
amazon-web-services
kubernetes
rancher

1 Answer

4/30/2018

I don't think the problem is related to the Docker registry. It looks like it's on the network layer.

Debug questions (see the command sketch after this list):

  • What CNI plugin do you use?
  • Can you reach the pod directly (telnet 10.42.1.4 5000)?
  • Are your nodes (kubectl get nodes) and system pods (kubectl -n kube-system get pods) ready?
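
A minimal sketch of these checks, using the names and IPs from the question (the flannel grep is an assumption, since pod names vary by deployment):

# Which CNI plugin is running (per the RKE config above, flannel)
kubectl -n kube-system get pods | grep -i flannel

# Can the pod be reached directly, bypassing the Service VIP?
telnet 10.42.1.4 5000

# Are all nodes Ready and all system pods Running?
kubectl get nodes
kubectl -n kube-system get pods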
-- Igor Stepin
Source: StackOverflow