Kubernetes pods are not reaching pods on another node within a cluster

10/18/2018

I am very new to the Rancher/Kubernetes world and I am having a problem.

I am trying to deploy an application that needs to be stateful.

To be honest, I am trying to deploy a service registry (yes, I need to). What I am trying to do and why:

What:

  • Deploy multiple service registries that register with each other (for high availability).
  • Expose them with a StatefulSet object so that each registry gets a stable, specific name (for client registration purposes); that way I get names like registry-0 and registry-1 and can use those names to configure the clients.

Why:

  • If I use a ClusterIP Service, requests are load-balanced across the service registries, so a client (and each server) would end up self-registering with only one registry instead of with every one of them. That is bad for me; a sketch of the client side is just below.
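
For context, this is roughly what the client side of that looks like: instead of pointing clients at one load-balanced address, the defaultZone lists every stable replica name. This is only a sketch; the client name and image are made up, and the _JAVA_OPTIONS trick simply mirrors the registry manifest further down:

apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: demo-client               # hypothetical client service
spec:
  replicas: 1
  selector:
    matchLabels:
      app: demo-client
  template:
    metadata:
      labels:
        app: demo-client
    spec:
      containers:
      - name: demo-client
        image: example/demo-client    # placeholder image
        env:
        # Register with each registry replica by its stable StatefulSet DNS name,
        # not through a single load-balanced ClusterIP
        - name: _JAVA_OPTIONS
          value: -Deureka.client.serviceUrl.defaultZone=http://eureka-0.eureka:7700/eureka/,http://eureka-1.eureka:7700/eureka/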

My infrastructure:

  • Rancher installed on AWS
  • A Kubernetes cluster configured with 3 nodes:
  • node1: all (worker, etcd, controlplane)
  • node2: worker
  • node3: worker

My problem is:

When I apply the YAML and Kubernetes deploys my application, any service registry replica that lands on node1 works perfectly: it can see itself and the other replicas that are on node1. For example:

node1: eureka1; eureka2 (eureka1 sees itself and eureka2; the same goes for eureka2, which sees itself and eureka1)

But if I create, say, another 4 Eureka replicas and the scheduler places them on the other nodes (2 more on node2, worker only, and another 2 on node3, worker only), they cannot see each other or even themselves, and eureka1 and eureka2 cannot see eureka3, eureka4, eureka5, and eureka6.

TLDR:

  • The pods on node1 can see each other but cannot see the pods on the other nodes (a quick check for this is sketched below).
  • The pods on node2 and node3 cannot see themselves or the pods on the other nodes.
  • If I run everything locally with minikube, it all works fine.
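
To make "cannot see" concrete, this is the kind of check that shows the symptom (only a sketch: it assumes the image has wget, and the pod IP comes from the listing at the end, so adjust both as needed):

# From a replica on one node, try a replica on another node via its stable DNS name...
kubectl exec -it eureka-0 -- wget -qO- -T 5 http://eureka-1.eureka:7700/eureka/apps
# ...and directly by pod IP, to separate DNS problems from plain pod-to-pod connectivity
kubectl exec -it eureka-0 -- wget -qO- -T 5 http://10.42.1.42:7700/eureka/apps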

To reproduce, just apply both files below and access the main IP of the Kubernetes cluster.

The service registry deployment file is:

Service-registry.yaml:

---
apiVersion: v1
kind: Service
metadata:
  name: eureka
  labels:
    app: eureka
spec:
  ports:
  - port: 7700
    name: eureka
  clusterIP: None
  selector:
    app: eureka
---    
apiVersion: apps/v1beta2
kind: StatefulSet
metadata:
  name: eureka
spec:
  serviceName: "eureka"
  replicas: 5 
  selector:
    matchLabels:
      app: eureka
  template:
    metadata:
      labels:
        app: eureka
    spec:
      containers:
      - name: eureka
        image: leandrozago/eureka
        ports:
        - containerPort: 7700
        env:
        - name: MY_POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
          # Due to camelcase issues with "defaultZone" and "preferIpAddress", _JAVA_OPTIONS is used here
        - name: _JAVA_OPTIONS
          value: -Deureka.instance.preferIpAddress=false -Deureka.client.serviceUrl.defaultZone=http://eureka-0.eureka:7700/eureka/,http://eureka-1.eureka:7700/eureka/,http://eureka-2.eureka:7700/eureka/,http://eureka-3.eureka:7700/eureka/,http://eureka-4.eureka:7700/eureka/,http://eureka-5.eureka:7700/eureka/,http://eureka-6.eureka:7700/eureka/
        - name: EUREKA_CLIENT_REGISTERWITHEUREKA
          value: "true"
        - name: EUREKA_CLIENT_FETCHREGISTRY
          value: "true"
        # The hostnames must match with the eureka serviceUrls, otherwise, the replicas are reported as unavailable in the eureka dashboard      
        - name: EUREKA_INSTANCE_HOSTNAME
          value: ${MY_POD_NAME}.eureka
  # No need to start the pods in order; we just need the stable network identity
  podManagementPolicy: "Parallel"
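
Two quick sanity checks for the stable-identity part (a sketch; it assumes busybox can be pulled and that everything lives in the default namespace):

# Confirm the hostname each replica will announce to Eureka
kubectl exec eureka-0 -- printenv EUREKA_INSTANCE_HOSTNAME
# Confirm the headless Service gives every replica its own DNS record
kubectl run dns-test --rm -it --restart=Never --image=busybox -- nslookup eureka-0.eureka.default.svc.cluster.local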

Ingress yaml:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-service
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
    - http:
        paths:
          - path: /
            backend:
              serviceName: eureka
              servicePort: 7700
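
For completeness, the dashboard is reached through this ingress on a node address (the address here is a placeholder; the nginx ingress controllers run on all three nodes, per the listing below):

curl -i http://<node-public-ip>/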

EDITED:

kubectl get pods --all-namespaces -o wide

NAMESPACE       NAME                                      READY     STATUS             RESTARTS   AGE       IP              NODE
cattle-system   cattle-cluster-agent-557ff9f65d-5qsv6     0/1       CrashLoopBackOff   15         58m       10.42.1.41      rancher-b2b-rancheragent-1-worker
cattle-system   cattle-node-agent-mxfpm                   1/1       Running            0          4d        172.18.80.152   rancher-b2b-rancheragent-0-all
cattle-system   cattle-node-agent-x2wdc                   1/1       Running            0          4d        172.18.82.84    rancher-b2b-rancheragent-0-worker
cattle-system   cattle-node-agent-z6cnw                   1/1       Running            0          4d        172.18.84.152   rancher-b2b-rancheragent-1-worker
default         eureka-0                                  1/1       Running            0          52m       10.42.2.41      rancher-b2b-rancheragent-0-worker
default         eureka-1                                  1/1       Running            0          52m       10.42.1.42      rancher-b2b-rancheragent-1-worker
default         eureka-2                                  1/1       Running            0          52m       10.42.0.28      rancher-b2b-rancheragent-0-all
default         eureka-3                                  1/1       Running            0          52m       10.42.1.43      rancher-b2b-rancheragent-1-worker
default         eureka-4                                  1/1       Running            0          52m       10.42.2.42      rancher-b2b-rancheragent-0-worker
default         eureka-5                                  1/1       Running            0          59s       10.42.0.29      rancher-b2b-rancheragent-0-all
default         eureka-6                                  1/1       Running            0          59s       10.42.2.43      rancher-b2b-rancheragent-0-worker
ingress-nginx   default-http-backend-797c5bc547-wkp5z     1/1       Running            0          4d        10.42.0.5       rancher-b2b-rancheragent-0-all
ingress-nginx   nginx-ingress-controller-dd5mt            1/1       Running            0          4d        172.18.82.84    rancher-b2b-rancheragent-0-worker
ingress-nginx   nginx-ingress-controller-m6jkh            1/1       Running            1          4d        172.18.84.152   rancher-b2b-rancheragent-1-worker
ingress-nginx   nginx-ingress-controller-znr8c            1/1       Running            0          4d        172.18.80.152   rancher-b2b-rancheragent-0-all
kube-system     canal-bqh22                               3/3       Running            0          4d        172.18.80.152   rancher-b2b-rancheragent-0-all
kube-system     canal-bv7zp                               3/3       Running            0          3d        172.18.84.152   rancher-b2b-rancheragent-1-worker
kube-system     canal-m5jnj                               3/3       Running            0          4d        172.18.82.84    rancher-b2b-rancheragent-0-worker
kube-system     kube-dns-7588d5b5f5-wdkqm                 3/3       Running            0          4d        10.42.0.4       rancher-b2b-rancheragent-0-all
kube-system     kube-dns-autoscaler-5db9bbb766-snp4h      1/1       Running            0          4d        10.42.0.3       rancher-b2b-rancheragent-0-all
kube-system     metrics-server-97bc649d5-q2bxh            1/1       Running            0          4d        10.42.0.2       rancher-b2b-rancheragent-0-all
kube-system     rke-ingress-controller-deploy-job-bqvcl   0/1       Completed          0          4d        172.18.80.152   rancher-b2b-rancheragent-0-all
kube-system     rke-kubedns-addon-deploy-job-sf4w5        0/1       Completed          0          4d        172.18.80.152   rancher-b2b-rancheragent-0-all
kube-system     rke-metrics-addon-deploy-job-55xwp        0/1       Completed          0          4d        172.18.80.152   rancher-b2b-rancheragent-0-all
kube-system     rke-network-plugin-deploy-job-fdg9d       0/1       Completed          0          21h       172.18.80.152   rancher-b2b-rancheragent-0-all
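
Everything in that listing is Running except the cattle-cluster-agent on the worker node, which is crash-looping, so these are the extra logs worth pulling (a sketch; the -c container name is an assumption about the canal pod layout, and kubectl describe pod shows the real names):

# Logs of the crash-looping Rancher cluster agent (current and previous attempt)
kubectl logs -n cattle-system cattle-cluster-agent-557ff9f65d-5qsv6 --previous
# Logs of the overlay-network (canal) pod on one of the nodes
kubectl logs -n kube-system canal-bqh22 -c kube-flannel
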
-- Lucas
docker
kubernetes
rancher

0 Answers