Two Kubernetes deployments in the same namespace are unable to communicate

5/9/2019

I'm deploying the ELK stack (OSS) to a Kubernetes cluster. The Elasticsearch deployment and service start correctly and the API is reachable. The Kibana deployment starts but can't access Elasticsearch:

From Kibana container logs:

{"type":"log","@timestamp":"2019-05-08T22:49:26Z","tags":["error","elasticsearch","admin"],"pid":1,"message":"Request error, retrying\nHEAD http://elasticsearch:9200/ => getaddrinfo ENOTFOUND elasticsearch elasticsearch:9200"}
{"type":"log","@timestamp":"2019-05-08T22:50:44Z","tags":["warning","elasticsearch","admin"],"pid":1,"message":"Unable to revive connection: http://elasticsearch:9200/"}
{"type":"log","@timestamp":"2019-05-08T22:50:44Z","tags":["warning","elasticsearch","admin"],"pid":1,"message":"No living connections"}

Both deployments are in the same namespace, "observability". I also tried to reference the Elasticsearch container as elasticsearch.observability.svc.cluster.local, but that doesn't work either.

What am I doing wrong? How do I reference the Elasticsearch container from the Kibana container?
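One way to narrow an ENOTFOUND error like this down is to test DNS resolution and connectivity from inside the Kibana pod itself (a sketch; the pod name is taken from the `kubectl get pods` output below, and the Kibana image may not ship `nslookup` or `curl`, in which case a busybox debug pod works too):

```shell
# Check whether the service name resolves inside the cluster
# (pod name from the question; adjust to your own)
kubectl -n observability exec kibana-65bc7f9c4-s9cv4 -- \
  nslookup elasticsearch

# If DNS resolves, test the HTTP endpoint directly
kubectl -n observability exec kibana-65bc7f9c4-s9cv4 -- \
  curl -s http://elasticsearch:9200/
```

If `nslookup` fails here, the problem is cluster DNS (kube-dns/CoreDNS), not Kibana's configuration.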

More info:

kubectl --context=19team-observability-admin-context -n observability get pods

NAME                            READY     STATUS    RESTARTS   AGE
elasticsearch-9d495b84f-j2297   1/1       Running   0          15s
kibana-65bc7f9c4-s9cv4          1/1       Running   0          15s

kubectl --context=19team-observability-admin-context -n observability get service

NAME            TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)                         AGE
elasticsearch   NodePort   10.104.250.175   <none>        9200:30083/TCP,9300:30059/TCP   1m
kibana          NodePort   10.102.124.171   <none>        5601:30124/TCP                  1m

I start my containers with the command:

kubectl --context=19team-observability-admin-context -n observability apply -f .\elasticsearch.yaml -f .\kibana.yaml

elasticsearch.yaml

apiVersion: v1
kind: Service
metadata:
  name: elasticsearch
  namespace: observability
spec:
  type: NodePort
  ports:
  - name: "9200"
    port: 9200
    targetPort: 9200
  - name: "9300"
    port: 9300
    targetPort: 9300
  selector:
    app: elasticsearch
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: elasticsearch
  namespace: observability
spec:
  replicas: 1
  selector:
    matchLabels:
      app: elasticsearch
  template:
    metadata:
      labels:
        app: elasticsearch
    spec:
      initContainers:
      - name: set-vm-max-map-count
        image: busybox
        imagePullPolicy: IfNotPresent
        command: ['sysctl', '-w', 'vm.max_map_count=262144']
        securityContext:
          privileged: true
        resources:
          requests:
            memory: "512Mi"
            cpu: "1"
          limits:
            memory: "724Mi"
            cpu: "1"
      containers:
      - name: elasticsearch
        image: docker.elastic.co/elasticsearch/elasticsearch-oss:6.7.1
        ports:
        - containerPort: 9200
        - containerPort: 9300
        resources:
          requests:
            memory: "3Gi"
            cpu: "1"
          limits:
            memory: "3Gi"
            cpu: "1"

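As an aside, not part of the original manifests: `extensions/v1beta1` for Deployments is deprecated (and removed in Kubernetes 1.16+). The equivalent `apps/v1` header additionally requires an explicit `selector`, roughly:

```yaml
# Sketch of the same Deployment under apps/v1 (selector is mandatory there)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: elasticsearch
  namespace: observability
spec:
  replicas: 1
  selector:
    matchLabels:
      app: elasticsearch
  template:
    metadata:
      labels:
        app: elasticsearch
    # ...pod spec unchanged from the manifest above...
```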
kibana.yaml

apiVersion: v1
kind: Service
metadata:
  name: kibana
  namespace: observability
spec:
  type: NodePort
  ports:
  - name: "5601"
    port: 5601
    targetPort: 5601
  selector:
    app: observability_platform_kibana
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    app: observability_platform_kibana
  name: kibana
  namespace: observability
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: observability_platform_kibana
    spec:
      containers:
      - env:
        # THIS IS WHERE WE SET CONNECTION BETWEEN KIBANA AND ELASTIC
        - name: ELASTICSEARCH_HOSTS
          value: http://elasticsearch:9200
        - name: SERVER_NAME
          value: kibana
        image: docker.elastic.co/kibana/kibana-oss:6.7.1
        name: kibana
        ports:
        - containerPort: 5601
        resources:
          requests:
            memory: "512Mi"
            cpu: "1"
          limits:
            memory: "724Mi"
            cpu: "1"
      restartPolicy: Always

UPDATE 1

As gonzalesraul proposed, I've created a second service for Elasticsearch with type ClusterIP:

apiVersion: v1
kind: Service
metadata:
  labels:
    app: elasticsearch
  name: elasticsearch-local
  namespace: observability
spec:
  type: ClusterIP
  ports:
  - port: 9200
    protocol: TCP
    targetPort: 9200
  selector:
    app: elasticsearch

Service is created:

kubectl --context=19team-observability-admin-context -n observability get service

NAME                  TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                         AGE
elasticsearch         NodePort    10.106.5.94     <none>        9200:31598/TCP,9300:32018/TCP   26s
elasticsearch-local   ClusterIP   10.101.178.13   <none>        9200/TCP                        26s
kibana                NodePort    10.99.73.118    <none>        5601:30004/TCP                  26s

And I reference Elasticsearch as "http://elasticsearch-local:9200".

Unfortunately it does not work; in the Kibana container:

{"type":"log","@timestamp":"2019-05-09T10:13:54Z","tags":["warning","elasticsearch","admin"],"pid":1,"message":"Unable to revive connection: http://elasticsearch-local:9200/"}
-- Philipp Bocharov
kubernetes

2 Answers

5/9/2019

Edit the server name value in kibana.yaml and set it to kibana:5601.

I think if you don't do this, it tries to go to port 80 by default.

This is what kibana.yaml looks like now:

...
spec:
  containers:
  - env:
    - name: ELASTICSEARCH_HOSTS
      value: http://elasticsearch:9200
    - name: SERVER_NAME
      value: kibana:5601
    image: docker.elastic.co/kibana/kibana-oss:6.7.1
    imagePullPolicy: IfNotPresent
    name: kibana
 ...

And this is the output now:

{"type":"log","@timestamp":"2019-05-09T10:37:16Z","tags":["status","plugin:console@6.7.1","info"],"pid":1,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","@timestamp":"2019-05-09T10:37:16Z","tags":["status","plugin:interpreter@6.7.1","info"],"pid":1,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","@timestamp":"2019-05-09T10:37:16Z","tags":["status","plugin:metrics@6.7.1","info"],"pid":1,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","@timestamp":"2019-05-09T10:37:16Z","tags":["status","plugin:tile_map@6.7.1","info"],"pid":1,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","@timestamp":"2019-05-09T10:37:16Z","tags":["status","plugin:timelion@6.7.1","info"],"pid":1,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","@timestamp":"2019-05-09T10:37:16Z","tags":["status","plugin:elasticsearch@6.7.1","info"],"pid":1,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
{"type":"log","@timestamp":"2019-05-09T10:37:17Z","tags":["listening","info"],"pid":1,"message":"Server running at http://0:5601"}

UPDATE

I just tested it on a bare-metal cluster (bootstrapped through kubeadm), and it worked again.

This is the output:

{"type":"log","@timestamp":"2019-05-09T11:09:59Z","tags":["warning","elasticsearch","admin"],"pid":1,"message":"No living connections"}
{"type":"log","@timestamp":"2019-05-09T11:10:01Z","tags":["warning","elasticsearch","admin"],"pid":1,"message":"Unable to revive connection: http://elasticsearch:9200/"}
{"type":"log","@timestamp":"2019-05-09T11:10:01Z","tags":["warning","elasticsearch","admin"],"pid":1,"message":"No living connections"}
{"type":"log","@timestamp":"2019-05-09T11:10:04Z","tags":["status","plugin:elasticsearch@6.7.1","info"],"pid":1,"state":"green","message":"Status changed from red to green - Ready","prevState":"red","prevMsg":"Unable to connect to Elasticsearch."}
{"type":"log","@timestamp":"2019-05-09T11:10:04Z","tags":["info","migrations"],"pid":1,"message":"Creating index .kibana_1."}
{"type":"log","@timestamp":"2019-05-09T11:10:06Z","tags":["info","migrations"],"pid":1,"message":"Pointing alias .kibana to .kibana_1."}
{"type":"log","@timestamp":"2019-05-09T11:10:06Z","tags":["info","migrations"],"pid":1,"message":"Finished in 2417ms."}
{"type":"log","@timestamp":"2019-05-09T11:10:06Z","tags":["listening","info"],"pid":1,"message":"Server running at http://0:5601"}

Note that it went from "No living connections" to a running server. I am running the nodes on GCP, and I had to open the firewall for it to work. What's your environment?
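Another quick check, assuming the service and namespace names from the question: look at the Endpoints object behind the service. An empty ENDPOINTS column means the service selector doesn't match any pod labels, so DNS resolves but nothing answers:

```shell
# A healthy service lists the pod IP(s) and ports in the ENDPOINTS column;
# "<none>" means the selector matches no running pods
kubectl -n observability get endpoints elasticsearch
```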

-- suren
Source: StackOverflow

5/9/2019

Do not use a NodePort service; use a ClusterIP instead. If you need to expose your service as a NodePort, create a second service alongside it, for instance:

---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: elasticsearch
  name: elasticsearch-local
  namespace: observability
spec:
  type: ClusterIP
  ports:
  - port: 9200
    protocol: TCP
    targetPort: 9200
  selector:
    app: elasticsearch

Then update the kibana manifest to point to the ClusterIP service:

# ...
# THIS IS WHERE WE SET CONNECTION BETWEEN KIBANA AND ELASTIC
- name: ELASTICSEARCH_HOSTS
  value: http://elasticsearch-local:9200
# ...

NodePort services do not create a 'DNS entry' (e.g. elasticsearch.observability.svc.cluster.local) in Kubernetes.
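If in doubt, you can check which service names actually resolve inside the namespace with a throwaway busybox pod (a sketch using standard kubectl flags; `dns-test` is just a hypothetical pod name):

```shell
# Run a one-off pod, resolve the ClusterIP service name from inside the
# cluster, and clean the pod up afterwards (--rm)
kubectl -n observability run dns-test --rm -it --restart=Never \
  --image=busybox -- nslookup elasticsearch-local
```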

-- gonzalesraul
Source: StackOverflow