How can I connect to a container from a different pod in k8s?

12/31/2020

I created two pods in k8s, one running elasticsearch and one running kibana. The kibana container needs to access the elasticsearch endpoint on port 9200, so I set the env to http://es-cluster-1.elasticsearch-entrypoint.default.svc.local:9200 based on this doc: https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/.

But in the kibana logs I can see that it can't reach this endpoint. What did I do wrong?

{"type":"log","@timestamp":"2020-12-31T07:44:05Z","tags":["error","elasticsearch","data"],"pid":6,"message":"[ConnectionError]: getaddrinfo ENOTFOUND es-cluster-1.elasticsearch-entrypoint.default.svc.local es-cluster-1.elasticsearch-entrypoint.default.svc.local:9200"}

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: es-cluster
spec:
  serviceName: elasticsearch-entrypoint
  replicas: 1
  selector:
    matchLabels:
      name: elasticsearch
  template:
    metadata:
      labels:
        name: elasticsearch
    spec:
      containers:
        - name: elasticsearch
          image: elasticsearch:7.10.1
          ports:
            - containerPort: 9200
              name: rest
            - containerPort: 9300
              name: inter-node
          env:
            - name: discovery.type
              value: single-node
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kibana
spec:
  replicas: 1
  selector:
    matchLabels:
      name: kibana
  template:
    metadata:
      labels:
        name: kibana
    spec:
      containers:
        - name: kibana
          image: kibana:7.10.1
          ports:
            - containerPort: 5601
          env:
            - name: ELASTICSEARCH_HOSTS
              value: http://es-cluster-1.elasticsearch-entrypoint.default.svc.local:9200
---
apiVersion: v1
kind: Service
metadata:
  name: elasticsearch-entrypoint
  namespace: default
spec:
  clusterIP: None
  selector:
    name: elasticsearch
  ports:
  - port: 9200
    name: rest
  - port: 9300
    name: inter-node
---
apiVersion: v1
kind: Service
metadata:
  name: kibana-entrypoint
  namespace: default
spec:
  selector:
    name: kibana
  ports:
  - port: 5601
-- Joey Yi Zhao
kubernetes

3 Answers

12/31/2020

You need to access it via the Service that was created, so that traffic is balanced across all pods in the StatefulSet: elasticsearch-entrypoint.default.svc.cluster.local (assuming your cluster DNS suffix is the default, cluster.local). You also don't need to specify the namespace unless you're communicating across namespaces; since the kibana and elasticsearch pods are both in default, the plain Service name elasticsearch-entrypoint resolves as well.

This Service will use its label selector to send traffic to any pods with the label name: elasticsearch.
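
Concretely, a minimal sketch of the change this implies for the kibana Deployment (assuming everything runs in the default namespace with the default cluster.local DNS suffix):

env:
  - name: ELASTICSEARCH_HOSTS
    # point at the headless Service, not an individual pod hostname
    value: http://elasticsearch-entrypoint.default.svc.cluster.local:9200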

-- Dom
Source: StackOverflow

12/31/2020

While reproducing the issue I encountered the exact same problem with the kibana pod being unable to connect, but the reason for that was that the es-cluster-0 pod was failing with this error:

[1]: max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]

This is because elasticsearch uses an mmapfs directory by default to store its indices. The default operating system limit on mmap counts is likely to be too low, which may result in out of memory exceptions.

This value can be changed with this command:

sysctl -w vm.max_map_count=262144

To do this on minikube you have to ssh into the node:

minikube ssh

And then run the command there:

docker@minikube:~$ sudo sysctl -w vm.max_map_count=262144
vm.max_map_count = 262144

You can read more about vm.max_map_count in the Elasticsearch virtual memory documentation.
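
If you'd rather not ssh into the node by hand, a common alternative (a sketch on my part, not something from the original answer) is a privileged initContainer in the StatefulSet pod spec that raises the sysctl before elasticsearch starts:

initContainers:
  - name: increase-vm-max-map-count
    # runs to completion before the elasticsearch container starts;
    # needs privileged mode to change a node-level sysctl
    image: busybox
    command: ["sysctl", "-w", "vm.max_map_count=262144"]
    securityContext:
      privileged: true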

-- acid_fuji
Source: StackOverflow

12/31/2020

I tried to run your YAML config, but it doesn't work; the pod keeps restarting.

Like @Dom said, your service should be available at "elasticsearch-entrypoint.default.svc.cluster.local", but only if your pod is running.

For more regarding DNS, read the docs https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#stable-network-id

That being said, let's fix your YAML config.

Since your pod is not running, what should we do? Check the logs. I did, and found an interesting log entry:

ERROR: [1] bootstrap checks failed
[1]: the default discovery settings are unsuitable for production use; at least one of [discovery.seed_hosts, discovery.seed_providers, cluster.initial_master_nodes] must be configured

So there must be something wrong with the elasticsearch discovery settings. Luckily I found this blog post by Rancher Labs about running elasticsearch on k8s: https://rancher.com/blog/2018/2018-11-22-deploying-elasticsearch/. I read their config and tweaked your original config into:

apiVersion: v1
kind: ConfigMap
metadata:
  name: es-config
data:
  elasticsearch.yml: |
    network.host: "0.0.0.0"
    cluster.initial_master_nodes: es-cluster-0
    discovery.zen.minimum_master_nodes: 1
    xpack.security.enabled: false
    xpack.monitoring.enabled: false
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: es-cluster
spec:
  serviceName: elasticsearch-entrypoint
  replicas: 1
  selector:
    matchLabels:
      name: elasticsearch
  template:
    metadata:
      labels:
        name: elasticsearch
    spec:
      volumes:
      - name: elasticsearch-config
        configMap:
          name: es-config
          items:
            - key: elasticsearch.yml
              path: elasticsearch.yml
      containers:
        - name: elasticsearch
          image: elasticsearch:7.10.1
          ports:
            - containerPort: 9200
              name: rest
            - containerPort: 9300
              name: inter-node
          volumeMounts:
          - name: elasticsearch-config
            mountPath: /usr/share/elasticsearch/config/elasticsearch.yml
            subPath: elasticsearch.yml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kibana
spec:
  replicas: 1
  selector:
    matchLabels:
      name: kibana
  template:
    metadata:
      labels:
        name: kibana
    spec:
      containers:
        - name: kibana
          image: kibana:7.10.1
          ports:
            - containerPort: 5601
          env:
            - name: ELASTICSEARCH_HOSTS
              value: http://elasticsearch-entrypoint.default.svc.cluster.local:9200
---
apiVersion: v1
kind: Service
metadata:
  name: elasticsearch-entrypoint
  namespace: default
spec:
  clusterIP: None
  selector:
    name: elasticsearch
  ports:
  - port: 9200
    name: rest
  - port: 9300
    name: inter-node
---
apiVersion: v1
kind: Service
metadata:
  name: kibana-entrypoint
  namespace: default
spec:
  selector:
    name: kibana
  ports:
  - port: 5601

The difference is that the ES pod now reads its config from the ConfigMap we specified. To make sure the service is running properly, we can either specify a livenessProbe in the YAML spec (a sketch is shown after the output below) or check manually:

/ > dig +search elasticsearch-entrypoint

; <<>> DiG 9.16.6 <<>> +search elasticsearch-entrypoint
;; global options: +cmd
;; Got answer:
;; WARNING: .local is reserved for Multicast DNS
;; You are currently testing what happens when an mDNS query is leaked to DNS
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 27045
;; flags: qr aa rd; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1
;; WARNING: recursion requested but not available

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
; COOKIE: ca540a28cfb103a8 (echoed)
;; QUESTION SECTION:
;elasticsearch-entrypoint.default.svc.cluster.local. IN A

;; ANSWER SECTION:
elasticsearch-entrypoint.default.svc.cluster.local. 30 IN A 10.244.82.52

;; Query time: 1 msec
;; SERVER: 10.96.0.10#53(10.96.0.10)
;; WHEN: Thu Dec 31 09:33:32 UTC 2020
;; MSG SIZE  rcvd: 157

/ #
/ > nc -vz elasticsearch-entrypoint.default.svc.cluster.local. 9200
elasticsearch-entrypoint.default.svc.cluster.local. (10.244.82.52:9200) open
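
As for the probe mentioned above, a minimal sketch (my own assumption, not taken from the Rancher post) that could be added to the elasticsearch container spec:

readinessProbe:
  # mark the pod ready once the REST port answers cluster health requests
  httpGet:
    path: /_cluster/health
    port: 9200
  initialDelaySeconds: 30
  periodSeconds: 10
livenessProbe:
  # restart the container if the REST port stops accepting connections
  tcpSocket:
    port: 9200
  initialDelaySeconds: 60
  periodSeconds: 20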
-- ThatBuffDude
Source: StackOverflow