I'm trying to run Elasticsearch and Kibana on a Kubernetes cluster (same namespace). I created a Pod and a Service for Elasticsearch and for Kibana. When I open the Elasticsearch page (http://localhost:8001/api/v1/namespaces/default/pods/elasticsearch/proxy/), everything seems fine, but when I open Kibana's page, I see "Kibana did not load properly. Check the server output for more information.".
The Kibana pod's logs are the following:
{"type":"error","@timestamp":"2019-03-04T19:27:21Z","tags":["warning","stats-collection"],"pid":1,"level":"error","error":{"message":"Request Timeout after 30000ms","name":"Error","stack":"Error: Request Timeout after 30000ms\n at /usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:355:15\n at Timeout.<anonymous> (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:384:7)\n at ontimeout (timers.js:436:11)\n at tryOnTimeout (timers.js:300:5)\n at listOnTimeout (timers.js:263:5)\n at Timer.processTimers (timers.js:223:10)"},"message":"Request Timeout after 30000ms"}
These are the YAML files:
deployment_elasticsearch.yaml:
apiVersion: v1
kind: Pod
metadata:
  name: elasticsearch
  labels:
    service: elasticsearch
spec:
  containers:
  - name: elasticsearch
    image: elasticsearch:6.6.1
    ports:
    - containerPort: 9200
    - containerPort: 9300
    env:
    - name: discovery.type
      value: "single-node"
deployment_elasticsearch_service.yaml:
apiVersion: v1
kind: Service
metadata:
  name: elasticsearch
  labels:
    service: elasticsearch
spec:
  ports:
  - port: 9200
    name: serving
  - port: 9300
    name: node-to-node
  selector:
    service: elasticsearch
deployment_kibana.yaml:
apiVersion: v1
kind: Pod
metadata:
  name: kibana
  labels:
    service: kibana
spec:
  containers:
  - name: kibana
    image: kibana:6.6.1
    ports:
    - containerPort: 5601
deployment_kibana_service.yaml:
apiVersion: v1
kind: Service
metadata:
  name: kibana
  labels:
    service: kibana
spec:
  ports:
  - port: 5601
    name: serving
  selector:
    service: kibana
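For reference, I create everything with kubectl apply, roughly like this (file names as above):
kubectl apply -f deployment_elasticsearch.yaml
kubectl apply -f deployment_elasticsearch_service.yaml
kubectl apply -f deployment_kibana.yaml
kubectl apply -f deployment_kibana_service.yaml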
Also, when I exec into the kibana pod and run "curl http://elasticsearch:9200", I get the Elasticsearch home page (so I think that Kibana can reach Elasticsearch).
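For completeness, this is roughly how I test it from outside the pod as well (a sketch; pod and service names as above):
kubectl exec -it kibana -- curl -s http://elasticsearch:9200
kubectl exec -it kibana -- getent hosts elasticsearch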
EDIT: These are the grepped error logs for Kibana:
{"type":"log","@timestamp":"2019-03-04T22:41:16Z","tags":["status","plugin:index_management@6.6.1","error"],"pid":1,"state":"red","message":"Status changed from green to red - Request Timeout after 30000ms","prevState":"green","prevMsg":"Ready"}
{"type":"log","@timestamp":"2019-03-04T22:41:16Z","tags":["status","plugin:index_lifecycle_management@6.6.1","error"],"pid":1,"state":"red","message":"Status changed from green to red - Request Timeout after 30000ms","prevState":"green","prevMsg":"Ready"}
{"type":"log","@timestamp":"2019-03-04T22:41:16Z","tags":["status","plugin:rollup@6.6.1","error"],"pid":1,"state":"red","message":"Status changed from green to red - Request Timeout after 30000ms","prevState":"green","prevMsg":"Ready"}
{"type":"log","@timestamp":"2019-03-04T22:41:16Z","tags":["status","plugin:remote_clusters@6.6.1","error"],"pid":1,"state":"red","message":"Status changed from green to red - Request Timeout after 30000ms","prevState":"green","prevMsg":"Ready"}
{"type":"log","@timestamp":"2019-03-04T22:41:16Z","tags":["status","plugin:cross_cluster_replication@6.6.1","error"],"pid":1,"state":"red","message":"Status changed from green to red - Request Timeout after 30000ms","prevState":"green","prevMsg":"Ready"}
{"type":"log","@timestamp":"2019-03-04T22:41:16Z","tags":["status","plugin:reporting@6.6.1","error"],"pid":1,"state":"red","message":"Status changed from green to red - Request Timeout after 30000ms","prevState":"green","prevMsg":"Ready"}
{"type":"log","@timestamp":"2019-03-04T22:41:34Z","tags":["spaces","error"],"pid":1,"message":"Unable to navigate to space \"default\", redirecting to Space Selector. Error: Request Timeout after 30000ms"}
{"type":"log","@timestamp":"2019-03-04T22:41:41Z","tags":["spaces","error"],"pid":1,"message":"Unable to navigate to space \"default\", redirecting to Space Selector. Error: Request Timeout after 30000ms"}
From online research, I think the problem is that Elasticsearch and Kibana can't talk to one another. Do you know why?
EDIT 2: This is the kubectl describe output:
kubectl describe pod kibana
Name:               kibana
Namespace:          default
Priority:           0
PriorityClassName:  <none>
Node:               minikube/10.0.2.15
Start Time:         Tue, 05 Mar 2019 00:21:23 +0200
Labels:             service=kibana
Annotations:        <none>
Status:             Running
IP:                 172.17.0.5
Containers:
  kibana:
    Container ID:   docker://7eecb30b2f197120706d790e884db44696d5d1a30d3ec48a9ca2a6255eca7e8a
    Image:          kibana:6.6.1
    Image ID:       docker-pullable://kibana@sha256:a2b329d8903978069632da8aa85cc5199c5ab2cf289c48b7851bafd6ee58bbea
    Port:           5601/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Tue, 05 Mar 2019 00:21:24 +0200
    Ready:          True
    Restart Count:  0
    Environment:
      ELASTICSEARCH_URL:  http://elasticsearch:9200
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-q25px (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  default-token-q25px:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-q25px
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  51m   default-scheduler  Successfully assigned default/kibana to minikube
  Normal  Pulled     51m   kubelet, minikube  Container image "kibana:6.6.1" already present on machine
  Normal  Created    51m   kubelet, minikube  Created container
  Normal  Started    51m   kubelet, minikube  Started container
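The ELASTICSEARCH_URL shown under Environment above is presumably set through an env entry added to the kibana pod spec, along these lines:
    env:
    - name: ELASTICSEARCH_URL
      value: "http://elasticsearch:9200"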
I reproduced your setup in my cluster, and the connectivity between Kibana and Elasticsearch is fine.
NAME            READY   STATUS    RESTARTS   AGE   IP            NODE        NOMINATED NODE   READINESS GATES
elasticsearch   1/1     Running   0          37m   10.244.1.8    worker-12   <none>           <none>
kibana          1/1     Running   0          25m   10.244.3.10   worker-14   <none>           <none>
Ping from kibana to elasticsearch
bash-4.2$ ping 10.244.1.8
PING 10.244.1.8 (10.244.1.8) 56(84) bytes of data.
64 bytes from 10.244.1.8: icmp_seq=1 ttl=62 time=0.705 ms
64 bytes from 10.244.1.8: icmp_seq=2 ttl=62 time=0.501 ms
Ping from elasticsearch to kibana
[root@elasticsearch elasticsearch]# ping 10.244.3.10
PING 10.244.3.10 (10.244.3.10) 56(84) bytes of data.
64 bytes from 10.244.3.10: icmp_seq=1 ttl=62 time=0.444 ms
64 bytes from 10.244.3.10: icmp_seq=2 ttl=62 time=0.462 ms
The problem you are facing is because of the hostname used. kibana.yml uses 'elasticsearch' in the Elasticsearch URL (http://elasticsearch:9200), and the Kibana container is not able to resolve the name 'elasticsearch'.
So you will have to add an entry to the /etc/hosts file with the IP address of 'elasticsearch'. For example, in my case /etc/hosts contains:
# Kubernetes-managed hosts file.
127.0.0.1 localhost
::1 localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
fe00::0 ip6-mcastprefix
fe00::1 ip6-allnodes
fe00::2 ip6-allrouters
10.244.3.10 kibana
10.244.1.8 elasticsearch
That should solve your problem.
But that won't be easy: you cannot change that file in a running container, so you would have to rebuild your image or run the container with the --add-host option (see the Docker documentation for --add-host).
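For reference, outside Kubernetes that flag looks roughly like this (the IP is from my cluster above; adjust it to yours):
docker run --add-host elasticsearch:10.244.1.8 kibana:6.6.1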
A simpler workaround is changing kibana.yml to look like this:
# Default Kibana configuration from kibana-docker.
server.name: kibana
server.host: "0"
elasticsearch.url: http://10.244.1.8:9200  # enter your Elasticsearch container IP
xpack.monitoring.ui.container.elasticsearch.enabled: true
Configure the correct IP address of the Elasticsearch container and restart your Kibana container. The same applies in reverse for the Elasticsearch container.
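To find the Elasticsearch container IP, something like this works (pod name as in your manifests):
kubectl get pod elasticsearch -o jsonpath='{.status.podIP}'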
Take your pick.
Further edit.
To change the hosts file from the Kubernetes YAML:
Start the Elasticsearch service/cluster beforehand:
[root@controller-11 test-dir]# kubectl get services elasticsearch -o wide
NAME            TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)             AGE    SELECTOR
elasticsearch   ClusterIP   10.103.254.157   <none>        9200/TCP,9300/TCP   153m   service=elasticsearch
Then edit the Kibana pod manifest with the IP address of the elasticsearch service. It would look like this:
apiVersion: v1
kind: Pod
metadata:
  name: kibana
  labels:
    service: kibana
spec:
  hostAliases:
  - ip: "10.103.254.157"
    hostnames:
    - "elasticsearch"
  containers:
  - name: kibana
    image: kibana:6.6.1
    ports:
    - containerPort: 5601
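Note that hostAliases is part of the pod spec and cannot be changed on a running pod, so delete and recreate it (file name as in your question):
kubectl delete pod kibana
kubectl apply -f deployment_kibana.yaml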
Log in to your Kibana container and check the /etc/hosts file; it would look like this:
bash-4.2$ cat /etc/hosts
# Kubernetes-managed hosts file.
127.0.0.1 localhost
::1 localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
fe00::0 ip6-mcastprefix
fe00::1 ip6-allnodes
fe00::2 ip6-allrouters
10.244.2.2 kibana
# Entries added by HostAliases.
10.103.254.157 elasticsearch
Then try reaching the Elasticsearch server; it would look like this:
bash-4.2$ curl http://elasticsearch:9200
{
  "name" : "tyqNRro",
  "cluster_name" : "docker-cluster",
  "cluster_uuid" : "tFmM2Nq9RDmGlDy6G2FUZw",
  "version" : {
    "number" : "6.6.1",
    "build_flavor" : "default",
    "build_type" : "tar",
    "build_hash" : "1fd8f69",
    "build_date" : "2019-02-13T17:10:04.160291Z",
    "build_snapshot" : false,
    "lucene_version" : "7.6.0",
    "minimum_wire_compatibility_version" : "5.6.0",
    "minimum_index_compatibility_version" : "5.0.0"
  },
  "tagline" : "You Know, for Search"
}
That should do it, I suppose.
Further edit.
Upon further investigation, it looks like the configuration you used should work without any of the changes I suggested. It looks like your Kubernetes elasticsearch Service is not configured properly: if the Service were configured properly, we would find endpoints pointing to your Elasticsearch container. It should look like this:
root@server1d:~# kubectl describe service elasticsearch
Name:              elasticsearch
Namespace:         default
Labels:            service=elasticsearch
Annotations:       kubectl.kubernetes.io/last-applied-configuration:
                     {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"service":"elasticsearch"},"name":"elasticsearch","namespace"...
Selector:          service=elasticsearch
Type:              ClusterIP
IP:                10.102.227.86
Port:              serving  9200/TCP
TargetPort:        9200/TCP
Endpoints:         10.244.1.9:9200
Port:              node-to-node  9300/TCP
TargetPort:        9300/TCP
Endpoints:         10.244.1.9:9300
Session Affinity:  None
Events:            <none>
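If Endpoints shows <none> instead, the Service selector is not matching the pod's labels. A quick way to compare the two (a sketch):
kubectl get endpoints elasticsearch
kubectl get pods -l service=elasticsearch -o wide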