“Kibana server is not ready yet” error when deploying ELK in k8s addons file

4/29/2019

I am new to the ELK stack. I want to deploy ELK in my k8s cluster, and I am using minikube to try it out.

The YAML files are all from the Kubernetes repo:

https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/fluentd-elasticsearch

I only changed kibana-service.yaml, adding a single line, type: NodePort; the resulting Service is sketched below.
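
For reference, this is roughly what my modified kibana-service.yaml looks like (a sketch from the upstream addons file; the only line I added is type: NodePort, and the labels may differ slightly from the current upstream version):

    apiVersion: v1
    kind: Service
    metadata:
      name: kibana-logging
      namespace: kube-system
      labels:
        k8s-app: kibana-logging
        kubernetes.io/cluster-service: "true"
        addonmanager.kubernetes.io/mode: Reconcile
        kubernetes.io/name: "Kibana"
    spec:
      type: NodePort                 # the only line I added
      ports:
      - port: 5601
        protocol: TCP
        targetPort: ui               # named container port from kibana-deployment.yaml
      selector:
        k8s-app: kibana-logging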

The commands I ran, with their output:

 kubectl create -f fluentd-elasticsearch/

 kubectl get pods -n kube-system
 # some rows and columns omitted
 NAME                                    READY   STATUS
 elasticsearch-logging-0                 1/1     Running   
 elasticsearch-logging-1                 1/1     Running   
 fluentd-es-v2.5.1-cz6zp                 1/1     Running   
 kibana-logging-5c895c4cd-qjrkz          1/1     Running  
 kube-addon-manager-minikube             1/1     Running  
 kube-dns-7cd4f8cd9f-gzbxb               3/3     Running   
 kubernetes-dashboard-7b7c7bd496-m748h   1/1     Running  

 kubectl get svc -n kube-system
 NAME                    TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
 elasticsearch-logging   ClusterIP   10.96.18.172    <none>        9200/TCP         74m
 kibana-logging          NodePort    10.102.218.78   <none>        5601:30345/TCP   74m
 kube-dns                ClusterIP   10.96.0.10      <none>        53/UDP,53/TCP    42d
 kubernetes-dashboard    NodePort    10.102.61.203   <none>        80:30000/TCP     42d

 kubectl describe pods elasticsearch-logging-0 -n kube-system

Name:           elasticsearch-logging-0
Namespace:      kube-system
Node:           minikube/192.168.99.100
Start Time:     Mon, 29 Apr 2019 21:42:25 +0800
Labels:         controller-revision-hash=elasticsearch-logging-76ccc76cd9
                k8s-app=elasticsearch-logging
                kubernetes.io/cluster-service=true
                statefulset.kubernetes.io/pod-name=elasticsearch-logging-0
                version=v6.6.1
Annotations:    <none>
Status:         Running
IP:             172.17.0.20
Controlled By:  StatefulSet/elasticsearch-logging
Init Containers:
  elasticsearch-logging-init:
    Container ID:  docker://ff75d166b9df3ee444efb19e2498907d0cfec53d35b14d124bbb6756eb4418ed
    Image:         alpine:3.6
    Image ID:      docker-pullable://alpine@sha256:ee0c0e7b6b20b175f5ffb1bbd48b41d94891b0b1074f2721acb008aafdf25417
    Port:          <none>
    Host Port:     <none>
    Command:
      /sbin/sysctl
      -w
      vm.max_map_count=262144
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Mon, 29 Apr 2019 21:42:25 +0800
      Finished:     Mon, 29 Apr 2019 21:42:25 +0800
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from elasticsearch-logging-token-g2rcx (ro)
Containers:
  elasticsearch-logging:
    Container ID:   docker://8f52602890334bdbbfd5a2042ac6e99426308230db0f338ea80a3cd2bef3bda3
    Image:          gcr.io/fluentd-elasticsearch/elasticsearch:v6.6.1
    Image ID:       docker-pullable://gcr.io/fluentd-elasticsearch/elasticsearch@sha256:89cdf74301f36f911e0fc832b21766114adbd591241278cf97664b7cb76b2e67
    Ports:          9200/TCP, 9300/TCP
    Host Ports:     0/TCP, 0/TCP
    State:          Running
      Started:      Mon, 29 Apr 2019 21:59:20 +0800
    Last State:     Terminated
      Reason:       Error
      Exit Code:    137
      Started:      Mon, 29 Apr 2019 21:58:02 +0800
      Finished:     Mon, 29 Apr 2019 21:58:35 +0800
    Ready:          True
    Restart Count:  4
    Limits:
      cpu:  1
    Requests:
      cpu:  100m
    Environment:
      NAMESPACE:  kube-system (v1:metadata.namespace)
    Mounts:
      /data from elasticsearch-logging (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from elasticsearch-logging-token-g2rcx (ro)
Conditions:
  Type           Status
  Initialized    True 
  Ready          True 
  PodScheduled   True 
Volumes:
  elasticsearch-logging:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:     
    SizeLimit:  <unset>
  elasticsearch-logging-token-g2rcx:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  elasticsearch-logging-token-g2rcx
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     <none>
Events:
  Type     Reason                 Age                  From               Message
  ----     ------                 ----                 ----               -------
  Normal   Scheduled              19m                  default-scheduler  Successfully assigned elasticsearch-logging-0 to minikube
  Normal   SuccessfulMountVolume  19m                  kubelet, minikube  MountVolume.SetUp succeeded for volume "elasticsearch-logging"
  Normal   SuccessfulMountVolume  19m                  kubelet, minikube  MountVolume.SetUp succeeded for volume "elasticsearch-logging-token-g2rcx"
  Normal   Pulled                 19m                  kubelet, minikube  Container image "alpine:3.6" already present on machine
  Normal   Created                19m                  kubelet, minikube  Created container
  Normal   Started                19m                  kubelet, minikube  Started container
  Warning  BackOff                3m3s (x6 over 10m)   kubelet, minikube  Back-off restarting failed container
  Normal   Pulled                 2m49s (x5 over 19m)  kubelet, minikube  Container image "gcr.io/fluentd-elasticsearch/elasticsearch:v6.6.1" already present on machine
  Normal   Created                2m49s (x5 over 19m)  kubelet, minikube  Created container
  Normal   Started                2m49s (x5 over 19m)  kubelet, minikube  Started container

When I visit minikube-ip:30345, I get "Kibana server is not ready yet".

When I SSH into minikube, curl 10.96.18.172:9200 does not work either, so I suspect the problem lies in Elasticsearch; my checks are sketched below.
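
As a rough sketch, these are the checks I ran (the ClusterIP is taken from the kubectl get svc output above), plus the log commands I assume would be the next step:

    # open a shell inside the minikube VM
    minikube ssh

    # inside the VM: ClusterIP of the elasticsearch-logging Service
    curl http://10.96.18.172:9200         # this is the request that does not work for me

    # back on the host: Elasticsearch container logs, including the previous
    # instance that exited with code 137
    kubectl logs elasticsearch-logging-0 -n kube-system
    kubectl logs elasticsearch-logging-0 -n kube-system --previous

    # Kibana's own log should show whether it can reach Elasticsearch
    kubectl logs kibana-logging-5c895c4cd-qjrkz -n kube-system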

Can anyone help me? Thanks in advance!

-- Rosmee
elasticsearch
kibana
kubernetes

0 Answers