How to fix "Kibana server is not ready yet" error when using AKS

2/7/2019

I'm setting up the ELK stack in Azure Kubernetes Service (AKS), but I only see this error:

"Kibana server is not ready yet"

I'm using Helm to install the stable/elastic-stack chart in AKS without any changes (defaults for everything):

helm install --name elk stable/elastic-stack

I also added an ingress controller to expose the Kibana server to the public. However, I only see the "Kibana server is not ready yet" error.
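For reference, the Ingress rule I created is roughly the following (names and the port are from memory and may differ from my actual manifest):

apiVersion: networking.k8s.io/v1   # older clusters use extensions/v1beta1
kind: Ingress
metadata:
  name: kibana-ingress
  annotations:
    kubernetes.io/ingress.class: nginx   # assumes the NGINX ingress controller
spec:
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: elk-kibana   # Kibana service created by the chart; name may differ
                port:
                  number: 5601     # Kibana default port; the chart's service port may differ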

I've checked that the Kibana pod is running, and so is the Elasticsearch one. As a Kubernetes newbie, I have no idea how to find the error log of the Kibana instance. Can anyone help me with this? It would also be appreciated if you could point out which step I'm missing.

-- Vincent Shen
azure
azure-kubernetes
kibana
kubernetes
kubernetes-helm

2 Answers

10/9/2019

It might be a version incompatibility issue. Just follow the console (the Kibana logs) to see the errors. The Kibana version must match the Elasticsearch version; if it doesn't, Kibana reports an error like the following:

[error][status][plugin:xpack_main@7.4.0] Status changed from yellow to red - This version of Kibana requires Elasticsearch v7.4.0 on all nodes. I found the following incompatible nodes in your cluster: v7.1.1 @ 127.0.0.1:9200 (127.0.0.1)
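If you are not sure where that console output lives in Kubernetes, something like the following should surface it (the elk-kibana and elk-elasticsearch-client names assume the question's release name and the stable/elastic-stack naming; check kubectl get deploy,svc for the real ones):

    # Tail the Kibana logs, where the status/compatibility errors are printed
    kubectl logs deployment/elk-kibana --tail=100

    # Check which Elasticsearch version the cluster is actually running
    kubectl port-forward svc/elk-elasticsearch-client 9200:9200 &
    curl -s http://localhost:9200 | grep '"number"'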

-- Shravan
Source: StackOverflow

2/8/2019

Most probably you didn't override the ELASTICSEARCH_URL environment variable in the Kibana deployment, so it still carries the default value shipped with the elastic-stack Helm chart. Therefore, you have to replace the Elasticsearch URL in the Kibana configuration with the address of your actual Elasticsearch service.

You can do it in one of two ways:

  • Update the value in the Helm chart and upgrade the release:

    helm upgrade -f new-values.yml {release name} {package name or path}

The default values.yaml for the elastic-stack Helm chart can be found here. It might also be useful to check the official Helm documentation for more details.
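As a rough sketch, assuming the chart passes Kibana environment variables through a kibana.env block (check the linked values.yaml for the exact key), new-values.yml could look like this:

    kibana:
      env:
        # Point Kibana at the Elasticsearch client service created by the chart;
        # the service name and port below are assumptions - verify with `kubectl get svc`
        ELASTICSEARCH_URL: http://elk-elasticsearch-client:9200

Running the helm upgrade command above with this file regenerates the Kibana deployment with the new URL.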

  • Replace the ELASTICSEARCH_URL environment variable directly in the Kibana deployment, then delete the old Pod:

    kubectl edit deployment elk-kibana

    kubectl delete pod <elk-kibana-Pod-name>

Wait until Kubernetes terminates the old Kibana Pod and spins up a new one.
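For reference, inside kubectl edit the relevant part of the Kibana container spec looks roughly like this (the Elasticsearch service name is an assumption; use the one shown by kubectl get svc):

    spec:
      containers:
        - name: kibana
          env:
            # Must point at a reachable Elasticsearch service, not the chart default
            - name: ELASTICSEARCH_URL
              value: http://elk-elasticsearch-client:9200

Once the new Pod is running, check its logs again to confirm Kibana no longer reports the "not ready" status.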

-- mk_sta
Source: StackOverflow