Kibana portal not showing an Elasticsearch section where I can explore data and indices (elastic-stack)

4/12/2020

I have 2 Kubernetes clusters hosted on separate hosts/VMs. The first cluster hosts the application microservices. The second cluster hosts Elasticsearch and Kibana.

The microservices in the first cluster are configured to send logs to the Elasticsearch instance hosted by the second cluster via the following flag passed to the application's helm install command: --set global.elasticsearch.url=http://example.com:30001
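
For context, the full install of the application chart looks roughly like this (the chart name my-app/app-chart and release name app are placeholders, not the real names):

helm install --name app my-app/app-chart \
  --set global.elasticsearch.url=http://example.com:30001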

On the second k8s cluster, I installed elastic-stack using the Helm command below:

helm install --name elk stable/elastic-stack -f temp.yaml

temp.yaml

elasticsearch:
  enabled: true
  client:
    serviceType: NodePort
    httpNodePort: 30001
kibana:
  enabled: true
  resources:
    requests:
      cpu: "100m"
      memory: "512M"
  service:
    type: NodePort
    port: 5601
    targetPort: 5601
    protocol: TCP
    nodePort: 30002
  env:
    ELASTICSEARCH_HOSTS: http://{{ .Release.Name }}-elasticsearch-client:9200
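
If it helps, I believe the chart can also be rendered without installing, to verify that temp.yaml is picked up, with something like:

helm install --name elk stable/elastic-stack -f temp.yaml --dry-run --debug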

kubectl get service

NAME                          TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
elk-elasticsearch-client      NodePort    10.233.55.130   <none>        9200:30001/TCP   23s
elk-elasticsearch-discovery   ClusterIP   None            <none>        9300/TCP         23s
elk-kibana                    NodePort    10.233.53.171   <none>        443:30002/TCP    23s
kubernetes                    ClusterIP   10.233.0.1      <none>        443/TCP          16h
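
All pods appear to come up. To double-check readiness, I assume something like this would work (the release=elk label selector is my guess at how the chart labels its pods):

kubectl get pods -l release=elk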

The commands below show that the Kibana container is able to reach the Elasticsearch client service:

kubectl exec -it elk-kibana-79698f574f-kkhvb /bin/bash

curl elk-elasticsearch-client:9200

{
  "name" : "elk-elasticsearch-client-97d8dd99f-cl9x4",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "M04WseoxQty2lQ7qJmmlOw",
  "version" : {
    "number" : "6.8.2",
    "build_flavor" : "oss",
    "build_type" : "docker",
    "build_hash" : "b506955",
    "build_date" : "2019-07-24T15:24:41.545295Z",
    "build_snapshot" : false,
    "lucene_version" : "7.7.0",
    "minimum_wire_compatibility_version" : "5.6.0",
    "minimum_index_compatibility_version" : "5.0.0"
  },
  "tagline" : "You Know, for Search"
}
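
From inside the same pod, I assume the nodes that joined the cluster can also be listed via the _cat API, for example:

curl elk-elasticsearch-client:9200/_cat/nodes?v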

When I go to http://example.com:30002, I expect to see a section for Elasticsearch listing the discovered logs or indices coming from cluster 1 (the one running the application microservices), but I don't see it. Am I missing something? Is Kibana really seeing Elasticsearch?

[Screenshot of how Kibana currently looks]

This is what I see in the logs of the containers below:

Kibana container logs:

kubectl logs -f elk-kibana-79698f574f-kkhvb

{"type":"log","@timestamp":"2020-04-12T05:25:17Z","tags":["warning"],"pid":1,"kibanaVersion":"6.7.0","nodes":[{"version":"6.8.2","http":{"publish_address":"10.233.69.142:9200"},"ip":"10.233.69.142"},{"version":"6.8.2","http":{"publish_address":"10.233.69.119:9200"},"ip":"10.233.69.119"},{"version":"6.8.2","http":{"publish_address":"10.233.69.129:9200"},"ip":"10.233.69.129"},{"version":"6.8.2","http":{"publish_address":"10.233.69.110:9200"},"ip":"10.233.69.110"},{"version":"6.8.2","http":{"publish_address":"10.233.69.111:9200"},"ip":"10.233.69.111"},{"version":"6.8.2","http":{"publish_address":"10.233.69.123:9200"},"ip":"10.233.69.123"},{"version":"6.8.2","http":{"publish_address":"10.233.69.109:9200"},"ip":"10.233.69.109"}],"message":"You're running Kibana 6.7.0 with some different versions of Elasticsearch. Update Kibana or Elasticsearch to the same version to prevent compatibility issues: v6.8.2 @ 10.233.69.142:9200 (10.233.69.142), v6.8.2 @ 10.233.69.119:9200 (10.233.69.119), v6.8.2 @ 10.233.69.129:9200 (10.233.69.129), v6.8.2 @ 10.233.69.110:9200 (10.233.69.110), v6.8.2 @ 10.233.69.111:9200 (10.233.69.111), v6.8.2 @ 10.233.69.123:9200 (10.233.69.123), v6.8.2 @ 10.233.69.109:9200 (10.233.69.109)"}
{"type":"log","@timestamp":"2020-04-12T05:25:20Z","tags":["warning"],"pid":1,"kibanaVersion":"6.7.0","nodes":[{"version":"6.8.2","http":{"publish_address":"10.233.69.129:9200"},"ip":"10.233.69.129"},{"version":"6.8.2","http":{"publish_address":"10.233.69.123:9200"},"ip":"10.233.69.123"},{"version":"6.8.2","http":{"publish_address":"10.233.69.119:9200"},"ip":"10.233.69.119"},{"version":"6.8.2","http":{"publish_address":"10.233.69.109:9200"},"ip":"10.233.69.109"},{"version":"6.8.2","http":{"publish_address":"10.233.69.111:9200"},"ip":"10.233.69.111"},{"version":"6.8.2","http":{"publish_address":"10.233.69.110:9200"},"ip":"10.233.69.110"},{"version":"6.8.2","http":{"publish_address":"10.233.69.142:9200"},"ip":"10.233.69.142"}],"message":"You're running Kibana 6.7.0 with some different versions of Elasticsearch. Update Kibana or Elasticsearch to the same version to prevent compatibility issues: v6.8.2 @ 10.233.69.129:9200 (10.233.69.129), v6.8.2 @ 10.233.69.123:9200 (10.233.69.123), v6.8.2 @ 10.233.69.119:9200 (10.233.69.119), v6.8.2 @ 10.233.69.109:9200 (10.233.69.109), v6.8.2 @ 10.233.69.111:9200 (10.233.69.111), v6.8.2 @ 10.233.69.110:9200 (10.233.69.110), v6.8.2 @ 10.233.69.142:9200 (10.233.69.142)"}
{"type":"log","@timestamp":"2020-04-12T05:25:22Z","tags":["warning"],"pid":1,"kibanaVersion":"6.7.0","nodes":[{"version":"6.8.2","http":{"publish_address":"10.233.69.123:9200"},"ip":"10.233.69.123"},{"version":"6.8.2","http":{"publish_address":"10.233.69.142:9200"},"ip":"10.233.69.142"},{"version":"6.8.2","http":{"publish_address":"10.233.69.111:9200"},"ip":"10.233.69.111"},{"version":"6.8.2","http":{"publish_address":"10.233.69.110:9200"},"ip":"10.233.69.110"},{"version":"6.8.2","http":{"publish_address":"10.233.69.129:9200"},"ip":"10.233.69.129"},{"version":"6.8.2","http":{"publish_address":"10.233.69.119:9200"},"ip":"10.233.69.119"},{"version":"6.8.2","http":{"publish_address":"10.233.69.109:9200"},"ip":"10.233.69.109"}],"message":"You're running Kibana 6.7.0 with some different versions of Elasticsearch. Update Kibana or Elasticsearch to the same version to prevent compatibility issues: v6.8.2 @ 10.233.69.123:9200 (10.233.69.123), v6.8.2 @ 10.233.69.142:9200 (10.233.69.142), v6.8.2 @ 10.233.69.111:9200 (10.233.69.111), v6.8.2 @ 10.233.69.110:9200 (10.233.69.110), v6.8.2 @ 10.233.69.129:9200 (10.233.69.129), v6.8.2 @ 10.233.69.119:9200 (10.233.69.119), v6.8.2 @ 10.233.69.109:9200 (10.233.69.109)"}
{"type":"log","@timestamp":"2020-04-12T05:25:25Z","tags":["warning"],"pid":1,"kibanaVersion":"6.7.0","nodes":[{"version":"6.8.2","http":{"publish_address":"10.233.69.111:9200"},"ip":"10.233.69.111"},{"version":"6.8.2","http":{"publish_address":"10.233.69.110:9200"},"ip":"10.233.69.110"},{"version":"6.8.2","http":{"publish_address":"10.233.69.109:9200"},"ip":"10.233.69.109"},{"version":"6.8.2","http":{"publish_address":"10.233.69.119:9200"},"ip":"10.233.69.119"},{"version":"6.8.2","http":{"publish_address":"10.233.69.142:9200"},"ip":"10.233.69.142"},{"version":"6.8.2","http":{"publish_address":"10.233.69.129:9200"},"ip":"10.233.69.129"},{"version":"6.8.2","http":{"publish_address":"10.233.69.123:9200"},"ip":"10.233.69.123"}],"message":"You're running Kibana 6.7.0 with some different versions of Elasticsearch. Update Kibana or Elasticsearch to the same version to prevent compatibility issues: v6.8.2 @ 10.233.69.111:9200 (10.233.69.111), v6.8.2 @ 10.233.69.110:9200 (10.233.69.110), v6.8.2 @ 10.233.69.109:9200 (10.233.69.109), v6.8.2 @ 10.233.69.119:9200 (10.233.69.119), v6.8.2 @ 10.233.69.142:9200 (10.233.69.142), v6.8.2 @ 10.233.69.129:9200 (10.233.69.129), v6.8.2 @ 10.233.69.123:9200 (10.233.69.123)"}

Elasticsearch client logs:

kubectl logs -f elk-elasticsearch-client-97d8dd99f-4sfgt

[2020-04-12T05:16:40,375][WARN ][o.e.d.z.ZenDiscovery     ] [elk-elasticsearch-client-97d8dd99f-4sfgt] not enough master nodes discovered during pinging (found [[Candidate{node={elk-elasticsearch-master-0}{eKU_y64SRUCrtkjqCyYm-A}{_QV8vlJZRpe4AS-dFszgAQ}{10.233.69.110}{10.233.69.110:9300}, clusterStateVersion=-1}]], but needed [2]), pinging again
[2020-04-12T05:16:43,378][WARN ][o.e.d.z.ZenDiscovery     ] [elk-elasticsearch-client-97d8dd99f-4sfgt] not enough master nodes discovered during pinging (found [[Candidate{node={elk-elasticsearch-master-0}{eKU_y64SRUCrtkjqCyYm-A}{_QV8vlJZRpe4AS-dFszgAQ}{10.233.69.110}{10.233.69.110:9300}, clusterStateVersion=-1}]], but needed [2]), pinging again
[2020-04-12T05:16:46,382][WARN ][o.e.d.z.ZenDiscovery     ] [elk-elasticsearch-client-97d8dd99f-4sfgt] not enough master nodes discovered during pinging (found [[Candidate{node={elk-elasticsearch-master-0}{eKU_y64SRUCrtkjqCyYm-A}{_QV8vlJZRpe4AS-dFszgAQ}{10.233.69.110}{10.233.69.110:9300}, clusterStateVersion=-1}]], but needed [2]), pinging again
[2020-04-12T05:16:47,920][WARN ][r.suppressed             ] [elk-elasticsearch-client-97d8dd99f-4sfgt] path: /_cluster/health, params: {}
org.elasticsearch.discovery.MasterNotDiscoveredException: null
    at org.elasticsearch.action.support.master.TransportMasterNodeAction$AsyncSingleAction$4.onTimeout(TransportMasterNodeAction.java:262) [elasticsearch-6.8.2.jar:6.8.2]
    at org.elasticsearch.cluster.ClusterStateObserver$ContextPreservingListener.onTimeout(ClusterStateObserver.java:322) [elasticsearch-6.8.2.jar:6.8.2]
    at org.elasticsearch.cluster.ClusterStateObserver$ObserverClusterStateListener.onTimeout(ClusterStateObserver.java:249) [elasticsearch-6.8.2.jar:6.8.2]
    at org.elasticsearch.cluster.service.ClusterApplierService$NotifyTimeout.run(ClusterApplierService.java:564) [elasticsearch-6.8.2.jar:6.8.2]
    at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:681) [elasticsearch-6.8.2.jar:6.8.2]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?]
    at java.lang.Thread.run(Thread.java:835) [?:?]
[2020-04-12T05:16:49,384][WARN ][o.e.d.z.ZenDiscovery     ] [elk-elasticsearch-client-97d8dd99f-4sfgt] not enough master nodes discovered during pinging (found [[Candidate{node={elk-elasticsearch-master-0}{eKU_y64SRUCrtkjqCyYm-A}{_QV8vlJZRpe4AS-dFszgAQ}{10.233.69.110}{10.233.69.110:9300}, clusterStateVersion=-1}]], but needed [2]), pinging again
[2020-04-12T05:16:52,387][WARN ][o.e.d.z.ZenDiscovery     ] [elk-elasticsearch-client-97d8dd99f-4sfgt] not enough master nodes discovered during pinging (found [[Candidate{node={elk-elasticsearch-master-0}{eKU_y64SRUCrtkjqCyYm-A}{_QV8vlJZRpe4AS-dFszgAQ}{10.233.69.110}{10.233.69.110:9300}, clusterStateVersion=-1}]], but needed [2]), pinging again
[2020-04-12T05:16:55,394][WARN ][o.e.d.z.ZenDiscovery     ] [elk-elasticsearch-client-97d8dd99f-4sfgt] not enough master nodes discovered during pinging (found [[Candidate{node={elk-elasticsearch-master-0}{eKU_y64SRUCrtkjqCyYm-A}{_QV8vlJZRpe4AS-dFszgAQ}{10.233.69.110}{10.233.69.110:9300}, clusterStateVersion=-1}]], but needed [2]), pinging again
[2020-04-12T05:16:57,818][WARN ][r.suppressed             ] [elk-elasticsearch-client-97d8dd99f-4sfgt] path: /_cluster/health, params: {}
org.elasticsearch.discovery.MasterNotDiscoveredException: null
    at org.elasticsearch.action.support.master.TransportMasterNodeAction$AsyncSingleAction$4.onTimeout(TransportMasterNodeAction.java:262) [elasticsearch-6.8.2.jar:6.8.2]
    at org.elasticsearch.cluster.ClusterStateObserver$ContextPreservingListener.onTimeout(ClusterStateObserver.java:322) [elasticsearch-6.8.2.jar:6.8.2]
    at org.elasticsearch.cluster.ClusterStateObserver$ObserverClusterStateListener.onTimeout(ClusterStateObserver.java:249) [elasticsearch-6.8.2.jar:6.8.2]
    at org.elasticsearch.cluster.service.ClusterApplierService$NotifyTimeout.run(ClusterApplierService.java:564) [elasticsearch-6.8.2.jar:6.8.2]
    at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:681) [elasticsearch-6.8.2.jar:6.8.2]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?]
    at java.lang.Thread.run(Thread.java:835) [?:?]
[2020-04-12T05:17:00,303][INFO ][o.e.c.s.ClusterApplierService] [elk-elasticsearch-client-97d8dd99f-4sfgt] detected_master {elk-elasticsearch-master-0}{eKU_y64SRUCrtkjqCyYm-A}{_QV8vlJZRpe4AS-dFszgAQ}{10.233.69.110}{10.233.69.110:9300}, added {{elk-elasticsearch-master-0}{eKU_y64SRUCrtkjqCyYm-A}{_QV8vlJZRpe4AS-dFszgAQ}{10.233.69.110}{10.233.69.110:9300},{elk-elasticsearch-data-0}{IiuiDnktTcG_lW5rHIy8Ng}{jWLI0IusTEqv1RZBs5XndA}{10.233.69.119}{10.233.69.119:9300},{elk-elasticsearch-master-1}{fj0uI-ibTXeM8IDM6Uh3Tw}{vmWKBJwpQKyPHwhA8_Ur2Q}{10.233.69.129}{10.233.69.129:9300},}, reason: apply cluster state (from master [master {elk-elasticsearch-master-0}{eKU_y64SRUCrtkjqCyYm-A}{_QV8vlJZRpe4AS-dFszgAQ}{10.233.69.110}{10.233.69.110:9300} committed version [1]])
[2020-04-12T05:17:00,488][INFO ][o.e.c.s.ClusterApplierService] [elk-elasticsearch-client-97d8dd99f-4sfgt] added {{elk-elasticsearch-client-97d8dd99f-cl9x4}{-4nfJE9qTX-lczcjH2cbAA}{Xw4bc3ahTj-wEs64a10mbw}{10.233.69.109}{10.233.69.109:9300},}, reason: apply cluster state (from master [master {elk-elasticsearch-master-0}{eKU_y64SRUCrtkjqCyYm-A}{_QV8vlJZRpe4AS-dFszgAQ}{10.233.69.110}{10.233.69.110:9300} committed version [2]])
[2020-04-12T05:17:01,826][INFO ][o.e.c.s.ClusterApplierService] [elk-elasticsearch-client-97d8dd99f-4sfgt] added {{elk-elasticsearch-data-1}{YzzantPxRd23JU3UfED4LA}{gibMh6t2SH-cp3IvZFwrJw}{10.233.69.123}{10.233.69.123:9300},}, reason: apply cluster state (from master [master {elk-elasticsearch-master-0}{eKU_y64SRUCrtkjqCyYm-A}{_QV8vlJZRpe4AS-dFszgAQ}{10.233.69.110}{10.233.69.110:9300} committed version [5]])
[2020-04-12T05:17:33,824][WARN ][o.e.d.r.a.a.i.RestGetMappingAction] [elk-elasticsearch-client-97d8dd99f-4sfgt] [types removal] The parameter include_type_name should be explicitly specified in get mapping requests to prepare for 7.0. In 7.0 include_type_name will default to 'false', which means responses will omit the type name in mapping definitions.
[2020-04-12T05:17:33,828][WARN ][o.e.d.r.a.a.i.RestGetIndexTemplateAction] [elk-elasticsearch-client-97d8dd99f-4sfgt] [types removal] The parameter include_type_name should be explicitly specified in get template requests to prepare for 7.0. In 7.0 include_type_name will default to 'false', which means responses will omit the type name in mapping definitions.
[2020-04-12T05:17:38,309][INFO ][o.e.c.s.ClusterApplierService] [elk-elasticsearch-client-97d8dd99f-4sfgt] added {{elk-elasticsearch-master-2}{Vxdt5B3LQAGjykP6AKiJoQ}{ZBLoOGeJQkem4_vI2rKtPw}{10.233.69.142}{10.233.69.142:9300},}, reason: apply cluster state (from master [master {elk-elasticsearch-master-0}{eKU_y64SRUCrtkjqCyYm-A}{_QV8vlJZRpe4AS-dFszgAQ}{10.233.69.110}{10.233.69.110:9300} committed version [9]])
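
The "not enough master nodes" warnings seem to resolve once the masters join. To confirm that the cluster actually formed, I assume the health endpoint could be queried from inside any pod, for example:

curl elk-elasticsearch-client:9200/_cluster/health?pretty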

curl example.com:30001

{
  "name" : "elk-elasticsearch-client-97d8dd99f-4sfgt",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "M04WseoxQty2lQ7qJmmlOw",
  "version" : {
    "number" : "6.8.2",
    "build_flavor" : "oss",
    "build_type" : "docker",
    "build_hash" : "b506955",
    "build_date" : "2019-07-24T15:24:41.545295Z",
    "build_snapshot" : false,
    "lucene_version" : "7.7.0",
    "minimum_wire_compatibility_version" : "5.6.0",
    "minimum_index_compatibility_version" : "5.0.0"
  },
  "tagline" : "You Know, for Search"
}

How can I find more information about the indices in my Elasticsearch cluster?
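
Since the NodePort is reachable from outside, I assume the _cat API should also list whatever indices exist (and whether anything from cluster 1 is arriving at all), for example:

curl http://example.com:30001/_cat/indices?v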

-- kasa99
elastic-stack
elasticsearch
kibana
kubernetes

0 Answers