I deployed Elasticsearch and Fluentd in the same namespace (test), and added the config below so that Fluentd can reach Elasticsearch:
livenessProbe:
  failureThreshold: 5
  httpGet:
    host: elasticsearch-logging
    path: /
    port: 9200
    scheme: HTTP
  initialDelaySeconds: 10
  periodSeconds: 10
  successThreshold: 1
  timeoutSeconds: 1
This didn't work, so I used the fully qualified DNS name, which still failed:
Readiness probe failed: Get http://elasticsearch-logging.test.svc.cluster.local:9200/: dial tcp: lookup elasticsearch-logging.test.svc.cluster.local: no such host
I removed the liveness probe and ran curl inside the Fluentd pod, and that works:
root@fluentd-es-2dvmf:/# curl http://elasticsearch-logging:9200/
{
  "name" : "elasticsearch-logging-0",
  "cluster_name" : "skydiscovery-es-cluster",
  "cluster_uuid" : "fr3oSzpHT_qP9HQJ1WygnA",
  "version" : {
    "number" : "6.2.4",
    "build_hash" : "ccec39f",
    "build_date" : "2018-04-12T20:37:28.497551Z",
    "build_snapshot" : false,
    "lucene_version" : "7.2.1",
    "minimum_wire_compatibility_version" : "5.6.0",
    "minimum_index_compatibility_version" : "5.0.0"
  },
  "tagline" : "You Know, for Search"
}
Why do the probe and curl behave differently?
Is there any way to make the probe work?
That is an unspoken rule about probes (see the PR for details): an httpGet probe is performed by the kubelet on the node, which uses the node's DNS rather than the cluster DNS, so it cannot resolve the Service name. It is not caused by any wrong setting on your side.
Mapping the pod IP and the service name in /etc/hosts can work, since the host (node) can connect to the pod IP but cannot resolve the service name.
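Another way around the node-side DNS limitation is an exec probe, which runs the check inside the Fluentd container and therefore uses the pod's cluster DNS. A minimal sketch, assuming curl is available in the image (the flags and timing values below are illustrative, not taken from your manifest):

livenessProbe:
  exec:
    command:
    - /bin/sh
    - -c
    # runs inside the container, so cluster DNS resolves the Service name
    - curl -sf http://elasticsearch-logging:9200/
  failureThreshold: 5
  initialDelaySeconds: 10
  periodSeconds: 10
  timeoutSeconds: 5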
Please verify your deployment.
In the description we can see "livenessProbe", while in the error section there is "Readiness probe".
From the "fluentd" pod's perspective (inside the cluster), both the service name and the IP address are resolvable (as you can see by performing curl by name or by IP address).
From the kubelet's (node's) perspective, which is what performs the liveness probe, please use the "service IP address" instead.
For testing purposes you can add the "service IP address" and the "service name" to the node's /etc/hosts file. Please share your results.
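For example, assuming the Service's ClusterIP were 10.102.10.55 (a made-up address; look up the real one with kubectl get svc elasticsearch-logging -n test), the probe would become:

livenessProbe:
  failureThreshold: 5
  httpGet:
    host: 10.102.10.55   # hypothetical ClusterIP of elasticsearch-logging; reachable from the kubelet without cluster DNS
    path: /
    port: 9200
    scheme: HTTP
  initialDelaySeconds: 10
  periodSeconds: 10
  successThreshold: 1
  timeoutSeconds: 1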