I've created a Kubernetes Service whose backends aren't pods in the cluster but a fixed set of external nodes with static IPs, so I've also created an Endpoints resource with the same name:
apiVersion: v1
kind: Service
metadata:
  name: elk-svc
spec:
  ports:
    - port: 9200
      targetPort: 9200
      protocol: TCP
---
kind: Endpoints
apiVersion: v1
metadata:
  name: elk-svc
subsets:
  - addresses:
      - { ip: 172.21.0.40 }
      - { ip: 172.21.0.41 }
      - { ip: 172.21.0.42 }
    ports:
      - port: 9200
Description of Service and Endpoints:
$ kubectl describe svc elk-svc
Name: elk-svc
Namespace: default
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"name":"elk-svc","namespace":"default"},"spec":{"ports":[{"port":9200,"protocol":"TCP"...
Selector: <none>
Type: ClusterIP
IP: 10.233.17.18
Port: <unset> 9200/TCP
Endpoints: 172.21.0.40:9200,172.21.0.41:9200,172.21.0.42:9200
Session Affinity: None
Events: <none>
$ kubectl describe ep elk-svc
Name: elk-svc
Namespace: default
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"v1","kind":"Endpoints","metadata":{"annotations":{},"name":"elk-svc","namespace":"default"},"subsets":[{"addresses":[{"ip":"172.21.0.40"...
Subsets:
  Addresses:          172.21.0.40,172.21.0.41,172.21.0.42
  NotReadyAddresses:  <none>
  Ports:
    Name     Port  Protocol
    ----     ----  --------
    <unset>  9200  TCP
Events: <none>
My pods are able to communicate with Elasticsearch through the internal cluster IP 10.233.17.18. Everything works fine!
My question is: is there any way to add some kind of health-check mechanism to the Service I've created, so that if one of my Elasticsearch nodes goes down (e.g. 172.21.0.40), the Service becomes aware of it and stops routing traffic to that node, sending it to the others instead? Is that possible?
Thanks.
My suggestion would be to put a reverse proxy such as nginx or HAProxy in front of the Elasticsearch nodes and let it do the health checking for those nodes.
This is not supported natively in k8s for manually managed Endpoints.
For more background, refer to this issue raised about the same requirement: https://github.com/kubernetes/kubernetes/issues/77738#issuecomment-491560980
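As a rough sketch of that approach, here is a minimal haproxy.cfg, assuming HAProxy runs on a host reachable from the cluster. The node IPs and port are the ones from the question; the /_cluster/health probe path is an assumption and would need adjusting if your cluster requires authentication:

defaults
    mode tcp
    timeout connect 5s
    timeout client  30s
    timeout server  30s

frontend elasticsearch_in
    bind *:9200
    default_backend elasticsearch_nodes

backend elasticsearch_nodes
    balance roundrobin
    # "check" enables active health probes; a node that fails them is
    # taken out of rotation until it recovers. "option httpchk" upgrades
    # the probe from a plain TCP connect to an HTTP request.
    option httpchk GET /_cluster/health
    server es1 172.21.0.40:9200 check
    server es2 172.21.0.41:9200 check
    server es3 172.21.0.42:9200 check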
For this use case, best practice would be to use a load balancer like HAProxy.
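If you go that route, the manually managed Endpoints object from the question would simply point at the proxy instead of the individual nodes. A sketch, where 172.21.0.50 is a hypothetical address for the HAProxy host:

kind: Endpoints
apiVersion: v1
metadata:
  name: elk-svc
subsets:
  - addresses:
      - ip: 172.21.0.50   # hypothetical HAProxy address, not one of the ES nodes
    ports:
      - port: 9200

The Service itself stays unchanged, so pods keep using the same ClusterIP while HAProxy decides which Elasticsearch node actually receives the traffic.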