How to implement liveness and readiness endpoints for a gRPC service?

7/27/2018

I have a gRPC service which listens on a port using a TCP listener. This service is Dockerized, and eventually I want to run it in a Kubernetes cluster.

I was wondering what the best way is to implement liveness and readiness probes for checking the health of my service?

  1. Should I run a separate http server in another goroutine and respond to /health and /ready paths?
  2. Or should I add gRPC calls for liveness and readiness to my service and use a gRPC client to query these endpoints?
-- moorara
docker
go
grpc
kubernetes

1 Answer

7/27/2018

Previously I've run a separate http server inside the app, just for healthchecks (this was because AWS application load balancers only support http checking; I don't know about kube).

If you run the http server in a separate goroutine and the grpc server on the main goroutine, then you avoid the situation where the grpc server goes down while http still returns 200 OK (assuming you don't yet have a way for the http handler to healthcheck your grpc server).
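As a minimal sketch of that layout (the port numbers, the /health and /ready paths, and the `ready` flag are my own illustrative assumptions, not from the question):

```go
package main

import (
	"fmt"
	"net/http"
	"sync/atomic"
)

// ready flips to true once the gRPC server is accepting connections;
// until then /ready returns 503 so the scheduler keeps the pod out of rotation.
var ready atomic.Bool

// readyCode maps the ready flag to an HTTP status for the /ready probe.
func readyCode() int {
	if ready.Load() {
		return http.StatusOK
	}
	return http.StatusServiceUnavailable
}

// healthMux serves the two probe endpoints.
func healthMux() *http.ServeMux {
	mux := http.NewServeMux()
	mux.HandleFunc("/health", func(w http.ResponseWriter, r *http.Request) {
		w.WriteHeader(http.StatusOK) // liveness: the process can answer HTTP
	})
	mux.HandleFunc("/ready", func(w http.ResponseWriter, r *http.Request) {
		w.WriteHeader(readyCode()) // readiness: the gRPC listener is serving
	})
	return mux
}

func main() {
	// Health endpoints run in their own goroutine on a separate port.
	go http.ListenAndServe(":8081", healthMux())

	// The gRPC server would run here on the main goroutine, e.g.:
	//   lis, _ := net.Listen("tcp", ":8080")
	//   s := grpc.NewServer()
	//   ready.Store(true)
	//   log.Fatal(s.Serve(lis))
	ready.Store(true)
	fmt.Println("ready:", ready.Load())
}
```

Because the gRPC server holds the main goroutine, a crash there takes the whole process (and the health port) down with it, which is exactly what you want the probe to observe.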

You could also use a heartbeat pattern between goroutines, where the http server accepts heartbeats from the grpc server to make sure that it's all OK.

If you run 2 servers, they need to listen on different ports, which can be an issue for some schedulers (like ECS) that expect 1 port per service. There are examples and packages that will let you multiplex multiple protocols onto the same port. Kubernetes supports multi-port services, so this might not be a problem for you.

Link to example of multiplexing:

https://github.com/gdm85/grpc-go-multiplex/blob/master/greeter_multiplex_server/greeter_multiplex_server.go

-- Zak
Source: StackOverflow