Handling Kafka client updates in Kubernetes

5/29/2019

I have a Kafka cluster running on AWS MSK, with Kafka producer and consumer Go clients running in Kubernetes. The producer is responsible for sending the stream of data to Kafka. I need help solving the following problems:

  1. Let's say there is a code change in the producer and I have to redeploy it in Kubernetes. How can I do that? Since the data is generated continuously, I cannot simply stop the already running producer and then deploy the updated one; I would lose the data produced during the update.

  2. Sometimes the client crashes due to a panic (Go) in the code, but since it is running as a pod, Kubernetes restarts it. I am not able to tell whether that is a good thing or a bad one.

Thanks

-- Piyush Kumar
apache-kafka
aws-msk
go
kubernetes

1 Answer

6/9/2019

For your first question, I would suggest a rolling update of your Deployment in the cluster. For the second, that is the general behavior of Deployments in Kubernetes. I could imagine an external monitoring solution that un-deploys your application or stops handling requests in case of a panic, but it would help if you could explain why exactly you need that kind of behavior.
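
To expand on the rolling-update point: during a rolling update Kubernetes sends SIGTERM to the old pod and waits up to terminationGracePeriodSeconds before killing it, while the replacement pod is already running. If the producer catches SIGTERM and flushes what it has in flight before exiting, and the Deployment uses a RollingUpdate strategy (for example maxUnavailable: 0), you should not lose data across the redeploy. Below is a minimal sketch of that graceful shutdown. It assumes the Shopify/sarama client, a hypothetical broker address and topic name, and a stand-in channel for wherever your data actually comes from; adapt it to whichever Kafka library you are using.

```go
package main

import (
	"log"
	"os"
	"os/signal"
	"syscall"

	"github.com/Shopify/sarama"
)

func main() {
	cfg := sarama.NewConfig()
	cfg.Producer.Return.Successes = true      // required for SyncProducer
	cfg.Producer.RequiredAcks = sarama.WaitForAll

	// Hypothetical MSK bootstrap broker; replace with your own.
	producer, err := sarama.NewSyncProducer(
		[]string{"b-1.example.kafka.us-east-1.amazonaws.com:9092"}, cfg)
	if err != nil {
		log.Fatalf("failed to start producer: %v", err)
	}

	// Kubernetes sends SIGTERM to the pod being replaced during a rolling update.
	sigs := make(chan os.Signal, 1)
	signal.Notify(sigs, syscall.SIGTERM, syscall.SIGINT)

	records := make(chan string) // stands in for your real data source
	go produceDummyData(records) // hypothetical source feeding the channel

	for {
		select {
		case msg := <-records:
			_, _, err := producer.SendMessage(&sarama.ProducerMessage{
				Topic: "events", // hypothetical topic name
				Value: sarama.StringEncoder(msg),
			})
			if err != nil {
				log.Printf("send failed, retry or buffer here: %v", err)
			}
		case <-sigs:
			// Stop accepting new work and flush anything in flight before exiting,
			// so nothing is lost while the replacement pod takes over.
			if err := producer.Close(); err != nil {
				log.Printf("error closing producer: %v", err)
			}
			return
		}
	}
}

func produceDummyData(out chan<- string) {
	// Placeholder: in the real producer this would read from the actual data source.
	for {
		out <- "example payload"
	}
}
```

On the panic question: the restart comes from the pod's default restartPolicy of Always, and it is generally what you want. If the process keeps panicking, the pod goes into CrashLoopBackOff, which is your signal to fix the underlying bug rather than to suppress the restarts.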

-- Avik Aggarwal
Source: StackOverflow