I use spring-kafka (2.2.4.RELEASE) to consume messages from a Kafka server. Both the Kafka clients and brokers are deployed in a k8s cluster. Normally, producing and consuming messages works fine. But the Kafka clients can't reconnect to the brokers after the brokers are upgraded.
As far as I know, Kafka client reconnection has a bug when bootstrap-servers is a virtual IP (detail is here). My problem looks similar to that VIP bug. In my case, bootstrap-servers is the k8s Kafka service name and port, and when the Kafka brokers are upgraded, the real IPs behind the service name change. So the Kafka clients never reconnect successfully. How can I fix this?
> kubectl version
Client Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.10", GitCommit:"098570796b32895c38a9a1c9286425fb1ececa18", GitTreeState:"clean", BuildDate:"2018-08-02T17:19:54Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.10", GitCommit:"098570796b32895c38a9a1c9286425fb1ececa18", GitTreeState:"clean", BuildDate:"2018-08-02T17:11:51Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
> kubectl get svc -o wide -nbingotestdev|grep kafkadev
kafkadev ClusterIP None <none> 9091/TCP 1y app=kafkadev
kafkadev-out NodePort 10.68.206.93 <none> 9091:37142/TCP 257d app=kafkadev
> kubectl get pod -o wide -nbingotestdev|grep kafkadev
kafkadev-0 1/1 Running 0 15h 172.20.10.59 10.171.113.45
kafkadev-1 1/1 Running 0 15h 172.20.13.95 10.171.113.33
kafkadev-2 1/1 Running 0 15h 172.20.2.173 10.171.113.62
I have tried both of these bootstrap-servers settings (the headless service name and the NodePort service's ClusterIP):
bootstrap-servers = kafkadev:9091
bootstrap-servers = 10.68.206.93:9091
You'll have to ensure that the addresses your consumers get back are stable: the advertised listeners returned when clients contact the bootstrap servers must always resolve to a statically assigned set of IPs. You can achieve that either via an external DNS service, or by using a k8s API client directly to inspect the running Kafka services, fetching all pod addresses to build up your bootstrap-servers string.
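As a lighter-weight alternative to calling the k8s API: since `kafkadev` is a headless service (ClusterIP: None), resolving its DNS name from inside the cluster returns every pod IP, so you can rebuild the bootstrap-servers string at startup with plain JDK DNS resolution. A minimal sketch, assuming the headless service name and port from the question (`BootstrapResolver` is a hypothetical helper, not part of spring-kafka):

```java
import java.net.InetAddress;
import java.net.UnknownHostException;
import java.util.Arrays;
import java.util.stream.Collectors;

public class BootstrapResolver {

    // Pure formatting step, kept separate so it can be tested without DNS:
    // joins each resolved address with the broker port into "ip:port,ip:port,...".
    static String format(InetAddress[] addrs, int port) {
        return Arrays.stream(addrs)
                .map(a -> a.getHostAddress() + ":" + port)
                .collect(Collectors.joining(","));
    }

    // A headless service (ClusterIP: None) resolves to all pod IPs, so
    // getAllByName returns one address per broker pod. Call this at startup,
    // or again before recreating the consumer after a failed reconnect, so
    // the new pod IPs are picked up after a broker upgrade.
    static String resolve(String serviceName, int port) throws UnknownHostException {
        return format(InetAddress.getAllByName(serviceName), port);
    }

    public static void main(String[] args) throws Exception {
        // Inside the cluster this would be resolve("kafkadev", 9091);
        // "localhost" here just demonstrates the call outside k8s.
        System.out.println(resolve("localhost", 9091));
    }
}
```

Note this only refreshes the bootstrap list; if the brokers advertise pod IPs as their listeners, the clients will follow those advertised addresses after the initial metadata fetch, so re-resolving on reconnect failure (rather than caching the string forever) is the important part.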