Configuring Kafka Connect with multiple brokers

1/23/2021

Steps

  • I have used two Kafka brokers, and I have started the ZooKeeper, Kafka server, and Kafka Connect services.
  • I have one source-type Kafka connector, which can be used to get data from a database.
  • If I start connector 1 using the REST API, the request hits one Kafka server, say server 1, via the load balancer. Server 1 then stores and runs the connector, but server 2 does not know about connector 1 running on server 1.
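For reference, a connector is started through the Connect REST API roughly like this (the load-balancer hostname, connector name, connector class, and database URL below are placeholders, not values from my setup):

```shell
# Create "connector-1" via the Connect REST API behind the load balancer.
# Hostnames, credentials, and the connector class are illustrative only.
curl -X POST http://connect-lb.example.com:8083/connectors \
  -H "Content-Type: application/json" \
  -d '{
    "name": "connector-1",
    "config": {
      "connector.class": "io.confluent.connect.jdbc.JdbcSourceConnector",
      "connection.url": "jdbc:mysql://db.example.com:3306/mydb",
      "mode": "incrementing",
      "incrementing.column.name": "id",
      "topic.prefix": "db-"
    }
  }'
```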

Expectation

  • If Kafka server 1 goes down, Kafka server 2 should be able to take over and run the connector that was running on the failed server 1.

  • While starting a connector, the Kafka servers should know how many connectors are running, so that if one broker fails to do the job, another server can continue it.

Reality

  • Kafka server 2 is not taking over the job as required.

Is there any way to achieve this through Kafka configuration?

Kindly suggest some ideas.

Kafka Server 1 (screenshot of running processes)

Kafka Server 2 (screenshot of running processes)

-- Rabeesh
apache-kafka
apache-kafka-connect
kubernetes

1 Answer

1/24/2021

It appears that you have started all processes in a single pod.

You should run Kafka, Zookeeper, and Connect all as separate services in different pods.

I suggest you refer to the Confluent or Strimzi sites to find Kafka Kubernetes Helm charts / operators.


But to answer the question: you can give one or more brokers in the bootstrap.servers value of connect-distributed.properties. Each listed broker is then used to connect to the Kafka cluster, and the worker will reconnect through another broker in the event that one becomes unavailable.
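A minimal sketch of that part of connect-distributed.properties, assuming two brokers on hypothetical hosts kafka1 and kafka2:

```properties
# List every broker so the worker can still bootstrap if one is down
bootstrap.servers=kafka1:9092,kafka2:9092
```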

"Kafka servers" (brokers) do not run connectors.

If you want to run a cluster of Connect workers, you also need to set up their rest.advertised.listener addresses so that the workers can communicate with each other.
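A sketch of the worker-clustering settings, with hypothetical hostnames and topic names; workers sharing the same group.id and internal topics form one Connect cluster and rebalance connectors onto the surviving workers when one dies:

```properties
# All workers with the same group.id form one Connect cluster
group.id=connect-cluster
# Address other workers use to forward REST requests to this worker
rest.advertised.host.name=connect-worker-1.example.com
rest.advertised.port=8083
# Shared internal topics for connector configs, offsets, and status
config.storage.topic=connect-configs
offset.storage.topic=connect-offsets
status.storage.topic=connect-status
```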

-- OneCricketeer
Source: StackOverflow