How to create connectors for Kafka-connect on Kubernetes?

2/1/2020

I am deploying Kafka Connect on Google Kubernetes Engine (GKE) using the cp-kafka-connect Helm chart in distributed mode.

A working Kafka cluster with brokers and ZooKeeper is already running on the same GKE cluster. I understand I can create connectors by sending POST requests to the http://localhost:8083/connectors endpoint once it is available. However, the Kafka Connect container goes into the RUNNING state and then starts loading the jar files; until all the jar files are loaded, the endpoint mentioned above is unreachable.

I am looking for a way to automate the steps of manually exec-ing into the pod, checking whether the endpoint is ready, and then sending the POST requests. I have a shell script with a bunch of curl -X POST requests to this endpoint to create the connectors, and I also have config files for these connectors which work fine in standalone mode (using the Confluent Platform, as shown in this Confluent blog).

Now there are only two ways to create the connector:

  1. Somehow identify when the container is actually ready (when the endpoint has started listening) and then run the shell script containing the curl requests
  2. OR use the configuration files as we do in standalone mode (Example: $ <path/to/CLI>/confluent local load connector_name -- -d /connector-config.json)

Which of the above approaches is better?

Is the second approach (config files) even doable with distributed mode?

  • If YES: How to do that?
  • If NO: How to successfully do what is explained in the first approach?

EDIT: With reference to this GitHub issue (thanks to @cricket_007's answer below), I added the following as the container command, and the connectors get created once the endpoint is ready:

...
command:
  - /bin/bash
  - -c
  - |
    /etc/confluent/docker/run &
    echo "Waiting for Kafka Connect to start listening on localhost:8083"
    while : ; do
      curl_status=$(curl -s -o /dev/null -w '%{http_code}' http://localhost:8083/connectors)
      echo "$(date) Kafka Connect listener HTTP state: $curl_status (waiting for 200)"
      if [ "$curl_status" -eq 200 ] ; then
        break
      fi
      sleep 5
    done
    echo -e "\n--\n+> Creating Kafka Connector(s)"
    /tmp/scripts/create-connectors.sh
    sleep infinity
...

/tmp/scripts/create-connectors.sh is an externally mounted script containing a series of curl POST requests to the Kafka Connect REST API.
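For reference, a minimal sketch of what such a script can look like. The connector name, file, and topic below are illustrative placeholders, not the actual connectors from my setup (the FileStreamSource connector class ships with Apache Kafka):

```shell
#!/usr/bin/env bash
# Sketch of a create-connectors.sh: POST each connector config to the REST API.
# The connector name, file, and topic here are hypothetical placeholders.
set -euo pipefail

CONNECT_URL="${CONNECT_URL:-http://localhost:8083}"

# Write an example payload; FileStreamSourceConnector ships with Apache Kafka.
cat > /tmp/file-source.json <<'EOF'
{
  "name": "example-file-source",
  "config": {
    "connector.class": "org.apache.kafka.connect.file.FileStreamSourceConnector",
    "tasks.max": "1",
    "file": "/tmp/input.txt",
    "topic": "example-topic"
  }
}
EOF

# Submit it (uncomment once the REST endpoint is reachable):
# curl -s -X POST -H "Content-Type: application/json" \
#      --data @/tmp/file-source.json "$CONNECT_URL/connectors"
echo "Prepared payload at /tmp/file-source.json"
```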

-- Amit Yadav
apache-kafka
apache-kafka-connect
kubernetes

1 Answer

2/1/2020

confluent local doesn't interact with a remote Connect cluster, such as one in Kubernetes.

Please refer to the Kafka Connect REST API documentation.

You'd connect to it like any other RESTful API running in the cluster (via a NodePort, or an Ingress/API gateway, for example).

the endpoint mentioned above is unreachable.

Localhost is the physical machine you're typing the commands into, not the remote GKE cluster

Somehow identify when the container is actually ready

Kubernetes health checks (readiness probes) are responsible for that.
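For example, a readiness probe on the Connect REST port keeps the pod from being marked Ready until the API responds. A sketch; the port and timings are assumptions to adjust for your chart's values:

```yaml
readinessProbe:
  httpGet:
    path: /connectors
    port: 8083
  initialDelaySeconds: 30
  periodSeconds: 10
  failureThreshold: 30
```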

Use kubectl get services to see how the Connect REST endpoint is exposed.

there are only two ways to create the connector

That's not true. You could additionally run Landoop's Kafka Connect UI or Confluent Control Center in your cluster to point and click.

But if you have local config files, you could also write code to interact with the API

Or try and see if you can make a PR for this issue

https://github.com/confluentinc/cp-docker-images/issues/467

-- OneCricketeer
Source: StackOverflow