I'm trying to set up Kafka in a Kubernetes cluster using Helm.
I've used the Confluent Helm chart, which is quite complete, to install Kafka.
I then tried to see how the Elasticsearch Kafka Connect sink is configured. One point particularly strikes me: the confluent load elasticsearch-sink command. How can I run it reproducibly when the container is started? Furthermore, to configure Elasticsearch, I have to provide a properties file. Am I right to use a ConfigMap? I'm confused, however, because I would have to change the YAML configuration produced by Helm, which doesn't seem very reproducible...
Does anyone have any advice?
One point particularly strikes me: the confluent load elasticsearch-sink command
The confluent command is meant to be used in localhost development / getting-started environments only. It therefore wouldn't know about Kubernetes (e.g. you would need an Ingress controller to expose the Connect REST API).
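As an aside, if you only want to reach the Connect REST API from your workstation for experimenting, kubectl port-forward is enough without an Ingress. This is just a sketch; the service name below is an assumption, so check what your Helm release actually created:

```
# List the services created by the Helm release
# (the name "my-confluent-cp-kafka-connect" below is an assumption).
kubectl get svc

# Forward the Connect REST port (8083) to localhost.
kubectl port-forward svc/my-confluent-cp-kafka-connect 8083:8083

# In another terminal, the REST API should now respond locally.
curl http://localhost:8083/connectors
```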
Am I right to use a ConfigMap?
I don't think that is correct. At least not outside of an Operator Framework for Kafka Connect (if one existed).
Connect is configured through a JSON REST API. Therefore, there is also no way to load a connector configuration at startup; it needs to be POSTed manually, and that config is then persisted in the Kafka topic named by CONNECT_CONFIG_STORAGE_TOPIC.
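For example, a manual registration of the Elasticsearch sink could look like the following. The connector name, topic, and Elasticsearch URL are placeholders, and I'm assuming the standard Confluent Elasticsearch sink connector class:

```
# Sketch: register an Elasticsearch sink connector via the Connect REST API.
# "test-topic" and the Elasticsearch URL are placeholders for your environment.
curl -X POST -H 'Content-Type: application/json' \
  --data '{
    "name": "elasticsearch-sink",
    "config": {
      "connector.class": "io.confluent.connect.elasticsearch.ElasticsearchSinkConnector",
      "topics": "test-topic",
      "connection.url": "http://elasticsearch:9200",
      "type.name": "_doc",
      "key.ignore": "true"
    }
  }' \
  http://localhost:8083/connectors
```

Once accepted, the connector shows up under GET /connectors and survives worker restarts, because its config lives in that Kafka topic rather than on the container's filesystem.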
Internally, confluent load is literally a curl -X POST -H 'Content-Type: application/json' -d @${file} localhost:8083/connectors.
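If you want that call to happen reproducibly whenever the cluster comes up, one option (my own sketch, not something the chart or the confluent CLI provides) is a small script run as a Kubernetes Job, init container, or sidecar that waits for the Connect worker and then registers the connector. The service name and file path here are assumptions:

```
#!/bin/sh
# Sketch: idempotently register a connector once the Connect REST API is reachable.
# CONNECT_URL and CONNECTOR_FILE are assumptions; adjust them to your deployment.
CONNECT_URL="http://my-confluent-cp-kafka-connect:8083"
CONNECTOR_FILE="/config/elasticsearch-sink.json"   # contains only the "config" object

# Wait for the Connect worker to come up.
until curl -sf "${CONNECT_URL}/connectors" > /dev/null; do
  echo "Waiting for Kafka Connect at ${CONNECT_URL}..."
  sleep 5
done

# PUT /connectors/<name>/config creates or updates the connector,
# so re-running this script is safe.
curl -sf -X PUT -H 'Content-Type: application/json' \
  --data @"${CONNECTOR_FILE}" \
  "${CONNECT_URL}/connectors/elasticsearch-sink/config"
```

The JSON file itself could come from a ConfigMap mounted into that container, since at this point it is only an HTTP payload, not a properties file that Connect reads from disk.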