Running Kafka Cluster with multiple Brokers on local minikube

4/30/2018

For testing purposes I am trying to create a Kafka cluster on my local minikube. The cluster must be reachable from outside of Kubernetes.

When I produce/consume from inside the pods there is no problem; everything works just fine.

When I produce from my local machine with

bin/kafka-console-producer.sh --topic mytopic --broker-list 192.168.99.100:32767

where 192.168.99.100 is my minikube IP and 32767 is the NodePort of the Kafka service, I get the following error message:

>testmessage
>[2018-04-30 11:55:04,604] ERROR Error when sending message to topic ams_stream with key: null, value: 11 bytes with error: (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
org.apache.kafka.common.errors.TimeoutException: Expiring 1 record(s) for ams_stream-0: 1506 ms has passed since batch creation plus linger time

When I consume from my local machine I get the following warnings:

[2018-04-30 10:22:30,680] WARN Connection to node 2 could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
[2018-04-30 10:23:46,057] WARN Connection to node 8 could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
[2018-04-30 10:25:01,542] WARN Connection to node 2 could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
[2018-04-30 10:26:17,008] WARN Connection to node 5 could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)

The broker IDs are correct, so it looks like I can at least reach the brokers.


Edit:

I think the problem may be that the service routes me "randomly" to any of my brokers, but it needs to route me to the leader of the topic's partition. Could this be the problem? Does anybody know a way around it?
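If that hypothesis is right, the partition leader should be identifiable with the stock `kafka-topics.sh --describe` tool. The snippet below only parses a hypothetical sample of its output (the leader id 2 and the replica list are made up for illustration, not taken from my cluster):

```shell
# Hypothetical sample line from: bin/kafka-topics.sh --describe --topic mytopic
# (values are illustrative)
describe_output='Topic: mytopic  Partition: 0  Leader: 2  Replicas: 2,5,8  Isr: 2,5,8'

# Extract the leader's broker id from the describe output
echo "$describe_output" | grep -o 'Leader: [0-9]*' | cut -d' ' -f2   # prints 2
```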


Additional Information:

I'm using the wurstmeister/kafka and digitalwonderland/zookeeper images.

I started using the DellEMC Tutorial (and the linked one from defuze.org)

This did not work out for me, so I made some changes in kafka-service.yml (1) and kafka-cluster.yml (2).

kafka-service.yml

  • added a fixed NodePort
  • removed id from the selector

kafka-cluster.yml

  • added replicas to the specification
  • removed id from the label
  • changed the broker id to be generated from the last octet of the pod IP
  • replaced the deprecated values advertised_host_name / advertised_port with
    • listeners ( pod-ip:9092 ) for communication inside the k8s cluster
    • advertised_listeners ( minikube-ip:node-port ) for communication with applications outside of Kubernetes
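In plain Kafka broker configuration terms, the listener split above corresponds to something like this (addresses taken from my setup; this is a sketch, not the exact config the image generates):

```properties
# What each broker binds to inside the k8s network
listeners=INTERNAL://<pod-ip>:9092
# What is handed out to clients connecting from outside the cluster
advertised.listeners=INTERNAL://192.168.99.100:32767
listener.security.protocol.map=INTERNAL:PLAINTEXT
inter.broker.listener.name=INTERNAL
```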

1 - kafka-service.yml:

---
apiVersion: v1
kind: Service
metadata:
  name: kafka-service
  labels:
    name: kafka
spec:
  type: NodePort
  ports:
  - port: 9092
    nodePort: 32767
    targetPort: 9092
    protocol: TCP
  selector:
    app: kafka

2 - kafka-cluster.yml:

---
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: kafka-b
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: kafka
    spec:
      containers:
      - name: kafka
        image: wurstmeister/kafka
        ports:
        - containerPort: 9092
        env:
        - name: HOSTNAME_COMMAND
          value: "ifconfig |grep 'addr:172' |cut -d':' -f 2 |cut -d ' ' -f 1"
        - name: KAFKA_ZOOKEEPER_CONNECT
          value: zk1:2181
        - name: BROKER_ID_COMMAND
          value: "ifconfig |grep 'inet addr:172' | cut -d'.' -f '4' | cut -d' ' -f '1'"
        - name: KAFKA_ADVERTISED_LISTENERS
          value: "INTERNAL://192.168.99.100:32767"
        - name: KAFKA_LISTENERS
          value: "INTERNAL://_{HOSTNAME_COMMAND}:9092"
        - name: KAFKA_LISTENER_SECURITY_PROTOCOL_MAP
          value: "INTERNAL:PLAINTEXT"
        - name: KAFKA_INTER_BROKER_LISTENER_NAME
          value: "INTERNAL"
        - name: KAFKA_CREATE_TOPICS
          value: mytopic:1:3
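For reference, the BROKER_ID_COMMAND pipeline above derives the broker id from the last octet of the pod IP. Run against a hypothetical ifconfig line it behaves like this (the sample address 172.17.0.5 is made up for illustration):

```shell
# Hypothetical ifconfig output line (pod IP 172.17.0.5 is illustrative)
sample="          inet addr:172.17.0.5  Bcast:172.17.255.255  Mask:255.255.0.0"

# Same pipeline as BROKER_ID_COMMAND: keep the last octet of the address
echo "$sample" | grep 'inet addr:172' | cut -d'.' -f '4' | cut -d' ' -f '1'   # prints 5
```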
-- T-Sona
Tags: apache-kafka, apache-zookeeper, kubernetes, minikube

0 Answers