I am trying to set up a basic Kafka deployment on K8s. However, every time my data generation application tries to connect to the Kafka service in K8s, I get this exception in the Kafka producer logs:
2019-02-04 12:11:28 ERROR Sender:235 kafka-producer-network-thread | avro_data - [Producer clientId=avro_data] Uncaught error in kafka producer I/O thread:
java.lang.IllegalStateException: No entry found for connection 1001
at org.apache.kafka.clients.ClusterConnectionStates.nodeState(ClusterConnectionStates.java:330)
at org.apache.kafka.clients.ClusterConnectionStates.disconnected(ClusterConnectionStates.java:134)
at org.apache.kafka.clients.NetworkClient.initiateConnect(NetworkClient.java:921)
at org.apache.kafka.clients.NetworkClient.access$700(NetworkClient.java:67)
at org.apache.kafka.clients.NetworkClient$DefaultMetadataUpdater.maybeUpdate(NetworkClient.java:1086)
at org.apache.kafka.clients.NetworkClient$DefaultMetadataUpdater.maybeUpdate(NetworkClient.java:971)
at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:533)
at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:309)
at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:233)
at java.lang.Thread.run(Thread.java:748)
Here are the producer logs:
[Producer clientId=avro_data] Initialize connection to node 192.168.99.100:32092 (id: -1 rack: null) for sending metadata request
Updated cluster metadata version 2 to Cluster(id = MpP-9JVnQ4a78VTtCzTm3Q, nodes = [kafka-broker-0.kafka-headless.default.svc.cluster.local:9092 (id: 1001 rack: null)], partitions = [Partition(topic = avro_topic, partition = 0, leader = 1001, replicas = [1001], isr = [1001], offlineReplicas = [])], controller = kafka-broker-0.kafka-headless.default.svc.cluster.local:9092 (id: 1001 rack: null))
[Producer clientId=avro_data] Closing the Kafka producer with timeoutMillis = 9223372036854775807 ms.
What could be the problem with the Kafka setup or the application's connection to it?
I connect to the Kafka NodePort service with these producer properties:
props.put("bootstrap.servers", "192.168.99.100:32092")
props.put("client.id", "avro_data")
props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer")
props.put("value.serializer", "io.confluent.kafka.serializers.KafkaAvroSerializer")
props.put("schema.registry.url", "http://192.168.99.100:32081")
The Kafka setup looks like this:
apiVersion: v1
kind: Service
metadata:
  name: kafka-headless
spec:
  ports:
    - port: 9092
  clusterIP: None
  selector:
    app: kafka
---
apiVersion: v1
kind: Service
metadata:
  name: kafka-np
spec:
  ports:
    - port: 32092
      protocol: TCP
      targetPort: 9092
      nodePort: 32092
  selector:
    app: kafka
  type: NodePort
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  labels:
    app: kafka
  name: kafka-broker
spec:
  serviceName: kafka-headless
  selector:
    matchLabels:
      app: kafka
  replicas: 1
  template:
    metadata:
      labels:
        app: kafka
    spec:
      containers:
        - name: kafka
          image: confluentinc/cp-kafka:5.0.1
          env:
            - name: KAFKA_ZOOKEEPER_CONNECT
              value: zookeeper-headless:2181
            - name: MINIKUBE_IP
              value: 192.168.99.100
            - name: KAFKA_ADVERTISED_LISTENERS
              value: PLAINTEXT://kafka-broker-0.kafka-headless.default.svc.cluster.local:9092,EXTERNAL://192.168.99.100:32092
            - name: KAFKA_LISTENER_SECURITY_PROTOCOL_MAP
              value: PLAINTEXT:PLAINTEXT,EXTERNAL:PLAINTEXT
          ports:
            - containerPort: 9092
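To see what the broker advertises back to clients, metadata can be queried through the same NodePort, e.g. with the Kafka AdminClient (a minimal debugging sketch, not part of my application; after the initial bootstrap the producer connects to whatever hosts are printed here):

import java.util.Properties;

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.common.Node;

public class DescribeCluster {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "192.168.99.100:32092");
        try (AdminClient admin = AdminClient.create(props)) {
            // Prints the advertised broker addresses; these must be reachable
            // from wherever the producer runs, otherwise metadata updates fail.
            for (Node node : admin.describeCluster().nodes().get()) {
                System.out.println(node.idString() + " -> " + node.host() + ":" + node.port());
            }
        }
    }
}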
I ran into this issue while using the Bitnami Kafka and ZooKeeper images; switching to the Confluent ones (version 4.0.0) solved it in my case. Although you're already using the Confluent images, try the images/versions below in your docker-compose.yml instead, to iron out a bug in the version you're using:
confluentinc/cp-zookeeper:4.0.0
confluentinc/cp-kafka:4.0.0
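In the StatefulSet from your question that would amount to swapping the image tag (sketched below; your ZooKeeper manifest isn't shown, so that part is an assumption):

# In the StatefulSet above, change the Kafka image tag:
containers:
  - name: kafka
    image: confluentinc/cp-kafka:4.0.0
# and likewise in the ZooKeeper pod spec (not shown in the question):
#   image: confluentinc/cp-zookeeper:4.0.0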