My aim is to start a Kafka topic with multiple partitions on Kubernetes. To do that, I deploy the following .yml file:
apiVersion: v1
kind: Namespace
metadata:
  name: kafka
---
apiVersion: v1
kind: Pod
metadata:
  name: kubernetes-kafka
  namespace: kafka
  labels:
    k8s-app: kubernetes-kafka
spec:
  containers:
  - name: zookeeper
    image: zookeeper
    env:
    - name: ZOO_MY_ID
      value: "1"
  - name: kafka
    image: wurstmeister/kafka
    env:
    - name: KAFKA_ADVERTISED_HOST_NAME
      value: "kubernetes-cluster.nt"
    - name: KAFKA_ADVERTISED_PORT
      value: "30001"
    - name: KAFKA_ZOOKEEPER_CONNECT
      value: "localhost:2181"
    - name: KAFKA_BROKER_ID
      value: "1"
    - name: KAFKA_CREATE_TOPICS
      value: "write:20:1"
---
apiVersion: v1
kind: Service
metadata:
  name: kubernetes-kafka
  namespace: kafka
spec:
  type: NodePort
  selector:
    k8s-app: kubernetes-kafka
  ports:
  - name: zk-client
    port: 2181
    protocol: TCP
  - name: zk-follower
    port: 2888
    protocol: TCP
  - name: zk-election
    port: 3888
    protocol: TCP
  - name: zk-admin
    port: 8080
    protocol: TCP
  - name: kafka-client
    port: 9092
    nodePort: 30001
    protocol: TCP
I expect this manifest to create a Kafka broker on Kubernetes whose write topic is accessible at host:port = kubernetes-cluster.nt:30001.
However, although the Kubernetes service and pod start (kubectl get pods --all-namespaces and kubectl get services --all-namespaces both list entries named kubernetes-kafka), the Kafka topic is not created:
kafkacat -b kubernetes-cluster.nt:30001 -L, which should list all topics, reports that 0 topics exist:
Metadata for all topics (from broker 1: kubernetes-cluster.nt:30001/1):
 1 brokers:
  broker 1 at kubernetes-cluster.nt:30001
 0 topics:
What am I doing wrong?
Make sure the Kafka advertised external port and the Kubernetes service's nodePort are consistent; other services then reach the broker via k8s-service:nodePort. I wrote this up in config_kafka_in_kubernetes, hope it helps!
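To illustrate, here is a minimal sketch of the consistent port wiring. The env var names come from the wurstmeister/kafka image used in the question; the host name and port numbers are taken from the question's manifest:

```yaml
# Pod spec excerpt: the advertised address/port is what clients are told
# to connect to, so it must match what is reachable from outside.
env:
- name: KAFKA_ADVERTISED_HOST_NAME
  value: "kubernetes-cluster.nt"   # address clients resolve
- name: KAFKA_ADVERTISED_PORT
  value: "30001"                   # must equal the service's nodePort
---
# Service spec excerpt:
ports:
- name: kafka-client
  port: 9092                       # in-cluster port
  nodePort: 30001                  # external port; matches KAFKA_ADVERTISED_PORT
  protocol: TCP
```

If the advertised port and the nodePort diverge, kafkacat can still fetch metadata from the bootstrap address but then fails to reach the broker at the advertised address it is handed back.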
I would use an operator to run Kafka on Kubernetes. I recommend the Strimzi Kafka operator.
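With Strimzi, both the cluster and the topic are declared as Kubernetes custom resources, so the write topic with 20 partitions becomes a manifest instead of an env var. A minimal sketch, assuming the Strimzi operator is already installed and watching the kafka namespace (the cluster name my-cluster is an arbitrary choice; check the Strimzi docs for the API version matching your install):

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
  namespace: kafka
spec:
  kafka:
    replicas: 1
    listeners:
      - name: external
        port: 9094
        type: nodeport        # exposes the broker outside the cluster
        tls: false
    storage:
      type: ephemeral
  zookeeper:
    replicas: 1
    storage:
      type: ephemeral
  entityOperator:
    topicOperator: {}         # reconciles KafkaTopic resources
---
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaTopic
metadata:
  name: write
  namespace: kafka
  labels:
    strimzi.io/cluster: my-cluster   # binds the topic to the cluster above
spec:
  partitions: 20
  replicas: 1
```

The operator then handles advertised-listener configuration for the NodePort listener itself, which sidesteps the manual port bookkeeping from the question.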