I successfully deployed Kafka to Kubernetes on local Docker (and on GCP & minikube) using Yolean/kubernetes-kafka and the Helm chart,
and tested producing to a topic successfully from within the cluster using this Python script:
#!/usr/bin/env python
from kafka import KafkaProducer

KAFKA_TOPIC = 'demo'
# KAFKA_BROKERS = 'localhost:32400'  # see step 1
# from inside the cluster in a different namespace:
# KAFKA_BROKERS = 'bootstrap.kafka.svc.cluster.local:9092'
KAFKA_BROKERS = 'kafka.kafka.svc.cluster.local:9092'
print('KAFKA_BROKERS: ' + KAFKA_BROKERS)

producer = KafkaProducer(bootstrap_servers=KAFKA_BROKERS)
messages = [b'hello kafka', b'Falanga', b'3 test messages']
for m in messages:
    print(f"sending: {m}")
    producer.send(KAFKA_TOPIC, m)
producer.flush()
With Helm I used this option to enable external access:
helm install --name kafka --set external.enabled=true --namespace kafka incubator/kafka
and on the original repo I used:
kubectl apply -f ./outside-0.yml
The resulting services have endpoints and node ports but the script doesn't work from outside the cluster.
Here is the original service (branch master):
➜ ~ kubectl describe svc outside-0 --namespace kafka
Name: outside-0
Namespace: kafka
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"name":"outside-0","namespace":"kafka"},"spec":{"ports":[{"nodePort":32400,"port":3240...
Selector: app=kafka,kafka-broker-id=0
Type: NodePort
IP: 10.99.171.133
LoadBalancer Ingress: localhost
Port: <unset> 32400/TCP
TargetPort: 9094/TCP
NodePort: <unset> 32400/TCP
Endpoints: 10.1.3.63:9094
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
Here is the Helm service description:
Name: kafka-0-external
Namespace: kafka
Labels: app=kafka
chart=kafka-0.9.2
heritage=Tiller
pod=kafka-0
release=kafka
Annotations: dns.alpha.kubernetes.io/internal=kafka.cluster.local
             external-dns.alpha.kubernetes.io/hostname=kafka.cluster.local
Selector: app=kafka,pod=kafka-0,release=kafka
Type: NodePort
IP: 10.103.70.223
LoadBalancer Ingress: localhost
Port: external-broker 19092/TCP
TargetPort: 31090/TCP
NodePort: external-broker 31090/TCP
Endpoints: 10.1.2.231:31090
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
The local docker node does not have an externalIP field:
kubectl describe node docker-for-desktop | grep IP
InternalIP: 192.168.65.3
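Since the node has only an InternalIP, the only address an external client on the same machine can target is that InternalIP combined with the service's nodePort. A minimal sketch of how that bootstrap string is assembled, using the values from the output above:

```python
# The node's InternalIP (from `kubectl describe node`) plus the
# service's nodePort together form the external bootstrap address.
node_internal_ip = "192.168.65.3"  # InternalIP of docker-for-desktop
node_port = 32400                  # nodePort of the outside-0 service

bootstrap = f"{node_internal_ip}:{node_port}"
print(bootstrap)  # 192.168.65.3:32400
```

This is the value you would pass as `bootstrap_servers` to `KafkaProducer` when running the script outside the cluster, assuming the broker advertises a listener reachable at that address.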
I followed the instructions in the outside README, which is how I discovered that the local Docker node has no externalIP field.
How can I connect to Kafka from outside the cluster on Docker? Does this work on GKE or other deployments?
The service exposes the pod to the internal Kubernetes network. To expose the service (and therefore the pod) to the internet, you need to set up an Ingress that points to the service.
Ingresses are roughly the Kubernetes equivalent of an Apache/Nginx reverse proxy. You can read up on how to set one up at the following URL:
https://kubernetes.io/docs/concepts/services-networking/ingress/
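For illustration, a minimal Ingress manifest routing traffic to the outside-0 service might look like the following (the Ingress name here is hypothetical, and an ingress controller must already be installed in the cluster; note that an Ingress operates at the HTTP level, so for a raw TCP protocol like Kafka the NodePort approach described next is often the simpler fit):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: kafka-ingress   # hypothetical name
spec:
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: outside-0
                port:
                  number: 32400
```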
Alternatively, you can expose a pod on the node network by setting the service type to NodePort and assigning a specific port to it. It should look something like the following:
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    name: nginx
spec:
  type: NodePort
  ports:
    - port: 80
      nodePort: 31090
      name: http
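Once such a service is applied, you can sanity-check that the node port is actually reachable before pointing a Kafka client at it. A small sketch (the host and port here are placeholders for your node's IP and the service's nodePort):

```python
import socket

def port_open(host, port, timeout=1.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Placeholder values: substitute your node's InternalIP and the nodePort.
print(port_open("192.168.65.3", 31090))
```

If this prints False, the problem is network reachability (wrong IP, closed port, no listener); if it prints True but the Kafka client still fails, the broker's advertised listeners are the likely culprit.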