How to fix "CrashLoopBackOff" while creating the Kafka container

5/14/2019

I'm setting up a Kafka and ZooKeeper cluster with high availability. I have 2 Kafka brokers (pod1, pod2) and 3 ZooKeeper servers (pod1, pod2, pod3). The setup was working fine: when I entered one Kafka broker (pod1), I was able to produce and consume messages. But when I entered the other Kafka broker (pod2), I was not able to get any messages, even though I have set the replication factor to two. So I added volumes in the container spec, and now I'm not able to create any pods; I get CrashLoopBackOff.

When I checked the logs, all the information I found was: "Bad request to server. Container is not able to create."

kafka_pod.yaml contains the Kafka deployment:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: kafka
  labels:
    app: kafka
spec:
  replicas: 2
  selector:
    matchLabels:
      app: kafka
  template:
    metadata:
      labels:
        app: kafka
    spec:
      hostname: kafka
      containers:
      - name: kafka
        image: wurstmeister/kafka:2.11-1.0.2
        ports:
        - containerPort: 9092
          protocol: TCP
        env:
         - name: KAFKA_ADVERTISED_HOST_NAME
           value: kafka
         - name: KAFKA_ADVERTISED_PORT 
           value: "9092"
         - name: KAFKA_ZOOKEEPER_CONNECT
           value: zookeeper:2181
         - name: KAFKA_OFFSET_TOPIC_REPLICATION_FACTOR
           value: "2"
         - name: KAFKA_AUTO_CREATE_TOPICS_ENABLE
           value: "true"
         - name: KAFKA_LOG_DIRS
           value: /opt/kafka
        volumeMounts:
        - name: socket
          mountPath: /var/run/docker.sock
        - name: logdir
          mountPath: /opt/kafka
      volumes:
      - name: socket
        hostPath:
         path: /var/run/docker.sock
      - name: logdir
        hostPath:
         path: ~/datadir

zookeeper_pod.yaml contains the following:
apiVersion: v1
kind: Service
metadata:
  name: zookeeper
spec:
  ports:
  - port: 2181
  selector:
    app: zookeeper
  clusterIP: None
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: zookeeper
  labels:
    app: zookeeper
spec:
  replicas: 3
  selector:
    matchLabels:
      app: zookeeper
  template:
    metadata:
      labels:
        app: zookeeper
    spec:
      nodeName: akshatha-ha
      containers:
      - name: zookeeper
        image: wurstmeister/zookeeper
        ports:
        - containerPort: 2181
          protocol: TCP

I need to deploy Kafka with two brokers and ZooKeeper with three servers. When one of the servers goes down, the others should still be able to serve the data.

-- Radha
apache-kafka
kubernetes

1 Answer

5/14/2019

Use StatefulSets to deploy Kafka and ZooKeeper. There is a good tutorial on ZooKeeper StatefulSets on the kubernetes.io website; follow that.
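As a rough sketch of that approach (the headless-service name below is an assumption, and the image tag is simply carried over from the question, not verified against the tutorial), the Deployment could be rewritten as a StatefulSet governed by a headless Service:

```yaml
# Headless Service: gives each broker a stable DNS name,
# e.g. kafka-0.kafka-headless, kafka-1.kafka-headless
apiVersion: v1
kind: Service
metadata:
  name: kafka-headless
spec:
  clusterIP: None
  selector:
    app: kafka
  ports:
  - port: 9092
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: kafka
spec:
  serviceName: kafka-headless   # required: ties pods to the headless Service
  replicas: 2
  selector:
    matchLabels:
      app: kafka
  template:
    metadata:
      labels:
        app: kafka
    spec:
      containers:
      - name: kafka
        image: wurstmeister/kafka:2.11-1.0.2
        ports:
        - containerPort: 9092
          protocol: TCP
```

Unlike a Deployment, the StatefulSet gives each broker a stable hostname and ordinal, which Kafka needs so that brokers keep their identity (and their data) across restarts.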

Avoid hostPath volumes unless you are running a single-node cluster; use PersistentVolumes or ephemeral storage instead. (Note also that hostPath requires an absolute path; `~/datadir` is not expanded and will be rejected by the API server.) If you are on Kubernetes 1.14 or later, consider local persistent volumes for your StatefulSets.
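For the storage part, a `volumeClaimTemplates` section on the StatefulSet replaces the hostPath volumes, so each broker gets its own PersistentVolumeClaim. A minimal sketch (the storage class name and size here are placeholders and depend on your cluster):

```yaml
  # Appended to the StatefulSet spec, at the same level as "template":
  # one PVC is created per replica and bound to that pod for its lifetime.
  volumeClaimTemplates:
  - metadata:
      name: logdir
    spec:
      accessModes: ["ReadWriteOnce"]
      storageClassName: standard   # placeholder; use your cluster's class
      resources:
        requests:
          storage: 10Gi
```

The container then mounts `logdir` at `/opt/kafka` exactly as in the original volumeMounts, but the data survives pod rescheduling instead of being tied to one node's filesystem.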

-- P Ekambaram
Source: StackOverflow