I have a Kubernetes cluster running on multiple local (bare-metal/physical) machines. I want to deploy Kafka on the cluster, but I can't figure out how to use Strimzi with my configuration.
I tried to follow the tutorial on the quickstart page: https://strimzi.io/docs/quickstart/master/
My ZooKeeper pods got stuck in Pending at point 2.4. Creating a cluster:
Events:
  Type     Reason            Age        From               Message
  ----     ------            ----       ----               -------
  Warning  FailedScheduling  <unknown>  default-scheduler  pod has unbound immediate PersistentVolumeClaims
  Warning  FailedScheduling  <unknown>  default-scheduler  pod has unbound immediate PersistentVolumeClaims
I usually use hostPath for my volumes, so I don't know what's going on with this...
EDIT: I tried to create a StorageClass using Arghya Sadhu's commands, but the problem is still there.
The description of my PVC:
kubectl describe -n my-kafka-project persistentvolumeclaim/data-my-cluster-zookeeper-0
Name:          data-my-cluster-zookeeper-0
Namespace:     my-kafka-project
StorageClass:  local-storage
Status:        Pending
Volume:
Labels:        app.kubernetes.io/instance=my-cluster
               app.kubernetes.io/managed-by=strimzi-cluster-operator
               app.kubernetes.io/name=strimzi
               strimzi.io/cluster=my-cluster
               strimzi.io/kind=Kafka
               strimzi.io/name=my-cluster-zookeeper
Annotations:   strimzi.io/delete-claim: false
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
VolumeMode:    Filesystem
Mounted By:    my-cluster-zookeeper-0
Events:
  Type    Reason                Age                 From                         Message
  ----    ------                ----                ----                         -------
  Normal  WaitForFirstConsumer  72s (x66 over 16m)  persistentvolume-controller  waiting for first consumer to be created before binding
And my pod:
kubectl describe -n my-kafka-project pod/my-cluster-zookeeper-0
Name:           my-cluster-zookeeper-0
Namespace:      my-kafka-project
Priority:       0
Node:           <none>
Labels:         app.kubernetes.io/instance=my-cluster
                app.kubernetes.io/managed-by=strimzi-cluster-operator
                app.kubernetes.io/name=strimzi
                controller-revision-hash=my-cluster-zookeeper-7f698cf9b5
                statefulset.kubernetes.io/pod-name=my-cluster-zookeeper-0
                strimzi.io/cluster=my-cluster
                strimzi.io/kind=Kafka
                strimzi.io/name=my-cluster-zookeeper
Annotations:    strimzi.io/cluster-ca-cert-generation: 0
                strimzi.io/generation: 0
Status:         Pending
IP:
IPs:            <none>
Controlled By:  StatefulSet/my-cluster-zookeeper
Containers:
  zookeeper:
    Image:      strimzi/kafka:0.15.0-kafka-2.3.1
    Port:       <none>
    Host Port:  <none>
    Command:
      /opt/kafka/zookeeper_run.sh
    Liveness:   exec [/opt/kafka/zookeeper_healthcheck.sh] delay=15s timeout=5s period=10s #success=1 #failure=3
    Readiness:  exec [/opt/kafka/zookeeper_healthcheck.sh] delay=15s timeout=5s period=10s #success=1 #failure=3
    Environment:
      ZOOKEEPER_NODE_COUNT:          1
      ZOOKEEPER_METRICS_ENABLED:     false
      STRIMZI_KAFKA_GC_LOG_ENABLED:  false
      KAFKA_HEAP_OPTS:               -Xms128M
      ZOOKEEPER_CONFIGURATION:       autopurge.purgeInterval=1
                                     tickTime=2000
                                     initLimit=5
                                     syncLimit=2
    Mounts:
      /opt/kafka/custom-config/ from zookeeper-metrics-and-logging (rw)
      /var/lib/zookeeper from data (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from my-cluster-zookeeper-token-hgk2b (ro)
  tls-sidecar:
    Image:       strimzi/kafka:0.15.0-kafka-2.3.1
    Ports:       2888/TCP, 3888/TCP, 2181/TCP
    Host Ports:  0/TCP, 0/TCP, 0/TCP
    Command:
      /opt/stunnel/zookeeper_stunnel_run.sh
    Liveness:    exec [/opt/stunnel/stunnel_healthcheck.sh 2181] delay=15s timeout=5s period=10s #success=1 #failure=3
    Readiness:   exec [/opt/stunnel/stunnel_healthcheck.sh 2181] delay=15s timeout=5s period=10s #success=1 #failure=3
    Environment:
      ZOOKEEPER_NODE_COUNT:   1
      TLS_SIDECAR_LOG_LEVEL:  notice
    Mounts:
      /etc/tls-sidecar/cluster-ca-certs/ from cluster-ca-certs (rw)
      /etc/tls-sidecar/zookeeper-nodes/ from zookeeper-nodes (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from my-cluster-zookeeper-token-hgk2b (ro)
Conditions:
  Type          Status
  PodScheduled  False
Volumes:
  data:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  data-my-cluster-zookeeper-0
    ReadOnly:   false
  zookeeper-metrics-and-logging:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      my-cluster-zookeeper-config
    Optional:  false
  zookeeper-nodes:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  my-cluster-zookeeper-nodes
    Optional:    false
  cluster-ca-certs:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  my-cluster-cluster-ca-cert
    Optional:    false
  my-cluster-zookeeper-token-hgk2b:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  my-cluster-zookeeper-token-hgk2b
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason            Age        From               Message
  ----     ------            ----       ----               -------
  Warning  FailedScheduling  <unknown>  default-scheduler  0/1 nodes are available: 1 node(s) didn't find available persistent volumes to bind.
  Warning  FailedScheduling  <unknown>  default-scheduler  0/1 nodes are available: 1 node(s) didn't find available persistent volumes to bind.
Yeah, it sounds to me like something is missing on the Kubernetes side at the infrastructure level. You should provide PersistentVolumes, which are used for static assignment to PVCs, or, as Arghya already mentioned, provide StorageClasses for dynamic provisioning.
You need to have a PersistentVolume fulfilling the constraints of the PersistentVolumeClaim.
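For example, on bare metal you can create a local PersistentVolume by hand. A minimal sketch, assuming a node named my-node-1 and a pre-created directory /mnt/data/zookeeper (the node name, path, and capacity are placeholders; substitute your own values, with a capacity at least as large as the PVC requests):
apiVersion: v1
kind: PersistentVolume
metadata:
  name: zookeeper-pv-0
spec:
  capacity:
    storage: 100Gi                  # placeholder: must cover the PVC's request
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage   # must match the PVC's StorageClass
  local:
    path: /mnt/data/zookeeper       # placeholder: directory must already exist on the node
  nodeAffinity:                     # required for local volumes
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - my-node-1         # placeholder: one of your node names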
Use local storage. First, create a local StorageClass:
$ cat <<EOF | kubectl apply -f -
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
EOF
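Note that kubernetes.io/no-provisioner does not create volumes automatically: with this class you still have to create matching local PersistentVolumes yourself (like the sketch above). Once a matching PV exists and the pod is scheduled, the claim should bind; you can check with, for example:
$ kubectl get pv
$ kubectl get pvc -n my-kafka-project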
You also need to configure a default StorageClass in your cluster so that the PersistentVolumeClaim can take its storage from there:
$ kubectl patch storageclass local-storage -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
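To verify the patch took effect, list your StorageClasses; the default one is shown with (default) next to its name:
$ kubectl get storageclass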