I am trying to run Kafka with Kubeless, but the pods stay Pending with the error "pod has unbound immediate PersistentVolumeClaims". I have created a persistent volume using Rook and Ceph and am trying to use that persistent volume with Kubeless Kafka.
What am I doing wrong here?
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: datadir
  labels:
    kubeless: kafka
spec:
  storageClassName: rook-block
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: zookeeper
  labels:
    kubeless: zookeeper
spec:
  storageClassName: rook-block
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: Service
metadata:
  name: kafka
  namespace: kubeless
spec:
  ports:
  - port: 9092
  selector:
    kubeless: kafka
---
apiVersion: v1
kind: Service
metadata:
  name: zoo
  namespace: kubeless
spec:
  clusterIP: None
  ports:
  - name: peer
    port: 9092
  - name: leader-election
    port: 3888
  selector:
    kubeless: zookeeper
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  labels:
    kubeless: kafka-trigger-controller
  name: kafka-trigger-controller
  namespace: kubeless
spec:
  selector:
    matchLabels:
      kubeless: kafka-trigger-controller
  template:
    metadata:
      labels:
        kubeless: kafka-trigger-controller
    spec:
      containers:
      - env:
        - name: KUBELESS_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: KUBELESS_CONFIG
          value: kubeless-config
        image: kubeless/kafka-trigger-controller:v1.0.2
        imagePullPolicy: IfNotPresent
        name: kafka-trigger-controller
      serviceAccountName: controller-acct
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: kafka-controller-deployer
rules:
- apiGroups:
  - ""
  resources:
  - services
  - configmaps
  verbs:
  - get
  - list
- apiGroups:
  - kubeless.io
  resources:
  - functions
  - kafkatriggers
  verbs:
  - get
  - list
  - watch
  - update
  - delete
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: kafka-controller-deployer
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kafka-controller-deployer
subjects:
- kind: ServiceAccount
  name: controller-acct
  namespace: kubeless
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: kafkatriggers.kubeless.io
spec:
  group: kubeless.io
  names:
    kind: KafkaTrigger
    plural: kafkatriggers
    singular: kafkatrigger
  scope: Namespaced
  version: v1beta1
---
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: kafka
  namespace: kubeless
spec:
  serviceName: broker
  template:
    metadata:
      labels:
        kubeless: kafka
    spec:
      containers:
      - env:
        - name: KAFKA_ADVERTISED_HOST_NAME
          value: broker.kubeless
        - name: KAFKA_ADVERTISED_PORT
          value: "9092"
        - name: KAFKA_PORT
          value: "9092"
        - name: KAFKA_DELETE_TOPIC_ENABLE
          value: "true"
        - name: KAFKA_ZOOKEEPER_CONNECT
          value: zookeeper.kubeless:2181
        - name: ALLOW_PLAINTEXT_LISTENER
          value: "yes"
        image: bitnami/kafka:1.1.0-r0
        imagePullPolicy: IfNotPresent
        livenessProbe:
          initialDelaySeconds: 30
          tcpSocket:
            port: 9092
        name: broker
        ports:
        - containerPort: 9092
        volumeMounts:
        - mountPath: /bitnami/kafka/data
          name: datadir
      initContainers:
      - command:
        - sh
        - -c
        - chmod -R g+rwX /bitnami
        image: busybox
        imagePullPolicy: IfNotPresent
        name: volume-permissions
        volumeMounts:
        - mountPath: /bitnami/kafka/data
          name: datadir
  volumeClaimTemplates:
  - metadata:
      name: datadir
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 1Gi
---
apiVersion: v1
kind: Service
metadata:
  name: broker
  namespace: kubeless
spec:
  clusterIP: None
  ports:
  - port: 9092
  selector:
    kubeless: kafka
---
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: zoo
  namespace: kubeless
spec:
  serviceName: zoo
  template:
    metadata:
      labels:
        kubeless: zookeeper
    spec:
      containers:
      - env:
        - name: ZOO_SERVERS
          value: server.1=zoo-0.zoo:2888:3888:participant
        - name: ALLOW_ANONYMOUS_LOGIN
          value: "yes"
        image: bitnami/zookeeper:3.4.10-r12
        imagePullPolicy: IfNotPresent
        name: zookeeper
        ports:
        - containerPort: 2181
          name: client
        - containerPort: 2888
          name: peer
        - containerPort: 3888
          name: leader-election
        volumeMounts:
        - mountPath: /bitnami/zookeeper
          name: zookeeper
      initContainers:
      - command:
        - sh
        - -c
        - chmod -R g+rwX /bitnami
        image: busybox
        imagePullPolicy: IfNotPresent
        name: volume-permissions
        volumeMounts:
        - mountPath: /bitnami/zookeeper
          name: zookeeper
  volumeClaimTemplates:
  - metadata:
      name: zookeeper
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 1Gi
---
apiVersion: v1
kind: Service
metadata:
  name: zookeeper
  namespace: kubeless
spec:
  ports:
  - name: client
    port: 2181
  selector:
    kubeless: zookeeper
vagrant@ubuntu-xenial:~/infra/ansible/scripts/kubeless-kafka-trigger$ kubectl get pod -n kubeless
NAME                                           READY   STATUS    RESTARTS   AGE
kafka-0                                        0/1     Pending   0          8m44s
kafka-trigger-controller-7cbd54b458-pccpn      1/1     Running   0          8m47s
kubeless-controller-manager-5bcb6757d9-nlksd   3/3     Running   0          3h34m
zoo-0                                          0/1     Pending   0          8m42s

Events:
  Type     Reason            Age                 From               Message
  ----     ------            ----                ----               -------
  Warning  FailedScheduling  45s (x10 over 10m)  default-scheduler  pod has unbound immediate PersistentVolumeClaims (repeated 2 times)
kubectl describe pod kafka-0 -n kubeless

Name:           kafka-0
Namespace:      kubeless
Priority:       0
Node:           <none>
Labels:         controller-revision-hash=kafka-c498d7f6
                kubeless=kafka
                statefulset.kubernetes.io/pod-name=kafka-0
Annotations:    <none>
Status:         Pending
IP:
Controlled By:  StatefulSet/kafka
Init Containers:
  volume-permissions:
    Image:      busybox
    Port:       <none>
    Host Port:  <none>
    Command:
      sh
      -c
      chmod -R g+rwX /bitnami
    Environment:  <none>
    Mounts:
      /bitnami/kafka/data from datadir (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-wj8vx (ro)
Containers:
  broker:
    Image:      bitnami/kafka:1.1.0-r0
    Port:       9092/TCP
    Host Port:  0/TCP
    Liveness:   tcp-socket :9092 delay=30s timeout=1s period=10s #success=1 #failure=3
    Environment:
      KAFKA_ADVERTISED_HOST_NAME:  broker.kubeless
      KAFKA_ADVERTISED_PORT:       9092
      KAFKA_PORT:                  9092
      KAFKA_DELETE_TOPIC_ENABLE:   true
      KAFKA_ZOOKEEPER_CONNECT:     zookeeper.kubeless:2181
      ALLOW_PLAINTEXT_LISTENER:    yes
    Mounts:
      /bitnami/kafka/data from datadir (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-wj8vx (ro)
Conditions:
  Type           Status
  PodScheduled   False
Volumes:
  datadir:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  datadir-kafka-0
    ReadOnly:   false
  default-token-wj8vx:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-wj8vx
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason            Age                 From               Message
  ----     ------            ----                ----               -------
  Warning  FailedScheduling  45s (x10 over 10m)  default-scheduler  pod has unbound immediate PersistentVolumeClaims (repeated 2 times)
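To dig further into why scheduling fails, the claim generated by the volumeClaimTemplate (datadir-kafka-0, as shown under Volumes above) and the storage classes known to the cluster can be inspected directly:

kubectl get pvc -n kubeless
kubectl describe pvc datadir-kafka-0 -n kubeless
kubectl get storageclass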
I got it working. For anyone who runs into the same problem, the manifest below is what worked for me. It uses rook-ceph storage (storageClassName: rook-block) for the Kubeless Kafka setup: the PVCs are created up front in the kubeless namespace, and the StatefulSets mount them through volumes/persistentVolumeClaim instead of volumeClaimTemplates.
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: kafka
  namespace: kubeless
  labels:
    kubeless: kafka
spec:
  storageClassName: rook-block
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: zookeeper
  namespace: kubeless
  labels:
    kubeless: zookeeper
spec:
  storageClassName: rook-block
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: Service
metadata:
  name: kafka
  namespace: kubeless
spec:
  ports:
  - port: 9092
  selector:
    kubeless: kafka
---
apiVersion: v1
kind: Service
metadata:
  name: zoo
  namespace: kubeless
spec:
  clusterIP: None
  ports:
  - name: peer
    port: 9092
  - name: leader-election
    port: 3888
  selector:
    kubeless: zookeeper
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  labels:
    kubeless: kafka-trigger-controller
  name: kafka-trigger-controller
  namespace: kubeless
spec:
  selector:
    matchLabels:
      kubeless: kafka-trigger-controller
  template:
    metadata:
      labels:
        kubeless: kafka-trigger-controller
    spec:
      containers:
      - env:
        - name: KUBELESS_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: KUBELESS_CONFIG
          value: kubeless-config
        image: kubeless/kafka-trigger-controller:v1.0.2
        imagePullPolicy: IfNotPresent
        name: kafka-trigger-controller
      serviceAccountName: controller-acct
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: kafka-controller-deployer
rules:
- apiGroups:
  - ""
  resources:
  - services
  - configmaps
  verbs:
  - get
  - list
- apiGroups:
  - kubeless.io
  resources:
  - functions
  - kafkatriggers
  verbs:
  - get
  - list
  - watch
  - update
  - delete
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: kafka-controller-deployer
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kafka-controller-deployer
subjects:
- kind: ServiceAccount
  name: controller-acct
  namespace: kubeless
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: kafkatriggers.kubeless.io
spec:
  group: kubeless.io
  names:
    kind: KafkaTrigger
    plural: kafkatriggers
    singular: kafkatrigger
  scope: Namespaced
  version: v1beta1
---
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: kafka
  namespace: kubeless
spec:
  serviceName: broker
  template:
    metadata:
      labels:
        kubeless: kafka
    spec:
      containers:
      - env:
        - name: KAFKA_ADVERTISED_HOST_NAME
          value: broker.kubeless
        - name: KAFKA_ADVERTISED_PORT
          value: "9092"
        - name: KAFKA_PORT
          value: "9092"
        - name: KAFKA_DELETE_TOPIC_ENABLE
          value: "true"
        - name: KAFKA_ZOOKEEPER_CONNECT
          value: zookeeper.kubeless:2181
        - name: ALLOW_PLAINTEXT_LISTENER
          value: "yes"
        image: bitnami/kafka:1.1.0-r0
        imagePullPolicy: IfNotPresent
        livenessProbe:
          initialDelaySeconds: 30
          tcpSocket:
            port: 9092
        name: broker
        ports:
        - containerPort: 9092
        volumeMounts:
        - mountPath: /bitnami/kafka/data
          name: kafka
      initContainers:
      - command:
        - sh
        - -c
        - chmod -R g+rwX /bitnami
        image: busybox
        imagePullPolicy: IfNotPresent
        name: volume-permissions
        volumeMounts:
        - mountPath: /bitnami/kafka/data
          name: kafka
      volumes:
      - name: kafka
        persistentVolumeClaim:
          claimName: kafka
---
apiVersion: v1
kind: Service
metadata:
  name: broker
  namespace: kubeless
spec:
  clusterIP: None
  ports:
  - port: 9092
  selector:
    kubeless: kafka
---
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: zoo
  namespace: kubeless
spec:
  serviceName: zoo
  template:
    metadata:
      labels:
        kubeless: zookeeper
    spec:
      containers:
      - env:
        - name: ZOO_SERVERS
          value: server.1=zoo-0.zoo:2888:3888:participant
        - name: ALLOW_ANONYMOUS_LOGIN
          value: "yes"
        image: bitnami/zookeeper:3.4.10-r12
        imagePullPolicy: IfNotPresent
        name: zookeeper
        ports:
        - containerPort: 2181
          name: client
        - containerPort: 2888
          name: peer
        - containerPort: 3888
          name: leader-election
        volumeMounts:
        - mountPath: /bitnami/zookeeper
          name: zookeeper
      initContainers:
      - command:
        - sh
        - -c
        - chmod -R g+rwX /bitnami
        image: busybox
        imagePullPolicy: IfNotPresent
        name: volume-permissions
        volumeMounts:
        - mountPath: /bitnami/zookeeper
          name: zookeeper
      volumes:
      - name: zookeeper
        persistentVolumeClaim:
          claimName: zookeeper
---
apiVersion: v1
kind: Service
metadata:
  name: zookeeper
  namespace: kubeless
spec:
  ports:
  - name: client
    port: 2181
  selector:
    kubeless: zookeeper
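After applying the manifest, the two claims should bind against rook-block and kafka-0/zoo-0 should leave Pending. A quick way to verify (the file name below is just whatever you saved the manifest as):

kubectl apply -f kubeless-kafka.yaml
kubectl get pvc -n kubeless
kubectl get pods -n kubeless -w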
I got the same error on my minikube: I had forgotten to create volumes for my StatefulSets. I created a PVC first; pay attention to the storageClassName and check which classes are available in your cluster (I did that through the dashboard; a kubectl alternative is shown after the manifest below).
{
  "kind": "PersistentVolumeClaim",
  "apiVersion": "v1",
  "metadata": {
    "name": "XXXX",
    "namespace": "kube-public",
    "labels": {
      "kubeless": "XXXX"
    }
  },
  "spec": {
    "storageClassName": "hostpath",
    "accessModes": [
      "ReadWriteOnce"
    ],
    "resources": {
      "requests": {
        "storage": "1Gi"
      }
    }
  }
}
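If you prefer the CLI over the dashboard, the available classes (and which one is the default) can be listed with kubectl; the NAME column is what goes into storageClassName, hostpath in my case:

kubectl get storageclass
kubectl describe storageclass hostpath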
That gave me the persistent volumes. Then I edited the StatefulSet:
"volumes": [
{
"name": "XXX",
"persistentVolumeClaim": {
"claimName": "XXX"
}
}
Added "persistentVolumeClaim" attribute, dropped pod, waited until new pod created.