Failed to bind to volume when installing RabbitMQ on K8S

8/31/2020

I'm trying to install RabbitMQ using Helm, but the installation fails because of volume issues.

This is my storage class:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: Immediate
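
As an aside, the Kubernetes documentation recommends delayed binding for local volumes, so that the claim is not bound until a consuming pod is scheduled and the scheduler can honor the PV's node affinity. A sketch of that variant of the class above:

```yaml
# Variant of the class above: WaitForFirstConsumer delays PVC binding
# until a pod using the claim is scheduled, letting the scheduler take
# the PV's nodeAffinity into account.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
```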

This is my persistent volume:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: main-pv
spec:
  capacity:
    storage: 100Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Delete
  storageClassName: local-storage
  local:
    path: /media/2TB-DATA/k8s-pv
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - node-dev
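
For reference, a claim only binds to the PV above if it names the class explicitly; a standalone PVC that would match it (hypothetical name, size chosen to fit the PV) looks like:

```yaml
# Hypothetical claim matching main-pv: storageClassName must be set,
# otherwise the claim's class is empty and the
# "no storage class is set" event shown further down can appear.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-claim
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: local-storage
  resources:
    requests:
      storage: 10Gi
```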

This is the output listing my storage class and PV:

# kubectl get storageclass
NAME                      PROVISIONER                    RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
local-storage (default)   kubernetes.io/no-provisioner   Delete          Immediate           false                  14m
# kubectl get pv
NAME      CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS    REASON   AGE
main-pv   100Gi      RWX            Delete           Available           local-storage            40m

Then I install RabbitMQ:

helm install rabbitmq bitnami/rabbitmq

The pod is in Pending state, and I see this error:

# kubectl describe pvc
Name:          data-rabbitmq-0
Namespace:     default
StorageClass:
Status:        Pending
Volume:
Labels:        app.kubernetes.io/instance=rabbitmq
               app.kubernetes.io/name=rabbitmq
Annotations:   <none>
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
VolumeMode:    Filesystem
Mounted By:    rabbitmq-0
Events:
  Type    Reason         Age                     From                         Message
  ----    ------         ----                    ----                         -------
  Normal  FailedBinding  3m20s (x4363 over 18h)  persistentvolume-controller  no persistent volumes available for this claim and no storage class is set

What am I doing wrong?

-- Moshe Shaham
kubernetes
rabbitmq

1 Answer

9/3/2020

This may be platform related. Where did you try to do this? I'm asking because I can't reproduce it on GKE - it works fine.

Cluster version, labels, and nodes:

kubectl get nodes --show-labels
NAME                                       STATUS   ROLES    AGE   VERSION           LABELS
gke-cluster-1-default-pool-82008fd9-8x81   Ready    <none>   96d   v1.14.10-gke.36   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/fluentd-ds-ready=true,beta.kubernetes.io/instance-type=n1-standard-1,beta.kubernetes.io/os=linux,cloud.google.com/gke-nodepool=default-pool,cloud.google.com/gke-os-distribution=cos,failure-domain.beta.kubernetes.io/region=europe-west4,failure-domain.beta.kubernetes.io/zone=europe-west4-b,kubernetes.io/arch=amd64,kubernetes.io/hostname=gke-cluster-1-default-pool-82008fd9-8x81,kubernetes.io/os=linux,test=node
gke-cluster-1-default-pool-82008fd9-qkp7   Ready    <none>   96d   v1.14.10-gke.36   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/fluentd-ds-ready=true,beta.kubernetes.io/instance-type=n1-standard-1,beta.kubernetes.io/os=linux,cloud.google.com/gke-nodepool=default-pool,cloud.google.com/gke-os-distribution=cos,failure-domain.beta.kubernetes.io/region=europe-west4,failure-domain.beta.kubernetes.io/zone=europe-west4-b,kubernetes.io/arch=amd64,kubernetes.io/hostname=gke-cluster-1-default-pool-82008fd9-qkp7,kubernetes.io/os=linux,test=node
gke-cluster-1-default-pool-82008fd9-tlc7   Ready    <none>   96d   v1.14.10-gke.36   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/fluentd-ds-ready=true,beta.kubernetes.io/instance-type=n1-standard-1,beta.kubernetes.io/os=linux,cloud.google.com/gke-nodepool=default-pool,cloud.google.com/gke-os-distribution=cos,failure-domain.beta.kubernetes.io/region=europe-west4,failure-domain.beta.kubernetes.io/zone=europe-west4-b,kubernetes.io/arch=amd64,kubernetes.io/hostname=gke-cluster-1-default-pool-82008fd9-tlc7,kubernetes.io/os=linux,test=node

PV and StorageClass:

---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: Immediate
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: main-pv
spec:
  capacity:
    storage: 10Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Delete
  storageClassName: local-storage
  local:
    path: /mnt/data
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: test
              operator: In
              values:
                - node-test

Installing the chart:

helm install rabbitmq bitnami/rabbitmq
...
kubectl get pods
NAME                   READY   STATUS    RESTARTS   AGE
...
pod/rabbitmq-0         1/1     Running   0          3m40s
...



kubectl describe pod rabbitmq-0
Name:           rabbitmq-0
Namespace:      default
Priority:       0
Node:           gke-cluster-1-default-pool-82008fd9-tlc7/10.164.0.29
Start Time:     Thu, 03 Sep 2020 07:34:10 +0000
Labels:         app.kubernetes.io/instance=rabbitmq
                app.kubernetes.io/managed-by=Helm
                app.kubernetes.io/name=rabbitmq
                controller-revision-hash=rabbitmq-8687f4cb9f
                helm.sh/chart=rabbitmq-7.6.4
                statefulset.kubernetes.io/pod-name=rabbitmq-0
Annotations:    checksum/secret: 433e8ea7590e8d9f1bb94ed2f55e6d9b95f8abef722a917b97a9e916921d7ac5
                kubernetes.io/limit-ranger: LimitRanger plugin set: cpu request for container rabbitmq
Status:         Running
IP:             10.16.2.13
IPs:            <none>
Controlled By:  StatefulSet/rabbitmq
Containers:
  rabbitmq:
    Container ID:   docker://b1a567522f50ac4c0663db2d9eca5fd8721d9a3d900ac38bb58f0cae038162f2
    Image:          docker.io/bitnami/rabbitmq:3.8.7-debian-10-r0
    Image ID:       docker-pullable://bitnami/rabbitmq@sha256:9abd53aeef6d222fec318c97a75dd50ce19c16b11cb83a3e4fb91c4047ea0d4d
    Ports:          5672/TCP, 25672/TCP, 15672/TCP, 4369/TCP
    Host Ports:     0/TCP, 0/TCP, 0/TCP, 0/TCP
    State:          Running
      Started:      Thu, 03 Sep 2020 07:34:34 +0000
    Ready:          True
    Restart Count:  0
    Requests:
      cpu:      100m
    Liveness:   exec [/bin/bash -ec rabbitmq-diagnostics -q check_running] delay=120s timeout=20s period=30s #success=1 #failure=6
    Readiness:  exec [/bin/bash -ec rabbitmq-diagnostics -q check_running] delay=10s timeout=20s period=30s #success=1 #failure=3
    Environment:
      BITNAMI_DEBUG:            false
      MY_POD_IP:                 (v1:status.podIP)
      MY_POD_NAME:              rabbitmq-0 (v1:metadata.name)
      MY_POD_NAMESPACE:         default (v1:metadata.namespace)
      K8S_SERVICE_NAME:         rabbitmq-headless
      K8S_ADDRESS_TYPE:         hostname
      RABBITMQ_FORCE_BOOT:      no
      RABBITMQ_NODE_NAME:       rabbit@$(MY_POD_NAME).$(K8S_SERVICE_NAME).$(MY_POD_NAMESPACE).svc.cluster.local
      K8S_HOSTNAME_SUFFIX:      .$(K8S_SERVICE_NAME).$(MY_POD_NAMESPACE).svc.cluster.local
      RABBITMQ_MNESIA_DIR:      /bitnami/rabbitmq/mnesia/$(RABBITMQ_NODE_NAME)
      RABBITMQ_LDAP_ENABLE:     no
      RABBITMQ_LOGS:            -
      RABBITMQ_ULIMIT_NOFILES:  65536
      RABBITMQ_USE_LONGNAME:    true
      RABBITMQ_ERL_COOKIE:      <set to the key 'rabbitmq-erlang-cookie' in secret 'rabbitmq'>  Optional: false
      RABBITMQ_USERNAME:        user
      RABBITMQ_PASSWORD:        <set to the key 'rabbitmq-password' in secret 'rabbitmq'>  Optional: false
      RABBITMQ_PLUGINS:         rabbitmq_management, rabbitmq_peer_discovery_k8s, rabbitmq_auth_backend_ldap
Mounts:
      /bitnami/rabbitmq/conf from configuration (rw)
      /bitnami/rabbitmq/mnesia from data (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from rabbitmq-token-mclhw (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  data:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  data-rabbitmq-0
    ReadOnly:   false
  configuration:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      rabbitmq-config
    Optional:  false
  rabbitmq-token-mclhw:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  rabbitmq-token-mclhw
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type    Reason                  Age    From                                               Message
  ----    ------                  ----   ----                                               -------
  Normal  Scheduled               6m42s  default-scheduler                                  Successfully assigned default/rabbitmq-0 to gke-cluster-1-default-pool-82008fd9-tlc7
  Normal  SuccessfulAttachVolume  6m36s  attachdetach-controller                            AttachVolume.Attach succeeded for volume "pvc-8145821b-ed09-11ea-b464-42010aa400e3"
  Normal  Pulling                 6m32s  kubelet, gke-cluster-1-default-pool-82008fd9-tlc7  Pulling image "docker.io/bitnami/rabbitmq:3.8.7-debian-10-r0"
  Normal  Pulled                  6m22s  kubelet, gke-cluster-1-default-pool-82008fd9-tlc7  Successfully pulled image "docker.io/bitnami/rabbitmq:3.8.7-debian-10-r0"
  Normal  Created                 6m18s  kubelet, gke-cluster-1-default-pool-82008fd9-tlc7  Created container rabbitmq
  Normal  Started                 6m18s  kubelet, gke-cluster-1-default-pool-82008fd9-tlc7  Started container rabbitmq
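
If the class is not being picked up on your cluster (the PVC in the question shows an empty StorageClass), one option is to pin it at install time. A sketch, assuming the Bitnami chart's standard `persistence.storageClass` and `persistence.size` values:

```yaml
# values.yaml sketch (assumed chart value names): make the chart's PVC
# request the local-storage class with a size that fits the PV.
persistence:
  storageClass: local-storage
  size: 8Gi
```

applied with `helm install rabbitmq bitnami/rabbitmq -f values.yaml`.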
-- Vit
Source: StackOverflow