Kubernetes PersistentVolumeClaim fails with "failed to get secret from [""/""]"

5/19/2020


I'm struggling with dynamic provisioning of PersistentVolumes on a Kubernetes cluster (one master node and one worker node). I had the same problem when I was trying to create a PostgreSQL cluster with pgo (Crunchy Data), and now again while trying to install Prometheus with Helm.

I created a default StorageClass and enabled the DefaultStorageClass admission controller in the Kubernetes API server as described in the documentation, which fixed the error below:

Warning  FailedScheduling  <unknown>  default-scheduler  running "VolumeBinding" filter plugin for pod "prometheus-1589893794-server-69b8c4c5d4-4zfg2": pod has unbound immediate PersistentVolumeClaims

but now `kubectl describe pvc` shows:

Name:          prometheus-1589896240-alertmanager
Namespace:     monitoring
StorageClass:  default
Status:        Pending
Volume:
Labels:        app=prometheus
               app.kubernetes.io/managed-by=Helm
               chart=prometheus-11.3.0
               component=alertmanager
               heritage=Helm
               release=prometheus-1589896240
Annotations:   meta.helm.sh/release-name: prometheus-1589896240
               meta.helm.sh/release-namespace: monitoring
               volume.beta.kubernetes.io/storage-provisioner: kubernetes.io/storageos
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
VolumeMode:    Filesystem
Mounted By:    prometheus-1589896240-alertmanager-767dcb88d9-9gd48
Events:
  Type     Reason              Age               From                         Message
  ----     ------              ----              ----                         -------
  Warning  ProvisioningFailed  8s (x3 over 35s)  persistentvolume-controller  Failed to provision volume with StorageClass "default": failed to get secret from [""/""]

I couldn't find ANYTHING about this error. Does a PersistentVolumeClaim need some kind of secret in Kubernetes to dynamically create and bind a PersistentVolume?

Please help!

Info: My master node OS is Ubuntu Server 18.04; the worker node OS is CentOS 7.

kubectl version
Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.0", GitCommit:"9e991415386e4cf155a24b1da15becaa390438d8", GitTreeState:"clean", BuildDate:"2020-03-25T14:58:59Z", GoVersion:"go1.13.8", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.0", GitCommit:"9e991415386e4cf155a24b1da15becaa390438d8", GitTreeState:"clean", BuildDate:"2020-03-25T14:50:46Z", GoVersion:"go1.13.8", Compiler:"gc", Platform:"linux/amd64"}

My default StorageClass:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: default
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
  labels:
    addonmanager.kubernetes.io/mode: EnsureExists
provisioner: kubernetes.io/storageos
allowVolumeExpansion: true
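One thing I noticed while searching: my StorageClass uses the in-tree `kubernetes.io/storageos` provisioner, and the Kubernetes StorageClass documentation shows that provisioner taking API credentials through `adminSecretName` / `adminSecretNamespace` parameters, which my class does not set. The docs example looks roughly like this (the secret name and namespace below are just the placeholders from the docs, not something that exists in my cluster):

```yaml
# StorageOS example from the Kubernetes StorageClass docs.
# adminSecretName/adminSecretNamespace point at a Secret holding
# the StorageOS API credentials -- placeholders here, not my setup.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: default
provisioner: kubernetes.io/storageos
parameters:
  pool: default
  fsType: ext4
  adminSecretNamespace: default   # namespace of the credentials Secret
  adminSecretName: storageos-api  # name of the credentials Secret
```

Could the empty `[""/""]` in the error simply be these two unset parameters? And if so, is StorageOS even the right provisioner for a bare cluster like mine?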
-- Patryk S
kubectl
kubernetes
kubernetes-helm
persistent-volume-claims
persistent-volumes

0 Answers