Kubernetes "Retain" PVs are not getting auto-mounted on restart

5/15/2020

Hello Kubernetes Champs, we recently moved to the "Retain" reclaim policy for our volumes. Everything was working fine until the OS crashed. After the restart, all pods are running successfully except the ones that use PVCs and PVs. I checked the PV and it seems to be fine:

NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                 STORAGECLASS   AGE
pvc-df5b3877-762e-4ce1-b45f-7f98cba4a4ab   1Gi        RWO            Retain           Bound    default/pvc-svcmaps   example-nfs    21d

The PVC also looks good:

NAME          STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
pvc-svcmaps   Bound    pvc-df5b3877-762e-4ce1-b45f-7f98cba4a4ab   1Gi        RWO            example-nfs    21d

But the pod is stuck in Init:Unknown and not restarting:

svcmaps-949f87f58-jg5wz   0/1   Init:Unknown   0   6h48m

Below is the error in the pod events when I run k describe po svcmaps-949f87f58-jg5wz:

  Type     Reason       Age                      From           Message
  ----     ------       ----                     ----           -------
  Warning  FailedMount  3m24s (x194 over 6h39m)  kubelet, site  Unable to mount volumes for pod "svcmaps-949f87f58-jg5wz_default(cf9ebf48-43cc-4f21-bcd6-480732c5fa43)": timeout expired waiting for volumes to attach or mount for pod "default"/"svcmaps-949f87f58-jg5wz". list of unmounted volumes=[svcmaps-storage]. list of unattached volumes=[svcmaps-storage default-token-ppqgg]
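For context while diagnosing: the FailedMount timeout above usually means kubelet cannot reach the NFS endpoint recorded in the PV spec. A provisioned NFS PV looks roughly like the sketch below (the server address and export path here are hypothetical examples, not taken from my cluster; the real values can be checked with kubectl get pv pvc-df5b3877-762e-4ce1-b45f-7f98cba4a4ab -o yaml):

```yaml
# Illustrative sketch of an NFS-backed PV spec (values are hypothetical).
# kubelet mounts spec.nfs.server:spec.nfs.path at pod start, so if that
# address became stale after the restart, the mount times out exactly as
# in the events above.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pvc-df5b3877-762e-4ce1-b45f-7f98cba4a4ab
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: example-nfs
  mountOptions:
    - vers=4.1
  nfs:
    server: 10.43.0.20                                      # hypothetical address
    path: /export/pvc-df5b3877-762e-4ce1-b45f-7f98cba4a4ab  # hypothetical path
```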

Is there something I am missing when using "reclaimPolicy: Retain"?

Below is the storage YAML we are using:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-provisioner
---
kind: Service
apiVersion: v1
metadata:
  name: nfs-provisioner
  labels:
    app: nfs-provisioner
spec:
  ports:
    - name: nfs
      port: 2049
    - name: mountd
      port: 20048
    - name: rpcbind
      port: 111
    - name: rpcbind-udp
      port: 111
      protocol: UDP
  selector:
    app: nfs-provisioner
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: nfs-provisioner
spec:
  selector:
    matchLabels:
      app: nfs-provisioner
  replicas: 1
  strategy:
    type: Recreate 
  template:
    metadata:
      labels:
        app: nfs-provisioner
    spec:
      serviceAccount: nfs-provisioner
      containers:
        - name: nfs-provisioner
          image: quay.io/kubernetes_incubator/nfs-provisioner:v2.2.2
          ports:
            - name: nfs
              containerPort: 2049
            - name: mountd
              containerPort: 20048
            - name: rpcbind
              containerPort: 111
            - name: rpcbind-udp
              containerPort: 111
              protocol: UDP
          securityContext:
            capabilities:
              add:
                - DAC_READ_SEARCH
                - SYS_RESOURCE
          args:
            - "-provisioner=example.com/nfs"
          env:
            - name: POD_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
            - name: SERVICE_NAME
              value: nfs-provisioner
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          imagePullPolicy: "IfNotPresent"
          volumeMounts:
            - name: export-volume
              mountPath: /export
      volumes:
        - name: export-volume
          hostPath:
            path: /srv
---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  annotations:
    storageclass.beta.kubernetes.io/is-default-class: "true"
  name: example-nfs
reclaimPolicy: Retain
provisioner: example.com/nfs
mountOptions:
  - vers=4.1
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  creationTimestamp: null
  name: cluster-admin-0
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: nfs-provisioner
  namespace: default
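For reference, the pod side only refers to the claim by name; the StorageClass's reclaimPolicy: Retain is copied into each provisioned PV and governs what happens to the PV after the PVC is deleted, not how it is mounted at pod start. A minimal sketch (the pod name and image below are placeholders, not our actual deployment spec):

```yaml
# Minimal sketch of a pod consuming the claim (names mirror the
# pvc-svcmaps claim above; the image is a placeholder).
apiVersion: v1
kind: Pod
metadata:
  name: svcmaps-example
spec:
  containers:
    - name: app
      image: busybox:1.32   # placeholder image
      command: ["sleep", "3600"]
      volumeMounts:
        - name: svcmaps-storage
          mountPath: /data
  volumes:
    - name: svcmaps-storage
      persistentVolumeClaim:
        claimName: pvc-svcmaps   # kubelet resolves PVC -> PV -> NFS server/path at mount time
```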
-- Sumer
docker
k3s
kubectl
kubernetes

0 Answers