kubectl pods are running but persistent volume seems to be unclaimed?

8/15/2017

kubectl get pods:

NAME                         READY     STATUS    RESTARTS   AGE
wordpress-2942163230-47xzl   3/3       Running   0          20m

kubectl get pv:

NAME      CAPACITY   ACCESSMODES   RECLAIMPOLICY   STATUS      CLAIM     STORAGECLASS   REASON    AGE
pv0001    30Gi       RWX           Retain          Available                                      30m

kubectl get pvc:

NAME            STATUS    VOLUME    CAPACITY   ACCESSMODES   STORAGECLASS   AGE
task-pv-claim   Pending                                      manual         26m
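(Worth noting from the output above: the claim requests storage class manual, while pv0001's STORAGECLASS column is empty; a claim with an explicit storage class only binds to a PV declaring the same class, so that mismatch alone can keep a claim Pending. A quick way to see why binding is stuck, using the resource names from the output above:)

```shell
# The claim's events usually state why it stays Pending
kubectl describe pvc task-pv-claim

# Compare the volume's declared storage class and access modes
kubectl describe pv pv0001
```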

Why is the task-pv-claim not claimed? Here is my deployment config:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: wordpress
  labels:
    app: wordpress
spec:
  replicas: 2
  selector:
    matchLabels:
      app: wordpress
  template:
    metadata:
      labels:
        app: wordpress
    spec:
      terminationGracePeriodSeconds: 30
      containers:
        - image: eu.gcr.io/abcxyz/wordpress:deploy-1502795865
          name: wordpress
          imagePullPolicy: "Always"
          env:
            - name: WORDPRESS_HOST
              value: localhost
            - name: WORDPRESS_DB_USERNAME
              valueFrom:
                secretKeyRef:
                  name: cloudsql-db-credentials
                  key: username
          volumeMounts:
            - name: wordpress-persistent-storage
              mountPath: /var/www/html
        - image: eu.gcr.io/abcxyz/nginx:deploy-1502795865
          name: nginx
          imagePullPolicy: "Always"
          ports:
            - containerPort: 80
              name: nginx
          volumeMounts:
            - name: wordpress-persistent-storage
              mountPath: /var/www/html
              readOnly: true
        - image: gcr.io/cloudsql-docker/gce-proxy:1.09
          name: cloudsql-proxy
          command: ["/cloud_sql_proxy", "--dir=/cloudsql",
                    "-instances=abcxyz:europe-west1:wordpressdb2=tcp:3306",
                    "-credential_file=/secrets/cloudsql/credentials.json"]
          volumeMounts:
            - name: cloudsql-instance-credentials
              mountPath: /secrets/cloudsql
              readOnly: true
            - name: ssl-certs
              mountPath: /etc/ssl/certs
            - name: cloudsql
              mountPath: /cloudsql
      volumes:
        - name: wordpress-persistent-storage
          persistentVolumeClaim:
            claimName: "task-pv-claim"
        - name: cloudsql-instance-credentials
          secret:
            secretName: cloudsql-instance-credentials
        - name: ssl-certs
          hostPath:
            path: /etc/ssl/certs
        - name: cloudsql
          emptyDir: {}

If I do a kubectl describe on the pod, I get the following output:

Volumes:
  wordpress-persistent-storage:
    Type:   GCEPersistentDisk (a Persistent Disk resource in Google Compute Engine)
    PDName: wordpress-disk
    FSType: ext4
    Partition:  0
    ReadOnly:   false

That is actually the previous config... Strange, as I did run kubectl apply with the new config and also deleted the pod (so it would restart with the new config).
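(One way to confirm what the cluster is actually running, rather than what was last applied locally, is to read the live object back from the API server; this check is a suggestion, not from the original post:)

```shell
# Print the live deployment spec; the volumes section shows
# which config the cluster is really using
kubectl get deployment wordpress -o yaml
```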

-- Chris Stryczynski
kubernetes
persistent-volumes

1 Answer

8/15/2017

It seems I had to delete the whole deployment (kubectl delete deployment wordpress), not just the pod.
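(For reference, the sequence that worked here can be sketched as follows; the manifest filename wordpress.yaml is an assumption, not from the original post:)

```shell
# Remove the deployment entirely, then recreate it from the manifest
kubectl delete deployment wordpress
kubectl apply -f wordpress.yaml   # filename assumed

# Verify the claim now binds to the volume
kubectl get pvc task-pv-claim
```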

-- Chris Stryczynski
Source: StackOverflow