Containers won't start with PV attached

11/2/2018

First time using GCE; I previously used k8s in AWS with kops.

I have a PV and PVC setup, both of which are status bound.

I'm trying to get my first deployment/pod running; the YAML config is mostly copied from a working setup in AWS.

When I remove the volumes from the deployment, it starts up and enters the Running state.

With the volumes attached, it stalls at:

Start time: Not started yet
Phase:      Pending
Status:     ContainerCreating

There is nothing at all in the container logs, not a single line.

Edit: I finally found something useful in the pod events rather than the container logs.
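In case it saves someone else the hunt: the container logs stay empty because the container never starts, so the failure only surfaces as a pod event. Something like this brings them up (pod name taken from the status output further down):

kubectl describe pod proxy-deployment-64b9cdb55d-8htjf --namespace=core
# or all recent events in the namespace:
kubectl get events --namespace=core --sort-by=.metadata.creationTimestamp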

MountVolume.SetUp failed for volume "tio-pv-ssl" : mount failed: exit status 1
Mounting command: systemd-run
Mounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/c64b2284-de81-11e8-9ead-42010a9400a0/volumes/kubernetes.io~nfs/tio-pv-ssl --scope -- /home/kubernetes/containerized_mounter/mounter mount -t nfs 10.148.0.6:/ssl /var/lib/kubelet/pods/c64b2284-de81-11e8-9ead-42010a9400a0/volumes/kubernetes.io~nfs/tio-pv-ssl
Output: Running scope as unit: run-r68f0f0ac5bf54be2b47ac60d9e533712.scope
Mount failed: mount failed: exit status 32
Mounting command: chroot
Mounting arguments: [/home/kubernetes/containerized_mounter/rootfs mount -t nfs 10.148.0.6:/ssl /var/lib/kubelet/pods/c64b2284-de81-11e8-9ead-42010a9400a0/volumes/kubernetes.io~nfs/tio-pv-ssl]
Output: mount.nfs: access denied by server while mounting 10.148.0.6:/ssl

The NFS server 10.148.0.6 was set up using https://cloud.google.com/launcher/docs/single-node-fileserver. It seems to be running fine, and the /ssl folder is present under the NFS root (/data/ssl).
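To double-check what the server actually exports (assuming the NFS client utilities are installed on a machine that can reach 10.148.0.6), showmount should list the export root; on this single-node fileserver image I'd expect that to be /data rather than /ssl:

showmount -e 10.148.0.6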

Kubectl status

kubectl get pv
NAME              CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS    CLAIM                             STORAGECLASS   REASON    AGE
tio-pv-ssl        1000Gi     RWX            Retain           Bound     core/tio-pv-claim-ssl             standard                 17m

kubectl get pvc --namespace=core
NAME                 STATUS    VOLUME         CAPACITY   ACCESS MODES   STORAGECLASS   AGE
tio-pv-claim-ssl     Bound     tio-pv-ssl     1000Gi     RWX            standard       18m

kubectl get pods --namespace=core
NAME                                READY     STATUS              RESTARTS   AGE
proxy-deployment-64b9cdb55d-8htjf   0/1       ContainerCreating   0          13m

Volume YAML

---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: tio-pv-ssl
spec:
  capacity:
    storage: 1000Gi
  storageClassName: standard
  accessModes:
    - ReadWriteMany
  nfs:
    server: 10.148.0.6
    path: "/ssl"
---                   
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: tio-pv-claim-ssl
  namespace: core
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi
  volumeName: tio-pv-ssl
  storageClassName: standard

Deployment YAML

apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: proxy-deployment
spec:
  replicas: 1 
  template:
    metadata:
      labels:
        app: proxy
    spec:
      containers:
      - name: proxy-ctr
        image: asia.gcr.io/xyz/nginx-proxy:latest
        resources:
          limits:
            cpu: "500m"
            memory: 1024Mi
          requests:
            cpu: 100m
            memory: 256Mi
        ports:
        - containerPort: 80
        - containerPort: 443
        volumeMounts:
          - name: tio-ssl-storage
            mountPath: "/etc/nginx/ssl"
      volumes:
        - name: tio-ssl-storage
          persistentVolumeClaim:
            claimName: tio-pv-claim-ssl
  strategy:
    type: "RollingUpdate"
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 1
---
apiVersion: v1
kind: Service
metadata:
  name: proxyservice
  namespace: core
  labels:
    app: proxy
spec:
  ports:
  - port: 80
    name: port-http
    protocol: TCP
  - port: 443
    name: port-https
    protocol: TCP
  selector:
    app: proxy
  type: LoadBalancer
-- Mark Walker
google-kubernetes-engine
kubernetes

1 Answer

11/2/2018

Solved my own issue once I found where the logs were hidden.

path: "/ssl"

This should have been the full path on the server, not a path relative to the NFS data folder:

path: "/data/ssl"
-- Mark Walker
Source: StackOverflow