Setting up GCP Filestore and Kubernetes

7/3/2019

How do you mount a Filestore share into a Kubernetes pod in GCP?

I followed the documentation, but the pods are still pending.

I did:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: <some name>
spec:
  capacity:
    storage: 50Gi
  accessModes:
  - ReadWriteMany
  nfs:
    path: /
    server: <filestorage_ip with this format xx.xxx.xxx.xx>
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: <some name>
  namespace: <some name>
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""
  resources:
    requests:
      storage: 50Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: <some name>
  name: <some name>
  labels:
    app: <some name>
spec:
  replicas: 2
  selector:
    matchLabels:
      app: <some name>
  template:
    metadata:
      labels:
        app: <some name>
    spec:
      containers:
      - name: <some name>
        image: gcr.io/somepath/<some name>@sha256:<some hash>
        ports:
        - containerPort: 80 
        volumeMounts:
          - name: <some name>
            mountPath: /var/www/html
        imagePullPolicy: Always
      restartPolicy: Always
      volumes:
        - name: <some name>
          persistentVolumeClaim:
            claimName: <some name>
            readOnly: false

Running kubectl -n <some name> describe pods returns:

Events:
  Type     Reason       Age                     From                                                        Message
  ----     ------       ----                    ----                                                        -------
  Warning  FailedMount  23m (x52 over 3h21m)    kubelet, gke-<some name>-default-pool-<some hash>  Unable to mount volumes for pod "<some name>-<some hash>_<some name>(<some hash>)": timeout expired waiting for volumes to attach or mount for pod "<some name>"/"<some name>-<some hash>". list of unmounted volumes=[<some name>-persistent-storage]. list of unattached volumes=[<some name>-persistent-storage default-token-<some hash>]
  Warning  FailedMount  3m5s (x127 over 3h21m)  kubelet, gke-<some name>-default-pool-<some hash>  (combined from similar events): MountVolume.SetUp failed for volume "<some name>-storage" : mount failed: exit status 1
Mounting command: systemd-run
Mounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/<some path>/volumes/kubernetes.io~nfs/<some name>-storage --scope -- /home/kubernetes/containerized_mounter/mounter mount -t nfs <filestorage_ip with this format xx.xxx.xxx.xx>:/ /var/lib/kubelet/pods/<some hash>/volumes/kubernetes.io~nfs/<some name>-storage
Output: Running scope as unit: run-<some hash>.scope
Mount failed: mount failed: exit status 32
Mounting command: chroot
Mounting arguments: [/home/kubernetes/containerized_mounter/rootfs mount -t nfs <filestorage_ip with this format xx.xxx.xxx.xx>:/ /var/lib/kubelet/pods/<some hash>/volumes/kubernetes.io~nfs/<some name>-storage]
Output: mount.nfs: access denied by server while mounting <filestorage_ip with this format xx.xxx.xxx.xx>:/

It seems that the pod can't access the IP of the Filestore instance. The documentation says it needs to be on the same VPC:

"Authorized network * Filestore instances can only be accessed from machines on an authorized VPC network. Select the network from which you need access."

But I don't know how to add the Kubernetes cluster to that VPC.
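For reference, one way to compare the two networks from the command line (the instance, cluster, and zone names here are placeholders) is to describe both resources and check that the network/VPC they report is the same:

# VPC the Filestore instance was created on (see the "networks" field in the output)
gcloud filestore instances describe <filestore-instance> --zone=<zone>

# VPC the GKE cluster is on (see the "network" field in the output)
gcloud container clusters describe <cluster-name> --zone=<zone>

If both already show the same VPC (for example "default"), the problem is not the authorized network.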

Any suggestions?

-- Yafar Valverde
google-cloud-filestore
google-cloud-platform
kubernetes
vpc

1 Answer

7/5/2019

I found the problem.

The PersistentVolume can't be mounted with path: /. It needs the file share name from the "Fileshare properties" field that you fill in when creating the Filestore instance. Now it works with multiple pods!
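For example, if the file share were named vol1 (a placeholder here; use whatever was entered under "Fileshare properties"), the PersistentVolume's nfs section would point at that share instead of the root:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: <some name>
spec:
  capacity:
    storage: 50Gi
  accessModes:
  - ReadWriteMany
  nfs:
    # path must be the Filestore file share name, not "/"
    path: /vol1
    server: <filestorage_ip with this format xx.xxx.xxx.xx>

With that change the PVC binds and the pods can mount the share ReadWriteMany.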

-- Yafar Valverde
Source: StackOverflow