CephFS failing to mount on Kubernetes

1/13/2020

I set up a Ceph cluster, mounted CephFS manually with the sudo mount -t ceph command following the official documentation, and checked the status of the cluster; there were no problems. Now I am trying to mount the CephFS on Kubernetes, but when I run kubectl create the pod gets stuck in ContainerCreating because the volume fails to mount. I have looked at many related problems/solutions online, but nothing has worked.
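
For context, the manual mount I did looks roughly like this (a sketch; the monitor address is the one that also appears in the error log below, while the mount point and admin key are placeholders):

sudo mkdir -p /mnt/cephfs
# the key is the output of `sudo ceph auth get-key client.admin` on the monitor node
sudo mount -t ceph 172.31.15.110:6789:/ /mnt/cephfs -o name=admin,secret=<admin-key>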

As reference, I am following this guide: https://medium.com/velotio-perspectives/an-innovators-guide-to-kubernetes-storage-using-ceph-a4b919f4e469

My setup consists of 5 AWS instances, as follows:

Node 1: Ceph Mon

Node 2: OSD1 + MDS

Node 3: OSD2 + K8s Master

Node 4: OSD3 + K8s Worker1

Node 5: CephFS + K8s Worker2

Is it okay to stack K8s on the same instances as Ceph? I am fairly sure that is allowed, but if it is not, please let me know.

This is the error/warning from the kubectl describe pod output:

Mounting command: systemd-run
Mounting arguments: --description=Kubernetes transient mount for /root/userone/kubelet/pods/bbf28924-3639-11ea-879d-0a6b51accf30/volumes/kubernetes.io~cephfs/pvc-4777686c-3639-11ea-879d-0a6b51accf30 --scope -- mount -t ceph -o name=kubernetes-dynamic-user-4d05a2df-3639-11ea-b2d3-5a4147fda646,secret=AQC4whxeqQ9ZERADD2nUgxxOktLE1OIGXThBmw== 172.31.15.110:6789:/pvc-volumes/kubernetes/kubernetes-dynamic-pvc-4d05a269-3639-11ea-b2d3-5a4147fda646 /root/userone/kubelet/pods/bbf28924-3639-11ea-879d-0a6b51accf30/volumes/kubernetes.io~cephfs/pvc-4777686c-3639-11ea-879d-0a6b51accf30
Output: Running scope as unit run-2382233.scope.
couldn't finalize options: -34
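
If it helps, the equivalent manual command that kubelet is attempting would be roughly the following (a sketch assembled from the arguments above; the mount point /mnt/test and the placeholder key are mine):

sudo mount -t ceph 172.31.15.110:6789:/pvc-volumes/kubernetes/kubernetes-dynamic-pvc-4d05a269-3639-11ea-b2d3-5a4147fda646 /mnt/test -o name=kubernetes-dynamic-user-4d05a2df-3639-11ea-b2d3-5a4147fda646,secret=<key-from-above>

Running this by hand on the worker node should show whether the failure is specific to kubelet or to the mount options themselves.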

These are my .yaml files:

Provisioner:

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: test-provisioner-dt
  namespace: test-dt
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update", "create"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
  - apiGroups: [""]
    resources: ["services"]
    resourceNames: ["kube-dns","coredns"]
    verbs: ["list", "get"]
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["create", "get", "delete"]
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: test-provisioner-dt
  namespace: test-dt
subjects:
  - kind: ServiceAccount
    name: test-provisioner-dt
    namespace: test-dt
roleRef:
  kind: ClusterRole
  name: test-provisioner-dt
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: test-provisioner-dt
  namespace: test-dt
rules:
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["create", "get", "delete"]
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
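
Not shown above: the ServiceAccount referenced by the ClusterRoleBinding, and the RoleBinding for the Role, which look roughly like this (a sketch; the names just follow the same test-provisioner-dt / test-dt convention used above):

apiVersion: v1
kind: ServiceAccount
metadata:
  name: test-provisioner-dt
  namespace: test-dt
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: test-provisioner-dt
  namespace: test-dt
subjects:
  - kind: ServiceAccount
    name: test-provisioner-dt
    namespace: test-dt
roleRef:
  kind: Role
  name: test-provisioner-dt
  apiGroup: rbac.authorization.k8s.io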

StorageClass:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: postgres-pv
  namespace: test-dt
provisioner: ceph.com/cephfs
parameters:
  monitors: 172.31.15.110:6789
  adminId: admin
  adminSecretName: ceph-secret-admin-dt
  adminSecretNamespace: test-dt
  claimRoot: /pvc-volumes
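
For reference, the admin secret referenced by adminSecretName was created along these lines (a sketch; the key value is a placeholder for whatever `sudo ceph auth get-key client.admin` returns on the monitor node):

kubectl create secret generic ceph-secret-admin-dt \
  --from-literal=key="<admin-key>" \
  --namespace=test-dt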

PVC:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-pvc
  namespace: test-dt
spec:
  storageClassName: postgres-pv
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 2Gi
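
The pod that gets stuck in ContainerCreating consumes the claim roughly like this (a sketch; the image and mount path are placeholders, not my actual manifest):

apiVersion: v1
kind: Pod
metadata:
  name: postgres-test
  namespace: test-dt
spec:
  containers:
    - name: postgres
      image: postgres:11
      volumeMounts:
        - name: data
          mountPath: /var/lib/postgresql/data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: postgres-pvc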

The output of kubectl get pv and kubectl get pvc shows the volume is bound and claimed with no errors, and the provisioner pod logs all show success, with no errors.
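
These are the checks I ran (standard commands, with placeholders for the pod names):

kubectl get pv
kubectl get pvc -n test-dt
kubectl logs <provisioner-pod> -n test-dt
kubectl describe pod <stuck-pod> -n test-dt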

Please help!

-- JYCH
ceph
cephfs
kubernetes

0 Answers