I'm trying to set up my Kubernetes cluster with a Ceph cluster using a StorageClass, so that a new PV is created automatically inside the Ceph cluster for each PVC.
But it doesn't work. I've tried a lot, read a lot of documentation and tutorials, and can't figure out what went wrong.
I've created two secrets, one for the Ceph admin user and one for another user, kube, which I created with the following commands to grant access to a Ceph OSD pool.
Creating the pool:

sudo ceph osd pool create kube 128

Creating the user:

sudo ceph auth get-or-create client.kube mon 'allow r' \
  osd 'allow class-read object_prefix rbd_children, allow rwx pool=kube' \
  -o /etc/ceph/ceph.client.kube.keyring
After that I exported both keys and converted them to Base64 with:

sudo ceph auth get-key client.admin | base64

and

sudo ceph auth get-key client.kube | base64
I used those values inside my secret.yaml to create the Kubernetes secrets.
apiVersion: v1
kind: Secret
type: "kubernetes.io/rbd"
metadata:
  name: ceph-secret
data:
  key: QVFCb3NxMVpiMVBITkJBQU5ucEEwOEZvM1JlWHBCNytvRmxIZmc9PQo=
And I created another one named ceph-user-secret with the key of the kube user.
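It looks essentially the same; here is a minimal sketch of it (the key value is just a placeholder for the Base64 output of ceph auth get-key client.kube):

apiVersion: v1
kind: Secret
type: "kubernetes.io/rbd"
metadata:
  name: ceph-user-secret
data:
  key: PLACEHOLDER_BASE64_KEY_OF_CLIENT_KUBE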
Then I created a StorageClass to use the Ceph cluster:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: kubernetes.io/rbd
parameters:
  monitors: publicIpofCephMon1:6789,publicIpofCephMon2:6789
  adminId: admin
  adminSecretName: ceph-secret
  pool: kube
  userId: kube
  userSecretName: ceph-kube-secret
  fsType: ext4
  imageFormat: "2"
  imageFeatures: "layering"
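For reference, these are the commands I use to check that the class is registered and marked as default; the claim below does pick it up, so that part seems to work:

kubectl get storageclass
kubectl describe storageclass standard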
To test my setup, I created a PVC:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-eng
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
But it gets stuck in the pending state:
# kubectl get pvc
NAME      STATUS    VOLUME    CAPACITY   ACCESSMODES   STORAGECLASS   AGE
pvc-eng   Pending                                      standard       25m
Also, no images are created inside the Ceph kube pool. Do you have any recommendations on how to debug this problem?
I tried installing the ceph-common Ubuntu package on all Kubernetes nodes, and I switched the kube-controller-manager Docker image to an image provided by AT&T that includes the ceph-common package:
https://github.com/att-comdev/dockerfiles/tree/master/kube-controller-manager
The network is fine: I can reach my Ceph cluster from inside a pod and from every Kubernetes host.
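In case it helps, these are the commands I have been checking so far; the kube-controller-manager pod name is specific to my setup and will differ elsewhere:

# events attached to the claim usually show why provisioning fails
kubectl describe pvc pvc-eng

# recent events in the claim's namespace
kubectl get events --sort-by=.metadata.creationTimestamp

# logs of the controller-manager, where the in-tree rbd provisioner runs
kubectl -n kube-system logs kube-controller-manager-master1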
I would be glad if anyone has any ideas!
As an expansion on the accepted answer:
RBD is a remote block device, i.e. an external hard drive, like iSCSI. The filesystem is interpreted by the client container, so it can only be written to by a single user or corruption will occur.
CephFS is a network-aware filesystem similar to NFS or SMB/CIFS, which allows multiple writers to different files. The filesystem is interpreted by the Ceph server, so it can accept writes from multiple clients.
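To make that concrete, a statically provisioned CephFS PV can legitimately offer ReadWriteMany. A rough sketch, reusing the monitor address and secret name from the question purely for illustration:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: cephfs-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  cephfs:
    monitors:
      - publicIpofCephMon1:6789
    user: admin
    secretRef:
      name: ceph-secret
    readOnly: false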
You must use the access mode ReadWriteOnce. As you can see in the Persistent Volumes documentation (https://kubernetes.io/docs/concepts/storage/persistent-volumes/), RBD devices do not support the ReadWriteMany mode. Choose a different volume plugin (CephFS, for example) if you need to read and write data from a PV with several pods.
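For example, the claim from the question with the access mode changed to the one RBD supports:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-eng
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi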