Kubernetes with Ceph RBD

6/15/2018

I want to use Ceph RBD with Kubernetes.

I have a Kubernetes 1.9.2 cluster and a Ceph 12.2.5 cluster, and I have installed the ceph-common package on my k8s nodes.
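
To double-check the client side, each node can be verified with something like this (a sketch; run it on every node that may schedule RBD-backed pods):

rbd --version                      # client version from ceph-common
modprobe rbd && lsmod | grep rbd   # the kernel rbd module must be loadable for the in-tree plugin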

[root@docker09 manifest]# ceph auth get-key client.admin|base64
QVFEcmxmcGFmZXlZQ2hBQVFJWkExR0pXcS9RcXV4QmgvV3ZFWkE9PQ==
[root@docker09 manifest]# cat ceph-secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: ceph-secret
data:
  key: QVFEcmxmcGFmZXlZQ2hBQVFJWkExR0pXcS9RcXV4QmgvV3ZFWkE9PQ==

kubectl create -f ceph-secret.yaml 
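
To confirm the secret decodes back to the raw key (the in-tree rbd plugin expects data.key to be the bare key, base64-encoded once, which is what the pipeline above produces), a quick sketch:

kubectl get secret ceph-secret -o jsonpath='{.data.key}' | base64 -d
# should print the same string as: ceph auth get-key client.admin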

Then I define a PersistentVolume backed by an existing RBD image:

[root@docker09 manifest]# cat ceph-pv.yaml |grep -v "#"
apiVersion: v1
kind: PersistentVolume
metadata:
  name: ceph-pv
spec:
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteOnce
  rbd:
    monitors:
      - 10.211.121.61:6789
      - 10.211.121.62:6789
      - 10.211.121.63:6789
    pool: rbd
    image: ceph-image
    user: admin
    secretRef:
      name: ceph-secret
    fsType: ext4
    readOnly: false
  persistentVolumeReclaimPolicy: Recycle
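
The PV is then created and can be inspected (a sketch):

kubectl create -f ceph-pv.yaml
kubectl describe pv ceph-pv   # confirm monitors, pool, image and secretRef look right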

[root@docker09 manifest]# rbd info  ceph-image
rbd image 'ceph-image':
    size 2048 MB in 512 objects
    order 22 (4096 kB objects)
    block_name_prefix: rbd_data.341d374b0dc51
    format: 2
    features: layering
    flags:
    create_timestamp: Fri Jun 15 15:58:04 2018
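
For reference, an image restricted to the layering feature (the feature set most distro kernels can map) could be created like this; a sketch, sized to match the 2Gi PV above:

rbd create ceph-image --size 2048 --image-feature layering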

[root@docker09 manifest]# cat task-claim.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: ceph-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
[root@docker09 manifest]# kubectl get pv,pvc
NAME         CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS    CLAIM                STORAGECLASS   REASON    AGE
pv/ceph-pv   2Gi        RWO            Recycle          Bound     default/ceph-claim                            54m
pv/host      10Gi       RWO            Retain           Bound     default/hostv                                 24d

NAME             STATUS    VOLUME    CAPACITY   ACCESS MODES   STORAGECLASS   AGE
pvc/ceph-claim   Bound     ceph-pv   2Gi        RWO                           53m
pvc/hostv        Bound     host      10Gi       RWO                           24d

I create a pod that uses this PVC:

[root@docker09 manifest]#  cat ceph-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: ceph-pod2
spec:
  containers:
  - name: ceph-busybox
    image: busybox
    command: ["sleep", "60000"]
    volumeMounts:
    - name: ceph-vol1
      mountPath: /usr/share/busybox
      readOnly: false
  volumes:
  - name: ceph-vol1
    persistentVolumeClaim:
      claimName: ceph-claim

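For completeness, the pod is created from that manifest along these lines (a sketch):

kubectl create -f ceph-pod.yaml
kubectl get pod ceph-pod2 -o wide -w   # watch its status
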
[root@docker09 manifest]# kubectl get pod ceph-pod2 -o wide
NAME        READY     STATUS              RESTARTS   AGE       IP        NODE
ceph-pod2   0/1       ContainerCreating   0          14m       <none>    docker10

The pod stays stuck in ContainerCreating status.

[root@docker09 manifest]# kubectl describe  pod ceph-pod2
Events:
  Type     Reason                 Age               From               Message
  ----     ------                 ----              ----               -------
  Normal   Scheduled              15m               default-scheduler  Successfully assigned ceph-pod2 to docker10
  Normal   SuccessfulMountVolume  15m               kubelet, docker10  MountVolume.SetUp succeeded for volume "default-token-85rc7"
  Warning  FailedMount            1m (x6 over 12m)  kubelet, docker10  Unable to mount volumes for pod "ceph-pod2_default(56af9345-7073-11e8-aeb6-1c98ec29cbec)": timeout expired waiting for volumes to attach/mount for pod "default"/"ceph-pod2". list of unattached/unmounted volumes=[ceph-vol1]

I don't know why this is happening and need your help. Best regards.

-- Damien
docker
kubernetes

1 Answer

7/11/2018

rbd -v (included in ceph-common) should return the same version as your cluster. You should also check the kubelet messages on the node, since kubelet is the component that actually maps and mounts the RBD image.
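
For example, on the node the pod was scheduled to (docker10 here), something along these lines; a sketch assuming the admin keyring sits at its default path:

rbd --version                         # compare with 'ceph -v' on the cluster
journalctl -u kubelet | grep -i rbd   # kubelet performs the rbd map, errors show up here
# optionally, try the map by hand to see the raw error:
rbd map rbd/ceph-image --id admin --keyring /etc/ceph/ceph.client.admin.keyring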

-- Darx Kies
Source: StackOverflow