Able to get basic Rook/Ceph example to work, but all data apparently sits on a single node

4/18/2019

Using Rook 0.9.3 I was able to bring up a Ceph-backed directory for a MySQL database on a three-node Kubernetes cluster (one master, two workers) simply as follows:

kubectl create -f cluster/examples/kubernetes/ceph/operator.yaml
kubectl create -f cluster/examples/kubernetes/ceph/cluster.yaml
vim cluster/examples/kubernetes/ceph/storageclass.yaml # change xfs to ext4
kubectl create -f cluster/examples/kubernetes/ceph/storageclass.yaml
kubectl create -f cluster/examples/kubernetes/mysql.yaml

When I now open a shell in the wordpress-mysql-* pod, I can see that /var/lib/mysql is mounted from /dev/rbd1. If I create a random file in this directory and then delete the pod, the file is still there when a new instance of the pod comes up.
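
For reference, this is roughly how I checked (the pod name suffix is just whatever it happens to be on your cluster):

kubectl exec wordpress-mysql-<suffix> -- df -h /var/lib/mysql        # shows /dev/rbd1 as the backing device
kubectl exec wordpress-mysql-<suffix> -- touch /var/lib/mysql/canary # create a marker file
kubectl delete pod wordpress-mysql-<suffix>                          # the replacement pod still shows the file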

My first worker contains these directories in /var/lib/rook: mon-a, mon-c, mon-d, osd0, and rook-ceph. My second worker contains only one directory in /var/lib/rook: mon-b. This and other evidence (from df) suggest that Rook (and by extension Ceph) stores all of its file data (e.g. all blocks that constitute the mounted /var/lib/mysql) in /var/lib/rook/osd0, i.e. only once, on a single node.
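
If it helps, I assume the placement can also be inspected from Ceph's own point of view via the Rook toolbox (toolbox.yaml should sit in the same examples directory; the tools pod name below is a placeholder, and replicapool is the pool name I believe the example storageclass.yaml creates):

kubectl create -f cluster/examples/kubernetes/ceph/toolbox.yaml
kubectl -n rook-ceph exec -it <rook-ceph-tools-pod> -- ceph status                          # overall health and OSD count
kubectl -n rook-ceph exec -it <rook-ceph-tools-pod> -- ceph osd tree                        # which OSDs exist on which hosts
kubectl -n rook-ceph exec -it <rook-ceph-tools-pod> -- ceph osd pool get replicapool size   # replica count of the block pool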

I would have expected the blocks to be distributed (replicated) across several nodes, so that data access remains available when one node (the first worker, in my case) fails. Is this a naive expectation? If not, how can I configure Rook accordingly? Also, each worker node has a second, unformatted disk, and I would prefer that Rook/Ceph use those. How can this be accomplished?
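
For context, if I read the example storageclass.yaml correctly, it starts with a CephBlockPool definition whose replication settings look something like the sketch below (the comments and the size value are my own assumptions, not necessarily the shipped defaults):

apiVersion: ceph.rook.io/v1
kind: CephBlockPool
metadata:
  name: replicapool
  namespace: rook-ceph
spec:
  failureDomain: host   # my understanding: replicas should be placed on different hosts
  replicated:
    size: 3             # my understanding: number of copies kept of each object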

-- rookie099
ceph
kubernetes
kubernetes-rook

1 Answer

4/18/2019

To use other disks or partitions as OSDs, you should change cluster.yaml and add a nodes list under the storage: section, with useAllNodes and useAllDevices set to false so that only the listed devices are used:

storage:
  useAllNodes: false    # required when listing nodes explicitly
  useAllDevices: false  # only use the devices named below
  nodes:
  - name: "kube-node1"  # must match the node's kubernetes.io/hostname label
    devices:            # raw devices on this node to use as OSDs
    - name: "sdb"
  - name: "kube-node2"
    devices:
    - name: "sdb"
  - name: "kube-node3"
    devices:
    - name: "sdb"
-- yasin lachini
Source: StackOverflow