I have a Hadoop cluster running on a local cloud, and each data node has 8 disks, all of which are allocated to Hadoop. I also want to set up a Kubernetes cluster on these nodes using local storage. For that purpose, I decided to dedicate a directory on one of the disks on each data node to Kubernetes persistent volume claims. I'm aware this might create disk contention, but I will probably deal with that later.
My question is: since that directory sits on a disk that already has a file system (used by HDFS here), is it feasible to use Rook to handle storage on Kubernetes? In other words, does Rook accept such a directory on each data node?
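For context, this is roughly the kind of configuration I have in mind. It is only a sketch: it assumes an older Rook Ceph release that still supported directory-backed OSDs via `storage.directories` (that field was removed in Rook v1.3), and the path `/data/disk1/rook` is a hypothetical placeholder for the directory on the HDFS disk:

```yaml
apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph
  namespace: rook-ceph
spec:
  # Host path where Rook keeps its own configuration/state
  dataDirHostPath: /var/lib/rook
  storage:
    useAllNodes: true
    # Do not consume raw devices; the disks belong to Hadoop
    useAllDevices: false
    directories:
      # Hypothetical directory on one of the HDFS-formatted disks
      - path: /data/disk1/rook
```

Would something like this work, or does Rook require raw block devices in current versions?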
Thanks,