Trouble installing GlusterFS on a Kubernetes cluster using Heketi

10/22/2019

I am trying to install GlusterFS on my Kubernetes cluster using Heketi. When I run gk-deploy, it reports that the pods are not found:

Using Kubernetes CLI.
Using namespace "default".
Checking for pre-existing resources...
 GlusterFS pods ... not found.
 deploy-heketi pod ... not found.
 heketi pod ... not found.
 gluster-s3 pod ... not found.
Creating initial resources ... Error from server (AlreadyExists): error when creating "/heketi/gluster-kubernetes/deploy/kube-templates/heketi-service-account.yaml": serviceaccounts "heketi-service-account" already exists
Error from server (AlreadyExists): clusterrolebindings.rbac.authorization.k8s.io "heketi-sa-view" already exists
clusterrolebinding.rbac.authorization.k8s.io/heketi-sa-view not labeled
OK
node/sapdh2wrk1 not labeled
node/sapdh2wrk2 not labeled
node/sapdh2wrk3 not labeled
daemonset.extensions/glusterfs created
Waiting for GlusterFS pods to start ... pods not found.

I have run gk-deploy more than once.
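Because the "AlreadyExists" errors point to leftovers from earlier runs, I considered cleaning up before retrying. A sketch, assuming the gluster-kubernetes deploy directory and topology file from the first attempt (gk-deploy provides an --abort flag for teardown; the resource names are the ones reported above):

# Tear down what a previous gk-deploy run created:
./gk-deploy --abort topology.json

# If anything survives, remove the leftovers reported as AlreadyExists:
kubectl delete serviceaccount heketi-service-account
kubectl delete clusterrolebinding heketi-sa-view
kubectl delete daemonset glusterfs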

I have 3 nodes in my Kubernetes cluster, and it seems the pods cannot start on any of them, though I don't understand why. The pods are created but never become ready:

kubectl get pods
NAME                      READY   STATUS              RESTARTS   AGE
glusterfs-65mc7           0/1     Running             0          16m
glusterfs-gnxms           0/1     Running             0          16m
glusterfs-htkmh           0/1     Running             0          16m
heketi-754dfc7cdf-zwpwn   0/1     ContainerCreating   0          74m
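To see why the heketi pod is stuck in ContainerCreating, its events and logs can be inspected with standard kubectl commands (the pod name is taken from the listing above):

# Events usually show why a pod is stuck in ContainerCreating,
# e.g. a volume that cannot be mounted or an image pull problem:
kubectl describe pod heketi-754dfc7cdf-zwpwn

# Container logs, if the container ever started:
kubectl logs heketi-754dfc7cdf-zwpwn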

Here is the event log of one GlusterFS pod; it ends with repeated probe warnings:

Events:
  Type     Reason     Age                 From                 Message
  Normal   Scheduled  19m                 default-scheduler    Successfully assigned default/glusterfs-65mc7 to sapdh2wrk1
  Normal   Pulled     19m                 kubelet, sapdh2wrk1  Container image "gluster/gluster-centos:latest" already present on machine
  Normal   Created    19m                 kubelet, sapdh2wrk1  Created container
  Normal   Started    19m                 kubelet, sapdh2wrk1  Started container
  Warning  Unhealthy  13m (x12 over 18m)  kubelet, sapdh2wrk1  Liveness probe failed: /usr/local/bin/status-probe.sh
failed check: systemctl -q is-active glusterd.service
  Warning  Unhealthy  3m58s (x35 over 18m)  kubelet, sapdh2wrk1  Readiness probe failed: /usr/local/bin/status-probe.sh
failed check: systemctl -q is-active glusterd.service
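Since both probes run /usr/local/bin/status-probe.sh and fail on the same check, systemctl -q is-active glusterd.service, it may help to look at glusterd inside the container directly. A sketch using the pod name from the listing above:

# Check whether glusterd is running inside the pod's container:
kubectl exec glusterfs-65mc7 -- systemctl status glusterd.service

# glusterd's own log often says why it failed to start
# (for example, a port or directory already in use by a
# glusterd instance running on the host itself):
kubectl exec glusterfs-65mc7 -- tail -n 50 /var/log/glusterfs/glusterd.log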

GlusterFS 5.8-100.1 is installed and running on every node, including the master. What could be the reason the pods don't start?

-- Nadya
glusterfs
kubernetes

0 Answers