redis-ha: Unable to create the specified number of masters in the cluster

11/2/2017

I am trying to create a cluster setup of 3 masters, 3 slaves, and 3 sentinels using the command below.

helm install --set replicas.master=3 --set replicas.slave=3 stable/redis-ha

But I see that only 1 master is getting created. Helm version: 0.2.3. Git repo: https://github.com/kubernetes/charts/tree/master/stable/redis-ha

Below is the relevant output from helm:

NAME                         DESIRED  CURRENT  AGE
eloping-fox-redis-ha-master  3        1        9s

Am I missing something, or is there some issue? I have tried this multiple times, and each time only 1 master is created.

I am trying this on a Windows machine using a VM/Minikube/Docker.

PS C:\Users\rootus> helm install --set replicas.master=3 --set replicas.slave=3  stable/redis-ha
NAME:   eloping-fox
LAST DEPLOYED: Wed Nov  1 16:34:58 2017
NAMESPACE: default
STATUS: DEPLOYED

RESOURCES:
==> v1beta1/Deployment
NAME                           DESIRED  CURRENT  UP-TO-DATE  AVAILABLE  AGE
eloping-fox-redis-ha           3        3        3           0          9s
eloping-fox-redis-ha-sentinel  3        3        3           0          9s

==> v1beta1/StatefulSet
NAME                         DESIRED  CURRENT  AGE
eloping-fox-redis-ha-master  3        1        9s

==> v1/Pod(related)
NAME                                            READY  STATUS             RESTARTS  AGE
eloping-fox-redis-ha-167683871-2rhn8            0/1    ContainerCreating  0         9s
eloping-fox-redis-ha-167683871-cmjjk            0/1    ContainerCreating  0         9s
eloping-fox-redis-ha-167683871-jf4sn            0/1    ContainerCreating  0         9s
eloping-fox-redis-ha-sentinel-2596454939-9qq06  0/1    ContainerCreating  0         9s
eloping-fox-redis-ha-sentinel-2596454939-ngwcf  0/1    ContainerCreating  0         9s
eloping-fox-redis-ha-sentinel-2596454939-pwkbx  0/1    ContainerCreating  0         9s

==> v1/Service
NAME                  TYPE       CLUSTER-IP  EXTERNAL-IP  PORT(S)    AGE
redis-sentinel        ClusterIP  10.0.0.122  <none>       26379/TCP  9s
eloping-fox-redis-ha  ClusterIP  10.0.0.149  <none>       6379/TCP   9s


NOTES:
Redis cluster can be accessed via port 6379 on the following DNS name from within your cluster:
eloping-fox-redis-ha.default.svc.cluster.local

To connect to your Redis server:

1. Run a Redis pod that you can use as a client:

   kubectl exec -it eloping-fox-redis-ha-master-0 bash

2. Connect using the Redis CLI:

  redis-cli -h eloping-fox-redis-ha.default.svc.cluster.local

=================================================

-- user121618
boot2docker
docker
kubernetes-helm
redis

1 Answer

11/4/2017

Everything works as expected with the stable/redis-ha helm chart.

It seems to be an issue with your minikube environment.

By default, minikube starts a VM with 2 CPUs and 2048M of RAM.

The default CPU and memory resources from the stable/redis-ha helm chart are the following:

resources:
  master:
    requests:
      memory: 200Mi
      cpu: 100m
    limits:
      memory: 700Mi
  slave:
    requests:
      memory: 200Mi
      cpu: 100m
    limits:
      memory: 200Mi
  sentinel:
    requests:
      memory: 200Mi
      cpu: 100m
    limits:
      memory: 200Mi
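
With 3 masters, 3 slaves, and 3 sentinels each requesting 200Mi, the chart asks for about 1.8Gi of memory in requests alone (more if the master pods run additional containers), which a default 2048M minikube VM cannot provide once kube-system pods are accounted for. You can check how much memory the node actually has left for scheduling by describing it and looking at the Allocatable section and the Allocated resources summary (the node name minikube is the minikube default):

$ kubectl describe node minikube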

When you deploy the stable/redis-ha helm chart with 3 masters and 3 slaves, it creates only 1 master because of the lack of resources on your minikube VM:

$ kubectl get pod
NAME                                                   READY     STATUS    RESTARTS   AGE
melting-armadillo-redis-ha-2438719374-8ghdn            1/1       Running   0          2m
melting-armadillo-redis-ha-2438719374-rlq24            1/1       Running   0          2m
melting-armadillo-redis-ha-2438719374-zlg4p            1/1       Running   0          2m
melting-armadillo-redis-ha-master-0                    2/2       Running   0          2m
melting-armadillo-redis-ha-master-1                    0/2       Pending   0          4s
melting-armadillo-redis-ha-sentinel-1377673986-004m8   1/1       Running   0          2m
melting-armadillo-redis-ha-sentinel-1377673986-gcpj2   1/1       Running   0          2m
melting-armadillo-redis-ha-sentinel-1377673986-jh73w   1/1       Running   0          2m

The pod of the second redis master is in the Pending state because of:

  FirstSeen  LastSeen  Count  From               SubObjectPath  Type     Reason            Message
  ---------  --------  -----  ----               -------------  ----     ------            -------
  16s        1s        6      default-scheduler                 Warning  FailedScheduling  No nodes are available that match all of the following predicates:: Insufficient memory (1).
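
These events come from describing the Pending pod; you can reproduce the check yourself (the pod name is taken from the listing above):

$ kubectl describe pod melting-armadillo-redis-ha-master-1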

So you have two ways to fix your issue:

  1. Create your minikube environment with at least 4096M of RAM.
  2. Deploy the stable/redis-ha helm chart with 3 masters and 3 slaves with decreased memory requests.

The first way:

Start minikube with 4096M RAM:

$ minikube start --memory 4096
Starting local Kubernetes v1.7.5 cluster...
Starting VM...
Getting VM IP address...
Moving files into cluster...
Setting up certs...
Connecting to cluster...
Setting up kubeconfig...
Starting cluster components...
Kubectl is now configured to use the cluster.
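
Note that --memory only takes effect when the minikube VM is created; if a VM already exists, delete it first so the new size is applied:

$ minikube delete
$ minikube start --memory 4096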

Deploy the stable/redis-ha helm chart with 3 masters and 3 slaves:

$ helm install --set replicas.master=3 --set replicas.slave=3 stable/redis-ha 

Finally we get:

$ kubectl get pod
NAME                                                 READY     STATUS    RESTARTS   AGE
maudlin-ladybug-redis-ha-1801622981-brmqp            1/1       Running   0          3m
maudlin-ladybug-redis-ha-1801622981-klhr1            1/1       Running   0          3m
maudlin-ladybug-redis-ha-1801622981-mpf3j            1/1       Running   0          3m
maudlin-ladybug-redis-ha-master-0                    2/2       Running   0          3m
maudlin-ladybug-redis-ha-master-1                    2/2       Running   0          1m
maudlin-ladybug-redis-ha-master-2                    2/2       Running   0          1m
maudlin-ladybug-redis-ha-sentinel-3633913943-f8x2c   1/1       Running   0          3m
maudlin-ladybug-redis-ha-sentinel-3633913943-ltvk4   1/1       Running   0          3m
maudlin-ladybug-redis-ha-sentinel-3633913943-xwclg   1/1       Running   0          3m
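
To confirm the original symptom is gone, you can also check that the master StatefulSet now reports all three replicas as current:

$ kubectl get statefulset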

The second way:

Deploy the stable/redis-ha helm chart with 3 masters and 3 slaves and decreased memory requests:

helm install --set replicas.master=3 --set replicas.slave=3 --set resources.master.requests.memory=100Mi --set resources.slave.requests.memory=100Mi --set resources.sentinel.requests.memory=100Mi stable/redis-ha

Finally we get:

$ kubectl get pod
NAME                                                       READY     STATUS    RESTARTS   AGE
exacerbated-jellyfish-redis-ha-3444643229-085f4            1/1       Running   0          43s
exacerbated-jellyfish-redis-ha-3444643229-bl221            1/1       Running   0          43s
exacerbated-jellyfish-redis-ha-3444643229-qx62b            1/1       Running   0          43s
exacerbated-jellyfish-redis-ha-master-0                    2/2       Running   0          43s
exacerbated-jellyfish-redis-ha-master-1                    2/2       Running   0          36s
exacerbated-jellyfish-redis-ha-master-2                    2/2       Running   0          29s
exacerbated-jellyfish-redis-ha-sentinel-1441222589-czsvx   1/1       Running   0          43s
exacerbated-jellyfish-redis-ha-sentinel-1441222589-ql6n6   1/1       Running   0          43s
exacerbated-jellyfish-redis-ha-sentinel-1441222589-qql1f   1/1       Running   0          43s
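
Equivalently, the same overrides can be kept in a small values file and passed with -f instead of a long chain of --set flags. This is just a sketch; the file name values-redis-ha.yaml is an example:

# values-redis-ha.yaml -- only the values that differ from the chart defaults
replicas:
  master: 3
  slave: 3
resources:
  master:
    requests:
      memory: 100Mi
  slave:
    requests:
      memory: 100Mi
  sentinel:
    requests:
      memory: 100Mi

$ helm install -f values-redis-ha.yaml stable/redis-ha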
-- nickgryg
Source: StackOverflow