Redis Pods are not able to join Redis cluster

1/20/2020

I want to create a Redis cluster of 6 nodes in Kubernetes. I am running Kubernetes using Minikube.

Below is my StatefulSet for creating the 6-node cluster.

apiVersion: apps/v1
kind: StatefulSet
metadata:
  generation: 1
  labels:
    app: demo-app
  name: demo-app
  namespace: default
spec:
  podManagementPolicy: OrderedReady
  replicas: 6
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: demo-app
  serviceName: ""
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: demo-app
    spec:
      containers:
      - command:
        - redis-server
        - --port 6379
        - --cluster-enabled yes
        - --cluster-node-timeout 5000
        - --appendonly yes
        - --appendfilename appendonly-6379.aof
        image: redis:latest
        imagePullPolicy: Always
        name: demo-app
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - name: redis-pvc
          mountPath: /var
      - image: nginx:1.12
        imagePullPolicy: IfNotPresent
        name: redis-exporter
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
  updateStrategy:
    rollingUpdate:
      partition: 0
    type: RollingUpdate

  volumeClaimTemplates:
  - metadata:
      name: redis-pvc
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 1Gi
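
This is a sketch of how the manifest might be applied; the filename redis-statefulset.yaml is just an assumed name for the manifest above, not something from the original setup:

 kubectl apply -f redis-statefulset.yaml
 kubectl rollout status statefulset/demo-app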

After the StatefulSet is created, I run the redis-cli cluster create command from inside one of the pods.
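
The pod IPs needed for that command can be listed with something like the following, using the app=demo-app label from the StatefulSet (the -o wide output includes each pod's IP):

 kubectl get pods -l app=demo-app -o wide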

 redis-cli --cluster create 172.17.0.4:6379 172.17.0.5:6379  172.17.0.6:6379  172.17.0.7:6379  172.17.0.8:6379  172.17.0.9:6379 --cluster-replicas 1

These are the IP addresses of the six pods. With this I am able to start my cluster. But once I manually delete a single pod using

kubectl delete pod <podname>

For example, deleting the Redis node with IP address 172.17.0.6:6379, which was a master. After deleting it, the cluster state is:

127.0.0.1:6379> cluster nodes
1c8c238c58d99181018b37af44c2ebfe049e4564 172.17.0.9:6379@16379 slave 4b75e95772887e76eb3d0c9518d13def097ce5fd 0 1579496695000 6 connected
96e6be88d29d847aed9111410cb0f790db068d0e 172.17.0.8:6379@16379 slave 0db23edf54bb57f7db1e2c9eb182ce956229d16e 0 1579496696596 5 connected
c8be98b16a8fa7c1c9c2d43109abafefc803d345 172.17.0.7:6379@16379 master - 0 1579496695991 7 connected 10923-16383
0db23edf54bb57f7db1e2c9eb182ce956229d16e 172.17.0.4:6379@16379 myself,master - 0 1579496694000 1 connected 0-5460
4daae1051e6a72f2ffc0675649e9e2dad9430fc4 172.17.0.6:6379@16379 master,fail - 1579496680825 1579496679000 3 disconnected
4b75e95772887e76eb3d0c9518d13def097ce5fd 172.17.0.5:6379@16379 master - 0 1579496695000 2 connected 5461-10922

and after some time it changes to:

127.0.0.1:6379> cluster nodes
1c8c238c58d99181018b37af44c2ebfe049e4564 172.17.0.9:6379@16379 slave 4b75e95772887e76eb3d0c9518d13def097ce5fd 0 1579496697529 6 connected
96e6be88d29d847aed9111410cb0f790db068d0e 172.17.0.8:6379@16379 slave 0db23edf54bb57f7db1e2c9eb182ce956229d16e 0 1579496696596 5 connected
c8be98b16a8fa7c1c9c2d43109abafefc803d345 172.17.0.7:6379@16379 master - 0 1579496698031 7 connected 10923-16383
0db23edf54bb57f7db1e2c9eb182ce956229d16e 172.17.0.4:6379@16379 myself,master - 0 1579496697000 1 connected 0-5460
4daae1051e6a72f2ffc0675649e9e2dad9430fc4 :0@0 master,fail,noaddr - 1579496680825 1579496679000 3 disconnected
4b75e95772887e76eb3d0c9518d13def097ce5fd 172.17.0.5:6379@16379 master - 0 1579496697028 2 connected 5461-10922

Redis Cluster provides automatic failover, but the restarted pod's Redis instance is unable to rejoin the cluster automatically. Why is that?

Or should I join that pod back into the cluster manually?
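
If a manual rejoin were needed, it would look roughly like the sketch below. This is only an illustration of the standard redis-cli --cluster add-node syntax: 172.17.0.10 stands in for the restarted pod's new IP, 172.17.0.4 is one of the surviving nodes, and the master ID is taken from the cluster nodes output above:

 redis-cli --cluster add-node 172.17.0.10:6379 172.17.0.4:6379 --cluster-slave --cluster-master-id c8be98b16a8fa7c1c9c2d43109abafefc803d345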

-- Supermacy
kubernetes
kubernetes-statefulset
minikube
redis

2 Answers

1/24/2020

I have solved this issue and created a Redis cluster using the StatefulSet YAML below. The problem was that I was not keeping the cluster config file on the persistent volume. The cluster config file contains the locations of the other nodes. Now the cluster config file persists across pod restarts.

Since Redis Cluster works on a gossip protocol, it only needs one active node to get the configuration of the whole cluster.

The final configuration of the StatefulSet is:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  generation: 1
  labels:
    app: demo-app
  name: demo-app
  namespace: default
spec:
  podManagementPolicy: OrderedReady
  replicas: 6 
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: demo-app
  serviceName: ""
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: demo-app
    spec:
      containers:
      - command:
        - redis-server
        - --port 6379
        - --cluster-enabled yes
        - --cluster-node-timeout 5000
        - --appendonly yes
        - --cluster-config-file /var/cluster-config.conf
        - --appendfilename appendonly-6379.aof
        image: redis
        imagePullPolicy: Always
        name: demo-app
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - name: redis-pvc
          mountPath: /var
      - image: nginx:1.12
        imagePullPolicy: IfNotPresent
        name: redis-exporter
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
  updateStrategy:
    rollingUpdate:
      partition: 0
    type: RollingUpdate

  volumeClaimTemplates:
  - metadata:
      name: redis-pvc
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 1Gi

The only change I made is adding the --cluster-config-file /var/cluster-config.conf argument when starting redis-server, so that the cluster config file is written to the mounted volume.
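
To check that the fix behaves as described, something along these lines can be run (demo-app-0 is the first pod name the StatefulSet generates, and demo-app is the Redis container's name from the manifest):

 kubectl exec demo-app-0 -c demo-app -- cat /var/cluster-config.conf
 kubectl exec demo-app-0 -c demo-app -- redis-cli cluster nodes

After a pod restart, the node should reappear in cluster nodes under its original node ID instead of showing up as a brand-new empty node.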

-- Supermacy
Source: StackOverflow

1/20/2020

I'd highly recommend considering an HA option for this using Sentinel instead of the cluster command in Redis. Sentinel is designed to do exactly this.

Overall, in my experience, the architecture of Redis doesn't sit well inside Kubernetes networking. Telling Redis instances where your slaves are, especially programmatically, can be a nightmare (as you've seen with having to manually trigger cluster creation), especially when you consider that pod-to-pod communication does not conform to the Kubernetes networking hierarchy.

I'm not confident about how the cluster command will act inside Kubernetes, especially given the ephemeral nature of pods.

I actually maintain a Helm chart that tries to circumvent these problems. It also provides a mechanism for broadcasting your Redis externally from the cluster. You can find it here.

To expand on a couple of scenarios explaining why this won't work:

  1. How would you tell your application to connect to the new master if you lose the original master? You would need some abstraction layer querying the nodes individually, asking which one is the master, which is more work than is really needed with Sentinel in play, since Sentinel was built to circumvent this exact problem (see the sketch after this list).

  2. If you delete a slave, since membership is bound via the IP, you will lose that slave entirely: a new veth will be created and bound to a new IP in the CIDR defined for your cluster, so 6 nodes become 5. You could get around this by defining your nodes with a /24 address on the CIDR, but then you're basically deploying a node per Redis instance, which seems to defeat the point of an orchestrator.
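
As a rough illustration of the Sentinel approach from point 1, an application (or a human) can ask any Sentinel for the current master address. The host, port, and master name below are assumptions, with mymaster being the conventional default name:

 redis-cli -h <sentinel-host> -p 26379 sentinel get-master-addr-by-name mymaster

This returns the IP and port of the current master even after a failover, so the application never has to track master identity itself.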

-- Dandy
Source: StackOverflow