Redis ha helm chart error - NOREPLICAS Not enough good replicas to write

3/26/2019

I am trying to set up the redis-ha Helm chart on my local Kubernetes (Docker for Windows).

The Helm values file I am using is:

## Configure resource requests and limits
## ref: http://kubernetes.io/docs/user-guide/compute-resources/
##
image:
  repository: redis
  tag: 5.0.3-alpine
  pullPolicy: IfNotPresent
## replicas number for each component
replicas: 3

## Custom labels for the redis pod
labels: {}

## Pods Service Account
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/
serviceAccount:
  ## Specifies whether a ServiceAccount should be created
  ##
  create: false
  ## The name of the ServiceAccount to use.
  ## If not set and create is true, a name is generated using the redis-ha.fullname template
  # name:

## Role Based Access
## Ref: https://kubernetes.io/docs/admin/authorization/rbac/
##

rbac:
  create: false

## Redis specific configuration options
redis:
  port: 6379
  masterGroupName: mymaster
  config:
    ## Additional redis conf options can be added below
    ## For all available options see http://download.redis.io/redis-stable/redis.conf
    min-slaves-to-write: 1
    min-slaves-max-lag: 5   # Value in seconds
    maxmemory: "0"       # Max memory to use for each redis instance. Default is unlimited.
    maxmemory-policy: "volatile-lru"  # Max memory policy to use for each redis instance. Default is volatile-lru.
    # Determines if scheduled RDB backups are created. Default is false.
    # Please note that local (on-disk) RDBs will still be created when re-syncing with a new slave. The only way to prevent this is to enable diskless replication.
    save: "900 1"
    # When enabled, directly sends the RDB over the wire to slaves, without using the disk as intermediate storage. Default is false.
    repl-diskless-sync: "yes"
    rdbcompression: "yes"
    rdbchecksum: "yes"

  ## Custom redis.conf files used to override default settings. If this file is
  ## specified then the redis.config above will be ignored.
  # customConfig: |-
      # Define configuration here

  resources: 
    requests:
      memory: 200Mi
      cpu: 100m
    limits:
      memory: 700Mi
      cpu: 250m

## Sentinel specific configuration options
sentinel:
  port: 26379
  quorum: 2
  config:
    ## Additional sentinel conf options can be added below. Only options that
    ## are expressed in the format similar to 'sentinel xxx mymaster xxx' will
    ## be properly templated.
    ## For available options see http://download.redis.io/redis-stable/sentinel.conf
    down-after-milliseconds: 10000
    ## Failover timeout value in milliseconds
    failover-timeout: 180000
    parallel-syncs: 5

  ## Custom sentinel.conf files used to override default settings. If this file is
  ## specified then the sentinel.config above will be ignored.
  # customConfig: |-
      # Define configuration here

  resources: 
    requests:
      memory: 200Mi
      cpu: 100m
    limits:
      memory: 200Mi
      cpu: 250m

securityContext:
  runAsUser: 1000
  fsGroup: 1000
  runAsNonRoot: true

## Node labels, affinity, and tolerations for pod assignment
## ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector
## ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#taints-and-tolerations-beta-feature
## ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
affinity: {}

# Prometheus exporter specific configuration options
exporter:
  enabled: false
  image: oliver006/redis_exporter
  tag: v0.31.0
  pullPolicy: IfNotPresent

  # prometheus port & scrape path
  port: 9121
  scrapePath: /metrics

  # cpu/memory resource limits/requests
  resources: {}

  # Additional args for redis exporter
  extraArgs: {}

podDisruptionBudget: {}
  # maxUnavailable: 1
  # minAvailable: 1

## Configures redis with AUTH (requirepass & masterauth conf params)
auth: false
# redisPassword:

## Use existing secret containing "auth" key (ignores redisPassword)
# existingSecret:

persistentVolume:
  enabled: true
  ## redis-ha data Persistent Volume Storage Class
  ## If defined, storageClassName: <storageClass>
  ## If set to "-", storageClassName: "", which disables dynamic provisioning
  ## If undefined (the default) or set to null, no storageClassName spec is
  ##   set, choosing the default provisioner.  (gp2 on AWS, standard on
  ##   GKE, AWS & OpenStack)
  ##
  # storageClass: "-"
  accessModes:
    - ReadWriteOnce
  size: 1Gi
  annotations: {}
init:
  resources: {}

# To use a hostPath for data, set persistentVolume.enabled to false
# and define hostPath.path.
# Warning: this might overwrite existing folders on the host system!
hostPath:
  ## path is evaluated as template so placeholders are replaced
  # path: "/data/{{ .Release.Name }}"

  # if chown is true, an init-container with root permissions is launched to
  # change the owner of the hostPath folder to the user defined in the
  # security context
  chown: true

redis-ha deploys correctly, and when I run kubectl get all:

NAME                       READY     STATUS    RESTARTS   AGE
pod/rc-redis-ha-server-0   2/2       Running   0          1h
pod/rc-redis-ha-server-1   2/2       Running   0          1h
pod/rc-redis-ha-server-2   2/2       Running   0          1h

NAME                             TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)              AGE
service/kubernetes               ClusterIP   10.96.0.1        <none>        443/TCP              23d
service/rc-redis-ha              ClusterIP   None             <none>        6379/TCP,26379/TCP   1h
service/rc-redis-ha-announce-0   ClusterIP   10.105.187.154   <none>        6379/TCP,26379/TCP   1h
service/rc-redis-ha-announce-1   ClusterIP   10.107.36.58     <none>        6379/TCP,26379/TCP   1h
service/rc-redis-ha-announce-2   ClusterIP   10.98.38.214     <none>        6379/TCP,26379/TCP   1h

NAME                                  DESIRED   CURRENT   AGE
statefulset.apps/rc-redis-ha-server   3         3         1h

I try to access redis-ha from a Java application that uses the Lettuce driver to connect to Redis. Sample Java code:

package io.c12.bala.lettuce;

import io.lettuce.core.RedisClient;
import io.lettuce.core.api.StatefulRedisConnection;
import io.lettuce.core.api.sync.RedisCommands;

import java.util.logging.Logger;


public class RedisClusterConnect {

    private static final Logger logger = Logger.getLogger(RedisClusterConnect.class.getName());
    public static void main(String[] args) {
        logger.info("Starting test");

        // Syntax: redis-sentinel://[password@]host[:port][,host2[:port2]][/databaseNumber]#sentinelMasterId
        RedisClient redisClient = RedisClient.create("redis-sentinel://rc-redis-ha:26379/0#mymaster");
        StatefulRedisConnection<String, String> connection = redisClient.connect();


        RedisCommands<String, String> command = connection.sync();
        command.set("Hello", "World");
        logger.info("Ran set command successfully");
        logger.info("Value from Redis - " + command.get("Hello"));

        connection.close();
        redisClient.shutdown();
    }
}

I packaged the application as a runnable JAR, built a container image, and deployed it to the same Kubernetes cluster where Redis is running. The application throws the following error:

Exception in thread "main" io.lettuce.core.RedisCommandExecutionException: NOREPLICAS Not enough good replicas to write.
        at io.lettuce.core.ExceptionFactory.createExecutionException(ExceptionFactory.java:135)
        at io.lettuce.core.LettuceFutures.awaitOrCancel(LettuceFutures.java:122)
        at io.lettuce.core.FutureSyncInvocationHandler.handleInvocation(FutureSyncInvocationHandler.java:69)
        at io.lettuce.core.internal.AbstractInvocationHandler.invoke(AbstractInvocationHandler.java:80)
        at com.sun.proxy.$Proxy0.set(Unknown Source)
        at io.c12.bala.lettuce.RedisClusterConnect.main(RedisClusterConnect.java:22)
Caused by: io.lettuce.core.RedisCommandExecutionException: NOREPLICAS Not enough good replicas to write.
        at io.lettuce.core.ExceptionFactory.createExecutionException(ExceptionFactory.java:135)
        at io.lettuce.core.ExceptionFactory.createExecutionException(ExceptionFactory.java:108)
        at io.lettuce.core.protocol.AsyncCommand.completeResult(AsyncCommand.java:120)
        at io.lettuce.core.protocol.AsyncCommand.complete(AsyncCommand.java:111)
        at io.lettuce.core.protocol.CommandHandler.complete(CommandHandler.java:646)
        at io.lettuce.core.protocol.CommandHandler.decode(CommandHandler.java:604)
        at io.lettuce.core.protocol.CommandHandler.channelRead(CommandHandler.java:556)

I tried the Jedis driver and a Spring Boot application as well, and got the same error from the redis-ha cluster.

** UPDATE ** When I run the info command inside redis-cli, I get:

connected_slaves:2
min_slaves_good_slaves:0

It seems the slaves are not being counted as "good" replicas. After switching to min-slaves-to-write: 0, I am able to read from and write to the Redis cluster.
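For anyone debugging the same symptom, a sketch of how to inspect these values at runtime via kubectl (the pod name rc-redis-ha-server-0 is taken from the output above; check which pod is currently the master first, and note that a runtime CONFIG SET is not persisted across pod restarts):

```shell
# Inspect the replication safety setting and replica state on the master
kubectl exec rc-redis-ha-server-0 -c redis -- redis-cli config get min-slaves-to-write
kubectl exec rc-redis-ha-server-0 -c redis -- redis-cli info replication

# Temporarily relax the requirement at runtime (lost on restart)
kubectl exec rc-redis-ha-server-0 -c redis -- redis-cli config set min-slaves-to-write 0
```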

Any help on this is appreciated.

-- Bala
kubernetes
kubernetes-helm
redis

3 Answers

5/16/2020

If you are deploying this Helm chart locally on your computer, you only have one node available. If you install the chart with --set hardAntiAffinity=false, it will schedule all the required replica pods on the same node, so they start up correctly and you no longer get that error. The hardAntiAffinity value has a documented default of true:

Whether the Redis server pods should be forced to run on separate nodes.
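A sketch of the install command for a single-node cluster (Helm 2 style, matching the commands elsewhere in this thread; adjust for Helm 3 by adding a release name):

```shell
# Single-node clusters (Docker for Windows, minikube, kind) cannot satisfy the
# default pod anti-affinity, so allow the replicas to share a node:
helm install stable/redis-ha --set hardAntiAffinity=false
```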

-- bkaid
Source: StackOverflow

3/28/2019

When I deployed the Helm chart with the same values to a Kubernetes cluster running on AWS, it worked fine.

The issue seems specific to Kubernetes on Docker for Windows.

-- Bala
Source: StackOverflow

1/14/2020

It seems you have to edit the redis-ha-configmap ConfigMap and set min-slaves-to-write 0.

After deleting all the Redis pods (so the change is applied), it works like a charm:

helm install stable/redis-ha
kubectl edit cm redis-ha-configmap # change min-slaves-to-write from 1 to 0
kubectl delete pod redis-ha-0
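Alternatively (an untested assumption, based on the values file shown in the question), the same change can be made persistent through the chart values instead of hand-editing the ConfigMap:

```yaml
redis:
  config:
    min-slaves-to-write: 0
```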
-- webofmars
Source: StackOverflow