Is there a way to only increase a StatefulSet's replicas and never decrease them?

10/23/2019

I do not want to decrease the number of pods controlled by a StatefulSet, and I think that decreasing pods is a dangerous operation in a production environment.

So, is there some way to do this? Thanks.

-- adrian ding
kubernetes
statefulset

2 Answers

10/23/2019

I think that decreasing pods is a dangerous operation in a production environment.

I agree with you.

As Crou wrote, it is possible to do this with kubectl scale statefulsets <stateful-set-name>, but this is an imperative operation, and imperative operations are not recommended in a production environment.

In a production environment it is better to use a declarative operation: keep the number of replicas in a text file (e.g. stateful-set-name.yaml) and deploy it with kubectl apply -f <stateful-set-name>.yaml.

With this way of working it is easy to store the YAML files in Git, so you have full control of all changes and can revert or roll back to a previous configuration. When you store the declarative files in a Git repository, you can use a CI/CD solution, e.g. Jenkins or ArgoCD, to 1) validate the operation (e.g. not allow a decrease) and 2) first deploy to a test environment and verify that it works, before applying the changes to the production environment.
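The "not allow a decrease" validation could be sketched as a small pre-deploy check in the pipeline. This is only an illustration: the function name, file layout, and the grep/sed parsing are my own assumptions (a real pipeline would more likely use yq or kubectl to read spec.replicas from the manifests):

```shell
# refuse_decrease: compare spec.replicas in the currently deployed manifest
# against the proposed one, and fail if the count would go down.
# Hypothetical helper for a CI step; grep/sed parsing is a simplification.
refuse_decrease() {
  current=$(grep -m1 'replicas:' "$1" | sed 's/[^0-9]*//g')
  proposed=$(grep -m1 'replicas:' "$2" | sed 's/[^0-9]*//g')
  if [ "$proposed" -lt "$current" ]; then
    echo "refusing: replicas would decrease from $current to $proposed" >&2
    return 1
  fi
  echo "ok: replicas $current -> $proposed"
}
```

A CI job could run this check against the manifest in Git and the proposed change, and only call kubectl apply when the check passes.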

I recommend the book Kubernetes: Up & Running, 2nd edition, which describes this procedure in Chapter 18 (a new chapter in that edition).

-- Jonas
Source: StackOverflow

10/23/2019

I'm not sure if this is what you are looking for, but you can scale a StatefulSet.

Use kubectl to scale StatefulSets

First, find the StatefulSet you want to scale:

kubectl get statefulsets <stateful-set-name>

Change the number of replicas of your StatefulSet:

kubectl scale statefulsets <stateful-set-name> --replicas=<new-replicas>
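Note that kubectl scale itself happily scales in both directions. If you want an up-only workflow, a thin wrapper could read the current replica count and refuse decreases before calling kubectl. This is only a sketch: the scale_up_only name and the KUBECTL override are my own assumptions, not a built-in kubectl feature:

```shell
# scale_up_only: scale a StatefulSet only if the new count is not lower
# than the current one. KUBECTL can be overridden (e.g. for testing);
# this wrapper is a hypothetical guard, not part of kubectl.
KUBECTL="${KUBECTL:-kubectl}"
scale_up_only() {
  name="$1"; want="$2"
  # Read the currently desired replica count from the cluster.
  have=$($KUBECTL get statefulset "$name" -o jsonpath='{.spec.replicas}')
  if [ "$want" -lt "$have" ]; then
    echo "refusing: $name has $have replicas; will not scale down to $want" >&2
    return 1
  fi
  $KUBECTL scale statefulset "$name" --replicas="$want"
}
```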

To show you an example, I've deployed a 2 pod StatefulSet called web:

$ kubectl get statefulsets.apps web 
NAME   READY   AGE
web    2/2     60s
$ kubectl get pods
NAME    READY   STATUS    RESTARTS   AGE
web-0   1/1     Running   0          63s
web-1   1/1     Running   0          44s

$ kubectl describe statefulsets.apps web
Name:               web
Namespace:          default
CreationTimestamp:  Wed, 23 Oct 2019 13:46:33 +0200
Selector:           app=nginx
Labels:             <none>
Annotations:        kubectl.kubernetes.io/last-applied-configuration:
                      {"apiVersion":"apps/v1","kind":"StatefulSet","metadata":{"annotations":{},"name":"web","namespace":"default"},"spec":{"replicas":2,"select...
Replicas:           824643442664 desired | 2 total
Update Strategy:    RollingUpdate
  Partition:        824643442984
Pods Status:        2 Running / 0 Waiting / 0 Succeeded / 0 Failed
...

Now, if we scale this StatefulSet up to 5 replicas:

$ kubectl scale statefulset web --replicas=5
statefulset.apps/web scaled

$ kubectl get pods
NAME    READY   STATUS    RESTARTS   AGE
web-0   1/1     Running   0          3m41s
web-1   1/1     Running   0          3m22s
web-2   1/1     Running   0          59s
web-3   1/1     Running   0          40s
web-4   1/1     Running   0          27s

$ kubectl get statefulsets.apps web
NAME   READY   AGE
web    5/5     3m56s

There is no downtime for the pods that are already running.

-- Crou
Source: StackOverflow