Enabling RBAC on Existing GKE Cluster

7/3/2020

We have been running a cluster on GKE for around three years. As such, legacy authorization is enabled.

The control plane has been getting updated automatically, and our node pools are running a mixture of 1.12 and 1.14.

We have an increasing number of services and are planning on incrementally adopting Istio.

We want to enable a minimal RBAC setup without causing errors or downtime for our services.
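To be concrete, by "minimal" I mean roughly a namespace-scoped role and binding per service, something like the sketch below (all names are placeholders, not our real services):

    # Illustrative: read-only access to pods for a single service account
    kubectl apply -f - <<'EOF'
    apiVersion: rbac.authorization.k8s.io/v1
    kind: Role
    metadata:
      name: pod-reader
      namespace: default
    rules:
    - apiGroups: [""]
      resources: ["pods"]
      verbs: ["get", "list", "watch"]
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      name: read-pods
      namespace: default
    subjects:
    - kind: ServiceAccount
      name: my-service
      namespace: default
    roleRef:
      kind: Role
      name: pod-reader
      apiGroup: rbac.authorization.k8s.io
    EOF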

I haven't been able to find any guides for how to accomplish this. Some people say to just enable RBAC authorization on the GKE cluster, but I assume that would take down all of our services.
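Presumably "enable RBAC authorization" means turning off legacy ABAC, i.e. something like the following (cluster name and zone are placeholders):

    # Disable legacy (ABAC) authorization, leaving RBAC as the only authorizer
    gcloud container clusters update my-cluster --zone us-central1-a \
        --no-enable-legacy-authorization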

It has also been implied that k8s can run in a hybrid ABAC/RBAC mode, but we can't tell whether our cluster is actually doing that or not!
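My understanding is that GKE's "legacy authorization" setting simply keeps ABAC enabled alongside RBAC, so a check like this should show whether we are in that hybrid mode (placeholders again):

    # Prints True if legacy ABAC is still enabled alongside RBAC
    gcloud container clusters describe my-cluster --zone us-central1-a \
        --format="value(legacyAbac.enabled)"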

Is there a good guide for migrating to RBAC for GKE?

-- dacox
google-kubernetes-engine
kubernetes
rbac

1 Answer

7/28/2020

If your cluster is regional, you won't have downtime in your applications during the upgrade. But if your cluster is single-zonal or multi-zonal, the best approach here is:

  1. Add a new node pool.
  2. Cordon and drain the old node pool so the applications are rescheduled onto the new node pool.
  3. Delete the old node pool after all pods have been migrated (a command sketch follows this list).
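A rough sketch of those steps with gcloud and kubectl (pool, cluster, and zone names are placeholders; --delete-local-data was the drain flag on 1.12/1.14-era kubectl):

    # 1. Create the replacement node pool
    gcloud container node-pools create new-pool \
        --cluster my-cluster --zone us-central1-a --num-nodes 3

    # 2a. Cordon every node in the old pool so no new pods land there
    for node in $(kubectl get nodes -l cloud.google.com/gke-nodepool=old-pool -o name); do
      kubectl cordon "$node"
    done

    # 2b. Drain each old node; evicted pods are rescheduled onto the new pool
    for node in $(kubectl get nodes -l cloud.google.com/gke-nodepool=old-pool -o name); do
      kubectl drain "$node" --ignore-daemonsets --delete-local-data
    done

    # 3. Remove the old pool once everything is running on the new one
    gcloud container node-pools delete old-pool \
        --cluster my-cluster --zone us-central1-a

Note that your services only stay up during the drain if they run multiple replicas (and ideally have PodDisruptionBudgets), since each pod is evicted and recreated on the new pool.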

This is the safest way to update your (zonal) node pool without downtime. Please read the references below to understand every step in detail.

References:

https://kubernetes.io/docs/concepts/architecture/nodes/#reliability
https://kubernetes.io/docs/reference/kubectl/cheatsheet/#interacting-with-nodes-and-cluster

-- Mr.KoopaKiller
Source: StackOverflow