How to share configuration files between different clusters belonging to the same project in Google Cloud Platform?

6/18/2019

I have a cluster with several workloads and different configurations on GCP's Kubernetes Engine.

I want to create a clone of this existing cluster along with cloning all the workloads in it. It turns out, you can clone a cluster but not the workloads.

So, at this point, I'm copying the deployment YAMLs of the workloads from the cluster that is working fine and using them for the newly created workloads in the newly created cluster.

When I deploy the pods of this newly created workload, the pods are stuck in the Pending state.

In the container logs, I can see that the error has something to do with Redis. The error is: Error: Redis connection to 127.0.0.1:6379 failed - connect ECONNREFUSED 127.0.0.1:6379 at TCPConnectWrap.afterConnect [as oncomplete].

Also, when I'm connected to the first cluster and run the command kubectl get secrets -n=development, it shows a bunch of secrets that my workload is supposed to use.

However, when I'm connected to the second cluster and run the same kubectl command, I see only one service-related secret.
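For illustration, here is roughly how a secret-backed Redis configuration might be wired (this is a made-up sketch, not my actual manifests). If the secret providing the Redis host is missing in the new cluster, many clients fall back to the default 127.0.0.1:6379, which would match the error above:

```yaml
# Hypothetical example - names and values are illustrative only
apiVersion: v1
kind: Secret
metadata:
  name: redis-config
  namespace: development
type: Opaque
stringData:
  REDIS_HOST: redis.development.svc.cluster.local
  REDIS_PORT: "6379"
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
  namespace: development
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: my-app:latest
        envFrom:
        - secretRef:
            name: redis-config   # this secret exists in the first cluster only
```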

My question is: how do I make the workloads in the newly created cluster use the configuration of the already existing cluster?

-- Khadar111
google-cloud-platform
google-kubernetes-engine
kubernetes

1 Answer

6/19/2019

I think there are a few things that can be done here:

  1. Try to use the kubectl config command to manage the contexts of your two clusters and switch between them, so you can copy resources from one to the other. You can find more info here and here

  2. You may also try Kubernetes Cluster Federation. But bear in mind that it is still in alpha.

  3. Remember that keeping your config in a version control system is generally a very good idea. Ideally, commit the manifests as you originally wrote them, because when you export resources from a live cluster the API server has already filled in defaults and status fields.
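For the first point, a minimal sketch of copying the missing secrets across using contexts (the context names below are placeholders; run kubectl config get-contexts to see yours):

```shell
# List the contexts kubectl knows about (one per cluster)
kubectl config get-contexts

# Dump all secrets from the development namespace of the original cluster
# (context names are placeholders, not real values)
kubectl --context=gke_my-project_us-central1-a_old-cluster \
  get secrets -n development -o yaml > secrets.yaml

# Recreate them in the new cluster
kubectl --context=gke_my-project_us-central1-a_new-cluster \
  apply -n development -f secrets.yaml
```

Note that the exported YAML will contain server-populated fields (uid, resourceVersion, creationTimestamp and so on); stripping them before applying avoids conflicts, which ties into the third point.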
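And for the third point, if you only have the live cluster to export from, you can filter out the most common server-populated fields before committing. A rough sketch (the field list is a guess at the usual offenders, not exhaustive; a proper tool such as yq would be more robust):

```shell
# Remove common server-populated fields from exported YAML read on stdin.
# Assumes a single top-level object (status: at column 0 ends the document).
strip_defaults() {
  grep -v -E '^[[:space:]]*(uid|resourceVersion|selfLink|creationTimestamp|generation):' \
    | sed '/^status:/,$d'
}

# Usage against a live cluster, e.g.:
# kubectl get deployment my-app -n development -o yaml | strip_defaults > my-app.yaml
```

kubectl also has a --export flag for this purpose, but it is deprecated, so a filter like the above (adjusted to your manifests) keeps working regardless of version.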

Please let me know if that helped.

-- OhHiMark
Source: StackOverflow