How to get Kubernetes secret from one cluster to apply to another?

8/18/2019

For my e2e tests I'm spinning up a separate cluster into which I'd like to import my production TLS certificate. I'm having trouble switching the context between the two clusters (exporting/getting from one and importing/applying (in)to the other) because the cluster doesn't seem to be visible.

I extracted an MVCE using GitLab CI and the following .gitlab-ci.yml, where I create a secret for demonstration purposes:

stages:
  - main
  - tear-down

main:
  image: google/cloud-sdk
  stage: main
  script:
    - echo "$GOOGLE_KEY" > key.json
    - gcloud config set project secret-transfer
    - gcloud auth activate-service-account --key-file key.json --project secret-transfer
    - gcloud config set compute/zone us-central1-a
    - gcloud container clusters create secret-transfer-1-$CI_COMMIT_SHORT_SHA-$CI_PIPELINE_IID --project secret-transfer --machine-type=f1-micro
    - kubectl create secret generic secret-1 --from-literal=key=value
    - gcloud container clusters create secret-transfer-2-$CI_COMMIT_SHORT_SHA-$CI_PIPELINE_IID --project secret-transfer --machine-type=f1-micro
    - gcloud config set container/use_client_certificate True
    - gcloud config set container/cluster secret-transfer-1-$CI_COMMIT_SHORT_SHA-$CI_PIPELINE_IID
    - kubectl get secret secret-1 --cluster=secret-transfer-1-$CI_COMMIT_SHORT_SHA-$CI_PIPELINE_IID -o yaml > secret-1.yml
    - gcloud config set container/cluster secret-transfer-2-$CI_COMMIT_SHORT_SHA-$CI_PIPELINE_IID
    - kubectl apply --cluster=secret-transfer-2-$CI_COMMIT_SHORT_SHA-$CI_PIPELINE_IID -f secret-1.yml

tear-down:
  image: google/cloud-sdk
  stage: tear-down
  when: always
  script:
    - echo "$GOOGLE_KEY" > key.json
    - gcloud config set project secret-transfer
    - gcloud auth activate-service-account --key-file key.json
    - gcloud config set compute/zone us-central1-a
    - gcloud container clusters delete --quiet secret-transfer-1-$CI_COMMIT_SHORT_SHA-$CI_PIPELINE_IID
    - gcloud container clusters delete --quiet secret-transfer-2-$CI_COMMIT_SHORT_SHA-$CI_PIPELINE_IID

I added the gcloud config set container/cluster secret-transfer-[1/2]-$CI_COMMIT_SHORT_SHA-$CI_PIPELINE_IID statements before the kubectl statements in order to avoid error: no server found for cluster "secret-transfer-1-...-...", but it doesn't change the outcome.

I created a project secret-transfer, activated the Kubernetes API, and obtained a JSON key for the Compute Engine service account, which I provide in the environment variable GOOGLE_KEY. The output after checkout is:

$ echo "$GOOGLE_KEY" > key.json

$ gcloud config set project secret-transfer
Updated property [core/project].

$ gcloud auth activate-service-account --key-file key.json --project secret-transfer
Activated service account credentials for: [131478687181-compute@developer.gserviceaccount.com]

$ gcloud config set compute/zone us-central1-a
Updated property [compute/zone].

$ gcloud container clusters create secret-transfer-1-$CI_COMMIT_SHORT_SHA-$CI_PIPELINE_IID --project secret-transfer --machine-type=f1-micro
WARNING: In June 2019, node auto-upgrade will be enabled by default for newly created clusters and node pools. To disable it, use the `--no-enable-autoupgrade` flag.
WARNING: Starting in 1.12, new clusters will have basic authentication disabled by default. Basic authentication can be enabled (or disabled) manually using the `--[no-]enable-basic-auth` flag.
WARNING: Starting in 1.12, new clusters will not have a client certificate issued. You can manually enable (or disable) the issuance of the client certificate using the `--[no-]issue-client-certificate` flag.
WARNING: Currently VPC-native is not the default mode during cluster creation. In the future, this will become the default mode and can be disabled using `--no-enable-ip-alias` flag. Use `--[no-]enable-ip-alias` flag to suppress this warning.
WARNING: Starting in 1.12, default node pools in new clusters will have their legacy Compute Engine instance metadata endpoints disabled by default. To create a cluster with legacy instance metadata endpoints disabled in the default node pool, run `clusters create` with the flag `--metadata disable-legacy-endpoints=true`.
WARNING: Your Pod address range (`--cluster-ipv4-cidr`) can accommodate at most 1008 node(s). 
This will enable the autorepair feature for nodes. Please see https://cloud.google.com/kubernetes-engine/docs/node-auto-repair for more information on node autorepairs.
Creating cluster secret-transfer-1-9b219ea8-9 in us-central1-a...
...done.
Created [https://container.googleapis.com/v1/projects/secret-transfer/zones/us-central1-a/clusters/secret-transfer-1-9b219ea8-9].
To inspect the contents of your cluster, go to: https://console.cloud.google.com/kubernetes/workload_/gcloud/us-central1-a/secret-transfer-1-9b219ea8-9?project=secret-transfer
kubeconfig entry generated for secret-transfer-1-9b219ea8-9.
NAME                          LOCATION       MASTER_VERSION  MASTER_IP      MACHINE_TYPE  NODE_VERSION   NUM_NODES  STATUS
secret-transfer-1-9b219ea8-9  us-central1-a  1.12.8-gke.10   34.68.118.165  f1-micro      1.12.8-gke.10  3          RUNNING

$ kubectl create secret generic secret-1 --from-literal=key=value
secret/secret-1 created

$ gcloud container clusters create secret-transfer-2-$CI_COMMIT_SHORT_SHA-$CI_PIPELINE_IID --project secret-transfer --machine-type=f1-micro
WARNING: In June 2019, node auto-upgrade will be enabled by default for newly created clusters and node pools. To disable it, use the `--no-enable-autoupgrade` flag.
WARNING: Starting in 1.12, new clusters will have basic authentication disabled by default. Basic authentication can be enabled (or disabled) manually using the `--[no-]enable-basic-auth` flag.
WARNING: Starting in 1.12, new clusters will not have a client certificate issued. You can manually enable (or disable) the issuance of the client certificate using the `--[no-]issue-client-certificate` flag.
WARNING: Currently VPC-native is not the default mode during cluster creation. In the future, this will become the default mode and can be disabled using `--no-enable-ip-alias` flag. Use `--[no-]enable-ip-alias` flag to suppress this warning.
WARNING: Starting in 1.12, default node pools in new clusters will have their legacy Compute Engine instance metadata endpoints disabled by default. To create a cluster with legacy instance metadata endpoints disabled in the default node pool, run `clusters create` with the flag `--metadata disable-legacy-endpoints=true`.
WARNING: Your Pod address range (`--cluster-ipv4-cidr`) can accommodate at most 1008 node(s). 
This will enable the autorepair feature for nodes. Please see https://cloud.google.com/kubernetes-engine/docs/node-auto-repair for more information on node autorepairs.
Creating cluster secret-transfer-2-9b219ea8-9 in us-central1-a...
...done.
Created [https://container.googleapis.com/v1/projects/secret-transfer/zones/us-central1-a/clusters/secret-transfer-2-9b219ea8-9].
To inspect the contents of your cluster, go to: https://console.cloud.google.com/kubernetes/workload_/gcloud/us-central1-a/secret-transfer-2-9b219ea8-9?project=secret-transfer
kubeconfig entry generated for secret-transfer-2-9b219ea8-9.
NAME                          LOCATION       MASTER_VERSION  MASTER_IP      MACHINE_TYPE  NODE_VERSION   NUM_NODES  STATUS
secret-transfer-2-9b219ea8-9  us-central1-a  1.12.8-gke.10   104.198.37.21  f1-micro      1.12.8-gke.10  3          RUNNING

$ gcloud config set container/use_client_certificate True
Updated property [container/use_client_certificate].

$ gcloud config set container/cluster secret-transfer-1-$CI_COMMIT_SHORT_SHA-$CI_PIPELINE_IID
Updated property [container/cluster].

$ kubectl get secret secret-1 --cluster=secret-transfer-1-$CI_COMMIT_SHORT_SHA-$CI_PIPELINE_IID -o yaml > secret-1.yml
error: no server found for cluster "secret-transfer-1-9b219ea8-9"

I'm expecting kubectl get secret to work because both clusters exist and the --cluster argument points to the right cluster.

-- Karl Richter
gcloud
google-kubernetes-engine
kubernetes
kubernetes-secrets

2 Answers

8/18/2019

You probably mean to use --context rather than --cluster. The context sets both the cluster and the user in use. Additionally, the context and cluster (and user) names created by GKE are not just the cluster identifier; they follow the pattern gke_[project]_[region]_[name].
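
For instance, a minimal sketch of assembling such a context name (the project, zone, and cluster values below are taken from this question's transcript and would need to match your own):

```shell
# Assemble the context name gcloud generates: gke_[project]_[zone]_[name].
# These values come from the question's output; substitute your own.
PROJECT=secret-transfer
ZONE=us-central1-a
CLUSTER=secret-transfer-1-9b219ea8-9

CONTEXT="gke_${PROJECT}_${ZONE}_${CLUSTER}"
echo "$CONTEXT"
# → gke_secret-transfer_us-central1-a_secret-transfer-1-9b219ea8-9

# Then target that cluster explicitly, e.g.:
# kubectl --context "$CONTEXT" get secret secret-1 -o yaml
```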

-- coderanger
Source: StackOverflow

8/18/2019

Generally, gcloud commands are used to manage gcloud resources and to handle how you authenticate with gcloud, whereas kubectl commands affect how you interact with Kubernetes clusters, whether or not they happen to be running on GCP and/or were created in GKE. As such, I would avoid doing:

$ gcloud config set container/use_client_certificate True
Updated property [container/use_client_certificate].

$ gcloud config set container/cluster \
  secret-transfer-1-$CI_COMMIT_SHORT_SHA-$CI_PIPELINE_IID
Updated property [container/cluster].

It's not doing what you probably think it's doing (namely, changing anything about how kubectl targets clusters), and might mess with how future gcloud commands work.

Another consequence of gcloud and kubectl being separate, and in particular of kubectl not knowing intimately about your gcloud settings, is that the cluster name from the gcloud perspective is not the same as from the kubectl perspective. When you do things like gcloud config set compute/zone, kubectl knows nothing about it. kubectl therefore has to identify clusters uniquely on its own, since clusters may share a name but live in different projects and zones, and may not even be in GKE (think minikube or another cloud provider). That's why kubectl --cluster=<gke-cluster-name> <some_command> is not going to work, and it's why you're seeing the error message:

error: no server found for cluster "secret-transfer-1-9b219ea8-9"

As @coderanger pointed out, the cluster name that gets generated in your ~/.kube/config file after doing gcloud container clusters create ... has a more complex name, which currently has a pattern something like gke_[project]_[region]_[name].

So you could run commands with kubectl --cluster gke_[project]_[region]_[name] ... (or kubectl --context gke_[project]_[region]_[name] ..., which would be more idiomatic, although both happen to work in this case since you're using the same service account for both clusters). However, that requires knowing how gcloud generates these strings for context and cluster names.
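
Rather than assembling those strings by hand, you can read the context names straight out of the kubeconfig. A sketch (the sample file below only mimics the entries gcloud container clusters create writes; on a real machine, kubectl config get-contexts -o name gives the same list directly):

```shell
# Hypothetical kubeconfig excerpt mimicking what gcloud writes.
cat > kubeconfig-sample.yml <<'EOF'
contexts:
- context:
    cluster: gke_secret-transfer_us-central1-a_secret-transfer-1-9b219ea8-9
  name: gke_secret-transfer_us-central1-a_secret-transfer-1-9b219ea8-9
- context:
    cluster: gke_secret-transfer_us-central1-a_secret-transfer-2-9b219ea8-9
  name: gke_secret-transfer_us-central1-a_secret-transfer-2-9b219ea8-9
EOF

# Extract the context names, one per line, as
# `kubectl config get-contexts -o name` would print them.
grep -E '^  name: ' kubeconfig-sample.yml | sed 's/^  name: //'
```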

An alternative would be to do something like:

$ KUBECONFIG=~/.kube/config1 gcloud container clusters create \
  secret-transfer-1-$CI_COMMIT_SHORT_SHA-$CI_PIPELINE_IID \
  --project secret-transfer --machine-type=f1-micro

$ KUBECONFIG=~/.kube/config1 kubectl create secret generic secret-1 --from-literal=key=value

$ KUBECONFIG=~/.kube/config2 gcloud container clusters create \
  secret-transfer-2-$CI_COMMIT_SHORT_SHA-$CI_PIPELINE_IID \
  --project secret-transfer --machine-type=f1-micro

$ KUBECONFIG=~/.kube/config1 kubectl get secret secret-1 -o yaml > secret-1.yml

$ KUBECONFIG=~/.kube/config2 kubectl apply -f secret-1.yml

By having separate KUBECONFIG files that you control, you don't have to guess any strings. Setting the KUBECONFIG variable when creating a cluster makes gcloud create that file and write into it the credentials kubectl needs to access that cluster. Setting the KUBECONFIG environment variable when running a kubectl command ensures kubectl uses the context set in that particular file.
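
One caveat worth adding (not part of the original answer): the YAML emitted by kubectl get secret ... -o yaml carries cluster-specific metadata such as uid, resourceVersion, and creationTimestamp, and stripping those keeps the manifest portable and avoids spurious conflicts in the target cluster. A rough sketch, using a hypothetical exported secret:

```shell
# Hypothetical exported secret; a real one would come from
# `kubectl get secret secret-1 -o yaml > secret-1.yml`.
cat > secret-1.yml <<'EOF'
apiVersion: v1
kind: Secret
metadata:
  name: secret-1
  namespace: default
  uid: 00000000-0000-0000-0000-000000000000
  resourceVersion: "42"
  creationTimestamp: "2019-08-18T00:00:00Z"
type: Opaque
data:
  key: dmFsdWU=
EOF

# Drop the cluster-specific metadata lines before applying elsewhere.
grep -vE '^[[:space:]]*(uid|resourceVersion|creationTimestamp):' secret-1.yml > secret-1-portable.yml
cat secret-1-portable.yml
```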

-- Amit Kumar Gupta
Source: StackOverflow