I would like to move a pod from an AWS-hosted K8s cluster to GKE (Google). The problem is that on a GKE instance I don't have the AWS metadata endpoint available to assume an IAM role (obviously). But I guess I can do something similar to kube2iam to allow the pods to assume roles as if they were running inside AWS, i.e. run a DaemonSet that simulates access to the metadata endpoint for the pods. I already have a VPN set up between the clouds.
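To be concrete, this is the exchange I'd need to fake; a rough sketch in Python of the two requests the AWS SDKs make against the metadata endpoint (the role name is whatever the shim decides to hand out):

```python
import requests

# The AWS SDKs resolve credentials by querying the instance metadata
# endpoint at the link-local address 169.254.169.254. A kube2iam-style
# DaemonSet would have to intercept and answer these two requests:
BASE = "http://169.254.169.254/latest/meta-data/iam/security-credentials/"

# 1. The SDK asks which role is attached; the shim returns a role name.
role_name = requests.get(BASE, timeout=2).text

# 2. The SDK fetches temporary credentials for that role; the shim returns
#    a JSON document with AccessKeyId, SecretAccessKey, Token, Expiration.
creds = requests.get(BASE + role_name, timeout=2).json()
print(creds["Expiration"])
```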
Has anyone done this already?
I haven't tried that yet. But keep in mind that in GKE, IAM roles are associated with accounts (user accounts/service accounts), not with resources (pods/nodes).
Also, kube2iam looks more like a security solution than a compatibility solution. Even once you have the credentials from the kube2iam node, you still have the compatibility issues.
I think a better solution would be to call the AWS APIs directly and handle the authentication yourself.
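For example, a minimal sketch of calling STS directly from the GKE pod with boto3, assuming some base credentials are available (e.g. mounted from a Kubernetes Secret); the role ARN and key values are placeholders:

```python
import boto3

# Call STS explicitly instead of relying on the metadata endpoint.
sts = boto3.client(
    "sts",
    aws_access_key_id="AKIA...",      # base IAM user credentials,
    aws_secret_access_key="...",      # e.g. mounted from a K8s Secret
)
resp = sts.assume_role(
    RoleArn="arn:aws:iam::123456789012:role/my-app-role",  # hypothetical
    RoleSessionName="gke-pod",
)
creds = resp["Credentials"]

# Use the temporary credentials with any AWS client, e.g. S3:
s3 = boto3.client(
    "s3",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
```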
A newer and possibly better option for your use case is the GKE Workload Identity feature that Google announced in June 2019: https://www.google.com/amp/s/cloudblog.withgoogle.com/products/containers-kubernetes/introducing-workload-identity-better-authentication-for-your-gke-applications/amp/
It lets you bind a GCP IAM service account to a K8s service account in a given namespace. Any pod created with that K8s SA in that namespace automatically gets temporary credentials for the bound IAM SA, and the GCP SDKs authenticate automatically when run from the pod.
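For illustration, a minimal sketch of how application code in such a pod picks up those credentials via Application Default Credentials; the storage client and bucket listing are just example calls, not part of the feature itself:

```python
import google.auth
from google.cloud import storage

# With Workload Identity, Application Default Credentials inside the pod
# resolve to the bound IAM service account automatically; no JSON key
# file needs to be mounted.
credentials, project = google.auth.default()

# Any GCP client library picks up the same credentials.
client = storage.Client(credentials=credentials, project=project)
print([bucket.name for bucket in client.list_buckets()])
```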