We're running a Kubernetes cluster with a pod whose container is provided by Google (prometheus-to-sd, to be precise; we use it to get custom metrics into Google Cloud Monitoring, formerly known as Stackdriver).
The pod was running fine, until we enabled Workload Identity on the cluster.
The pod currently runs with Application Default Credentials, which used to resolve to a service account that was set up with the GCP role monitoring.writer.
When we introduced Workload Identity, the Application Default Credentials changed, and the prometheus-to-sd container started complaining about a missing permission, monitoring.timeSeries.create.
I tried setting GOOGLE_APPLICATION_CREDENTIALS to a file mounted from a GKE Secret. I verified that the file is there and that the environment variable is set (kubectl exec -it $POD /bin/sh).
I'm trying to debug what's going wrong.
I want to write some Go code to verify which GCP service account the code is running as, but all the examples show a Client, which is an HTTP client with the authorization headers already set up. See https://github.com/googleapis/google-api-go-client.
For example:
package main

import (
	"context"
	"fmt"
	"log"

	"golang.org/x/oauth2/google"
)

func main() {
	ctx := context.Background()
	creds, err := google.FindDefaultCredentials(ctx)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%+v\n", *creds)
}
This prints something along the lines:
{ProjectID:projectId TokenSource:0xc000120880 JSON:[/* big chunk of byte values, no idea what these mean */]}
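As far as I can tell, those bytes are simply the raw contents of the credentials file: when a key file is in use, creds.JSON holds its JSON, and unmarshalling it reveals client_email, i.e. the service account. When the credentials come from the metadata server instead (the usual case with Workload Identity), JSON is nil. A sketch of decoding those bytes, using a made-up key snippet where real code would pass creds.JSON:

```go
package main

import (
	"encoding/json"
	"fmt"
	"log"
)

// keyFileIdentity holds the identifying fields of a service-account
// key file; creds.JSON from FindDefaultCredentials contains exactly
// the raw bytes of such a file when a key file is in use.
type keyFileIdentity struct {
	Type        string `json:"type"`
	ProjectID   string `json:"project_id"`
	ClientEmail string `json:"client_email"`
}

// parseIdentity decodes the raw key-file bytes into those fields.
func parseIdentity(raw []byte) (keyFileIdentity, error) {
	var id keyFileIdentity
	err := json.Unmarshal(raw, &id)
	return id, err
}

func main() {
	// Hypothetical key-file contents, standing in for creds.JSON.
	sample := []byte(`{"type":"service_account","project_id":"my-project","client_email":"metrics-writer@my-project.iam.gserviceaccount.com"}`)
	id, err := parseIdentity(sample)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("type=%s project=%s email=%s\n", id.Type, id.ProjectID, id.ClientEmail)
}
```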
Now I could load the file myself, but that defeats the purpose of testing whether the default-credential selection mechanism actually works.
I've also looked at other languages, like Node.js, but the same problem shows up: the code runs with some credentials, and it's not clear which ones are being used.
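One language-independent way to see the active identity, assuming the workload runs on GKE/GCE, is to ask the metadata server directly, which is the same place the client libraries fetch tokens from under Workload Identity. A sketch, to be run inside the pod:

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

// buildMetadataRequest prepares a request for the email of the
// service account the workload is currently running as. The
// Metadata-Flavor header is required by the metadata server.
func buildMetadataRequest(baseURL string) (*http.Request, error) {
	req, err := http.NewRequest("GET",
		baseURL+"/computeMetadata/v1/instance/service-accounts/default/email", nil)
	if err != nil {
		return nil, err
	}
	req.Header.Set("Metadata-Flavor", "Google")
	return req, nil
}

func main() {
	// Inside a pod on GKE, the metadata server answers on this host;
	// under Workload Identity it reports the bound Google service account.
	req, err := buildMetadataRequest("http://metadata.google.internal")
	if err != nil {
		fmt.Println("building request failed:", err)
		return
	}
	client := &http.Client{Timeout: 2 * time.Second}
	resp, err := client.Do(req)
	if err != nil {
		fmt.Println("could not reach metadata server (not running on GCP?):", err)
		return
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		fmt.Println("reading response failed:", err)
		return
	}
	fmt.Println("active service account:", string(body))
}
```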