Unable to access Google Cloud Storage from Google Kubernetes Engine using default service account and google cloud libraries

7/16/2019

I wrote an application in Go, running on Google Kubernetes Engine, that uploads images. Everything else works fine, but I keep running into problems when I try to write an image to Google Cloud Storage.

Here is the Go function that actually calls the Cloud Storage API:

func putImage(imageURL string, image multipart.File) bool {
    fmt.Println("Putting into image location : " + imageURL)

    contextBackground := context.Background()
    storageClient, err := storage.NewClient(contextBackground)
    if err != nil {
        fmt.Println("No client.")
        return false
    }

    bucket := storageClient.Bucket("mytestbucketname")
    bucketWriter := bucket.Object(imageURL).NewWriter(contextBackground)
    bucketWriter.ContentType = "image/jpeg"
    bucketWriter.CacheControl = "public, max-age=0"

    // io.Copy only fills the client's buffer; the actual upload
    // (and any server-side error) happens when the writer is closed.
    size, err := io.Copy(bucketWriter, image)
    if err != nil {
        fmt.Println("failed to put image")
        return false
    }

    fmt.Println(size)
    fmt.Println("Successfully put image")
    bucketWriter.Close() // the error returned by Close is discarded here
    return true
}

The above function always returns true and size is always greater than 0. However, when I check the bucket, there is actually nothing inside. So I researched around and realized that the default service account only has read permission to Cloud Storage. This is very strange: in that case the function should return false, or size should be 0, or I should at least be getting a permission-denied error.

The official cloud.google.com/go/storage API documentation says the Bucket() function returns a BucketHandle without performing any network operations, which would explain why I was not getting a permission-denied error. So I decided to check whether I can actually read anything through the Client or Bucket, just to see if the read permission works. I added the following code:

attr, err := bucket.Attrs(contextBackground)
if err != nil {
    fmt.Println(err.Error())
    return false // attr is nil on error, so bail out here
}
fmt.Println("bucket name : " + attr.Name)

And at this point I started getting fatal errors that shut down my application. The error I got when trying to retrieve the bucket's attributes is this:

Get https://www.googleapis.com/storage/v1/b/mytestbucketname?alt=json&prettyPrint=false&projection=full: oauth2: cannot fetch token: Post https://oauth2.googleapis.com/token: x509: certificate signed by unknown authority

So I thought I needed to add a CA certificate to my image. But that did not sound logical to me: if my image is running in Google Kubernetes Engine and accessing my own Google Cloud Storage, and it already has a default service account with read permission, why would I need a certificate? I created a new cluster with version 1.12.8-gke.10, made sure "issue client certificate" was disabled, and confirmed I had read permission to Storage, but I still got the same error. I also added this line to my Dockerfile and still got the same error:

RUN apk --no-cache add ca-certificates && update-ca-certificates

I've been at it for two days and I'm running out of ideas. My question is: what am I doing wrong that keeps giving me the "x509: certificate signed by unknown authority" error when trying to access the bucket's attributes from Kubernetes Engine with the default permissions? Technically, getting bucket attributes is just a read operation, which I should be able to do with the default permissions, right? If anyone has any ideas or has ever run into the same problem, please help! Thanks!

-- andy
go
google-api
google-cloud-storage
google-kubernetes-engine
service-accounts

2 Answers

7/16/2019

If you are using the default cluster service account, you need to make sure the cluster has sufficient scopes to write to storage. By default, GKE clusters only have the read-only storage scope, which blocks write attempts to GCS buckets.

gke-default:
https://www.googleapis.com/auth/devstorage.read_only
https://www.googleapis.com/auth/logging.write
https://www.googleapis.com/auth/monitoring
https://www.googleapis.com/auth/service.management.readonly
https://www.googleapis.com/auth/servicecontrol
https://www.googleapis.com/auth/trace.append

Without the write scope enabled, regardless of the credentials, you won't be able to write to buckets.
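The scopes on existing nodes can be inspected, and a node pool with write access created, using gcloud. A sketch with placeholder cluster, zone, and pool names; note that scopes are fixed at node-creation time, so an existing node pool has to be recreated rather than edited:

```shell
# show the OAuth scopes attached to the existing nodes
gcloud container clusters describe my-cluster \
    --zone us-central1-a \
    --format="value(nodeConfig.oauthScopes)"

# create a node pool whose nodes can write to GCS
# (storage-rw is the alias for devstorage.read_write)
gcloud container node-pools create storage-rw-pool \
    --cluster my-cluster --zone us-central1-a \
    --scopes gke-default,storage-rw
```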

Note: You can use the access token only for scopes that you specified when you created the instance. For example, if the instance has been granted only the https://www.googleapis.com/auth/storage-full scope for Google Cloud Storage, then it can't use the access token to make a request to BigQuery.

If you want to bypass scopes, create a custom service account for the application, download its JSON key file, and mount the credentials into your pod so your code can use them. This is the recommended method to authenticate your application against Google APIs.
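As a sketch of that approach, assuming a Kubernetes secret named gcs-key created from the downloaded key file (kubectl create secret generic gcs-key --from-file=key.json=./key.json), the credentials can be mounted and exposed via GOOGLE_APPLICATION_CREDENTIALS, which storage.NewClient picks up automatically. All names and the image below are placeholders:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: uploader
spec:
  containers:
  - name: app
    image: gcr.io/my-project/uploader:latest   # placeholder image
    env:
    # the Go client libraries read this variable automatically
    - name: GOOGLE_APPLICATION_CREDENTIALS
      value: /var/secrets/google/key.json
    volumeMounts:
    - name: gcs-key
      mountPath: /var/secrets/google
      readOnly: true
  volumes:
  - name: gcs-key
    secret:
      secretName: gcs-key
```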

-- Patrick W
Source: StackOverflow

7/18/2019

So I finally figured it out; I'm leaving this here in case anyone else runs into this problem. It was a careless mistake on my part, but after reading this question:

Cannot exchange AccessToken from Google API inside Docker container

I was able to narrow it down to a certificate problem. It was actually because I had not installed the ca-certificates package properly. Since I was using a multi-stage build in my Dockerfile, I had placed this line in the wrong stage:

RUN apk --no-cache add ca-certificates && update-ca-certificates

After moving it into the final stage, it worked!
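For anyone else hitting this, a minimal multi-stage Dockerfile sketch of the correct placement (image tags and paths are illustrative): the package has to be installed in the final stage, the one the container actually runs, because nothing installed in the builder stage survives into the final image unless explicitly copied.

```dockerfile
# builder stage: only compiles the binary; installing certificates
# here would have no effect on the final image
FROM golang:1.12-alpine AS builder
WORKDIR /src
COPY . .
RUN go build -o /app .

# final stage: this is the image that runs, so the CA bundle
# must be installed here
FROM alpine:3.10
RUN apk --no-cache add ca-certificates && update-ca-certificates
COPY --from=builder /app /app
ENTRYPOINT ["/app"]
```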

-- andy
Source: StackOverflow