Migrating CockroachDB from a local machine to GCP Kubernetes Engine

8/15/2018
  • Followed the instructions here to create a local 3-node secure cluster
  • Got the Go example app running with the following DB connection string to connect to the secure cluster

    sql.Open("postgres", "postgresql://root@localhost:26257/dbname?sslmode=verify-full&sslrootcert=<location of ca.crt>&sslcert=<location of client.root.crt>&sslkey=<location of client.root.key>")

CockroachDB worked well locally, so I decided to move the DB (the DB solution, not the actual data) to GCP Kubernetes Engine using the instructions here

Everything worked fine: the pods were created and I could use the built-in SQL client from the Cloud console.

Now I want to use the previous example app to connect to this new cloud DB. I created a load balancer using the kubectl expose command and got a public IP to use in the code.
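For reference, a sketch of the expose step, assuming the standard config's `cockroachdb-public` service name and SQL port (your names may differ):

```shell
# Expose the CockroachDB SQL port behind a public load balancer.
# "cockroachdb-public" is the service created by the standard
# Kubernetes config; 26257 is the default SQL port.
kubectl expose service cockroachdb-public \
  --name=cockroachdb-lb \
  --type=LoadBalancer \
  --port=26257 \
  --target-port=26257

# Wait for GCP to provision the external IP, then read it off.
kubectl get service cockroachdb-lb
```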

How do I get the new ca.crt, client.root.crt, client.root.key files to use in my connection string for the DB running on GCP?

We have 5+ developers, and the idea is to have them write code on their local machines and connect to the cloud DB using the connection strings and the certificates.

Or is there a better way to let 5+ developers use a single DEV DB cluster running on GCP?

-- samstride
cockroachdb
kubernetes

1 Answer

8/15/2018

The recommended way to run against a Kubernetes CockroachDB cluster is to have your apps run in the same cluster. This makes certificate generation fairly simple. See the built-in SQL client example and its config file.

The config above uses an init container to send a CSR for client certificates and makes them available to the container (in this case just the cockroach sql client, but it could be anything else).

If you wish to run a client outside the Kubernetes cluster, the simplest way is to copy the generated certs directly from the client pod. It's recommended to use a non-root user:

  • create the user through the SQL command
  • modify the client-secure.yaml config for your new user and start the new client pod
  • approve the CSR for the client certificate
  • wait for the pod to finish initializing
  • copy the ca.crt, client.<username>.crt and client.<username>.key from the pod onto your local machine
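The steps above can be sketched as shell commands. The pod name, certs directory, and CSR name here follow the defaults in the linked client-secure.yaml and are assumptions; `myuser` is a placeholder:

```shell
# 1. Create the user from the in-cluster SQL client pod.
kubectl exec -it cockroachdb-client-secure -- \
  ./cockroach sql --certs-dir=/cockroach-certs \
  -e "CREATE USER myuser;"

# 2. Start the new client pod after editing client-secure.yaml
#    for the new user.
kubectl create -f client-secure.yaml

# 3. Approve the CSR sent by the init container. The CSR name is
#    an assumption; list pending CSRs with `kubectl get csr`.
kubectl certificate approve default.client.myuser

# 4. Once the pod is Running, copy the certs to the local machine.
kubectl cp cockroachdb-client-secure:/cockroach-certs/ca.crt ./ca.crt
kubectl cp cockroachdb-client-secure:/cockroach-certs/client.myuser.crt ./client.myuser.crt
kubectl cp cockroachdb-client-secure:/cockroach-certs/client.myuser.key ./client.myuser.key
```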

Note: the public DNS name or IP address of your Kubernetes cluster is most likely not included in the node certificates. You either need to modify the list of hostnames/addresses before bringing up the nodes, or change your connection URL to sslmode=verify-ca (see client connection parameters for details).
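For example, a connection URL using verify-ca against the load balancer (the IP, database name, and `myuser` are placeholders):

```shell
# verify-ca validates the server certificate against the CA but
# skips hostname verification, so the load balancer's IP does not
# have to appear in the node certificates.
cockroach sql --url "postgresql://myuser@<load-balancer-ip>:26257/dbname?sslmode=verify-ca&sslrootcert=ca.crt&sslcert=client.myuser.crt&sslkey=client.myuser.key"
```

The same URL works in the Go app's sql.Open call in place of the verify-full one.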

Alternatively, you could use password authentication in which case you would only need the CA certificate.
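A sketch of the password route, again with `myuser` and the pod name as placeholders:

```shell
# Create a user with a password from the in-cluster SQL client.
kubectl exec -it cockroachdb-client-secure -- \
  ./cockroach sql --certs-dir=/cockroach-certs \
  -e "CREATE USER myuser WITH PASSWORD 'changeme';"

# Clients outside the cluster then need only the CA certificate;
# the password is prompted for (or embedded in the URL).
cockroach sql --url "postgresql://myuser@<load-balancer-ip>:26257/dbname?sslmode=verify-ca&sslrootcert=ca.crt"
```

This avoids distributing per-developer client certs: each of the 5+ developers gets a SQL user and password, plus the shared ca.crt.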

-- Marc
Source: StackOverflow