I deployed a CronJob in a GKE cluster to periodically replicate secrets across namespaces (for cert-manager), but I always get the following error:
The connection to the server localhost:8080 was refused - did you specify the right host or port?
Here is my deployment:
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: certificate-replicator-cron-job
  namespace: default
spec:
  jobTemplate:
    spec:
      template:
        metadata:
          labels:
            app: default
            release: default
        spec:
          automountServiceAccountToken: false
          containers:
          - command:
            - /bin/bash
            - -c
            - for i in $(kubectl get ns -o json | jq -r ".items[].metadata.name" | grep "^bf-"); do kubectl get secret -o json --namespace default dev.botfront.cloud-staging-tls --export | jq 'del(.metadata.namespace)' | kubectl apply -n ${i} -f -; done
            image: bitnami/kubectl:latest
            name: certificate-replicator-container
          restartPolicy: OnFailure
          serviceAccountName: sa-certificate-replicator
  schedule: '* * * * *'
I also set up a role for the service account:
$ kubectl describe role certificate-replicator-role
Name:         certificate-replicator-role
Labels:       <none>
Annotations:  <none>
PolicyRule:
  Resources   Non-Resource URLs  Resource Names  Verbs
  ---------   -----------------  --------------  -----
  secrets     []                 []              [list create get]
  namespaces  []                 []              [list get]
$ kubectl describe rolebinding certificate-replicator-role-binding
Name:         certificate-replicator-role-binding
Labels:       <none>
Annotations:  <none>
Role:
  Kind:  Role
  Name:  certificate-replicator-role
Subjects:
  Kind            Name                       Namespace
  ----            ----                       ---------
  ServiceAccount  sa-certificate-replicator  default
$ kubectl describe serviceaccount sa-certificate-replicator
Name:                sa-certificate-replicator
Namespace:           default
Labels:              <none>
Annotations:         <none>
Image pull secrets:  <none>
Mountable secrets:   sa-certificate-replicator-token-ljsfb
Tokens:              sa-certificate-replicator-token-ljsfb
Events:              <none>
I understand that I could probably build another Docker image with gcloud preinstalled and authenticate with a service account key, but I'd like to stay cloud-provider agnostic and avoid having to authenticate to the cluster, since kubectl is being invoked from inside it.
Is that possible?
gcloud demands that you authenticate in some way. I used a .json key file to authenticate a Google Cloud service account every time I wanted to run kubectl remotely, but that is a pretty dirty solution.
Instead, I would recommend using the Kubernetes API directly to achieve your goal. Create a role that allows you to operate on the namespaces and secrets resources, associate it with a service account, and then make curl requests from inside the CronJob to perform the copy.
Here is an example for the default namespace.
First create a role and associate it with your service account (default in this example).
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nssc-clusterrole
  namespace: default
rules:
- apiGroups: [""]
  resources: ["namespaces", "configmaps", "secrets"]
  verbs: ["*"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: nssc-clusterrolebinding
  namespace: default
roleRef:
  name: nssc-clusterrole
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
subjects:
- name: default
  namespace: default
  kind: ServiceAccount
Second, create a secret to test.
---
apiVersion: v1
kind: Secret
metadata:
  name: secrets-test
  namespace: default
type: Opaque
stringData:
  mysecret1: abc123
  mysecret2: def456
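As a side note, `stringData` is a write-time convenience: the API server base64-encodes those values into `data`, which is what you will see when you read the Secret back. The encoding is plain base64, e.g. with the Python standard library:

```python
import base64

# The API server stores stringData values base64-encoded under .data
print(base64.b64encode(b"abc123").decode())  # YWJjMTIz
print(base64.b64encode(b"def456").decode())  # ZGVmNDU2
```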
Third, make a curl request to get your secret.
curl --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt \
  -H "Authorization: Bearer $(cat /var/run/secrets/kubernetes.io/serviceaccount/token)" \
  -H "Accept: application/json" \
  https://kubernetes.default.svc/api/v1/namespaces/default/secrets/secrets-test
You will get a JSON document with the content of your secret:
{
  "kind": "Secret",
  "apiVersion": "v1",
  "metadata": {
    "name": "secrets-test",
    "namespace": "default",
    "selfLink": "/api/v1/namespaces/default/secrets/secrets-test",
    "uid": "...",
    "resourceVersion": "...",
    "creationTimestamp": "2019-10-26T01:52:29Z",
    "annotations": {
      "kubectl.kubernetes.io/last-applied-configuration": "{...}\n"
    }
  },
  "data": {
    "mysecret1": "YWJjMTIz",
    "mysecret2": "ZGVmNDU2"
  },
  "type": "Opaque"
}
Fourth, create the secret in a new namespace by changing the JSON and making a new curl request. Also associate the service account with the role.
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: nssc-clusterrolebinding
  namespace: new-namespace
roleRef:
  name: nssc-clusterrole
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
subjects:
- name: default
  namespace: default
  kind: ServiceAccount
{
  "apiVersion": "v1",
  "data": {
    "mysecret1": "YWJjMTIz",
    "mysecret2": "ZGVmNDU2"
  },
  "kind": "Secret",
  "metadata": {
    "name": "secrets-test",
    "namespace": "new-namespace"
  },
  "type": "Opaque"
}
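Editing the JSON by hand is error-prone, so the same transformation can be scripted. A minimal Python sketch (the `build_replica` function name and the abridged sample input are illustrative, not part of the original setup):

```python
import json

def build_replica(secret, target_namespace):
    """Keep only the fields needed to re-create a Secret in another namespace."""
    return {
        "apiVersion": secret["apiVersion"],
        "kind": secret["kind"],
        "type": secret["type"],
        "data": secret.get("data", {}),
        "metadata": {
            "name": secret["metadata"]["name"],
            "namespace": target_namespace,
        },
    }

# Example: a Secret as returned by the GET request above (abridged).
fetched = {
    "kind": "Secret",
    "apiVersion": "v1",
    "metadata": {
        "name": "secrets-test",
        "namespace": "default",
        "uid": "...",
        "resourceVersion": "...",
    },
    "data": {"mysecret1": "YWJjMTIz", "mysecret2": "ZGVmNDU2"},
    "type": "Opaque",
}

replica = build_replica(fetched, "new-namespace")
print(json.dumps(replica, indent=2))  # suitable as the POST body (test.json)
```

Dropping `uid`, `resourceVersion`, and the other server-managed metadata matters: the API server rejects or mangles creates that carry another object's identity fields.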
curl -X POST -d @test.json --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt \
  -H "Authorization: Bearer $(cat /var/run/secrets/kubernetes.io/serviceaccount/token)" \
  -H "Accept: application/json" -H "Content-Type: application/json" \
  https://kubernetes.default.svc/api/v1/namespaces/new-namespace/secrets
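Putting the pieces together, the original CronJob could run this curl pipeline directly instead of kubectl. This is only a sketch: the image name is a placeholder (any image shipping curl and jq would do), and POST returns a 409 Conflict when the Secret already exists in the target namespace, so repeated runs would need a PUT to the existing object or a delete-and-recreate step.

```yaml
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: certificate-replicator-cron-job
  namespace: default
spec:
  schedule: '* * * * *'
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: sa-certificate-replicator
          restartPolicy: OnFailure
          containers:
          - name: certificate-replicator-container
            image: an-image-with-curl-and-jq   # placeholder: needs curl and jq
            command:
            - /bin/sh
            - -c
            - |
              SA=/var/run/secrets/kubernetes.io/serviceaccount
              API=https://kubernetes.default.svc
              AUTH="Authorization: Bearer $(cat $SA/token)"
              for ns in $(curl -s --cacert $SA/ca.crt -H "$AUTH" $API/api/v1/namespaces \
                          | jq -r '.items[].metadata.name' | grep '^bf-'); do
                curl -s --cacert $SA/ca.crt -H "$AUTH" \
                  $API/api/v1/namespaces/default/secrets/dev.botfront.cloud-staging-tls \
                | jq --arg ns "$ns" \
                    '{apiVersion, kind, type, data, metadata: {name: .metadata.name, namespace: $ns}}' \
                | curl -s --cacert $SA/ca.crt -H "$AUTH" -H "Content-Type: application/json" \
                    -X POST -d @- $API/api/v1/namespaces/$ns/secrets
              done
```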