I have a K8s deployment that mounts a secret into /etc/google-cloud-account, containing the Google auth JSON file that the application uses. When I try to run the deployment, I get the following errors in the pod's events:
1m 1m 1 kubelet, gke-development-cluster-default-pool-17f531d7-sj4x spec.containers{api} Normal Created Created container with docker id 36b85ec8415a; Security:[seccomp=unconfined]
1m 1m 1 kubelet, gke-development-cluster-default-pool-17f531d7-sj4x spec.containers{api} Warning Failed Failed to start container with docker id 36b85ec8415a with error: Error response from daemon: rpc error: code = 2 desc = "oci runtime error: could not synchronise with container process: mkdir /var/lib/docker/overlay/b4aa81194f72ccb54d88680e766a921ea26f7a4df0f4b32d6030123896b2b203/merged/etc/google-cloud-account: read-only file system"
1m 1m 1 kubelet, gke-development-cluster-default-pool-17f531d7-sj4x Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "api" with RunContainerError: "runContainer: Error response from daemon: rpc error: code = 2 desc = \"oci runtime error: could not synchronise with container process: mkdir /var/lib/docker/overlay/b4aa81194f72ccb54d88680e766a921ea26f7a4df0f4b32d6030123896b2b203/merged/etc/google-cloud-account: read-only file system\""
2m 13s 11 kubelet, gke-development-cluster-default-pool-17f531d7-sj4x spec.containers{api} Warning BackOff Back-off restarting failed docker container
The deployment in question looks like:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  # ...
spec:
  replicas: {{ .Values.api.replicaCount }}
  template:
    # ...
    spec:
      containers:
        - name: {{ .Values.api.name }}
          # ...
          volumeMounts:
            - name: google-cloud-account
              mountPath: /etc/google-cloud-account
      volumes:
        - name: google-cloud-account
          secret:
            secretName: {{ template "fullname" . }}
            items:
              - key: google-cloud-credentials
                path: credentials.json
I don't understand why /etc in the container would be a read-only file system, or how to change that.
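For reference, the application is expected to pick the credentials file up from that mount path. If it uses Google's application default credentials, the container spec would also include something like the following (this env entry is not part of the manifest above, just an illustration of how the path is consumed):

          env:
            - name: GOOGLE_APPLICATION_CREDENTIALS
              value: /etc/google-cloud-account/credentials.json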
An alternative to Dave Long's answer is to use projected volumes:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  # ...
spec:
  replicas: {{ .Values.api.replicaCount }}
  template:
    # ...
    spec:
      containers:
        - name: {{ .Values.api.name }}
          # ...
          volumeMounts:
            - name: config
              mountPath: /etc
      volumes:
        - name: config
          projected:
            sources:
              - secret:
                  name: {{ template "fullname" . }}
                  items:
                    - key: google-cloud-credentials
                      path: google-cloud-account/credentials.json
              - configMap:
                  name: {{ template "fullname" . }}
                  items:
                    - key: odbc.ini
                      path: odbc.ini
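With the two sources combined into a single projected volume, both files come from one mount, so neither volume needs a mount point created inside the other. Assuming the mount at /etc shown above, the resulting layout inside the container is:

  /etc/google-cloud-account/credentials.json   # from the google-cloud-credentials secret key
  /etc/odbc.ini                                # from the odbc.ini configMap key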
As it turns out, the error was caused by another volume mount. I left it out of the snippet above, but my deployment actually looked more like the following:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  # ...
spec:
  replicas: {{ .Values.api.replicaCount }}
  template:
    # ...
    spec:
      containers:
        - name: {{ .Values.api.name }}
          # ...
          volumeMounts:
            - name: google-cloud-account
              mountPath: /etc/google-cloud-account
            - name: odbc
              mountPath: /etc
      volumes:
        - name: google-cloud-account
          secret:
            secretName: {{ template "fullname" . }}
            items:
              - key: google-cloud-credentials
                path: credentials.json
        - name: odbc
          configMap:
            name: {{ template "fullname" . }}
            items:
              - key: odbc.ini
                path: odbc.ini
Mounting odbc took over the entire /etc directory (the configMap volume is mounted read-only, so the google-cloud-account mount point could not be created inside it). To fix it, I changed the odbc volumeMount to:
            - name: odbc
              mountPath: /etc/odbc.ini
              subPath: odbc.ini
This left everything else in /etc intact.
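For completeness, the resulting volumeMounts section with the subPath fix applied looks like this (the volumes section stays the same as above):

          volumeMounts:
            - name: google-cloud-account
              mountPath: /etc/google-cloud-account
            - name: odbc
              mountPath: /etc/odbc.ini
              subPath: odbc.ini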