I have a docker image that uses a volume to write files:
docker run --rm -v /home/dir:/out/ image:cli args
When I try to run this inside a pod, the container exits normally but no file is written. I don't get it.
The container throws errors if it does not find the volume; for example, if I run it without the -v option it throws:
Unhandled Exception: System.IO.DirectoryNotFoundException: Could not find a part of the path '/out/file.txt'.
But I don't get any error from the container: it finishes as if it wrote the files, yet the files do not exist.
I'm quite new to Kubernetes, but this is driving me crazy.
Does Kubernetes prevent files from being written? Or am I missing something obvious?
The whole Kubernetes context is managed by GCP composer-airflow, if it helps...
docker -v: Docker version 17.03.2-ce, build f5ec1e2
If you want that behavior in Kubernetes, you can use a hostPath volume.
Essentially, you declare the volume in your pod spec; Kubernetes mounts the given directory from the node where the pod runs into the container, so the files are still on that node after the pod exits.
apiVersion: v1
kind: Pod
metadata:
  name: test-pd
spec:
  containers:
  - image: image:cli
    name: test-container
    volumeMounts:
    - mountPath: /out          # path the container writes to (the -v target)
      name: test-volume
  volumes:
  - name: test-volume
    hostPath:
      path: /home/dir          # existing directory on the node
      type: Directory
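To try that out (a rough sketch, not specific to your cluster: test-pd.yaml is just an assumed file name, and the paths mirror the docker run command from the question), save the spec, create the pod, and then look for the files on the node that ran it:

kubectl apply -f test-pd.yaml      # create the pod from the spec above
kubectl get pod test-pd -o wide    # note which node the pod was scheduled on
kubectl logs test-pd               # check the container output
# then, on that node (not on your workstation):
ls /home/dir                       # files the container wrote to /out

If your docker run passes extra arguments, they go in the container's args field of the spec. Also keep in mind that hostPath leaves the files on whichever node the pod was scheduled on, so on a multi-node cluster (such as the GKE cluster behind Composer) you may need to look on the right node or pin the pod to one.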
When I try to run this inside a pod, the container exits normally but no file is written
First of all, there is no need to run the docker run command inside the pod :). You write a spec file (YAML) for the pod, and Kubernetes runs the container in the pod using Docker for you. Ideally, you don't need to run docker commands at all when using Kubernetes (unless you are debugging Docker-related issues).
This link has useful kubectl commands for Docker users.
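For example, the rough kubectl equivalent of the docker run command from the question would look like the sketch below (test-cli is just a placeholder pod name; image:cli and args come from the question):

# one-shot pod, roughly equivalent to: docker run --rm image:cli args
kubectl run test-cli --image=image:cli --restart=Never -- args
kubectl logs test-cli          # inspect the container output
kubectl delete pod test-cli    # clean up (no direct --rm for a detached run)

Note that there is no simple kubectl run flag for the -v hostPath mount, which is why writing a pod spec, as in the other answer, is the usual way to reproduce it.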
If you are used to docker-compose, refer to Kompose to convert your docker-compose setup to Kubernetes manifests.
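As a quick, hedged sketch of that workflow (assuming a docker-compose.yml in the current directory and an output directory named k8s/):

kompose convert -f docker-compose.yml -o k8s/   # generate Kubernetes manifests
kubectl apply -f k8s/                           # create the generated resources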
Some options to mount a directory on the host as a volume inside the container in kubernetes: