How can I save changes made to containers?

6/3/2019

If I have an Ubuntu container, SSH into it, and create a file, then after the container is destroyed or rebooted the new file is gone, because Kubernetes loads the Ubuntu image, which does not contain my changes. My question is: what should I do to save my changes? I know it can be done, because some cloud providers do it.

For example:

ssh ubuntu@POD_IP
mkdir new_file
ls 
  new_file
reboot

After the reboot I have:

ssh ubuntu@POD_IP
ls 

ls shows nothing

But I want it to save my current state, and I want that to happen automatically.

If I use docker commit I cannot manage my images, because it produces hundreds of them: I would have to create a new image for every change.

If I want to use storage I would have to mount /, but Kubernetes does not allow me to mount / and gives me this error:

Error: Error response from daemon: invalid volume specification: '/var/lib/kubelet/pods/26c39eeb-85d7-11e9-933c-7c8bca006fec/volumes/kubernetes.io~rbd/pvc-d66d9039-853d-11e9-8aa3-7c8bca006fec:/': invalid mount config for type "bind": invalid specification: destination can't be '/'

-- yasin lachini
cloud
docker
kubernetes

2 Answers

6/3/2019

You can try docker commit, but you will need to ensure that your Kubernetes cluster picks up the latest image that you committed:

docker commit [OPTIONS] CONTAINER [REPOSITORY[:TAG]]

This creates a new image from your container, which you can then feed to Kubernetes.

Ref - https://docs.docker.com/engine/reference/commandline/commit/
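For example, a rough sketch of that flow might look like the following; the container name, registry, deployment, and tag here are placeholders, not values from the question:

# commit the running container to a new, uniquely tagged image
docker commit my-container registry.example.com/my-ubuntu:v2

# push it to a registry the cluster can pull from
docker push registry.example.com/my-ubuntu:v2

# point the Deployment's container at the new tag
kubectl set image deployment/my-app ubuntu=registry.example.com/my-ubuntu:v2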

Update 1 -

If you want to do it automatically, you might store the changed state or files on a centralized file system such as NFS and then mount it, with the relevant permissions, into all running containers whenever it is required.

K8s ref - https://kubernetes.io/docs/concepts/storage/persistent-volumes/
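For instance, a minimal sketch of a Pod that mounts a PersistentVolumeClaim at a subdirectory (rather than /, which the error in the question rejects) could look like this; the pod, volume, and claim names are placeholders:

apiVersion: v1
kind: Pod
metadata:
  name: ubuntu-with-storage          # placeholder name
spec:
  containers:
  - name: ubuntu
    image: ubuntu
    command: ["sleep", "infinity"]   # keep the container running
    volumeMounts:
    - name: data
      mountPath: /data               # mount under a subdirectory, not /
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: my-pvc              # an existing PVC backed by NFS, RBD, etc.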

-- vivekyad4v
Source: StackOverflow

6/3/2019

Docker and Kubernetes don't work this way. Never run docker commit. Usually you have very little need for an ssh daemon in a container/pod, and you need to do special work to make both sshd and the main process run (and extra work to make sshd actually secure); your containers will be simpler and safer if you just remove these.

The usual process involves a technique known as immutable infrastructure. You never change code in an existing container; instead, you change a recipe to build a container, and tell the cluster manager that you want an update, and it will tear down and rebuild everything from scratch. To make changes in an application running in a Kubernetes pod, you typically:

  1. Make and test your code change, locally, with no Docker or Kubernetes involved at all.
  2. docker build a new image incorporating your code change. It should have a unique tag, often a date stamp or a source control commit ID.
  3. (optional but recommended) docker run that image locally and run integration tests.
  4. docker push the image to a registry.
  5. Change the image tag in your Kubernetes deployment spec and kubectl apply (or helm upgrade) it.
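
A minimal sketch of steps 2 through 5, assuming a hypothetical image name, registry, and deployment, might look like:

# step 2: build an image with a unique tag (date stamp or commit ID)
docker build -t registry.example.com/my-app:20190603-abc1234 .

# step 3 (optional): run the exact same image locally for integration tests
docker run --rm registry.example.com/my-app:20190603-abc1234

# step 4: push it to the registry
docker push registry.example.com/my-app:20190603-abc1234

# step 5: point the Kubernetes Deployment at the new tag
kubectl set image deployment/my-app my-app=registry.example.com/my-app:20190603-abc1234

Editing the image tag in the Deployment YAML and running kubectl apply works just as well; the key point is that the tag is new, so Kubernetes rolls out fresh pods from the new image.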

Often you'll have an automated continuous integration system do steps 2-4, and a continuous deployment system do the last step; you just need to commit and push your tested change.

Note that when you docker run the image locally in step 3, you are running the exact same image your production Kubernetes system will run. Resist the temptation to mount your local source tree into it and try to do development there! If a test fails at this point, reduce it to the simplest failing case, write a unit test for it, and fix it in your local tree. Rebuilding an image shouldn't be especially expensive.

Your question hints at the unmodified ubuntu image. Beyond some very early "hello world" type experimentation, there's pretty much no reason to use this anywhere other than the FROM line of a Dockerfile. If you haven't yet, you should work through the official Docker tutorial on building and running custom images, which will be applicable to any clustering system. (Skip all of the later tutorials that cover Docker Swarm, if you've already settled on Kubernetes as an orchestrator.)
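As a tiny, hypothetical illustration, a Dockerfile that uses ubuntu only as its base layer might look like this (the script name is a placeholder):

FROM ubuntu:18.04
# bake the application into the image instead of editing a running container
COPY app.sh /usr/local/bin/app.sh
CMD ["/usr/local/bin/app.sh"]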

-- David Maze
Source: StackOverflow