I've deployed a Kubernetes infrastructure in my organization and have scaled up my application, but at times it requires custom modification, so I need to change and commit my images again and again and then pull each image to see my changes. Is there any way I can commit my Pods and use one as an instance to replicate to the other Pods? Can someone please advise how to overcome this problem without committing the base images again and again?
As a general rule, I would think you would be far better off by testing locally rather than "testing in production". That said:
I can imagine volume-mounting a shared filesystem such as NFS, so that you make a change once and it propagates out across the related Pods.
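A minimal sketch of what that might look like, assuming an NFS server reachable at nfs.example.com exporting /exports/app-config, and a Pod that reads its files from /the/sync/directory (all of those names are placeholders to substitute with your own):

volumes:
- name: shared-config
  nfs:
    server: nfs.example.com
    path: /exports/app-config
containers:
- name: my-pod
  image: repo.example.com/the-image:some-tag
  volumeMounts:
  # every Pod that mounts this volume sees the same files,
  # so an edit made from any one of them is visible to all
  - name: shared-config
    mountPath: /the/sync/directory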
Or, if that's not available, then change the command:
of the Pod to clone a repo on top of the underlying image:
containers:
- name: my-pod
  image: repo.example.com/the-image:some-tag
  command:
  - bash
  - -c
  - |
    git clone https://example.com/the/repo.git /into/the/right/directory
    # then continue with the Pod's normal boot-up
    exec /docker-entrypoint.sh
which has the advantage of at least allowing your changes to be stored somewhere safe before propagating out into the cluster.
Or, if that's not applicable, you can replicate the changed directory "manually" after making whatever changes you want inside one of the Pods:
$ kubectl exec -it $pod -- bash -il
# vi /the/file/or/whatever
# exit
$ for p in $(get the list of other pod names); do
    kubectl exec $pod -- tar -cf - /the/sync/directory | \
      kubectl exec -i $p -- tar -xf - -C /
  done
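For the "get the list of other pod names" part, one possibility (assuming the Pods share a common label such as app=my-app; substitute whatever selector your Deployment actually uses) is:

$ for p in $(kubectl get pods -l app=my-app -o jsonpath='{.items[*].metadata.name}'); do
    [ "$p" = "$pod" ] && continue   # skip the Pod you already edited
    kubectl exec $pod -- tar -cf - /the/sync/directory | \
      kubectl exec -i $p -- tar -xf - -C /
  done

Note the -i on the receiving kubectl exec: without it, the second exec does not read the tar stream from stdin.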
That runs the risk, of course, that if all the Pods are wiped out, so are your changes, but it does address your aversion to building new docker images each time.