Share a common jar across multiple Kubernetes Pods

2/28/2020

My application is Java based and runs on Tomcat, which we have dockerized. We have four applications and have created four containers. We have a common library for authentication; in every container we copy this jar file into the /lib folder, and the applications work fine.

But whenever the jar file changes, we need to rebuild and redeploy all of the containers. Is there a way to share the jar file across the 4 containers so that we don't have to rebuild and redeploy all 4 of them, and only need to update the jar?

It would be like sharing the Tomcat lib folder with the other containers in Kubernetes, so that whenever the jar file changes it is automatically replicated to all containers.

-- jagadeeswar Reddy
docker
java
kubernetes

1 Answer

2/28/2020

This is not standard practice, it's operationally tricky, and you shouldn't do it.

Docker images generally are self-contained and include all of their dependencies, in your case including the repeated jar file. In the context of a Kubernetes cluster with software under active development, you should make sure every image has a unique image tag, maybe something like a time stamp or source control commit ID, probably assigned by your build system. Then you can update your Deployment with the new image tag; this triggers Kubernetes to pull the new images and restart containers.
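As a rough illustration of that workflow (the Deployment name, image repository, and tag below are invented for the example), the tag lives in the Deployment's pod template, so shipping a new build is a one-line change:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: app-one
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: app-one
      template:
        metadata:
          labels:
            app: app-one
        spec:
          containers:
            - name: tomcat
              # Changing this tag (for example 20200227 -> 20200228) and
              # re-applying the manifest is what triggers Kubernetes to
              # roll out new pods with the new image.
              image: registry.example.com/app-one:20200228
              ports:
                - containerPort: 8080

Your build system can re-apply the manifest with the new tag, or run something like kubectl set image deployment/app-one tomcat=registry.example.com/app-one:20200228, and Kubernetes rolls the pods for you; going back to yesterday's build is the same edit with the previous tag.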

This means that if you see a pod running an image tagged 20200228 then you know exactly the software that's in it, including the shared jar, and you can test exactly that image outside your cluster. If you discover something has gone wrong, maybe even in the shared jar, you can change the deployment tag back to 20200227 to get yesterday's build while you fix the problem.

If you're hand-deploying jar files somehow and mounting them as volumes into pods, you lose all of this: you have to restart pods by hand to see the new jar files, you can't test images offline without manually injecting the jar file, and if there is an issue you have multiple things you need to try to revert by hand.


As far as the mechanics go, you would need some sort of Volume that can be read by multiple pods concurrently, and either written to from outside the cluster or writable by a single pod. The discussion of PersistentVolumes has the concept of an access mode, and so you need something that's ReadOnlyMany (and externally accessible) or ReadWriteMany. Depending on the environment you have available to you, your only option might be an NFS server. That's possible, but it's one additional piece to maintain, and you'd have to set it up yourself outside the cluster infrastructure.
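If you go down this path anyway, a minimal sketch might look like the following, assuming you already run an NFS server outside the cluster (the server address, export path, image name, and mount point are all hypothetical):

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: shared-lib-pv
    spec:
      capacity:
        storage: 1Gi
      accessModes:
        - ReadOnlyMany
      nfs:
        server: nfs.example.com      # hypothetical NFS server
        path: /exports/shared-lib    # hypothetical export holding the jar
    ---
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: shared-lib-pvc
    spec:
      accessModes:
        - ReadOnlyMany
      storageClassName: ""
      resources:
        requests:
          storage: 1Gi
    ---
    # Excerpt of one Deployment's pod template: mount the claim read-only at a
    # directory the image does not already use, then configure Tomcat's
    # classpath (common.loader in catalina.properties) to include it.
        spec:
          containers:
            - name: tomcat
              image: registry.example.com/app-one:20200228
              volumeMounts:
                - name: shared-lib
                  mountPath: /usr/local/tomcat/shared-lib
                  readOnly: true
          volumes:
            - name: shared-lib
              persistentVolumeClaim:
                claimName: shared-lib-pvc

Note that mounting the volume directly over /usr/local/tomcat/lib would hide the jars the image already ships in that directory, the pods still need to be restarted to pick up a replaced jar, and the NFS server itself becomes one more piece of infrastructure you have to run and back up.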

-- David Maze
Source: StackOverflow