I have a Kubernetes cluster with a config server pod (Spring Cloud Config) and my app running in 3 different pods for HA, exposed by a service IP. When I change properties in Git and commit+push, I have to call http://service-ip/actuator/refresh. The problem is that when I call this URL, only one pod gets updated (the pod that processes the current request).
Is there any way to solve this? I have seen some options to find the pods using kubectl (an answer from 2013), but I'm looking for a more native solution.
When a Pod is replaced during a rolling upgrade, it gets its config from the config server at startup. So a more k8s-native solution than finding each individual Pod and refreshing it would be to do a no-op rolling upgrade, as was suggested in the question How to recycle pods in Kubernetes.
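As a rough sketch of what that could look like, assuming the app is managed by a Deployment (my-app is a placeholder name):

```bash
# Restart the Deployment's Pods without changing its spec; each replacement Pod
# re-reads its config from the config server at startup.
# 'kubectl rollout restart' needs kubectl/cluster 1.15+; on older versions the
# usual trick is to patch a throwaway annotation into the pod template instead.
kubectl rollout restart deployment/my-app
kubectl rollout status deployment/my-app   # wait until the new Pods are ready
```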
If you are changing config as part of a rolling upgrade and the problem relates to timing, then you could use a post-start hook on the Pod to do an additional refresh.
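For example, the hook could run a small script inside the app container that waits for the app to come up and then hits the refresh endpoint once more. This is only a sketch; the port 8080, the management paths and the presence of curl in the image are assumptions about your setup:

```bash
#!/bin/sh
# Could be wired in via lifecycle.postStart.exec in the pod template.
# The main container process starts in parallel with the hook, so wait until
# the app answers on its health endpoint, then force one extra refresh.
until curl -sf http://localhost:8080/actuator/health > /dev/null; do
  sleep 2
done
curl -s -X POST http://localhost:8080/actuator/refresh
```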
Presumably what you are otherwise looking at doing is using a bash script that lists all the Pods and refreshes them, perhaps by doing a 'kubectl exec -it' to shell into each container and calling refresh from within it (a rough sketch is at the end of this answer). I can understand your concern that this is not very 'native', as it is quite manual and you would expect a more automatic solution from k8s or from the config server. Actually you sort of have to choose which 'native' approach you want, as the config server's refresh-based approach is rather different from the k8s concept of a ConfigMap and a rolling upgrade (see e.g. https://dzone.com/articles/configuring-java-apps-with-kubernetes-configmaps-a). A solution more native to the config server is to use messaging (e.g. Spring Cloud Bus) to notify the services that new config is available - see the links at the end of https://dzone.com/articles/spring-cloud-config-server-for-the-impatient
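A sketch of that manual script, assuming the Pods carry a label like app=my-app, the app listens on 8080 and curl is available in the image (all assumptions about your setup):

```bash
# Hit the refresh endpoint in every Pod behind the service,
# not just the one the Service happens to route the request to.
for pod in $(kubectl get pods -l app=my-app -o jsonpath='{.items[*].metadata.name}'); do
  kubectl exec "$pod" -- curl -s -X POST http://localhost:8080/actuator/refresh
done
```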