However, now I delete the application pods from namespace A but keep everything else intact: the Service in namespace A and the canary Ingress in namespace B.
When I now try accessing the application endpoints, the requests simply fail. I understand there are no active endpoints in namespace A (the pods were deleted), but the Service is still available in namespace A, and the Ingress rules in B have canary enabled with a weight of 100%. I was expecting traffic to be routed to the pods in namespace B, but that is not happening.
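For reference, a canary Ingress sending 100% of traffic to the backend in namespace B would look roughly like the sketch below. The annotation keys are the real ingress-nginx canary annotations; the resource names, host, Service name, and port are placeholders, not taken from the original setup:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp-canary                 # hypothetical name
  namespace: B
  annotations:
    nginx.ingress.kubernetes.io/canary: "true"
    nginx.ingress.kubernetes.io/canary-weight: "100"
spec:
  ingressClassName: nginx
  rules:
    - host: myapp.example.com        # placeholder host; must match the main Ingress
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: myapp          # Service in namespace B
                port:
                  number: 80
```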
I have compared the configuration of the nginx controller before and after deleting the pods in namespace A (with the 100% canary ingress rule intact) using:
kubectl exec <nginx-controller-pod-name> -n <namespace> -- curl localhost:10246/configuration/backends
kubectl exec <nginx-controller-pod-name> -n <namespace> -- cat nginx.conf
There is no difference in the output before and after deleting the pods in namespace A.
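One thing worth inspecting in that first command's output is the `endpoints` array of each backend. The snippet below uses a hypothetical, trimmed sample of the `/configuration/backends` JSON (the field names follow ingress-nginx's dynamic configuration format, but the backend names and addresses are invented) to show what an emptied backend looks like:

```shell
# Hypothetical, trimmed sample of the /configuration/backends JSON;
# the real output comes from the curl command shown above.
cat > /tmp/backends.json <<'EOF'
[
  {"name": "A-myapp-80", "endpoints": []},
  {"name": "B-myapp-80", "endpoints": [{"address": "10.0.0.7", "port": "8080"}]}
]
EOF

# A backend whose "endpoints" array is empty has no pods behind it; the Lua
# balancer serves it from this in-memory list, not from nginx.conf.
grep -c '"endpoints": \[\]' /tmp/backends.json    # prints 1
```

Since the balancer reads this dynamic list rather than nginx.conf, an unchanged nginx.conf does not mean the routing state is unchanged.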
Is this the intended behavior? I am unable to find what is driving it.
You should check the following before you delete the pods in namespace A.
As described here, when you remove pods the endpoints change, and an endpoints change neither recreates the nginx.conf file nor reloads it. Instead, the new list of endpoints is sent to a Lua handler running inside NGINX via an HTTP POST request. You can check the logs of the Lua handler to verify this.

In relatively big clusters with frequently deployed apps, this feature saves a significant number of NGINX reloads, which would otherwise affect response latency, load-balancing quality (after every reload NGINX resets its load-balancing state), and so on.

When you create a new Ingress, on the other hand, nginx.conf is changed and reloaded. This should explain why you see no change in nginx.conf.
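To act on the "check the logs" suggestion: the controller pushes the new endpoint list to the Lua endpoint and logs the dynamic reconfiguration when it succeeds. The exact log wording varies between ingress-nginx versions, so the grep pattern below is a loose guess, and the pod and namespace names are placeholders:

```shell
# Watch the controller logs while deleting a pod; a successful dynamic
# update is logged without any nginx reload appearing.
kubectl logs <nginx-controller-pod-name> -n <namespace> --since=5m \
  | grep -iE 'dynamic|configuration/backends'
```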