Kubernetes Ingress (GCE) causes default service selectors to stop working

12/19/2018

I am attempting to set up a blue/green deployment environment for an application. So far everything has worked fairly well using this spec (simplified to the relevant parts): https://gist.github.com/haleyrc/3c648087ceeb2aa762b7a7b0efefaa3a.
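For context, the relevant shape of the spec is roughly the following. This is an illustrative sketch only — the names, labels, and ports here are placeholders, not necessarily what's in the gist:

```yaml
# Hypothetical blue/green layout; all names and labels are illustrative.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-blue          # a matching app-green Deployment exists as well
spec:
  selector:
    matchLabels:
      app: myapp
      color: blue
  template:
    metadata:
      labels:
        app: myapp
        color: blue
    spec:
      containers:
      - name: app
        image: gcr.io/my-project/app:v1
---
apiVersion: v1
kind: Service
metadata:
  name: app-service
spec:
  type: NodePort          # GCE Ingress backends point at NodePort services
  selector:
    app: myapp
    color: blue           # flipped between blue and green on deploy
  ports:
  - port: 80
    targetPort: 8080
```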

The deployment process involves updating the image for the not-in-use deployment, waiting for the rollout to complete, and then changing the color selector on the service. This all works as intended, which I verified by initially setting the service up as a LoadBalancer and repeatedly curling the external IP, which just returns the pod name.
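A rough sketch of that process with kubectl, assuming the placeholder names above (these are illustrative commands, not the exact script in use):

```shell
# Update the image on the idle (green) deployment.
kubectl set image deployment/app-green app=gcr.io/my-project/app:v2

# Block until the new pods are rolled out and ready.
kubectl rollout status deployment/app-green

# Flip the service selector so traffic goes to green.
kubectl patch service app-service \
  -p '{"spec":{"selector":{"app":"myapp","color":"green"}}}'
```

These commands require a live cluster, so treat them as a description of the workflow rather than a runnable script.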

If I go through the Ingress, however, everything appears to work when initially set up, but as soon as a deployment occurs, I start getting responses from both the blue and green pods. If I delete the not-in-use deployment's pods and let them come back up, everything works again until the next deployment.

I have even run the service as a LoadBalancer at the same time as the Ingress was running and curled both simultaneously. The responses from the Service were only from the in-use deployment, while the responses from the Ingress were a mix of in-use and not-in-use.
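The comparison was along these lines (the IPs are placeholders for the actual external IPs; the app responds with its pod name):

```shell
# Sample each endpoint repeatedly and collect the distinct pod names seen.
for i in $(seq 1 50); do curl -s http://SERVICE_IP/; echo; done | sort -u
# Service: only pods from the in-use deployment respond

for i in $(seq 1 50); do curl -s http://INGRESS_IP/; echo; done | sort -u
# Ingress: a mix of in-use and not-in-use pods respond
```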

There is no caching enabled on my backend service and no CDN in place. What's more, running `kubectl describe ingress backend-ingress` shows the correct service backend and the IPs of the correct pods.

Is there something simple I'm missing that could cause responses from pods outside the selected group, but only when traffic passes through a GCE Ingress?

-- Ryan Haley
google-compute-engine
google-kubernetes-engine
kubernetes

0 Answers