How can I make Kubernetes readiness checks work with ALB ingress whitelist rules?

2/12/2020

I have three Kubernetes clusters (dev, stage, prod) running a variety of frontend and backend applications. The clusters were all set up with EKS on Amazon. To serve traffic to these apps we have an ALB, also configured from the Kubernetes cluster, that routes traffic based on some simple routing rules.

The problem I'm trying to solve is that I want to allow access to our dev/stage environments only from specific IPs, just to block external users from accessing them.

I've chosen to do this by setting an IP whitelist via the ALB ingress controller annotations (https://github.com/kubernetes-sigs/aws-alb-ingress-controller/blob/master/docs/guide/ingress/annotation.md):

    annotations:
      alb.ingress.kubernetes.io/healthcheck-path: "/"
      alb.ingress.kubernetes.io/success-codes: "200,404"
      alb.ingress.kubernetes.io/security-group-inbound-cidrs: "xx.xxx.xxx.xxx/32, xx.x.x.x/16"
    labels:
      app: development-alb
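
For reference, those annotations sit on the Ingress that creates the ALB. A trimmed-down sketch of the manifest is below; the name, scheme annotation, path rule, and backend service are placeholders / typical defaults rather than our exact config:

    apiVersion: extensions/v1beta1
    kind: Ingress
    metadata:
      name: development-alb
      labels:
        app: development-alb
      annotations:
        kubernetes.io/ingress.class: alb
        alb.ingress.kubernetes.io/scheme: internet-facing
        alb.ingress.kubernetes.io/healthcheck-path: "/"
        alb.ingress.kubernetes.io/success-codes: "200,404"
        alb.ingress.kubernetes.io/security-group-inbound-cidrs: "xx.xxx.xxx.xxx/32, xx.x.x.x/16"
    spec:
      rules:
        - http:
            paths:
              - path: /*
                backend:
                  serviceName: angular-frontend   # placeholder service name
                  servicePort: 80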

My problem is that I also use readiness checks in all of our apps so that Kubernetes does not serve traffic to pods that are not ready. One of our apps, an Angular frontend, will not pass its readiness check when the whitelist rules are in place. This effectively puts it into a restart loop and it won't serve traffic. All other apps/pods spin up fine and join the load balancer as I would expect. With the whitelist in place, but without the readiness check, the app functions as expected. I would like to keep readiness checks in place to provide zero-downtime deploys and mirror production more closely.

How can I keep readiness checks in place while also preventing outside traffic from hitting the cluster? Here is the readiness probe configuration:

        readinessProbe:
          httpGet:
            path: /
            port: 4000
          initialDelaySeconds: 90
          periodSeconds: 5
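
If it helps, each probe is defined in the container spec of its Deployment. Here is a stripped-down sketch of the frontend's; the name, image, replica count, and labels are placeholders rather than our exact config:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: angular-frontend              # placeholder name
    spec:
      replicas: 2                         # placeholder
      selector:
        matchLabels:
          app: angular-frontend
      template:
        metadata:
          labels:
            app: angular-frontend
        spec:
          containers:
            - name: angular-frontend
              image: registry.example.com/angular-frontend:latest   # placeholder image
              ports:
                - containerPort: 4000     # same port the readiness probe targets
              readinessProbe:
                httpGet:
                  path: /
                  port: 4000
                initialDelaySeconds: 90
                periodSeconds: 5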
-- Matt H
amazon-web-services
kubernetes

0 Answers