Using HTTP basic authentication for readinessProbe when using GKE Ingress

1/17/2020

I'm using a nosqlclient Docker image with GKE. The image exposes a default health-check URL at /healthcheck. However, when I enable authentication for the app, it also enables authentication for this URL. I need to use GKE Ingress along with the app, and GKE Ingress requires an HTTP readinessProbe that can return a 200. However, when I try to use this path for the readinessProbe, the readiness check fails to work. The strange thing is that no readiness check events come up when I run kubectl describe pods <pod_name>. This is part of my deployment YAML file:

...
    spec:
      containers:
      - name: mongoclient
        image: mongoclient/mongoclient:2.2.0
        resources:
          requests:
            memory: "32Mi"
            cpu: "100m"
          limits:
            memory: "128Mi"
            cpu: "500m"
        ports:
        - containerPort: 3000
        env:
        - name: MONGOCLIENT_AUTH
          value: "true"
        - name: MONGOCLIENT_USERNAME
          value: "admin"
        - name: MONGOCLIENT_PASSWORD
          value: "password"
        readinessProbe:
          httpGet:
            httpHeaders:
              - name: "Authorization"
                value: "Basic YWRtaW46cGFzc3dvcmQ="           
            port: 3000
            path: /healthcheck      
          initialDelaySeconds: 60
          timeoutSeconds: 5
...
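
For reference, the Authorization header value is just the base64 encoding of admin:password:

$ echo -n "admin:password" | base64
YWRtaW46cGFzc3dvcmQ=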

When I curl from inside the pod, I get a 401 without credentials but a 200 once authorization is supplied:

node@mongoclient-deployment-7c6856d6f6-mkxqh:/opt/meteor/dist/bundle$ curl -i http://localhost:3000/healthcheck
HTTP/1.1 401 Unauthorized
WWW-Authenticate: Basic realm="Authorization Required"
Date: Fri, 17 Jan 2020 18:02:20 GMT
Connection: keep-alive
Transfer-Encoding: chunked

Unauthorizednode@mongoclient-deployment-7c6856d6f6-mkxqh:/opt/meteor/dist/bundle$ curl -i http://admin:password@localhost:3000/healthcheck
HTTP/1.1 200 OK
Date: Fri, 17 Jan 2020 18:02:30 GMT
Connection: keep-alive
Transfer-Encoding: chunked

Server is up and running !
node@mongoclient-deployment-86bc77cc5b-9qg67:/opt/meteor/dist/bundle$ curl -i -H "Authorization: Basic YWRtaW46cGFzc3dvcmQ=" http://localhost:3000/healthcheck
HTTP/1.1 200 OK
Date: Sat, 18 Jan 2020 07:19:49 GMT
Connection: keep-alive
Transfer-Encoding: chunked
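
Since an authenticated curl from inside the container works, I could presumably fall back to an exec probe as a workaround. A rough, untested sketch is below, though as far as I understand, GKE Ingress derives its load balancer health check from an HTTP readinessProbe, so this may not satisfy the Ingress side:

        readinessProbe:
          exec:
            command:
            - /bin/sh
            - -c
            # curl -f exits non-zero on HTTP errors (e.g. a 401), which fails the probe
            - curl -fsS -H "Authorization: Basic YWRtaW46cGFzc3dvcmQ=" http://localhost:3000/healthcheck
          initialDelaySeconds: 60
          timeoutSeconds: 5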

Some further information:

> kubectl get pods -l app=mongoclient-app -o wide
NAME                                      READY   STATUS    RESTARTS   AGE     IP            NODE                                            NOMINATED NODE   READINESS GATES
mongoclient-deployment-7c6856d6f6-mkxqh   1/1     Running   0          5m25s   10.28.1.152   **************************************          <none>           0/1

> kubectl describe pods -l app=mongoclient-app
...
    Liveness:   http-get http://:3000/healthcheck delay=70s timeout=5s period=10s #success=1 #failure=3
    Readiness:  http-get http://:3000/healthcheck delay=60s timeout=5s period=10s #success=1 #failure=3
...
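
The 0/1 under READINESS GATES above appears to be the NEG readiness gate. If I understand correctly, the underlying condition can be inspected directly with something like this (condition type taken from the events in Update 1 below):

> kubectl get pods -l app=mongoclient-app -o jsonpath='{.items[*].status.conditions[?(@.type=="cloud.google.com/load-balancer-neg-ready")]}'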

I can't find any information on passing such a custom header through the Ingress by making use of a BackendConfig resource. Even if that worked, I'm using this same Ingress for other services, and meddling with it in such a scenario doesn't seem like a good idea.

I'm new to GKE and Kubernetes, so I'm not sure where else to look. The pod logs didn't provide much insight into access patterns. How can I proceed in this situation?

Update 1: I upgraded a development cluster to 1.15.7-gke.2, since it supports custom request headers for Ingress, and added the following:

apiVersion: cloud.google.com/v1beta1
kind: BackendConfig
metadata:
  name: mongoclient-backendconfig
spec:
  timeoutSec: 300
  connectionDraining:
    drainingTimeoutSec: 400
  sessionAffinity:
    affinityType: "GENERATED_COOKIE"
    affinityCookieTtlSec: 86400
  customRequestHeaders:
    headers:
    - "Authorization: Basic YWRtaW46cGFzc3dvcmQ="

Though the headers are showing up in the load balancer backend, the readiness check times out:

  Normal  Scheduled                15m                default-scheduler                                   Successfully assigned default/mongoclient-deployment-86bc77cc5b-9qg67 to gke-kubernetes-default-pool-50ccdc3e-d608
  Normal  LoadBalancerNegNotReady  15m (x2 over 15m)  neg-readiness-reflector                             Waiting for pod to become healthy in at least one of the NEG(s): [k8s1-00c7387d-default-mongoclient-mayamd-ai-service-80-292db9f4]
  Normal  Pulled                   15m                kubelet, gke-kubernetes-default-pool-50ccdc3e-d608  Container image "mongoclient/mongoclient:2.2.0" already present on machine
  Normal  Created                  15m                kubelet, gke-kubernetes-default-pool-50ccdc3e-d608  Created container mongoclient
  Normal  Started                  15m                kubelet, gke-kubernetes-default-pool-50ccdc3e-d608  Started container mongoclient
  Normal  LoadBalancerNegTimeout   5m43s              neg-readiness-reflector                             Timeout waiting for pod to become healthy in at least one of the NEG(s): [k8s1-00c7387d-default-mongoclient-mayamd-ai-service-80-292db9f4]. Marking condition "cloud.google.com/load-balancer-neg-ready" to True.
-- Shanu Koyakutty
google-kubernetes-engine
kubernetes
kubernetes-ingress

0 Answers