How to prevent Kubernetes from probing HTTPS?

11/17/2018

I'm trying to run a service exposed via ports 80 and 443. SSL termination happens on the pod. I specified only port 80 for the liveness probe, but for some reason Kubernetes is probing HTTPS (443) as well. Why is that, and how can I stop it from probing 443?

Kubernetes config

apiVersion: v1
kind: Secret
metadata:
  name: myregistrykey
  namespace: default
data:
  .dockerconfigjson: xxx==
type: kubernetes.io/dockerconfigjson
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: example-com
spec:
  replicas: 0
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 50%
  minReadySeconds: 30 
  template:
    metadata:
      labels:
        app: example-com
    spec:
      imagePullSecrets:
      - name: myregistrykey
      containers:
      - name: example-com
        image: DOCKER_HOST/DOCKER_IMAGE_VERSION
        imagePullPolicy: Always
        ports:
        - containerPort: 80
          protocol: TCP
          name: http
        - containerPort: 443
          protocol: TCP
          name: https
        livenessProbe:
          httpGet:
            scheme: "HTTP"
            path: "/_ah/health"
            port: 80
            httpHeaders:
            - name: Host
              value: example.com
          initialDelaySeconds: 35
          periodSeconds: 35
        readinessProbe:
          httpGet:
            scheme: "HTTP"
            path: "/_ah/health"
            port: 80
            httpHeaders:
            - name: Host
              value: example.com
          initialDelaySeconds: 35
          periodSeconds: 35
        resources:
          requests:
            cpu: 250m
          limits:
            cpu: 500m
---
apiVersion: v1
kind: Service
metadata:
  name: example-com
spec:
  type: LoadBalancer
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
    nodePort: 0
    name: http
  - port: 443
    protocol: TCP
    targetPort: 443
    nodePort: 0
    name: https
  selector:
    app: example-com

The errors/logs on the pod clearly indicate that Kubernetes is trying to access the service via HTTPS:

 kubectl describe pod example-com-86876875c7-b75hr
Name:               example-com-86876875c7-b75hr
Namespace:          default
Priority:           0
PriorityClassName:  <none>
Node:               aks-agentpool-37281605-0/10.240.0.4
Start Time:         Sat, 17 Nov 2018 19:58:30 +0200
Labels:             app=example-com
                    pod-template-hash=4243243173
Annotations:        <none>
Status:             Running
IP:                 10.244.0.65
Controlled By:      ReplicaSet/example-com-86876875c7
Containers:
  example-com:
    Container ID:   docker://c5eeb03558adda435725a0df3cc2d15943966c3df53e9462e964108969c8317a
    Image:          example-com.azurecr.io/example-com:2018-11-17_19-58-05
    Image ID:       docker-pullable://example-com.azurecr.io/example-com@sha256:5d425187b8663ecfc5d6cc78f6c5dd29f1559d3687ba9d4c0421fd0ad109743e
    Ports:          80/TCP, 443/TCP
    Host Ports:     0/TCP, 0/TCP
    State:          Running
      Started:      Sat, 17 Nov 2018 20:07:59 +0200
    Last State:     Terminated
      Reason:       Error
      Exit Code:    2
      Started:      Sat, 17 Nov 2018 20:05:39 +0200
      Finished:     Sat, 17 Nov 2018 20:07:55 +0200
    Ready:          False
    Restart Count:  3
    Limits:
      cpu:  500m
    Requests:
      cpu:      250m
    Liveness:   http-get http://:80/_ah/health delay=35s timeout=1s period=35s #success=1 #failure=3
    Readiness:  http-get http://:80/_ah/health delay=35s timeout=1s period=35s #success=1 #failure=3
    Environment:
      NABU:                          nabu
      KUBERNETES_PORT_443_TCP_ADDR:  agile-kube-b3e5753f.hcp.westeurope.azmk8s.io
      KUBERNETES_PORT:               tcp://agile-kube-b3e5753f.hcp.westeurope.azmk8s.io:443
      KUBERNETES_PORT_443_TCP:       tcp://agile-kube-b3e5753f.hcp.westeurope.azmk8s.io:443
      KUBERNETES_SERVICE_HOST:       agile-kube-b3e5753f.hcp.westeurope.azmk8s.io
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-rcr7c (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  default-token-rcr7c:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-rcr7c
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason     Age                    From                  Message
  ----     ------     ----                   ----                  -------
  Normal   Scheduled  10m                    default-scheduler                  Successfully assigned default/example-com-86876875c7-b75hr to aks-agentpool-37281605-0
  Warning  Unhealthy  3m46s (x6 over 7m16s)  kubelet, aks-agentpool-37281605-0  Liveness probe failed: Get https://example.com/_ah/health: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
  Normal   Pulling    3m45s (x3 over 10m)    kubelet, aks-agentpool-37281605-0  pulling image "example-com.azurecr.io/example-com:2018-11-17_19-58-05"
  Normal   Killing    3m45s (x2 over 6m5s)   kubelet, aks-agentpool-37281605-0  Killing container with id docker://example-com:Container failed liveness probe.. Container will be killed and recreated.
  Normal   Pulled     3m44s (x3 over 10m)    kubelet, aks-agentpool-37281605-0  Successfully pulled image "example-com.azurecr.io/example-com:2018-11-17_19-58-05"
  Normal   Created    3m42s (x3 over 10m)    kubelet, aks-agentpool-37281605-0  Created container
  Normal   Started    3m42s (x3 over 10m)    kubelet, aks-agentpool-37281605-0  Started container
  Warning  Unhealthy  39s (x9 over 7m4s)     kubelet, aks-agentpool-37281605-0  Readiness probe failed: Get https://example.com/_ah/health: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
-- mike
azure-kubernetes
google-kubernetes-engine
kubectl
kubernetes

2 Answers

11/19/2018

As per your comments, you are doing an HTTP-to-HTTPS redirect in the pod, and the probe follows that redirect, so it never gets a direct answer on port 80. If you still want to probe port 80, you should consider using TCP probes instead. For example:

apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: example-com
spec:
  ...
  minReadySeconds: 30
  template:
    metadata:
      labels:
        app: example-com
    spec:
      imagePullSecrets:
      - name: myregistrykey
      containers:
      - name: example-com
        ...
        livenessProbe:
          tcpSocket:
            port: 80
          initialDelaySeconds: 35
          periodSeconds: 35
        readinessProbe:
          tcpSocket:
            port: 80
          initialDelaySeconds: 35
          periodSeconds: 35
        ...

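Note the trade-off: a tcpSocket probe only checks that something accepts connections on port 80. Unlike httpGet, it never sees HTTP status codes, so an application that is listening but returning errors will still pass the probe.
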
Or you can exempt certain URLs from the redirect in your application, as mentioned in @night-gold's answer.
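
Alternatively, since TLS terminates on the pod itself, the probes can target HTTPS directly. A minimal sketch, assuming the server answers HTTPS requests addressed to the pod IP (the kubelet skips certificate verification on HTTPS probes, so a self-signed certificate is fine):

livenessProbe:
  httpGet:
    scheme: "HTTPS"
    path: "/_ah/health"
    port: 443
    httpHeaders:
    - name: Host
      value: example.com
  initialDelaySeconds: 35
  periodSeconds: 35

This keeps the path-level HTTP health check that a plain TCP probe gives up.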

-- Rico
Source: StackOverflow

11/18/2018

The problem doesn't come from Kubernetes but from your web server. Kubernetes is doing exactly what you asked: it probes the HTTP URL, but your server redirects that request to HTTPS, and the redirect is what causes the error.

If you are using Apache, search for "Apache https block redirect"; if you use nginx, search for "nginx https block redirect".
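
For nginx, a minimal sketch of that exclusion, packaged as a Kubernetes ConfigMap (the ConfigMap name and the literal 'ok' response body are illustrative, and this assumes your image reads its server config from a mounted file): serve the probe path directly over HTTP and redirect everything else.

apiVersion: v1
kind: ConfigMap
metadata:
  name: example-com-nginx
data:
  default.conf: |
    server {
      listen 80;
      server_name example.com;

      # Answer the kubelet's probe directly over HTTP, no redirect.
      location = /_ah/health {
        return 200 'ok';
      }

      # Redirect everything else to HTTPS.
      location / {
        return 301 https://$host$request_uri;
      }
    }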

-- night-gold
Source: StackOverflow