Kubernetes Ingress gets unhealthy backend services on Google Kubernetes Engine

10/3/2018

I'm trying to deploy two services on Google Kubernetes Engine, and I have created a cluster with 3 nodes. My Docker images are in a private Docker Hub repository, which is why I created a secret and used it in the Deployments. The Ingress creates a load balancer in the Google Cloud console, but it shows that the backend services are unhealthy, and in the Kubernetes section under Workloads it says "Does not have minimum availability".

I'm new to Kubernetes; what could be the problem?

Here are my YAML files:

Deployment.yaml:

kind: Deployment
apiVersion: apps/v1
metadata:
  name: pythonaryapp
  labels:
    app: pythonaryapp
spec:
  replicas: 1 # Ideally more than 1 replica for HA
  selector:
    matchLabels:
      app: pythonaryapp
  template:
    metadata:
      labels:
        app: pythonaryapp
    spec:
      containers:
      - name: pythonaryapp #1st container
        image: docker.io/arycloud/docker_web_app:pythonaryapp #Dockerhub image
        ports:
        - containerPort: 8080 #Exposes the port 8080 of the container
        env:
        - name: PORT #Env variable key passed to container that is read by app
          value: "8080" # Value of the env port.
        readinessProbe:
          httpGet:
            path: /healthz
            port: 8080
          periodSeconds: 2
          timeoutSeconds: 2
          successThreshold: 2
          failureThreshold: 10
      imagePullSecrets:
      - name: docksecret
---

kind: Deployment
apiVersion: apps/v1
metadata:
  name: pythonaryapp1
  labels:
    app: pythonaryapp1
spec:
  replicas: 1 # Ideally more than 1 replica for HA
  selector:
    matchLabels:
      app: pythonaryapp1
  template:
    metadata:
      labels:
        app: pythonaryapp1
    spec:
      containers:
      - name: pythonaryapp1 #1st container
        image: docker.io/arycloud/docker_web_app:pythonaryapp1 #Dockerhub image
        ports:
        - containerPort: 8080 #Exposes the port 8080 of the container
        env:
        - name: PORT #Env variable key passed to container that is read by app
          value: "8080" # Value of the env port.
        readinessProbe:
          httpGet:
            path: /healthz
            port: 8080
          periodSeconds: 2
          timeoutSeconds: 2
          successThreshold: 2
          failureThreshold: 10
      imagePullSecrets:
      - name: docksecret
---

And here's services.yaml:

kind: Service
apiVersion: v1
metadata:
  name: pythonaryapp
spec:
  type: NodePort
  selector:
    app: pythonaryapp
  ports:
  - protocol: TCP
    port: 8080
---
kind: Service
apiVersion: v1
metadata:
  name: pythonaryapp1
spec:
  type: NodePort
  selector:
    app: pythonaryapp1
  ports:
  - protocol: TCP
    port: 8080
---

And Here's my ingress.yaml:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: mysvcs
spec:
  rules:
  - http:
      paths:
      - path: /
        backend:
          serviceName: pythonaryapp
          servicePort: 8080
      - path: /<name>
        backend:
          serviceName: pythonaryapp1
          servicePort: 8080

Update:

Here's flask service code:

from flask import Flask

app = Flask(__name__)


@app.route('/')
def hello_world():
    return 'Hello World, from Python Service.', 200




if __name__ == '__main__':
    app.run()

And when running a container from its Docker image, it returns a 200 status code at the root path /.

Thanks in advance!

-- Abdul Rehman
docker
google-kubernetes-engine
kubernetes
kubernetes-ingress

2 Answers

10/4/2018

In GKE, the Ingress is implemented by a GCP load balancer. The GCP LB checks the health of the service by calling the service address at the root path '/'. Make sure that your container can respond with 200 on the root path, or alternatively change the LB backend service's health check route (you can do this in the GCP console).
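
For illustration, one way to line this up is to point the readiness probe at a path the app actually serves. This is a minimal sketch, assuming the Flask app above really does return 200 at '/' on port 8080 inside the container; GKE's ingress controller generally derives the load balancer health check from the container's readiness probe, so the probe path should be one the app answers:

# Sketch: replaces the existing readinessProbe block in the container spec.
readinessProbe:
  httpGet:
    path: /      # the app returns 200 here; /healthz is not implemented
    port: 8080
  periodSeconds: 2
  timeoutSeconds: 2
  successThreshold: 2
  failureThreshold: 10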

-- Yehuda
Source: StackOverflow

10/4/2018

Have a look at this post. It might contain helpful tips for your issue. For example, I see a readiness probe but no liveness probe in your config files.

That post suggests that "Does not have minimum availability" in Kubernetes can be the result of a CrashLoopBackOff caused by a failing liveness probe.
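
For illustration, a liveness probe could sit alongside the existing readiness probe in the container spec. This is a minimal sketch, assuming the app answers 200 at '/' on port 8080; the timing values here are assumptions, not something taken from the question:

# Sketch: a liveness probe to pair with the readiness probe above.
livenessProbe:
  httpGet:
    path: /
    port: 8080
  initialDelaySeconds: 5   # assumed startup grace period before the first probe
  periodSeconds: 10
  failureThreshold: 3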

-- Notauser
Source: StackOverflow