Kubernetes Ingress not working: default backend 404

1/8/2019

I'm new to Kubernetes. We have one app that can be customized for several customers.

The deployments are fine: their pods are running correctly. The problem is accessing the API from outside the cluster.

The AWS routes are being created by the Kubernetes Ingress as expected.

The existing ones work fine, but when I try to reach the new one (let's say client09), it always returns the default backend 404.

Also, when I curl the URL, it presents the "Kubernetes Ingress Controller Fake Certificate".
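
For context, the nginx ingress controller serves that self-signed fallback certificate when it cannot find or match a TLS secret for the requested hostname. A minimal check of which certificate is actually presented, assuming the hostname from the manifests below:

# Print the subject/issuer of the served certificate; the fallback cert's
# issuer is literally "Kubernetes Ingress Controller Fake Certificate"
echo | openssl s_client -connect client09.domain.com:443 \
  -servername client09.domain.com 2>/dev/null \
  | openssl x509 -noout -subject -issuer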

kubectl version: client 1.6, server 1.9

Also, my user does not have full access, so I can't provide any information about the nginx controller. We just copy and paste the same manifests for each new customer, so I don't know what might be wrong.

Any thoughts on what might be wrong?

Service

apiVersion: v1
kind: Service
metadata:
  name: client09-svc
  labels:
    run: client09-deploy
spec:
  type: ClusterIP
  ports:
    - port: 8080
      targetPort: 8080
      protocol: TCP
      name: api
  selector:
    run: client09-deploy
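
A default backend 404 usually means the request was routed but no backend pod answered, most often because the Service selector matches no pods and the Service ends up with no endpoints. A quick check, assuming the names above:

# If ENDPOINTS shows <none>, the selector matches no running pod
kubectl get endpoints client09-svc

# Compare against the labels the pods actually carry
kubectl get pods -l run=client09-deploy --show-labels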

Deploy

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: client09-deploy
  namespace: default
spec:
  replicas: 1
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
    type: RollingUpdate
  template:
    metadata:
      labels:
        run: client09-deploy
    spec:
      terminationGracePeriodSeconds: 60
      containers:
      - name: client09
        image: myContainer
        ports:
        - containerPort: 8080
          name: api
        readinessProbe:
          httpGet:
            path: /health
            port: 8080
          initialDelaySeconds: 30
          periodSeconds: 10
        livenessProbe:
          httpGet:
            path: /health
            port: 8080
          initialDelaySeconds: 30
          periodSeconds: 10
        imagePullPolicy: Always
        resources:
          limits:
            cpu: 1800m
            memory: 2000Mi
          requests:
            cpu: 400m
            memory: 1000Mi
        volumeMounts:
          - mountPath: /secret-volume
            name: secretvolume
      imagePullSecrets:
        - name: dockerhubkey
      volumes:
        - name: secretvolume
          secret:
            secretName: client09-secret
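
Unready pods cause the same symptom: the readiness probe above gates membership in the Service's endpoints, so a pod that never passes /health is silently left out. A quick check, assuming the names above:

# Confirm the rollout finished and the pods report Ready
kubectl rollout status deployment/client09-deploy
kubectl get pods -l run=client09-deploy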

Ingress

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    kubernetes.io/tls-acme: "true"
    ingress.kubernetes.io/ssl-redirect: "true"
    ingress.kubernetes.io/use-port-in-redirects: "true"
  namespace: default
spec:
  tls:
  - hosts:
    - client01.domain.com
    - client02.domain.com
    - client09.domain.com
    secretName: my-ingress-tls
  rules:
  - host: client01.domain.com
    http:
      paths:
      - backend:
          serviceName: client01-svc
          servicePort: 8080
        path: /
  - host: client02.domain.com
    http:
      paths:
      - backend:
          serviceName: client02-svc
          servicePort: 8080
        path: /
  - host: client09.domain.com
    http:
      paths:
      - backend:
          serviceName: client09-svc
          servicePort: 8080
        path: /
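
The ingress side can be inspected the same way; a minimal sketch, assuming the names above (the controller address is a placeholder):

# Describe shows each host rule with its backend service and,
# depending on the controller version, the resolved endpoints
kubectl describe ingress my-ingress

# Bypass DNS and hit the controller directly for the new host
# (<controller-IP> is a placeholder for your load balancer address)
curl -k --resolve client09.domain.com:443:<controller-IP> \
  https://client09.domain.com/
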
-- Leonardo
kubernetes
kubernetes-ingress

1 Answer

1/9/2019

Looks like a problem with the selector. Could you update the Service YAML to this:

apiVersion: v1
kind: Service
metadata:
  name: client09-svc
  labels:
    run: client09-deploy
spec:
  type: ClusterIP
  ports:
    - port: 8080
      targetPort: 8080
      protocol: TCP
      name: api
  selector:
    name: client09-deploy
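
Whichever key the selector uses, it has to match the labels on the pods created by the Deployment (the pod template in the question labels them run: client09-deploy). A quick way to compare, assuming the names from the question:

# Pods matched by the selector suggested above
kubectl get pods -l name=client09-deploy --show-labels

# Pods matched by the Deployment's template label
kubectl get pods -l run=client09-deploy --show-labels
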
-- Nick Rak
Source: StackOverflow