Kubernetes deployment of two microservices at the same subdomain resulting in frequent and random 404 errors

8/12/2020

We have a Kubernetes deployment consisting of a nodejs front end and an nginx backend. We're finding that the two deployments work fine in Kubernetes individually, but when both are deployed, requests to the front end return a 404 almost exactly 50% of the time.

It would be natural to assume there is an issue with our virtual service, but that does not seem to be the case: deploying the VirtualService/gateway alone is not enough to cause the issue. It also seems that if we deploy a different, unrelated image as the backend, the front end continues to work without 404 errors.
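For reference, the routing is an Istio Gateway plus a VirtualService roughly of this shape (the hostname, front-end service name, and ports below are placeholders, not our exact manifest):

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: ourapp-vs
spec:
  hosts:
    - "ourapp.example.com"          # the shared subdomain (placeholder)
  gateways:
    - ourapp-gateway
  http:
    - match:
        - uri:
            prefix: /api            # API calls go to the Java backend
      route:
        - destination:
            host: ourapp-be-custom-mount
            port:
              number: 8080
    - route:                        # everything else goes to the nodejs front end
        - destination:
            host: ourapp-fe
            port:
              number: 80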

The app was originally generated via JHipster, and we manually separated the front-end and backend components. The front end is nodejs and the backend is Java/nginx. The app works locally but fails in a k8s deployment.

Also, our Kubernetes deployment is in Rancher.

Experiments seem to indicate it is related to something in our backend deployment, so I'm including our backend deployment.yaml below:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: ourapp-be-custom-mount
spec:
  revisionHistoryLimit: 3
  replicas: 1
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 25%
      maxSurge: 25%
  template:
    spec:
      containers:
        - name: ourapp-be-custom-mount
          image: "IMAGE_SET_BY_OVERLAYS_KUSTOMIZATION"
          envFrom:
            - configMapRef:
                name: ourapp-be-config
          ports:
          - name: http
            containerPort: 8080
          resources:
            limits:
              cpu: "0.5"
              memory: "2048Mi"
            requests:
              cpu: "0.1"
              memory: "64Mi"
          imagePullPolicy: IfNotPresent
          volumeMounts:
            - mountPath: /usr/share/h2/data
              name: ourapp-db-vol01-custom-mount
          securityContext:
            runAsNonRoot: true
            runAsUser: 1000
      imagePullSecrets:
        - name: regcred-nexus
      volumes:
      - name: ourapp-db-vol01-custom-mount
        persistentVolumeClaim:
          claimName: ourapp-db-pvc-volume01-custom-mount
      terminationGracePeriodSeconds: 30
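The image value above is a placeholder that our Kustomize overlay swaps out at deploy time; the relevant part of the overlay looks roughly like this (registry, tag, and paths are illustrative, not the actual project layout):

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base
images:
  - name: IMAGE_SET_BY_OVERLAYS_KUSTOMIZATION   # matches the placeholder in the Deployment
    newName: registry.example.com/ourapp-be
    newTag: "1.0.0"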
-- Adam Wise
istio
jhipster
kubernetes
rancher

1 Answer

8/25/2020

Each Service needs to point to a different app. Check your YAML, and verify in Rancher that each Service actually selects a different set of pods. If you are using Kustomize, the commonLabels: app setting can trip you up: if the frontend and backend overlays stamp the same app label onto everything, both Deployments' pods match both Services' selectors, so roughly half of the requests to the front end get load-balanced to the backend pod, which has no route for them and returns 404. Make sure commonLabels gives the frontend and backend different app values.
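For illustration, the fix boils down to something like this in the two overlay kustomization.yaml files (paths and label values are placeholders):

# frontend overlay kustomization.yaml (illustrative)
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base/frontend
commonLabels:
  app: ourapp-fe                  # unique to the front end

# backend overlay kustomization.yaml (illustrative)
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base/backend
commonLabels:
  app: ourapp-be-custom-mount     # unique to the backend

After redeploying, kubectl describe service on the front-end Service should show a selector and endpoints that include only the front-end pod.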

-- Adam Wise
Source: StackOverflow