Error updating Endpoint Slices for Service Node Not Found

8/7/2020

I tried setting up Gitea in my local Kubernetes cluster. At first it was working and I could access the Gitea home page. But after I rebooted my Raspberry Pi, I got the error below on my Service:

Error updating Endpoint Slices for Service dev-ops/gitea-service: node "rpi4-a" not found

My pod is OK.

I'm wondering why I get this error every time I reboot my device.

Here is my configuration:

kind: Service
apiVersion: v1
metadata:
  name: gitea-service
spec:
  type: NodePort
  selector:
    app: gitea
  ports:
  - name: gitea-http
    port: 3000
    targetPort: 3000
    nodePort: 30000
  - name: gitea-ssh
    port: 22
    targetPort: 22
    nodePort: 30002
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: gitea-deployment
  labels:
    app: gitea
spec:
  replicas: 1
  serviceName: gitea-service-headless
  selector:
    matchLabels:
      app: gitea
  template:
    metadata:
      labels:
        app: gitea
    spec:
      containers:
      - name: gitea
        image: gitea/gitea:1.12.2
        ports:
        - containerPort: 3000
          name: gitea
        - containerPort: 22
          name: git-ssh
        volumeMounts:
        - name: pv-data
          mountPath: /data
      volumes:
      - name: pv-data
        persistentVolumeClaim:
          claimName: gitea-pvc
---
apiVersion: v1
kind: Service
metadata:
  name: gitea-service-headless
  labels:
    app: gitea-service-headless
spec:
  clusterIP: None
  ports:
  - port: 3000
    name: gitea-http
    targetPort: 3000
  - port: 22
    name: gitea-ssh
    targetPort: 22
  selector:
    app: gitea
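
For reference, assuming everything is deployed in the dev-ops namespace (the namespace that shows up in the error message), the Service, its EndpointSlices and the pod can be checked with something like:

# Check the NodePort Service, its EndpointSlices and the Gitea pod after a reboot
kubectl get svc gitea-service -n dev-ops
kubectl get endpointslices -n dev-ops -l kubernetes.io/service-name=gitea-service
kubectl get pods -n dev-ops -l app=gitea -o wide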
-- Jayson Gonzaga
gitea
kubernetes

1 Answer

8/10/2020

I'm wondering why I get this error every time I reboot my device.

Well, let's look at the error:

Error updating Endpoint Slices for Service dev-ops/gitea-service: node "rpi4-a" not found

It looks like the error was triggered because node "rpi4-a" was not found. Why is it not found? While the node is rebooting it is down, the pod is not running for a moment, and that is when the Service reports the error. When the node boots up the pod starts working again, but the events stay visible for one hour (by default) before they are automatically deleted.
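
If you want to see this yourself while the events are still retained, something like the following should show the warning (using the dev-ops namespace from your error message):

# List recent events in the namespace, oldest first; the EndpointSlice warning should be among them
kubectl get events -n dev-ops --sort-by=.metadata.creationTimestamp
# Or look at the events attached to the Service itself
kubectl describe service gitea-service -n dev-ops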

So don't worry about it. You rebooted the node, so you should expect some errors to appear. Kubernetes tries as hard as it can to keep everything running, and when you trigger a reboot without draining the node first, a few transient errors like this are normal.
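
If you want a cleaner reboot next time, one option is to drain the node before restarting it and uncordon it once it is back up, roughly like this (rpi4-a is the node name from your error message):

# Evict workloads and mark the node unschedulable (run from a machine with kubectl access)
kubectl drain rpi4-a --ignore-daemonsets
# Reboot the Raspberry Pi itself
sudo reboot
# Once the node is back, allow pods to be scheduled on it again
kubectl uncordon rpi4-a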

-- Matt
Source: StackOverflow