Didn't find persistent volumes to bind when attempting to assign local storage on a Pi

11/20/2020

It works on my Mac k8s instance, but not on my Raspberry Pi instance. Essentially, I'm trying to set up a k8s implementation of Pi-hole so that I can monitor it and keep it containerized, as opposed to running it outside the scope of the application. Ideally, I'm trying to containerize everything for cleanliness.

I am running a 2-node cluster of Raspberry Pi 4s, 4 GB RAM each.

When I apply the following file on my Mac it builds correctly, but on the Pi, named master-pi, it fails:

Events:
  Type     Reason            Age   From               Message
  ----     ------            ----  ----               -------
  Warning  FailedScheduling  44m   default-scheduler  0/2 nodes are available: 1 node(s) didn't find available persistent volumes to bind, 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.
  Warning  FailedScheduling  44m   default-scheduler  0/2 nodes are available: 1 node(s) didn't find available persistent volumes to bind, 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.
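
For completeness, this is roughly how I have been inspecting the two conditions the scheduler mentions, using standard kubectl commands:

# Show any taints on the master node (the second half of the message)
kubectl describe node master-pi | grep -i taints

# Show PVs and PVCs with their binding status (the first half)
kubectl get pv,pvc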

The YAML I applied seemed pretty simple:

---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pihole-local-etc-volume
  labels:
    directory: etc
spec:
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  storageClassName: local
  local:
    path: /home/pi/Documents/pihole/etc # Location on the host where the data will live.
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - master-pi # docker-desktop on the Mac; hostname where the volume lives.
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pihole-local-etc-claim
spec:
  storageClassName: local
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi # Possibly update to 2Gi later.
  selector:
    matchLabels:
      directory: etc
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pihole-local-dnsmasq-volume
  labels:
    directory: dnsmasq.d
spec:
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  storageClassName: local
  local:
    path: /home/pi/Documents/pihole/dnsmasq # Location on the host where the data will live.
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - master-pi # docker-desktop on the Mac; hostname where the volume lives.
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pihole-local-dnsmasq-claim
spec:
  storageClassName: local
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 500Mi
  selector:
    matchLabels:
      directory: dnsmasq.d
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pihole
  labels:
    app: pihole
spec:
  replicas: 1
  selector:
    matchLabels:
      app: pihole
  template:
    metadata:
      labels:
        app: pihole
        name: pihole
    spec:
      containers:
      - name: pihole
        image: pihole/pihole:latest
        imagePullPolicy: Always
        env:
        - name: TZ
          value: "America/New_York"
        - name: WEBPASSWORD
          value: "secret"
        volumeMounts:
        - name: pihole-local-etc-volume
          mountPath: "/etc/pihole"
        - name: pihole-local-dnsmasq-volume
          mountPath: "/etc/dnsmasq.d"
      volumes:
      - name: pihole-local-etc-volume
        persistentVolumeClaim:
          claimName: pihole-local-etc-claim
      - name: pihole-local-dnsmasq-volume
        persistentVolumeClaim:
          claimName: pihole-local-dnsmasq-claim
---
apiVersion: v1
kind: Service
metadata:
  name: pihole
spec:
  selector:
    app: pihole
  ports:
  - port: 8000
    targetPort: 80
    name: pihole-admin
  - port: 53
    targetPort: 53
    protocol: TCP
    name: dns-tcp
  - port: 53
    targetPort: 53
    protocol: UDP
    name: dns-udp
  externalIPs:
  - 192.168.10.75 #Static IP I need to assign for the network.

Other notes: I made sure to create the folders beforehand, and both are chmod 777. df produces:

pi@master-pi:~/Documents/pihole$ df
Filesystem     1K-blocks     Used Available Use% Mounted on
tmpfs             383100     5772    377328   2% /run
/dev/mmcblk0p2  30450144 14283040  14832268  50% /
tmpfs            1915492        0   1915492   0% /dev/shm
tmpfs               5120        4      5116   1% /run/lock
tmpfs               4096        0      4096   0% /sys/fs/cgroup
/dev/mmcblk0p1    258095   147696    110399  58% /boot/firmware
tmpfs             383096      116    382980   1% /run/user/1000

So I believe the location ( /home/pi/Documents/pihole/etc ) has the space needed: the volume is just 1Gi, and the filesystem looks about half full, with ~15G available.

I can give more information, but I am just confused as to why this is happening.

-- Fallenreaper
kubernetes
raspberry-pi
raspberry-pi4

1 Answer

11/25/2020

There were two things to learn here.

  1. Master nodes do not get pods scheduled on them by default; they have enough going on just coordinating the cluster. That said, a single-node cluster is both a Master and a Worker, whereas with 2 or more nodes, one is the Master and the rest are Workers. (See the taint workarounds sketched after this list.)

  2. Assigning a path like /hello/world for the volume will not create that path automatically on the host, which is actually REALLY annoying: if you have N pods, you need ALL nodes to have that path, in case a pod is scheduled to a different one. The master determines where things go, so if it hands the pod to a node that can't provide the path, you get a backoff error. It is best to create the path on all nodes.
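
On the first point, a sketch of the two usual workarounds, assuming a kubeadm-style cluster where the taint from the scheduler message is node-role.kubernetes.io/master with effect NoSchedule (the default on a kubeadm master): either remove the taint, or add a matching toleration to the pod.

# Option A: allow normal scheduling on the master by removing its taint
kubectl taint nodes master-pi node-role.kubernetes.io/master-

# Option B: keep the taint, but let this pod tolerate it
# (goes under spec.template.spec in the Deployment above)
tolerations:
- key: node-role.kubernetes.io/master
  operator: Exists
  effect: NoSchedule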

The key takeaway is that you would expect the cluster (master or otherwise) to auto-create node paths, which just isn't true. One would think that since it runs with root privileges it should be able to say "mount this here", but it doesn't. I have to manually configure each node to have the paths the volumes consume, or I get provisioning errors.

If I need to spin up MORE nodes ad hoc, I need to ensure they are all provisioned accordingly, such as adding this particular path; you will need to add that to your own setup routine.
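
As a minimal sketch of that routine, using the same paths and permissions as the manifests in the question, every node that might host the pod would need:

# Create the backing directories for the local PVs on each node
mkdir -p /home/pi/Documents/pihole/etc /home/pi/Documents/pihole/dnsmasq
chmod 777 /home/pi/Documents/pihole/etc /home/pi/Documents/pihole/dnsmasq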

You can read more about hostPath for Volumes here: https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/#create-a-persistentvolume

Articles state that hostPath is good for single-node clusters, but when dealing with production or more than one node, you should use NFS or some other networked storage mechanism.
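
As a rough sketch of what that could look like with NFS (the server address and export path below are placeholders, not from my setup):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pihole-nfs-etc-volume
spec:
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteMany            # NFS supports multiple nodes mounting at once
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: 192.168.10.100   # placeholder NFS server on the LAN
    path: /exports/pihole    # placeholder export path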

Something else that would help is using StorageClasses for dynamic provisioning, which is what I personally wanted in the first place: https://kubernetes.io/blog/2016/10/dynamic-provisioning-and-storage-in-kubernetes/

It talks about how you define storage classes, as well as how to request a storage size of 30Gi, for example; the claim alone is then used instead of a hand-made PV. It is late, but I will attempt to write up a similar example for the original question.
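
As a rough illustration of that pattern, assuming a StorageClass named fast backed by a dynamic provisioner (the name and size come from the linked post, not from my cluster), the claim is all you write:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pihole-dynamic-claim
spec:
  storageClassName: fast   # placeholder class backed by a dynamic provisioner
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 30Gi        # the example size from the blog post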

-- Fallenreaper
Source: StackOverflow