Kubernetes version: 1.13.4 (same problem on 1.13.2).
I self-host the cluster on DigitalOcean.
OS: CoreOS 2023.4.0
I have two volumes on one node:
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
---
kind: PersistentVolume
apiVersion: v1
metadata:
  name: prometheus-pv-volume
  labels:
    type: local
    name: prometheus-pv-volume
spec:
  storageClassName: local-storage
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  volumeMode: Filesystem
  hostPath:
    path: "/prometheus-volume"
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: node-role.kubernetes.io/monitoring
              operator: Exists
---
kind: PersistentVolume
apiVersion: v1
metadata:
  name: grafana-pv-volume
  labels:
    type: local
    name: grafana-pv-volume
spec:
  storageClassName: local-storage
  capacity:
    storage: 1Gi
  persistentVolumeReclaimPolicy: Retain
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/grafana-volume"
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: node-role.kubernetes.io/monitoring
              operator: Exists
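As an aside on the manifests themselves (not related to the warning below, since neither plugin supports volume expansion in 1.13): a hostPath PV pinned with required nodeAffinity is essentially what the local volume source (beta in 1.13) expresses directly. A sketch of the Prometheus PV rewritten that way, assuming the same path and node label:

# Hypothetical alternative to the hostPath PV above; same data, "local" source.
kind: PersistentVolume
apiVersion: v1
metadata:
  name: prometheus-pv-volume
  labels:
    type: local
    name: prometheus-pv-volume
spec:
  storageClassName: local-storage
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  volumeMode: Filesystem
  local:
    path: "/prometheus-volume"
  nodeAffinity:          # required for local volumes
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: node-role.kubernetes.io/monitoring
              operator: Exists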
And two PVCs that use them, on the same node. Here is one of them (the storage section from the Prometheus spec):
storage:
  volumeClaimTemplate:
    spec:
      storageClassName: local-storage
      selector:
        matchLabels:
          name: prometheus-pv-volume
      resources:
        requests:
          storage: 100Gi
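For completeness, a minimal sketch of the second claim (for Grafana), a plain PersistentVolumeClaim; the name, namespace, and size match the kubectl output below, and the label selector is an assumption mirroring the Prometheus claim:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: grafana-storage
  namespace: monitoring
spec:
  storageClassName: local-storage
  selector:
    matchLabels:
      name: grafana-pv-volume   # assumed: selects the PV by its name label
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi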
Everything works fine.
kubectl get pv --all-namespaces
output:
NAME                   CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                                           STORAGECLASS    REASON   AGE
grafana-pv-volume      1Gi        RWO            Retain           Bound    monitoring/grafana-storage                      local-storage            16m
prometheus-pv-volume   100Gi      RWO            Retain           Bound    monitoring/prometheus-k8s-db-prometheus-k8s-0   local-storage            16m
kubectl get pvc --all-namespaces
output:
NAMESPACE    NAME                                 STATUS   VOLUME                 CAPACITY   ACCESS MODES   STORAGECLASS    AGE
monitoring   grafana-storage                      Bound    grafana-pv-volume      1Gi        RWO            local-storage   10m
monitoring   prometheus-k8s-db-prometheus-k8s-0   Bound    prometheus-pv-volume   100Gi      RWO            local-storage   10m
The problem is that I'm getting these log messages from kube-controller-manager every 2 minutes:
W0302 17:16:07.877212 1 plugins.go:845] FindExpandablePluginBySpec(prometheus-pv-volume) -> err:no volume plugin matched
W0302 17:16:07.877164 1 plugins.go:845] FindExpandablePluginBySpec(grafana-pv-volume) -> err:no volume plugin matched
Why do they appear? How can I fix this?
Seems like this is a safe-to-ignore message: the controller manager periodically checks every PV for resize support, and the hostPath plugin does not implement volume expansion, hence the warning. The log line was recently removed (Feb 20) and will not occur in future releases: https://github.com/kubernetes/kubernetes/pull/73901
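Until you run a release that includes that change, you can filter the warning out when reading the logs. A sketch, assuming kube-controller-manager runs as a pod in kube-system (on a self-hosted cluster it may instead be a static pod or a systemd unit; substitute the actual pod name):

# find the controller-manager pod name
kubectl -n kube-system get pods | grep controller-manager

# tail its logs with the known-benign warning filtered out
kubectl -n kube-system logs <controller-manager-pod-name> | grep -v FindExpandablePluginBySpec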