When I try to deploy my microservices locally, I get an error regarding volumes. I've trimmed away all the other configs and included only the troublesome portion here.
Persistent Volume:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: service-1-db-pv
spec:
  capacity:
    storage: 250Mi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  storageClassName: ''
  hostPath:
    path: /mnt/wsl/service-1-pv
    type: DirectoryOrCreate
Persistent Volume Claim:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: service-1-db-pvc
spec:
  volumeName: service-1-db-pv
  resources:
    requests:
      storage: 250Mi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  storageClassName: ''
Service:

apiVersion: v1
kind: Service
metadata:
  name: service-service-1-db
spec:
  selector:
    app: service-1-db
  ports:
    - protocol: TCP
      port: 27017
      targetPort: 27017
Deployment:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment-service-1-db
spec:
  selector:
    matchLabels:
      app: service-1-db
  template:
    metadata:
      labels:
        app: service-1-db
    spec:
      containers:
        - name: service-1-db
          image: mongo:latest
          volumeMounts:
            - name: service-1-db-volume
              mountPath: /data/db
          resources:
            requests:
              cpu: 250m
              memory: 128Mi
            limits:
              cpu: 1000m
              memory: 256Mi
      volumes:
        - name: service-1-db-volume
          persistentVolumeClaim:
            claimName: service-1-db-pvc
When I run skaffold run --tail, I get the following output:
Starting deploy...
- persistentvolume/service-1-db-pv created
- persistentvolumeclaim/service-1-db-pvc created
- service/service-service-1-db created
- deployment.apps/deployment-service-1-db created
Waiting for deployments to stabilize...
- deployment/deployment-service-1-db: 0/1 nodes are available: 1 pod has unbound immediate PersistentVolumeClaims.
- pod/deployment-service-1-db-6f9b896485-mv8qx: 0/1 nodes are available: 1 pod has unbound immediate PersistentVolumeClaims.
- deployment/deployment-service-1-db is ready.
Deployments stabilized in 22.23 seconds
The "pod has unbound PVC" message means the PersistentVolumeClaim associated with your Pod is ... not bound yet. That is, your volume provisioner is probably still waiting for confirmation that the corresponding volume was created before marking your PVC as bound.
Since your last log line says the deployment is ready, there isn't much to worry about.
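To double-check, you can inspect the claim itself (resource names taken from your manifests; the Events section at the bottom of describe shows the binding history):

```shell
# Show the claim's current phase (should read "Bound" once binding completed)
kubectl get pvc service-1-db-pvc

# Show details, including recent events about binding/provisioning
kubectl describe pvc service-1-db-pvc
```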
One thing you could check is your StorageClass's volumeBindingMode:
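For example, a minimal StorageClass for statically provisioned local volumes might look like this (the name and provisioner here are illustrative assumptions, not taken from your setup):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage                        # hypothetical name
provisioner: kubernetes.io/no-provisioner    # no dynamic provisioning (e.g. hostPath/local PVs)
volumeBindingMode: Immediate                 # bind PVCs as soon as they are created
```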
Although if you're creating your PVC and Deployment at more or less the same time, this won't change much.
There's nothing critical here. Although if such errors persist, something may be wrong with your volume provisioner, or even more likely, with your storage provider. E.g., with Ceph, when you're missing monitors, you won't be able to create new volumes - though you may still read/write existing ones.
edit, answering your comment:
There isn't much that can be done.
First: make sure your StorageClass's volumeBindingMode is set to Immediate -- otherwise, no provisioning happens until you create a Pod attaching that volume.
Next, you could look into the Operator SDK, or anything else that can query the API (Ansible, Python, ... a shell script), and implement something that waits for your PVC status to confirm provisioning succeeded.
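As a simple sketch of that wait, you can do it from the command line (PVC name taken from your manifests; the jsonpath form of kubectl wait needs a reasonably recent kubectl):

```shell
# Block until the PVC reports phase=Bound, or time out after two minutes
kubectl wait --for=jsonpath='{.status.phase}'=Bound pvc/service-1-db-pvc --timeout=120s

# Fallback polling loop for older kubectl versions
until [ "$(kubectl get pvc service-1-db-pvc -o jsonpath='{.status.phase}')" = "Bound" ]; do
  sleep 2
done
```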
Then again, there's no guarantee your deployment will always be applied to clusters that offer Immediate volume binding. And there's nothing wrong with WaitForFirstConsumer -- on larger clusters, with lots of users who don't necessarily clean up their objects, it's not unusual.
The events you mention arguably are not errors. Even with Immediate binding, it's perfectly normal for the Pod controller to wait for volumes to be properly registered and ready to use.