I have found an orphaned pod on one node of my Kubernetes cluster. When I try to clean it up, I see the following behavior:
# rm -rf /var/lib/kubelet/pods/a1fce4c0-2f64-11e9-9880-005056aed74b/
rm: cannot remove '/var/lib/kubelet/pods/a1fce4c0-2f64-11e9-9880-005056aed74b/volumes/kubernetes.io~fc/harbor-jobservice-pv': Device or resource busy
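This looks as if something were still mounted there, even though the directory appears empty. As a sketch of one more check, findmnt can report which filesystem the path actually resolves to (if it prints the root filesystem, no separate mount is visible from this shell):
# findmnt --target /var/lib/kubelet/pods/a1fce4c0-2f64-11e9-9880-005056aed74b/volumes/kubernetes.io~fc/harbor-jobservice-pv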
Trying to check what's using it, I get nothing:
# lsof +D /var/lib/kubelet/pods/a1fce4c0-2f64-11e9-9880-005056aed74b/volumes/kubernetes.io~fc/harbor-jobservice-pv
#
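As a cross-check in case lsof misses something, fuser with -m should list every process using the filesystem containing that path (-v makes the output verbose):
# fuser -vm /var/lib/kubelet/pods/a1fce4c0-2f64-11e9-9880-005056aed74b/volumes/kubernetes.io~fc/harbor-jobservice-pv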
Checking the mount points, I also get nothing:
# mount | grep -i a1fce4c0
# cat /proc/mounts | grep -i a1fce4c0
#
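Since /proc/mounts only reflects my shell's mount namespace, the mount could still be pinned in another namespace (for example, one belonging to a container). Searching every process's mountinfo should reveal that:
# grep -l harbor-jobservice-pv /proc/*/mountinfo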
The folder is not a symlink, and ls -a shows it is empty. I was testing the theory that this mount is created by the kubelet, but even after stopping the kubelet I still cannot remove the folder. It does not seem to be a Docker volume either (I inspected the existing volumes with docker volume inspect <volume>).
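To also rule out a container holding the path through a bind mount rather than a named volume, the Mounts section of each running container can be dumped and searched (one possible formulation):
# docker ps -q | xargs docker inspect --format '{{.Id}}: {{json .Mounts}}' | grep harbor-jobservice-pv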
Rebooting the node is not what I'm looking for - I'd like to understand the problem, not just work around it.
Thank you in advance.
EDIT: The persistent volume that was in scope for the orphaned pod is now bound to a new pod:
# kubectl get pv harbor-jobservice-pv
NAME                   CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                          STORAGECLASS   REASON   AGE
harbor-jobservice-pv   5Gi        RWO            Retain           Bound    harbor/harbor-jobservice-pvc
This PV/PVC pair is used by:
# kubectl get pod -n harbor harbor-harbor-jobservice-6b9c8598c8-x5l64 -o yaml
...
metadata:
  uid: bb60bfdd-4c8e-11e9-8b8e-005056aea3a7
...
spec:
  volumes:
  - name: job-logs
    persistentVolumeClaim:
      claimName: harbor-jobservice-pvc
...
So the pod currently using this PV is different from the orphaned pod whose directory is stuck on the node.
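To compare where the two pods live, -o wide adds a NODE column, which should show whether the new pod is scheduled on a different node from the one holding the stuck directory:
# kubectl get pod -n harbor harbor-harbor-jobservice-6b9c8598c8-x5l64 -o wide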