After about two weeks of running Jobs in a namespace with a ResourceQuota, new pods stopped being scheduled due to insufficient quota, but I could not find any running pods:
(⎈ |production:rate-jobs)➜ ~ kubectl get resourcequota -o yaml
apiVersion: v1
items:
- apiVersion: v1
  kind: ResourceQuota
  metadata:
    annotations:
    name: rate-jobs-compute-resources
    namespace: rate-jobs
    resourceVersion: "9644562"
    selfLink: /api/v1/namespaces/rate-jobs/resourcequotas/rate-jobs-compute-resources
    uid: bd2a4c52-0664-11e8-854d-0050568166d0
  spec:
    hard:
      limits.cpu: "4"
      limits.memory: 4Gi
      pods: "2"
  status:
    hard:
      limits.cpu: "4"
      limits.memory: 4Gi
      pods: "2"
    used:
      limits.cpu: "2"
      limits.memory: 4Gi
      pods: "1"
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""
(⎈ |production:rate-jobs)➜ ~ kubectl get pod -n rate-jobs
No resources found.
As the quota status above shows, one pod is reported as using 2 CPU and 4Gi of memory, yet kubectl get pod lists no pods at all.
How can I find out what is actually consuming the quota?
By design, when a Job finishes it stops its Pods but does not delete them. Keeping them around lets you still access the logs of the completed Pods.
Stopped Pods are not shown by a plain kubectl get pods, so you need the -a / --show-all option.
For example:
kubectl get pods -a
Output:
pod-name-xyz01 0/1 Completed 0 11m
To free the quota, you can remove a Job together with its Pods using kubectl delete job <your-job-name>,
or remove only the Pods using kubectl delete pod <pod-name>.
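If many Jobs have piled up, deleting the completed Pods one by one is tedious. As a sketch (untested against a live cluster; it assumes the default kubectl get pods column layout shown above, where STATUS is the third column), you can filter the listing for Completed pods and delete them in one pipeline:

# List all pods including completed ones, keep only those whose STATUS
# column reads "Completed", and delete them by name.
# `xargs -r` makes the delete a no-op when nothing matches.
kubectl get pods -a --no-headers | awk '$3 == "Completed" {print $1}' | xargs -r kubectl delete pod

The awk filter keys on the STATUS column rather than the pod name, so it works regardless of how your Jobs name their pods.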