Kubernetes Pod Evicted due to disk pressure

5/7/2020

I have a k8s environment with one master and two worker nodes. On one of the nodes, two pods (call them pod-A and pod-B) were running, and pod-A got evicted due to disk pressure while pod-B kept running on the same node without being evicted. I have checked the node resources (RAM and disk space) and plenty of space is available. I have also checked Docker using "docker system df"; it shows 48% reclaimable space for images and 0% reclaimable for everything else. In the end I removed all the evicted replicas of pod-A, and it is running fine now.
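For reference, these are the kinds of checks that surface disk-pressure evictions (the kubectl/docker commands need a live cluster, so they are shown as comments; the small helper below is my own sketch, using the kubelet's documented default hard threshold of nodefs.available &lt; 10%, i.e. more than 90% used):

```shell
# Cluster-side checks (require a running cluster, shown for reference):
#   kubectl describe nodes | grep -i -A3 DiskPressure
#   kubectl get events --all-namespaces --field-selector reason=Evicted
#   docker system df

# Helper sketch: does a filesystem usage percentage cross the kubelet's
# default hard eviction threshold (nodefs.available < 10%)?
is_disk_pressure() {
  local used_pct=$1            # e.g. 92 for 92% used
  [ "$used_pct" -gt 90 ]
}

is_disk_pressure 92 && echo "disk pressure"
is_disk_pressure 48 || echo "no pressure at 48% used"
```

Note that `docker system df` only reports Docker's own reclaimable space; the kubelet watches the node filesystem as a whole, so logs, hostPath data, and other non-Docker files also count toward disk pressure.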

1) Why did pod-A get evicted while pod-B was running on the same node?

2) Why was pod-A evicted when sufficient resources were available?
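One detail that may be relevant (my reading of Kubernetes node-pressure eviction, not a confirmed diagnosis): both deployments below declare `resources: {}`, so neither pod requests any ephemeral storage. During disk-pressure eviction the kubelet ranks pods by whether their local-storage usage exceeds their requests, and a pod with zero requests exceeds them by definition, so such pods are evicted first; which of two such pods goes first then depends on their relative usage, which would explain one being evicted while its neighbour survives. A hypothetical tweak (not from the original manifests) would be to declare explicit ephemeral-storage requests/limits on the container:

```yaml
# Hypothetical addition, replacing "resources: {}" in the container spec:
        resources:
          requests:
            ephemeral-storage: "1Gi"
          limits:
            ephemeral-storage: "2Gi"
```

With a request in place, the pod's disk usage is accounted against 1Gi instead of zero, so it ranks behind request-less pods in the eviction order.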

apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    kompose.cmd: kompose convert
    kompose.version: 1.17.0 (0c01409)
  creationTimestamp: null
  labels:
    io.kompose.service: zuul
  name: zuul
spec:
  progressDeadlineSeconds: 2145893647
  replicas: 1
  revisionHistoryLimit: 2145893647
  selector:
    matchLabels:
      io.kompose.service: zuul
  strategy:
    type: Recreate
  template:
    metadata:
      creationTimestamp: null
      labels:
        io.kompose.service: zuul
    spec:
      containers:
      - env:
        - name: DATA_DIR
          value: /data/work/
        - name: log_file_path
          value: /data/work/logs/zuul/
        - name: spring_cloud_zookeeper_connectString
          value: zoo_host:5168
        image: repository/zuul:version
        imagePullPolicy: Always
        name: zuul
        ports:
        - containerPort: 9090
          hostPort: 9090
          protocol: TCP
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /data/work/
          name: zuul-claim0
      dnsPolicy: ClusterFirst
      hostNetwork: true
      nodeSelector:
        disktype: node1
      imagePullSecrets:
      - name: regcred
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
      volumes:
      - hostPath:
          path: /opt/DATA_DIR
          type: ""
        name: zuul-claim0
status: {}
---
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    kompose.cmd: kompose convert
    kompose.version: 1.17.0 (0c01409)
  creationTimestamp: null
  labels:
    io.kompose.service: routing
  name: routing
spec:
  progressDeadlineSeconds: 2147483647
  replicas: 1
  revisionHistoryLimit: 2147483647
  selector:
    matchLabels:
      io.kompose.service: routing
  strategy:
    type: Recreate
  template:
    metadata:
      creationTimestamp: null
      labels:
        io.kompose.service: routing
    spec:
      containers:
      - env:
        - name: DATA_DIR
          value: /data/work/
        - name: log_file_path
          value: /data/logs/routing/
        - name: spring_cloud_zookeeper_connectString
          value: zoo_host:5168
        image: repository/routing:version
        imagePullPolicy: Always
        name: routing
        ports:
        - containerPort: 8090
          hostPort: 8090
          protocol: TCP
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /data/work/
          name: routing-claim0
      dnsPolicy: ClusterFirst
      hostNetwork: true
      nodeSelector:
        disktype: node1
      imagePullSecrets:
      - name: regcred
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
      volumes:
      - hostPath:
          path: /opt/DATA_DIR
          type: ""
        name: routing-claim0
status: {}
-- Andy
amazon-ec2
amazon-web-services
kubernetes
kubernetes-pod

0 Answers