K8s pod unschedulable: x node(s) had volume node affinity conflict

11/13/2018

This question is similar to "Kubernetes Pod Warning: 1 node(s) had volume node affinity conflict". However, I want to add a bit more detail about my particular situation.

I am attempting to use the mongodb helm chart.
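
For context, the chart was installed roughly like this (Helm 2 syntax; the release name, namespace, and persistence size are assumptions inferred from the resource names and capacities shown below):

> helm install stable/mongodb --name mongodb --namespace mongodb --set persistence.size=20Gi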

I have created a PersistentVolume to back the PersistentVolumeClaim that the chart creates for the pod.

> kubectl describe pv/mongo-store-01
Name:              mongo-store-01
Labels:            <none>
Annotations:       field.cattle.io/creatorId=user-crk5v
                   pv.kubernetes.io/bound-by-controller=yes
Finalizers:        [kubernetes.io/pv-protection]
StorageClass:
Status:            Bound
Claim:             mongodb/mongodb-mongodb
Reclaim Policy:    Retain
Access Modes:      RWO
Capacity:          20Gi
Node Affinity:
  Required Terms:
    Term 0:        hostname in [myhostname]
Message:
Source:
    Type:          HostPath (bare host directory volume)
    Path:          /k8s/volumes/mongo-store-01
    HostPathType:  DirectoryOrCreate
Events:            <none>
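
For reference, the PV was created from a manifest roughly equivalent to this sketch, reconstructed from the describe output above (the node affinity key and value are copied exactly as kubectl reports them):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: mongo-store-01
spec:
  capacity:
    storage: 20Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: ""
  hostPath:
    path: /k8s/volumes/mongo-store-01
    type: DirectoryOrCreate
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: hostname        # key exactly as reported by "kubectl describe pv"
              operator: In
              values:
                - myhostname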

When the chart is deployed, the MongoDB PVC appears to bind correctly to that PV.

> kubectl -n mongodb describe pvc
Name:          mongodb-mongodb
Namespace:     mongodb
StorageClass:
Status:        Bound
Volume:        mongo-store-01
Labels:        io.cattle.field/appId=mongodb
Annotations:   kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"mongodb-mongodb","namespace":"mongodb"},"spec":{"accessModes":["...
               pv.kubernetes.io/bind-completed=yes
               pv.kubernetes.io/bound-by-controller=yes
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:      20Gi
Access Modes:  RWO
Events:        <none>
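
The PVC itself is created by the chart; based on the describe output above it corresponds to roughly this spec (the requested size is inferred from the reported capacity and may differ from what the chart actually requests):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mongodb-mongodb
  namespace: mongodb
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi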

However, the pod fails to schedule, citing a volume node affinity conflict, and I am not sure what is causing it.

> kubectl -n mongodb describe pod
Name:           mongodb-mongodb-7b797bb485-b985x
Namespace:      mongodb
Node:           <none>
Labels:         app=mongodb
                pod-template-hash=3635366041
Annotations:    <none>
Status:         Pending
IP:
Controlled By:  ReplicaSet/mongodb-mongodb-7b797bb485
Containers:
  mongodb-mongodb:
    Image:      mongo:3.6.5
    Port:       27017/TCP
    Host Port:  0/TCP
    Requests:
      cpu:      100m
      memory:   256Mi
    Liveness:   exec [mongo --eval db.adminCommand('ping')] delay=30s timeout=5s period=10s #success=1 #failure=6
    Readiness:  exec [mongo --eval db.adminCommand('ping')] delay=5s timeout=5s period=10s #success=1 #failure=6
    Environment:
      MONGODB_EXTRA_FLAGS:
    Mounts:
      /etc/mongo/mongod.conf from config (rw)
      /var/lib/mongo from data (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-lsnv7 (ro)
Conditions:
  Type           Status
  PodScheduled   False
Volumes:
  config:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      mongodb-mongodb
    Optional:  false
  data:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  mongodb-mongodb
    ReadOnly:   false
  default-token-lsnv7:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-lsnv7
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason            Age                   From               Message
  ----     ------            ----                  ----               -------
  Warning  FailedScheduling  33s (x3596 over 25m)  default-scheduler  0/24 nodes are available: 21 node(s) had volume node affinity conflict, 3 node(s) had taints that the pod didn't tolerate.

Why is the scheduler failing with a volume node affinity conflict, despite the PVC being correctly bound to the provided PV?
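
For reference, the comparison that seems relevant here is between the PV's required node affinity and the labels actually present on the intended node; something like the following would show both (node name as above):

> kubectl get pv mongo-store-01 -o jsonpath='{.spec.nodeAffinity}'
> kubectl get node myhostname --show-labels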

-- MirroredFate
docker-volume
kubernetes
kubernetes-pvc
