My PVCs are stuck in the Pending state.
kubectl describe pvc project-s3-pvc
gives:
Name:          project-s3-pvc
Namespace:     default
StorageClass:  gp2
Status:        Pending
Volume:
Labels:        <none>
Annotations:   kubectl.kubernetes.io/last-applied-configuration:
                 {"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"project-s3-pvc","namespace":"default"},"spec":{"ac...
               volume.beta.kubernetes.io/storage-provisioner: kubernetes.io/aws-ebs
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
Events:        <none>
Mounted By:    project-s3-86ccd56868-skvv5
kubectl get storageclass
gives:
NAME            PROVISIONER             AGE
default         kubernetes.io/aws-ebs   1h
gp2 (default)   kubernetes.io/aws-ebs   1h
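Since two classes are listed and gp2 is marked as the default, it is worth double-checking which class carries the default annotation and what parameters it uses (a debugging step run against your own cluster; in 1.11-era clusters the annotation may be the beta variant):

```
kubectl get storageclass gp2 -o yaml
# Look for the annotation:
#   storageclass.kubernetes.io/is-default-class: "true"
# (or storageclass.beta.kubernetes.io/is-default-class on older clusters)
```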
I am running a 1-node cluster started by kops:
kops create cluster --node-count 0 --zones eu-west-1a ${NAME} --master-size t2.large
# Change size from 2 to 0, since the node-count above seems to be ignored
kops edit ig --name=${NAME} nodes
kops edit cluster ${NAME}
# Add this to the cluster specification:
iam:
  allowContainerRegistry: true
  legacy: false
kops update cluster ${NAME} --yes
kubectl taint nodes --all node-role.kubernetes.io/master-
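On a master-only cluster it is worth confirming that the taint removal actually worked and the single node is schedulable before creating PVCs; a quick check (assumes a working kubeconfig for this cluster):

```
kubectl get nodes
kubectl describe nodes | grep -i taints
```

An untainted master should report Taints: <none>.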
Then I add PVCs e.g.:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: bomcheck-s3-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 30Gi
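Since the cluster reports two storage classes, it may also help to rule out default-class ambiguity by pinning the class explicitly on the claim; a minimal variant of the claim above with storageClassName added:

```
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: bomcheck-s3-pvc
spec:
  storageClassName: gp2   # pin the class instead of relying on the default
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 30Gi
```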
Kops version: Version 1.11.0 (git-2c2042465)
EDIT: When I try to create a PV manually:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-manual
spec:
  accessModes:
    - ReadWriteOnce
  awsElasticBlockStore:
    fsType: ext4
  persistentVolumeReclaimPolicy: Delete
  storageClassName: gp2
  capacity:
    storage: 30Gi
I am getting: ValidationError(PersistentVolume.spec.awsElasticBlockStore): missing required field "volumeID" in io.k8s.api.core.v1.AWSElasticBlockStoreVolumeSource
Does this mean I need to create the volume ahead of time in AWS manually? I would like the volume to be dynamically provisioned.
Any idea how to debug why the PVC/PV can't be provisioned on my behalf in AWS?
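For debugging dynamic provisioning failures, two places usually surface the error even when kubectl describe pvc shows no events: namespace events, and the kube-controller-manager logs (it runs the in-tree kubernetes.io/aws-ebs provisioner). A sketch; the k8s-app label is assumed from typical kops control-plane manifests:

```
kubectl get events --all-namespaces | grep -i provision
kubectl -n kube-system logs -l k8s-app=kube-controller-manager | grep -i -e ebs -e provision
```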
The problem does seem to be related to the fact that I am running just one master node. When I start the cluster using kops create cluster --zones eu-west-1a ${NAME} --master-size t2.large, it starts 1 master and 2 nodes, and the problem doesn't appear.
I am not sure what the root cause is, since there isn't anything preventing a single node from having external EBS volumes attached. This might be a bug in kops itself, since a master-only cluster is the exception rather than the rule.
As you can see, there is no Volume: bound to your PVC. That means automatic volume provisioning failed, and the PV that your dynamic provisioning configuration should have created does not exist. On a healthy claim you would see the created PV's name in the Volume: section of the describe output. Unfortunately, there are no logs or events here that show the problem.
For troubleshooting, I would suggest creating a storage class manually with gp2 using the YAML below, then setting storageClassName on your PVC YAML file accordingly:
storageclass-manual-gp2.yaml:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gp2-manual
parameters:
  type: gp2
provisioner: kubernetes.io/aws-ebs
reclaimPolicy: Delete
volumeBindingMode: Immediate
Apply:
kubectl apply -f storageclass-manual-gp2.yaml
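The answer says to set storageClassName on your PVC but doesn't include the claim itself; a minimal PVC bound to the new class, named to match the pvc-gp2-manual used in the describe step (size copied from the question), might look like:

```
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-gp2-manual
spec:
  storageClassName: gp2-manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 30Gi
```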
pv-manual.yaml:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-manual
spec:
  accessModes:
    - ReadWriteOnce
  awsElasticBlockStore:
    fsType: ext4
    volumeID: <ebs-volume-id>   # required field: the ID of an existing EBS volume
  capacity:
    storage: 30Gi
  persistentVolumeReclaimPolicy: Delete
  storageClassName: gp2-manual  # your new storage class name
Apply:
kubectl apply -f pv-manual.yaml
Describe PVC:
kubectl describe pvc pvc-gp2-manual
If that doesn't succeed, I would also suggest trying a different EBS storage type.
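As a sketch of what a different storage type could look like, here is a class using io1 with the in-tree provisioner (class name and iopsPerGB value are arbitrary examples):

```
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: io1-manual
parameters:
  type: io1
  iopsPerGB: "10"   # provisioned IOPS per GiB for io1 volumes
provisioner: kubernetes.io/aws-ebs
reclaimPolicy: Delete
```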