I am trying to use the cinder plugin for Kubernetes to create both statically defined PVs and StorageClasses, but I see no activity between my cluster and cinder for creating/mounting the devices.
Kubernetes Version:
kubectl version
Client Version: version.Info{Major:"1", Minor:"4", GitVersion:"v1.4.1", GitCommit:"33cf7b9acbb2cb7c9c72a10d6636321fb180b159", GitTreeState:"clean", BuildDate:"2016-10-10T18:19:49Z", GoVersion:"go1.6.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"4", GitVersion:"v1.4.1", GitCommit:"33cf7b9acbb2cb7c9c72a10d6636321fb180b159", GitTreeState:"clean", BuildDate:"2016-10-10T18:13:36Z", GoVersion:"go1.6.3", Compiler:"gc", Platform:"linux/amd64"}
The command the kubelet was started with, and its status:
systemctl status kubelet -l
● kubelet.service - Kubelet service
Loaded: loaded (/etc/systemd/system/kubelet.service; enabled; vendor preset: disabled)
Active: active (running) since Thu 2016-10-20 07:43:07 PDT; 3h 53min ago
Process: 2406 ExecStartPre=/usr/local/bin/install-kube-binaries (code=exited, status=0/SUCCESS)
Process: 2400 ExecStartPre=/usr/local/bin/create-certs (code=exited, status=0/SUCCESS)
Main PID: 2408 (kubelet)
CGroup: /system.slice/kubelet.service
├─2408 /usr/local/bin/kubelet --pod-manifest-path=/etc/kubernetes/manifests --api-servers=https://172.17.0.101:6443 --logtostderr=true --v=12 --allow-privileged=true --hostname-override=jk-kube2-master --pod-infra-container-image=pause-amd64:3.0 --cluster-dns=172.31.53.53 --cluster-domain=occloud --cloud-provider=openstack --cloud-config=/etc/cloud.conf
Here is my cloud.conf file:
# cat /etc/cloud.conf
[Global]
username=<user>
password=XXXXXXXX
auth-url=http://<openStack URL>:5000/v2.0
tenant-name=Shadow
region=RegionOne
It appears that k8s is able to communicate successfully with openstack. From /var/log/messages:
kubelet: I1020 11:43:51.770948 2408 openstack_instances.go:41] openstack.Instances() called
kubelet: I1020 11:43:51.836642 2408 openstack_instances.go:78] Found 39 compute flavors
kubelet: I1020 11:43:51.836679 2408 openstack_instances.go:79] Claiming to support Instances
kubelet: I1020 11:43:51.836688 2408 openstack_instances.go:124] NodeAddresses(jk-kube2-master) called
kubelet: I1020 11:43:52.274332 2408 openstack_instances.go:131] NodeAddresses(jk-kube2-master) => [{InternalIP 172.17.0.101} {ExternalIP 10.75.152.101}]
My PV/PVC yaml files, and cinder list output:
# cat persistentVolume.yaml
kind: PersistentVolume
apiVersion: v1
metadata:
  name: jk-test
  labels:
    type: test
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  cinder:
    volumeID: 48d2d1e6-e063-437a-855f-8b62b640a950
    fsType: ext4
# cat persistentVolumeClaim.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: myclaim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  selector:
    matchLabels:
      type: "test"
# cinder list | grep jk-cinder
| 48d2d1e6-e063-437a-855f-8b62b640a950 | available | jk-cinder | 10 | - | false |
As seen above, cinder reports that the volume with the ID referenced in the pv.yaml file is available. When I create the PV and PVC, things seem to work:
NAME CAPACITY ACCESSMODES RECLAIMPOLICY STATUS CLAIM REASON AGE
pv/jk-test 10Gi RWO Retain Bound default/myclaim 5h
NAME STATUS VOLUME CAPACITY ACCESSMODES AGE
pvc/myclaim Bound jk-test 10Gi RWO 5h
Then I try to create a pod using the pvc, but it fails to mount the volume:
# cat testPod.yaml
kind: Pod
apiVersion: v1
metadata:
  name: jk-test3
  labels:
    name: jk-test
spec:
  containers:
    - name: front-end
      image: example-front-end:latest
      ports:
        - hostPort: 6000
          containerPort: 3000
  volumes:
    - name: jk-test
      persistentVolumeClaim:
        claimName: myclaim
And here is the state of the pod:
3h 46s 109 {kubelet jk-kube2-master} Warning FailedMount Unable to mount volumes for pod "jk-test3_default(0f83368f-96d4-11e6-8243-fa163ebfcd23)": timeout expired waiting for volumes to attach/mount for pod "jk-test3"/"default". list of unattached/unmounted volumes=[jk-test]
3h 46s 109 {kubelet jk-kube2-master} Warning FailedSync Error syncing pod, skipping: timeout expired waiting for volumes to attach/mount for pod "jk-test3"/"default". list of unattached/unmounted volumes=[jk-test]
I've verified that my OpenStack provider exposes the cinder v1 and v2 APIs, and the earlier openstack_instances logs show the nova API is accessible. Despite that, I never see any attempt by k8s to communicate with cinder or nova to mount the volume.
Here are what I think are the relevant log messages regarding the failure to mount:
kubelet: I1020 06:51:11.840341 24027 desired_state_of_world_populator.go:323] Extracted volumeSpec (0x23a45e0) from bound PV (pvName "jk-test") and PVC (ClaimName "default"/"myclaim" pvcUID 51919dfb-96c9-11e6-8243-fa163ebfcd23)
kubelet: I1020 06:51:11.840424 24027 desired_state_of_world_populator.go:241] Added volume "jk-test" (volSpec="jk-test") for pod "f957f140-96cb-11e6-8243-fa163ebfcd23" to desired state.
kubelet: I1020 06:51:11.840474 24027 desired_state_of_world_populator.go:241] Added volume "default-token-js40f" (volSpec="default-token-js40f") for pod "f957f140-96cb-11e6-8243-fa163ebfcd23" to desired state.
kubelet: I1020 06:51:11.896176 24027 reconciler.go:201] Attempting to start VerifyControllerAttachedVolume for volume "kubernetes.io/cinder/48d2d1e6-e063-437a-855f-8b62b640a950" (spec.Name: "jk-test") pod "f957f140-96cb-11e6-8243-fa163ebfcd23" (UID: "f957f140-96cb-11e6-8243-fa163ebfcd23")
kubelet: I1020 06:51:11.896330 24027 reconciler.go:225] VerifyControllerAttachedVolume operation started for volume "kubernetes.io/cinder/48d2d1e6-e063-437a-855f-8b62b640a950" (spec.Name: "jk-test") pod "f957f140-96cb-11e6-8243-fa163ebfcd23" (UID: "f957f140-96cb-11e6-8243-fa163ebfcd23")
kubelet: I1020 06:51:11.896361 24027 reconciler.go:201] Attempting to start VerifyControllerAttachedVolume for volume "kubernetes.io/secret/f957f140-96cb-11e6-8243-fa163ebfcd23-default-token-js40f" (spec.Name: "default-token-js40f") pod "f957f140-96cb-11e6-8243-fa163ebfcd23" (UID: "f957f140-96cb-11e6-8243-fa163ebfcd23")
kubelet: I1020 06:51:11.896390 24027 reconciler.go:225] VerifyControllerAttachedVolume operation started for volume "kubernetes.io/secret/f957f140-96cb-11e6-8243-fa163ebfcd23-default-token-js40f" (spec.Name: "default-token-js40f") pod "f957f140-96cb-11e6-8243-fa163ebfcd23" (UID: "f957f140-96cb-11e6-8243-fa163ebfcd23")
kubelet: I1020 06:51:11.896420 24027 config.go:98] Looking for [api file], have seen map[file:{} api:{}]
kubelet: E1020 06:51:11.896566 24027 nestedpendingoperations.go:253] Operation for "\"kubernetes.io/cinder/48d2d1e6-e063-437a-855f-8b62b640a950\"" failed. No retries permitted until 2016-10-20 06:53:11.896529189 -0700 PDT (durationBeforeRetry 2m0s). Error: Volume "kubernetes.io/cinder/48d2d1e6-e063-437a-855f-8b62b640a950" (spec.Name: "jk-test") pod "f957f140-96cb-11e6-8243-fa163ebfcd23" (UID: "f957f140-96cb-11e6-8243-fa163ebfcd23") has not yet been added to the list of VolumesInUse in the node's volume status.
Is there a piece I am missing? I've followed the instructions in the k8s mysql-cinder-pd example, but haven't been able to get any communication. As another data point, I tried defining a StorageClass as provided by k8s; here are the associated StorageClass and PVC files:
# cat cinderStorage.yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  name: gold
provisioner: kubernetes.io/cinder
parameters:
  availability: nova
# cat dynamicPVC.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: dynamicclaim
  annotations:
    volume.beta.kubernetes.io/storage-class: "gold"
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi
The StorageClass reports success, but when I try to create the PVC it gets stuck in the 'pending' state and reports 'no volume plugin matched':
# kubectl get storageclass
NAME TYPE
gold kubernetes.io/cinder
# kubectl describe pvc dynamicclaim
Name: dynamicclaim
Namespace: default
Status: Pending
Volume:
Labels: <none>
Capacity:
Access Modes:
Events:
FirstSeen LastSeen Count From SubobjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
1d 15s 5867 {persistentvolume-controller } Warning ProvisioningFailed no volume plugin matched
This contradicts what's in the logs for plugins that were loaded:
grep plugins /var/log/messages
kubelet: I1019 11:39:41.382517 22435 plugins.go:56] Registering credential provider: .dockercfg
kubelet: I1019 11:39:41.382673 22435 plugins.go:355] Loaded volume plugin "kubernetes.io/aws-ebs"
kubelet: I1019 11:39:41.382685 22435 plugins.go:355] Loaded volume plugin "kubernetes.io/empty-dir"
kubelet: I1019 11:39:41.382691 22435 plugins.go:355] Loaded volume plugin "kubernetes.io/gce-pd"
kubelet: I1019 11:39:41.382698 22435 plugins.go:355] Loaded volume plugin "kubernetes.io/git-repo"
kubelet: I1019 11:39:41.382705 22435 plugins.go:355] Loaded volume plugin "kubernetes.io/host-path"
kubelet: I1019 11:39:41.382712 22435 plugins.go:355] Loaded volume plugin "kubernetes.io/nfs"
kubelet: I1019 11:39:41.382718 22435 plugins.go:355] Loaded volume plugin "kubernetes.io/secret"
kubelet: I1019 11:39:41.382725 22435 plugins.go:355] Loaded volume plugin "kubernetes.io/iscsi"
kubelet: I1019 11:39:41.382734 22435 plugins.go:355] Loaded volume plugin "kubernetes.io/glusterfs"
jk-kube2-master kubelet: I1019 11:39:41.382741 22435 plugins.go:355] Loaded volume plugin "kubernetes.io/rbd"
kubelet: I1019 11:39:41.382749 22435 plugins.go:355] Loaded volume plugin "kubernetes.io/cinder"
kubelet: I1019 11:39:41.382755 22435 plugins.go:355] Loaded volume plugin "kubernetes.io/quobyte"
kubelet: I1019 11:39:41.382762 22435 plugins.go:355] Loaded volume plugin "kubernetes.io/cephfs"
kubelet: I1019 11:39:41.382781 22435 plugins.go:355] Loaded volume plugin "kubernetes.io/downward-api"
kubelet: I1019 11:39:41.382798 22435 plugins.go:355] Loaded volume plugin "kubernetes.io/fc"
kubelet: I1019 11:39:41.382804 22435 plugins.go:355] Loaded volume plugin "kubernetes.io/flocker"
kubelet: I1019 11:39:41.382822 22435 plugins.go:355] Loaded volume plugin "kubernetes.io/azure-file"
kubelet: I1019 11:39:41.382839 22435 plugins.go:355] Loaded volume plugin "kubernetes.io/configmap"
kubelet: I1019 11:39:41.382846 22435 plugins.go:355] Loaded volume plugin "kubernetes.io/vsphere-volume"
kubelet: I1019 11:39:41.382853 22435 plugins.go:355] Loaded volume plugin "kubernetes.io/azure-disk"
And I have the nova and cinder clients installed on my machine:
# which nova
/usr/bin/nova
# which cinder
/usr/bin/cinder
Any help is appreciated, I'm sure I'm missing something simple here.
Thanks!
The cinder volumes definitely work with Kubernetes 1.5.0 and 1.5.3 (I think they also worked on 1.4.6, which I was first experimenting with; I don't know about previous versions).
In your Pod yaml file you were missing the volumeMounts: section.
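For reference, a minimal sketch of the original testPod.yaml with that section added (the mountPath is just an illustrative choice):
kind: Pod
apiVersion: v1
metadata:
  name: jk-test3
  labels:
    name: jk-test
spec:
  containers:
    - name: front-end
      image: example-front-end:latest
      ports:
        - hostPort: 6000
          containerPort: 3000
      volumeMounts:
        # mount the claimed cinder volume into the container;
        # the path here is only an example
        - name: jk-test
          mountPath: /data
  volumes:
    - name: jk-test
      persistentVolumeClaim:
        claimName: myclaim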
Actually, when you already have an existing cinder volume, you can just use a Pod (or Deployment); no PV or PVC is needed. Example:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: vol-test
  labels:
    fullname: vol-test
spec:
  strategy:
    type: Recreate
  replicas: 1
  template:
    metadata:
      labels:
        fullname: vol-test
    spec:
      containers:
        - name: nginx
          image: "nginx:1.11.6-alpine"
          imagePullPolicy: IfNotPresent
          args:
            - /bin/sh
            - -c
            - echo "heey-testing" > /usr/share/nginx/html/index.html && nginx "-g daemon off;"
          ports:
            - name: http
              containerPort: 80
          volumeMounts:
            - name: data
              mountPath: /usr/share/nginx/html/
      volumes:
        - name: data
          cinder:
            volumeID: e143368a-440a-400f-b8a4-dd2f46c51888
This will create a Deployment and a Pod. The cinder volume will be mounted into the nginx container. To verify that you are using the volume, you can edit a file inside the nginx container, in the /usr/share/nginx/html/ directory, and then stop the container. Kubernetes will create a new container, and inside it the files in the /usr/share/nginx/html/ directory will be the same as they were in the stopped container.
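One way to check this from the host is sketched below; it assumes the fullname=vol-test label from the Deployment above, and the marker file name is just an example:
# find the pod created by the Deployment
POD=$(kubectl get pods -l fullname=vol-test -o jsonpath='{.items[0].metadata.name}')
# write a marker file onto the mounted cinder volume
kubectl exec "$POD" -- sh -c 'echo "survives restarts" > /usr/share/nginx/html/marker.txt'
# delete the pod; the Deployment creates a replacement that mounts the same volume
kubectl delete pod "$POD"
# once the replacement pod is Running, read the marker back
POD=$(kubectl get pods -l fullname=vol-test -o jsonpath='{.items[0].metadata.name}')
kubectl exec "$POD" -- cat /usr/share/nginx/html/marker.txt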
After you delete the Deployment resource, the cinder volume is not deleted, but it is detached from the VM.
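For example (the volume ID is the one from the manifest above; exact cinder output varies):
kubectl delete deployment vol-test
# after the detach completes, the volume should be reported as "available" again
cinder list | grep e143368a-440a-400f-b8a4-dd2f46c51888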
Another possibility, if you already have an existing cinder volume, is to use PV and PVC resources. You said you want to use a storage class, though the Kubernetes docs allow not using one:
A PV with no annotation or its class annotation set to "" has no class and can only be bound to PVCs that request no particular class
An example storage-class is:
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  # to be used as value for annotation:
  # volume.beta.kubernetes.io/storage-class
  name: cinder-gluster-hdd
provisioner: kubernetes.io/cinder
parameters:
  # openstack volume type
  type: gluster_hdd
  # openstack availability zone
  availability: nova
Then, you use your existing cinder volume with ID 48d2d1e6-e063-437a-855f-8b62b640a950 in a PV:
apiVersion: v1
kind: PersistentVolume
metadata:
  # name of a pv resource visible in Kubernetes, not the name of
  # a cinder volume
  name: pv0001
  labels:
    pv-first-label: "123"
    pv-second-label: abc
  annotations:
    volume.beta.kubernetes.io/storage-class: cinder-gluster-hdd
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  cinder:
    # ID of cinder volume
    volumeID: 48d2d1e6-e063-437a-855f-8b62b640a950
Then create a PVC whose label selector matches the labels of the PV:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: vol-test
  labels:
    pvc-first-label: "123"
    pvc-second-label: abc
  annotations:
    volume.beta.kubernetes.io/storage-class: "cinder-gluster-hdd"
spec:
  accessModes:
    # the volume can be mounted as read-write by a single node
    - ReadWriteOnce
  resources:
    requests:
      storage: "1Gi"
  selector:
    matchLabels:
      pv-first-label: "123"
      pv-second-label: abc
and then a Deployment:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: vol-test
  labels:
    fullname: vol-test
    environment: testing
spec:
  strategy:
    type: Recreate
  replicas: 1
  template:
    metadata:
      labels:
        fullname: vol-test
        environment: testing
    spec:
      nodeSelector:
        "is_worker": "true"
      containers:
        - name: nginx-exist-vol
          image: "nginx:1.11.6-alpine"
          imagePullPolicy: IfNotPresent
          args:
            - /bin/sh
            - -c
            - echo "heey-testing" > /usr/share/nginx/html/index.html && nginx "-g daemon off;"
          ports:
            - name: http
              containerPort: 80
          volumeMounts:
            - name: data
              mountPath: /usr/share/nginx/html/
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: vol-test
After you delete the k8s resources, the cinder volume is not deleted, but it is detached from the VM.
Using a PV lets you set persistentVolumeReclaimPolicy.
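For example, to confirm the policy on the pv0001 resource defined above:
kubectl get pv pv0001 -o jsonpath='{.spec.persistentVolumeReclaimPolicy}'
# should print: Retain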
If you don't have a cinder volume created, Kubernetes can create it for you. You then have to provide a PVC resource. I won't describe this variant, since it was not asked for.
I suggest that anyone interested in finding the best option experiment and compare the methods. Also, I used label names like pv-first-label and pvc-first-label only to make the example easier to follow; you can use e.g. first-label everywhere.
I suspect that the dynamic StorageClass approach is not working because the Cinder provisioner is not implemented yet, given the following statement in the docs (http://kubernetes.io/docs/user-guide/persistent-volumes/#provisioner):
Storage classes have a provisioner that determines what volume plugin is used for provisioning PVs. This field must be specified. During beta, the available provisioner types are kubernetes.io/aws-ebs and kubernetes.io/gce-pd
As for why the static method using Cinder volume IDs is not working, I'm not sure. I'm running into the exact same problem. Kubernetes 1.2 seems to work fine, 1.3 and 1.4 do not. This seems to coincide with the major change in PersistentVolume handling in 1.3-beta2 (https://github.com/kubernetes/kubernetes/pull/26801):
A new volume manager was introduced in kubelet that synchronizes volume mount/unmount (and attach/detach, if attach/detach controller is not enabled). (#26801, @saad-ali)
This eliminates the race conditions between the pod creation loop and the orphaned volumes loops. It also removes the unmount/detach from the syncPod() path so volume clean up never blocks the syncPod loop.