I'm using a PersistentVolumeClaim to store data in a container:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-pvc
  labels:
    type: amazonEBS
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
Declaration in the pod spec:
spec:
  volumes:
    - name: test-data-vol
      persistentVolumeClaim:
        claimName: test-pvc
  containers:
    - name: test
      image: my.docker.registry/test:1.0
      volumeMounts:
        - mountPath: /var/data
          name: test-data-vol
When I started it the first time, the volume was mounted correctly. But when I tried to update the container image:
- image: my.docker.registry/test:1.0
+ image: my.docker.registry/test:1.1
the volume failed to mount in the new pod:
# kubectl get pods
NAME                    READY   STATUS              RESTARTS   AGE
test-7655b79cb6-cgn5r   0/1     ContainerCreating   0          3m
test-bf6498559-42vvb    1/1     Running             0          11m
# kubectl describe pod test-7655b79cb6-cgn5r
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 3m5s default-scheduler Successfully assigned test-7655b79cb6-cgn5r to ip-*-*-*-*.us-west-2.compute.internal
Warning FailedAttachVolume 3m5s attachdetach-controller Multi-Attach error for volume "pvc-2312eb4c-c270-11e8-8d4e-065333a7774e" Volume is already exclusively attached to one node and can't be attached to another
Normal SuccessfulMountVolume 3m4s kubelet, ip-*-*-*-*.us-west-2.compute.internal MountVolume.SetUp succeeded for volume "default-token-x82km"
Warning FailedMount 62s kubelet, ip-*-*-*-*.us-west-2.compute.internal Unable to mount volumes for pod "test-7655b79cb6-cgn5r(fab0862c-d1cf-11e8-8d4e-065333a7774e)": timeout expired waiting for volumes to attach/mount for pod "test-7655b79cb6-cgn5r". list of unattached/unmounted volumes=[test-data-vol]
It seems that Kubernetes can't re-attach this volume from the old pod to the new one. How do I handle this correctly? I need the data on this volume to be available to the new version of the deployment once the old version has stopped.
I'm not sure whether RollingUpdate solves the problem. According to the docs, a rolling update is the safe way to update container images, so I assumed Kubernetes could handle the PV/PVC as well.
From the context you provided in your question, I can't tell whether your intention was to run a single-instance stateful application or a clustered stateful application.
I ran into this problem recently, and based on this section in the docs, here's how to go about it.
If you're running a single-instance stateful app:
- set spec.replicas to 1 if you're using a Deployment
- set spec.strategy.type to Recreate in your Deployment
Sample Deployment (from the docs):
# application/mysql/mysql-deployment.yaml
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: mysql
spec:
  selector:
    matchLabels:
      app: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
        - image: mysql:5.6
          name: mysql
          env:
            # Use secret in real usage
            - name: MYSQL_ROOT_PASSWORD
              value: password
          ports:
            - containerPort: 3306
              name: mysql
          volumeMounts:
            - name: mysql-persistent-storage
              mountPath: /var/lib/mysql
      volumes:
        - name: mysql-persistent-storage
          persistentVolumeClaim:
            claimName: mysql-pv-claim
And the sample PersistentVolume & PersistentVolumeClaim (from the docs):
# application/mysql/mysql-pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysql-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 20Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/data"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
The obvious underlying issue here is that a rolling update will not work, because there can be no more than one pod running at any time. Setting spec.strategy.type to Recreate tells Kubernetes to stop the running pod before deploying a new one, so presumably there will be some downtime, even if minimal.
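If your Deployment already exists, you can also switch the strategy in place. A rough sketch, assuming the Deployment is named test as in the question; rollingUpdate has to be cleared explicitly, since the API forbids rolling-update parameters once the type is Recreate:
# kubectl patch deployment test -p '{"spec":{"strategy":{"type":"Recreate","rollingUpdate":null}}}'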
If you need a clustered stateful application, then using a StatefulSet as the controller type or ReadWriteMany as the storage type would probably be the way to go.
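For illustration, here is a minimal StatefulSet sketch reusing the names from the question (the headless Service test it references is an assumption, as is carrying over the 5Gi request). With a StatefulSet, updates replace pods one at a time, deleting the old pod, and detaching its volume, before starting the new one:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: test
spec:
  serviceName: test          # assumes a headless Service named "test" exists
  replicas: 1
  selector:
    matchLabels:
      app: test
  template:
    metadata:
      labels:
        app: test
    spec:
      containers:
        - name: test
          image: my.docker.registry/test:1.1
          volumeMounts:
            - mountPath: /var/data
              name: test-data-vol
  volumeClaimTemplates:
    - metadata:
        name: test-data-vol
      spec:
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 5Gi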
The issue here is that EBS volumes are ReadWriteOnce and can only be attached to a single node, so when you do the rolling update the old pod keeps the volume attached. For this to work you would either have to use a StatefulSet, or use one of the ReadWriteMany PV types.
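If you go the ReadWriteMany route on AWS, EBS itself can't do it, but a PVC backed by EFS can. A sketch, where efs-sc is a hypothetical StorageClass name that assumes the AWS EFS CSI driver is installed:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-pvc
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: efs-sc   # hypothetical StorageClass; requires the EFS CSI driver
  resources:
    requests:
      storage: 5Gi           # EFS ignores the size, but the field is required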
In general, a Kubernetes Deployment is better suited to stateless pods.
You can always go with the brute-force approach: force delete the pod that is holding the volume. Make sure the PersistentVolume's reclaim policy is set to Retain first, so the data is kept.
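Roughly, using the names from the output above, first make sure the underlying EBS volume survives:
# kubectl patch pv pvc-2312eb4c-c270-11e8-8d4e-065333a7774e -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'
Then force delete the old pod that is still holding the volume:
# kubectl delete pod test-bf6498559-42vvb --grace-period=0 --force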