I have a Kubernetes deployment for a Scylla database with a volume attached. It has one replica, with the manifest similar to the following:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: scylla
  labels:
    app: myapp
    role: scylla
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
      role: scylla
  template:
    metadata:
      labels:
        app: myapp
        role: scylla
    spec:
      containers:
        - name: scylla
          image: scylladb/scylla
          imagePullPolicy: Always
          volumeMounts:
            - mountPath: /var/lib/scylla/data
              name: scylladb
      volumes:
        - name: scylladb
          hostPath:
            path: /var/myapp/scylla/
            type: DirectoryOrCreate
When I perform an update, Kubernetes starts the new pod before the old pod has fully terminated. This causes the database in the new pod to fail, because it can't access the database files stored in the volume (the old pod is still using them). How can I ensure that only one pod uses the volume at a time? (A short downtime is okay.)
You can use the Recreate strategy in the Deployment to do that. With this strategy, all existing Pods are killed before new ones are created (Ref: Kubernetes docs on deployment strategies), so there will be some downtime during each update:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: scylla
  labels:
    app: myapp
    role: scylla
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
      role: scylla
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: myapp
        role: scylla
    spec:
      containers:
        - name: scylla
          image: scylladb/scylla
          imagePullPolicy: Always
          volumeMounts:
            - mountPath: /var/lib/scylla/data
              name: scylladb
      volumes:
        - name: scylladb
          hostPath:
            path: /var/myapp/scylla/
            type: DirectoryOrCreate
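If the Deployment already exists, you can also switch the strategy in place instead of re-applying the full manifest — for example with a kubectl patch (a sketch, assuming kubectl is configured for your cluster and the Deployment is named scylla):

```shell
# Switch the existing Deployment to the Recreate strategy; the next
# rollout will then stop the old pod before starting the new one.
kubectl patch deployment scylla \
  -p '{"spec":{"strategy":{"type":"Recreate","rollingUpdate":null}}}'
```

Note that rollingUpdate is explicitly set to null: a strategic-merge patch would otherwise keep the old rollingUpdate block, and the API rejects a Recreate strategy that still carries rollingUpdate parameters.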