We have an application deployed on GKE that would benefit from having fast temporary storage on disk.
The GKE local SSD feature is almost perfect; however, we run multiple pod replicas and would ideally like to support multiple pods on the same node. Mounting the local SSD using hostPath makes that difficult.
This 2016 SO question mentions the idea of mounting emptyDirs on the local SSD, which would be perfect, but I understand that still isn't an option.
There is a 2017 mailing list thread with the same idea, but the answer was still not positive.
The GCP docs for local SSDs were recently updated to describe using them via the PersistentVolume abstraction, which sounds promising. Could I use that to achieve what I'm after? The examples seem to show mounting the full local SSD as a single PersistentVolume, whereas my preference is to give each pod an isolated part of it. We also don't need the data to be persistent: once a pod is deleted, we'd be happy for its data to be deleted as well.
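For context, the docs-style pattern looks roughly like this (my own sketch of a whole-disk local PersistentVolume; the metadata name and the node name are made up):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-local-pv          # hypothetical name
spec:
  capacity:
    storage: 375Gi                # one full GCE local SSD
  accessModes:
    - ReadWriteOnce
  storageClassName: local-storage
  local:
    path: /mnt/disks/ssd0         # the whole device, not a per-pod slice
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - example-node    # hypothetical node name

Since a PersistentVolume binds to exactly one claim, this hands the whole disk to one consumer, which is exactly the sharing limitation I'd like to avoid.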
Kubernetes 1.11 added an alpha feature called Downward API support in volume subPath, which allows volumeMount subpaths to be set using the downward API.
I tested this by creating a GKE 1.11 alpha cluster:
gcloud container clusters create jh-test --enable-kubernetes-alpha \
    --zone=asia-southeast1-a --cluster-version=1.11.3-gke.18 \
    --local-ssd-count=1 --machine-type=n1-standard-2 --num-nodes=2 \
    --image-type=cos --disk-type=pd-ssd --disk-size=20Gi \
    --no-enable-basic-auth --no-issue-client-certificate \
    --no-enable-autoupgrade --no-enable-autorepair
I then created a 2-replica deployment with the following config:
env:
  - name: POD_NAME
    valueFrom:
      fieldRef:
        fieldPath: metadata.name
volumeMounts:
  - name: scratch-space
    mountPath: /tmp/scratch
    subPath: $(POD_NAME)
volumes:
  - name: scratch-space
    hostPath:
      path: "/mnt/disks/ssd0"
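For completeness, the fragment above sits inside the pod template of an ordinary Deployment; reconstructed in full it looks something like this (the image and command are placeholders, the rest matches what I ran):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: foo
spec:
  replicas: 2
  selector:
    matchLabels:
      app: foo
  template:
    metadata:
      labels:
        app: foo
    spec:
      containers:
        - name: foo
          image: ubuntu:18.04              # placeholder image
          command: ["sleep", "infinity"]   # placeholder workload
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
          volumeMounts:
            - name: scratch-space
              mountPath: /tmp/scratch
              subPath: $(POD_NAME)
      volumes:
        - name: scratch-space
          hostPath:
            path: "/mnt/disks/ssd0"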
If I kubectl exec'd into each pod, I had a /tmp/scratch directory that was isolated and very performant.
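For example, a check along these lines shows both the isolation and the throughput (the pod name is from my cluster; the file name and size are arbitrary):

# Write 1 GiB with direct I/O to estimate throughput on the SSD-backed mount
kubectl exec foo-6dc57cb589-nwbjw -- \
    dd if=/dev/zero of=/tmp/scratch/testfile bs=1M count=1024 oflag=direct

# Each pod sees only its own subdirectory's contents
kubectl exec foo-6dc57cb589-nwbjw -- ls -l /tmp/scratch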
If I SSH'd into the host, I could see a directory for each pod:
$ ls -l /mnt/disks/ssd0/
drwx--x--x 14 root root 4096 Dec 1 01:49 foo-6dc57cb589-nwbjw
drwx--x--x 14 root root 4096 Dec 1 01:50 foo-857656f4-dzzzl
I also tried applying the deployment to a non-alpha GKE 1.11 cluster, but the SSD content ended up looking like this:
$ ls -l /mnt/disks/ssd0/
drwxr-xr-x 2 root root 4096 Dec 1 04:51 '$(POD_NAME)'
Unfortunately it's not realistic to run our workload on an alpha cluster, so this isn't a pragmatic solution for us yet. We'll have to wait for the feature to reach beta and become available on standard GKE clusters. It does seem to be slowly progressing, although the API will probably change slightly.
For Kubernetes 1.14, the syntax for volumeMounts has changed to use a new subPathExpr field. The feature remains alpha-only:
env:
  - name: POD_NAME
    valueFrom:
      fieldRef:
        fieldPath: metadata.name
volumeMounts:
  - name: scratch-space
    mountPath: /tmp/scratch
    subPathExpr: $(POD_NAME)
volumes:
  - name: scratch-space
    hostPath:
      path: "/mnt/disks/ssd0"
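As before, this only works where the feature gate is enabled; my understanding is that the gate is named VolumeSubpathEnvExpansion, so on a self-managed 1.14 cluster it would be switched on via the kubelet (and kube-apiserver) flag, while GKE only exposes feature gates through alpha clusters:

# Assumed gate name: VolumeSubpathEnvExpansion (alpha since 1.11)
--feature-gates=VolumeSubpathEnvExpansion=true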