I want to (temporarily) use host-local directories to persist SonarQube's application state. Below I describe how I set this up in a self-hosted Kubernetes (1.11.3) cluster.
The problem I encounter is that, although everything appears to work, Kubernetes does not use the host path (/opt/sonarqube/postgresql) to persist the data. docker inspect of the SonarQube containers shows that it uses the binds below instead.

How can I make the mounts use the host path?
"Binds": [
"/var/lib/kubelet/pods/49666f05-dad0-11e8-95cd-666c474c0e54/volume-subpaths/sonarqube-pv-postgresql/sonarqube/0:/opt/sonarqube/conf",
"/var/lib/kubelet/pods/49666f05-dad0-11e8-95cd-666c474c0e54/volumes/kubernetes.io~configmap/startup:/tmp-script/:ro",
"/var/lib/kubelet/pods/49666f05-dad0-11e8-95cd-666c474c0e54/volume-subpaths/sonarqube-pv-postgresql/sonarqube/2:/opt/sonarqube/data",
"/var/lib/kubelet/pods/49666f05-dad0-11e8-95cd-666c474c0e54/volume-subpaths/sonarqube-pv-postgresql/sonarqube/3:/opt/sonarqube/extensions",
"/var/lib/kubelet/pods/49666f05-dad0-11e8-95cd-666c474c0e54/volumes/kubernetes.io~secret/default-token-zrjdj:/var/run/secrets/kubernetes.io/serviceaccount:ro",
"/var/lib/kubelet/pods/49666f05-dad0-11e8-95cd-666c474c0e54/etc-hosts:/etc/hosts",
"/var/lib/kubelet/pods/49666f05-dad0-11e8-95cd-666c474c0e54/containers/sonarqube/95053a5c:/dev/termination-log"
]
Here is what I did to set up the application.

I created a StorageClass to create PVs that mount local paths:
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: local-storage-nowait
provisioner: kubernetes.io/no-provisioner
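For reference, a delayed-binding variant of this StorageClass (which the Kubernetes docs suggest for local volumes) would look roughly like the sketch below; I am not using it here, hence the -nowait name. The name in the sketch is just a placeholder.

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: local-storage-wait    # placeholder name for the delayed-binding variant
provisioner: kubernetes.io/no-provisioner
# Delay binding a PVC to a PV until a pod using the claim is scheduled,
# so the scheduler can take the PV's node affinity into account.
volumeBindingMode: WaitForFirstConsumer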
Then I created two PVs to be used with the SonarQube helm chart like this:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: sonarqube-pv-postgresql
  labels:
    type: local
spec:
  capacity:
    storage: 20Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  hostPath:
    path: /opt/sonarqube/postgresql
    type: DirectoryOrCreate
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - myhost
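The second PV looks essentially the same; roughly like the sketch below (the name and host path are placeholders, not the exact values I used):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: sonarqube-pv-data        # placeholder name
  labels:
    type: local
spec:
  capacity:
    storage: 20Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  hostPath:
    path: /opt/sonarqube/data    # placeholder path
    type: DirectoryOrCreate
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - myhost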
I launched the SonarQube helm chart with this additional config to use the PVs I just created:
image:
  tag: 7.1
persistence:
  enabled: true
  storageClass: local-storage
  accessMode: ReadWriteOnce
  size: 10Gi
postgresql:
  persistence:
    enabled: true
    storageClass: local-storage
    accessMode: ReadWriteOnce
    size: 10Gi
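For reference, with these values the chart's postgresql sub-chart requests a PVC roughly like the one below; the claim name is generated from the Helm release name, so it is only a placeholder. Binding to the PV above happens by matching the storage class, access mode and requested size:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: sonarqube-postgresql    # placeholder; the real name depends on the release name
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: local-storage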
The docs here say:

- HostPath (Single node testing only – local storage is not supported in any way and WILL NOT WORK in a multi-node cluster)

That's probably why you are seeing the data end up in a different place. I tried it myself and my PVC remained in Pending state. So you can either use local like this:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-pv
  labels:
    vol: myvolume
spec:
  capacity:
    storage: 100Gi
  volumeMode: Filesystem
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  storageClassName: local-storage
  local:
    path: /mnt/disks/ssd1
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - example-node
Then you have to create the corresponding PVC:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: myclaim
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Filesystem
  resources:
    requests:
      storage: 100Gi
  storageClassName: local-storage
  selector:
    matchLabels:
      vol: "myvolume"
Then in the pod spec:
apiVersion: v1
kind: Pod
metadata:
  name: test-pd
spec:
  containers:
  - image: k8s.gcr.io/test-webserver
    name: test-container
    volumeMounts:
    - mountPath: /test-pd
      name: test-volume
  volumes:
  - name: test-volume
    persistentVolumeClaim:
      claimName: myclaim
You can also use hostPath directly in the pod spec, if you don't care which node the pod lands on and don't mind having different data on each node:
apiVersion: v1
kind: Pod
metadata:
  name: test-pd
spec:
  containers:
  - image: k8s.gcr.io/test-webserver
    name: test-container
    volumeMounts:
    - mountPath: /test-pd
      name: test-volume
  volumes:
  - name: test-volume
    hostPath:
      # directory location on host
      path: /data
      # this field is optional
      type: DirectoryOrCreate
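If you do want the hostPath data to always land on the same host, you can pin the pod to a node, for example with a nodeSelector on the hostname label. A minimal sketch (pod name and node name are placeholders):

apiVersion: v1
kind: Pod
metadata:
  name: test-pd-pinned           # placeholder name
spec:
  # Pin the pod to one node so the hostPath directory is always the same host directory.
  nodeSelector:
    kubernetes.io/hostname: example-node
  containers:
  - image: k8s.gcr.io/test-webserver
    name: test-container
    volumeMounts:
    - mountPath: /test-pd
      name: test-volume
  volumes:
  - name: test-volume
    hostPath:
      path: /data
      type: DirectoryOrCreate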