I am attempting to build a pod in Kubernetes that has files mounted into it from my local system, similar to mounting volumes in docker-compose.

I have tried the following, attempting to mount the local folder `./test` and its files into the pod under `/blah/`. However, Kubernetes complains:

```
MountVolume.SetUp failed for volume "config-volume" : hostPath type check failed: ./test/ is not a directory
```

Here is my YAML file. Am I missing something?
```yaml
apiVersion: v1
kind: Service
metadata:
  name: vol-test
  labels:
    app: vol-test
spec:
  type: NodePort
  ports:
  - port: 8200
    nodePort: 30008
  selector:
    app: vol-test
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: vol-test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: vol-test
  template:
    metadata:
      labels:
        app: vol-test
    spec:
      containers:
      - name: vol-test
        image: nginx
        imagePullPolicy: "IfNotPresent"
        volumeMounts:
        - name: config-volume
          mountPath: /blah/
        ports:
        - containerPort: 8200
      volumes:
      - name: config-volume
        hostPath:
          path: ./test/
          type: Directory
```
A `hostPath` path is resolved on the node's filesystem, not relative to where your manifest lives, so `./test/` fails the directory check; you need an absolute path. A more portable approach is a PersistentVolume backed by a PersistentVolumeClaim, like this:
```yaml
kind: PersistentVolume
apiVersion: v1
metadata:
  name: config-volume-pv
  labels:
    type: local
spec:
  storageClassName: generic
  capacity:
    storage: 100Mi
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: "/path/to/volume"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: config-volume-pvc
spec:
  storageClassName: generic
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 100Mi
```
```yaml
apiVersion: v1
kind: Service
metadata:
  name: vol-test
  labels:
    app: vol-test
spec:
  type: NodePort
  ports:
  - port: 8200
    nodePort: 30008
  selector:
    app: vol-test
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: vol-test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: vol-test
  template:
    metadata:
      labels:
        app: vol-test
    spec:
      containers:
      - name: vol-test
        image: nginx
        imagePullPolicy: "IfNotPresent"
        volumeMounts:
        - mountPath: /blah/
          name: ng-data
        ports:
        - containerPort: 8200
      volumes:
      - name: ng-data
        persistentVolumeClaim:
          claimName: config-volume-pvc
```
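Alternatively, if you only want the quick `hostPath` route from the question, the fix is a minimal change to the `volumes` section. This is a sketch assuming your `test` directory actually lives at `/absolute/path/to/test` on the node (a hypothetical path; substitute your own):

```yaml
      volumes:
      - name: config-volume
        hostPath:
          # Must be an absolute path on the node that runs the pod,
          # not a path relative to where you run kubectl.
          path: /absolute/path/to/test
          type: Directory
```

Keep in mind that on minikube, Docker Desktop, or any remote cluster, the "node" filesystem is not your local machine, so the directory has to exist inside the node itself.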
If you just want to pass a file or directory to a Pod for the purpose of reading configuration values (which I assume from your volume name `config-volume`) and have no need to update the file/directory, then you can simply put the file(s) in a ConfigMap like below:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-router-config
data:
  nginx.conf: |
    worker_processes 2;
    user nginx;

    events {
      worker_connections 1024;
    }

    http {
      include mime.types;
      charset utf-8;
      client_max_body_size 8M;

      server {
        server_name _;
        listen 80 default_server;

        location / {
          proxy_set_header Host $host;
          proxy_set_header X-Forwarded-Host $host;
          proxy_set_header X-Forwarded-Proto https;
          proxy_pass http://drupal:80/ ;
          proxy_redirect default;
        }
        location /api/ {
          proxy_pass http://api-gateway:8080/ ;
          proxy_redirect default;
        }
      }
    }
```
Or you can have the file content imported from the machine where you run the `kubectl` command (assuming the file is named `nginx.conf`):

```shell
kubectl create configmap nginx-router-config --from-file=nginx.conf
```
Then, you can mount the file(s) by adding `volumes` and `volumeMounts` to the Deployment spec:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-router
  labels:
    app: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 80
        volumeMounts:
        - name: nginx-router-config
          mountPath: /etc/nginx/nginx.conf
          subPath: nginx.conf
      volumes:
      - name: nginx-router-config
        configMap:
          name: nginx-router-config
          items:
          - key: nginx.conf
            path: nginx.conf
```
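As an aside, if you omit `subPath` and mount at a directory path instead, every key in the ConfigMap shows up as a file in that directory. A sketch of the relevant fragment, reusing the same names as above:

```yaml
        volumeMounts:
        - name: nginx-router-config
          # Each ConfigMap key becomes a file under this directory.
          # Note: the mount replaces the directory's existing contents.
          mountPath: /etc/nginx/conf.d
      volumes:
      - name: nginx-router-config
        configMap:
          name: nginx-router-config
```

The `subPath` approach is preferable when you only want to overlay a single file (like `/etc/nginx/nginx.conf`) without hiding the rest of the directory, though note that `subPath` mounts do not receive updates when the ConfigMap changes.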
If you actually need read-write access to the file(s), then you can use a PersistentVolume and PersistentVolumeClaim as suggested in the other answer, although I would not recommend `hostPath` if you have more than one worker node.