I created a Cloud Filestore instance on GCP (standard tier) and put it in the same VPC as the cluster my Kubernetes workloads run on. Following this guide https://cloud.google.com/filestore/docs/accessing-fileshares I tried to use the fileshare as persistent storage for my deployment. The deployment is a webapp called Apache OFBiz, a set of business tools primarily used for accounting; it is open source, and its demo and documentation are available online. As a note, the webapp uses an embedded Apache Derby database.
To test whether the data persists when I delete a pod, I exposed the deployment on a public IP, attached a domain I own to that IP, and created a user in the app. The user was created, but after I deleted the pod from Cloud Shell and it was recreated, the webapp no longer had the user; it was back to its base state. I'm not sure what is wrong: whether it's the access to the Filestore instance, or the storing and pulling of data from the instance. My question is also whether the guide is enough, or whether I have to do anything else to make this work, and if there is something else I should look at.
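As a sanity check (just a sketch, not something I have verified), I suppose I could run a throwaway pod that mounts the same PVC as the deployment (fileserver-claim, shown further down) and try writing a file to the share. The pod name, image and mount path here are placeholders:

apiVersion: v1
kind: Pod
metadata:
  name: filestore-test        # placeholder name, not part of my setup
spec:
  containers:
  - name: tester
    image: busybox
    # write a file to the share, read it back, then stay up so I can exec in
    command: ["sh", "-c", "echo hello > /mnt/fileshare/test.txt && cat /mnt/fileshare/test.txt && sleep 3600"]
    volumeMounts:
    - name: fileshare
      mountPath: /mnt/fileshare
  volumes:
  - name: fileshare
    persistentVolumeClaim:
      claimName: fileserver-claim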
So here's my deployment YAML:
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "2"
  creationTimestamp: "2021-03-19T21:08:27Z"
  generation: 2
  labels:
    app: ofbizvpn
  managedFields:
  - apiVersion: apps/v1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:labels:
          .: {}
          f:app: {}
      f:spec:
        f:progressDeadlineSeconds: {}
        f:replicas: {}
        f:revisionHistoryLimit: {}
        f:selector:
          f:matchLabels:
            .: {}
            f:app: {}
        f:strategy:
          f:rollingUpdate:
            .: {}
            f:maxSurge: {}
            f:maxUnavailable: {}
          f:type: {}
        f:template:
          f:metadata:
            f:labels:
              .: {}
              f:app: {}
          f:spec:
            f:containers:
              k:{"name":"ofbizvpn"}:
                .: {}
                f:image: {}
                f:imagePullPolicy: {}
                f:name: {}
                f:resources: {}
                f:terminationMessagePath: {}
                f:terminationMessagePolicy: {}
            f:dnsPolicy: {}
            f:restartPolicy: {}
            f:schedulerName: {}
            f:securityContext: {}
            f:terminationGracePeriodSeconds: {}
    manager: kubectl-create
    operation: Update
    time: "2021-03-19T21:08:27Z"
  - apiVersion: apps/v1
    fieldsType: FieldsV1
    fieldsV1:
      f:spec:
        f:template:
          f:spec:
            f:containers:
              k:{"name":"ofbizvpn"}:
                f:volumeMounts:
                  .: {}
                  k:{"mountPath":"ofbiz/data"}:
                    .: {}
                    f:mountPath: {}
                    f:name: {}
            f:volumes:
              .: {}
              k:{"name":"mypvc"}:
                .: {}
                f:name: {}
                f:persistentVolumeClaim:
                  .: {}
                  f:claimName: {}
    manager: GoogleCloudConsole
    operation: Update
    time: "2021-03-19T22:11:44Z"
  - apiVersion: apps/v1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .: {}
          f:deployment.kubernetes.io/revision: {}
      f:status:
        f:availableReplicas: {}
        f:conditions:
          .: {}
          k:{"type":"Available"}:
            .: {}
            f:lastTransitionTime: {}
            f:lastUpdateTime: {}
            f:message: {}
            f:reason: {}
            f:status: {}
            f:type: {}
          k:{"type":"Progressing"}:
            .: {}
            f:lastTransitionTime: {}
            f:lastUpdateTime: {}
            f:message: {}
            f:reason: {}
            f:status: {}
            f:type: {}
        f:observedGeneration: {}
        f:readyReplicas: {}
        f:replicas: {}
        f:updatedReplicas: {}
    manager: kube-controller-manager
    operation: Update
    time: "2021-03-19T23:19:35Z"
  name: ofbizvpn
  namespace: default
  resourceVersion: "3004167"
  selfLink: /apis/apps/v1/namespaces/default/deployments/ofbizvpn
  uid: b2e10550-eabe-47fb-8f51-4e9e89f7e8ea
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: ofbizvpn
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: ofbizvpn
    spec:
      containers:
      - image: gcr.io/lithe-joy-306319/ofbizvpn
        imagePullPolicy: Always
        name: ofbizvpn
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: ofbiz/data
          name: mypvc
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
      volumes:
      - name: mypvc
        persistentVolumeClaim:
          claimName: fileserver-claim
status:
  availableReplicas: 1
  conditions:
  - lastTransitionTime: "2021-03-19T21:08:28Z"
    lastUpdateTime: "2021-03-19T22:11:53Z"
    message: ReplicaSet "ofbizvpn-6d458f54cf" has successfully progressed.
    reason: NewReplicaSetAvailable
    status: "True"
    type: Progressing
  - lastTransitionTime: "2021-03-19T23:19:35Z"
    lastUpdateTime: "2021-03-19T23:19:35Z"
    message: Deployment has minimum availability.
    reason: MinimumReplicasAvailable
    status: "True"
    type: Available
  observedGeneration: 2
  readyReplicas: 1
  replicas: 1
  updatedReplicas: 1
Here is my PersistentVolume YAML:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: fileserver
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteMany
  nfs:
    path: /fileshare1
    server: 10.249.37.194
And here is my PersistentVolumeClaim YAML:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: fileserver-claim
spec:
  # Specify "" as the storageClassName so it matches the PersistentVolume's StorageClass.
  # A nil storageClassName value uses the default StorageClass. For details, see
  # https://kubernetes.io/docs/concepts/storage/persistent-volumes/#class-1
  accessModes:
  - ReadWriteMany
  storageClassName: ""
  volumeName: fileserver
  resources:
    requests:
      storage: 10Gi
If you want data persistence, why not use a StatefulSet instead of a Deployment? A StatefulSet is the better choice here.
A Deployment is meant for stateless applications, a StatefulSet for stateful ones. With a Deployment, pod uniqueness is not maintained: when a pod is recreated, it does not get the previous pod's identity, it gets a new name and a new identity.
A StatefulSet is the workload API object used to manage stateful applications. It manages the deployment and scaling of a set of Pods, and provides guarantees about the ordering and uniqueness of these Pods. See the Kubernetes docs: https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/
A sample StatefulSet YAML from the Kubernetes docs:
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  ports:
  - port: 80
    name: web
  clusterIP: None
  selector:
    app: nginx
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  selector:
    matchLabels:
      app: nginx # has to match .spec.template.metadata.labels
  serviceName: "nginx"
  replicas: 3 # by default is 1
  template:
    metadata:
      labels:
        app: nginx # has to match .spec.selector.matchLabels
    spec:
      terminationGracePeriodSeconds: 10
      containers:
      - name: nginx
        image: k8s.gcr.io/nginx-slim:0.8
        ports:
        - containerPort: 80
          name: web
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
  - metadata:
      name: www
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: "my-storage-class"
      resources:
        requests:
          storage: 1Gi
In the above example:
A headless Service, named nginx, is used to control the network domain.
The StatefulSet, named web, has a spec that indicates that 3 replicas of the nginx container will be launched in unique Pods.
The volumeClaimTemplates will provide stable storage using PersistentVolumes provisioned by a PersistentVolume Provisioner.
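For your setup, a rough sketch of what that could look like is below. It is not tested: the image and claim name are taken from your question, the headless Service port is a placeholder, and the mount path is an assumption that has to point at whatever directory OFBiz actually writes its Derby data to.

apiVersion: v1
kind: Service
metadata:
  name: ofbizvpn
spec:
  clusterIP: None   # headless Service to control the StatefulSet's network domain
  selector:
    app: ofbizvpn
  ports:
  - port: 8443      # placeholder; use the port your OFBiz container serves on
    name: https
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ofbizvpn
spec:
  serviceName: "ofbizvpn"
  replicas: 1
  selector:
    matchLabels:
      app: ofbizvpn
  template:
    metadata:
      labels:
        app: ofbizvpn
    spec:
      containers:
      - name: ofbizvpn
        image: gcr.io/lithe-joy-306319/ofbizvpn
        volumeMounts:
        - name: mypvc
          mountPath: /ofbiz/data   # assumption: must be the directory where OFBiz/Derby stores its data
      volumes:
      # reuses the existing NFS-backed PVC from your question instead of volumeClaimTemplates,
      # since the Filestore share is a single pre-created PersistentVolume
      - name: mypvc
        persistentVolumeClaim:
          claimName: fileserver-claim

Note that in either case the data only persists if OFBiz actually writes its Derby database files under the mounted path; anything written elsewhere stays in the container filesystem and is lost when the pod is recreated.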