I have the following requirements for my application to be deployed in a Kubernetes cluster. I am trying to come up with an architecture that resembles my other microservice deployments and is not complicated.
I am thinking about having multiple replicas that connect to the same volume (a NAS in my case). The Postgres instances will sit behind a Service, like my application microservices. The application will connect to the Service and does not need to know which Postgres instance it is talking to. This simplifies my architecture a great deal, as I don't have to worry about setting up Postgres replication.
One issue with this architecture is what happens to data if a Postgres instance goes down after a write request is received. I could introduce a message broker with consumer acknowledgement to handle this scenario, but that has performance implications.
A sample Postgres K8s Deployment configuration is shown below. I will still need to add a Service, etc.
What are some pitfalls of this architecture? Has anyone implemented something similar?
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres
spec:
  replicas: 3
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - name: postgres
          image: postgres:latest
          imagePullPolicy: "IfNotPresent"
          ports:
            - containerPort: 5432
          envFrom:
            - configMapRef:
                name: postgres-config
          volumeMounts:
            - mountPath: /var/lib/postgresql/data
              name: postgredb
      volumes:
        - name: postgredb
          persistentVolumeClaim:
            claimName: postgres-pv-claim
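For completeness, a minimal Service in front of this Deployment could look like the sketch below. The name is an assumption; the selector must match the pod labels (`app: postgres`) from the manifest above:

```yaml
# Hypothetical Service exposing the postgres pods.
# The selector matches the pod template labels of the Deployment above.
apiVersion: v1
kind: Service
metadata:
  name: postgres
spec:
  selector:
    app: postgres
  ports:
    - port: 5432
      targetPort: 5432
```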
It is not completely clear to me, but it sounds like you are talking about Shared Disk Failover.
To answer your question, the pitfalls are: multiple Postgres instances cannot safely share a single data directory. Each postmaster assumes exclusive ownership of its data directory, and two instances writing to the same files will corrupt the database. Postgres guards against this with a lock file (postmaster.pid), so on a shared volume only one replica will actually start and the others will fail. Shared Disk Failover only works when exactly one server is active at a time, so the Service would not be load-balancing across live instances the way it does for your stateless microservices.
I think that in order to get scalability, and not just fault tolerance, you would need to use replication.
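As a sketch of that more conventional direction: a StatefulSet gives each Postgres replica its own PersistentVolumeClaim via volumeClaimTemplates, and replication between the instances is then configured in Postgres itself (or handled by an operator such as Patroni or CloudNativePG). The names, labels, and storage size below are assumptions carried over from the question's manifest:

```yaml
# Sketch: one PVC per replica via volumeClaimTemplates, instead of a
# single shared volume. Streaming replication between the pods still
# has to be set up separately (e.g. by an operator).
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: postgres
spec:
  serviceName: postgres   # assumed headless Service for stable pod DNS
  replicas: 3
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - name: postgres
          image: postgres:latest
          ports:
            - containerPort: 5432
          envFrom:
            - configMapRef:
                name: postgres-config
          volumeMounts:
            - mountPath: /var/lib/postgresql/data
              name: postgredb
  volumeClaimTemplates:
    - metadata:
        name: postgredb
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi   # assumed size
```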