I have a deployment whose pods need access to a postgres database running in the same VPC as the Kubernetes cluster. How do I create a service that selects the deployment so that it has access? My pods keep restarting because the connection times out. I have created firewall rules in the VPC subnet to allow internal communication, and I have modified pg_hba.conf and postgresql.conf (the relevant changes are at the end of this post). My deployment definition is given below:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api
  labels:
    name: server
    app: api
spec:
  replicas: 3
  selector:
    matchLabels:
      app: api
  template:
    metadata:
      labels:
        app: api
    spec:
      containers:
        - name: api
          image: gcr.io/api:v1
          ports:
            - containerPort: 80
          env:
            - name: DB_HOSTNAME
              valueFrom:
                secretKeyRef:
                  name: api-config
                  key: hostname
            - name: DB_USERNAME
              valueFrom:
                secretKeyRef:
                  name: api-config
                  key: username
            - name: DB_NAME
              valueFrom:
                secretKeyRef:
                  name: api-config
                  key: name
            - name: DB_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: api-config
                  key: password
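For reference, the api-config secret the deployment reads from looks roughly like this (all values are placeholders, not my real settings; the hostname is whatever DB_HOSTNAME is supposed to resolve to):

apiVersion: v1
kind: Secret
metadata:
  name: api-config
type: Opaque
stringData:
  hostname: postgres        # placeholder - intended to resolve to the postgres service
  username: api_user        # placeholder
  name: api_db              # placeholder
  password: not-the-real-one  # placeholder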
This is my service definition to expose the database, but I don't think it is selecting the deployment. I have followed the example here.
kind: Service
apiVersion: v1
metadata:
  name: postgres
  label:
spec:
  type: ClusterIP
  ports:
    - port: 5432
      targetPort: 5432
---
kind: Endpoints
apiVersion: v1
metadata:
  name: postgres
subsets:
  - addresses:
      - ip: 10.0.0.50
    ports:
      - port: 5432
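With the service and endpoints applied, I have been sanity-checking from inside the cluster roughly like this (the image, namespace, user, and database names here are just what I happened to use, not anything special):

$ kubectl get endpoints postgres
$ kubectl run pg-check --rm -it --restart=Never --image=postgres:15 -- \
    psql -h postgres.default.svc.cluster.local -p 5432 -U api_user -d api_db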
You can also use the following to expose the database to the deployment on GKE:
$ kubectl expose deployment name-of-db --type=LoadBalancer --port 80 --target-port 8080
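For completeness, the changes I made on the database side (mentioned at the top) are roughly the following; the CIDR here is a placeholder for whatever covers the cluster's pod and node ranges:

# postgresql.conf
listen_addresses = '*'

# pg_hba.conf
host    all    all    10.0.0.0/8    md5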