I'm not sure whether this is considered a best practice, but for ease of management I have created a Deployment that consists of 2 containers (an API event server and an API server). The API server can send events that need to be processed by the API event server and returned back. It is easier for me to manage these in one pod: the containers can reach each other over localhost, and I don't have to define ClusterIP services for all my environments.
One of my concerns is that if, say, the API event server exits with an error, the pod will still be active because the API server continues to run. Is there a way to tell Kubernetes to terminate a pod if one of its containers fails?
Here is my deployment; only port 8080 is exposed to the public via a LoadBalancer service. Perhaps I can somehow add liveness and readiness probes to both of these?
apiVersion: apps/v1
kind: Deployment
metadata:
  name: development-api
spec:
  replicas: 2
  selector:
    matchLabels:
      app: development-api
  template:
    metadata:
      labels:
        app: development-api
    spec:
      containers:
        - name: development-api-server
          image: <my-server-image>
          ports:
            - containerPort: 8080
              protocol: TCP
        - name: development-events-server
          image: <my-events-image>
          ports:
            - containerPort: 3000
              protocol: TCP
Use liveness and readiness probes: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/
Note that a failing liveness probe does not terminate the whole pod; the kubelet restarts only the failing container, which is usually what you want here, since the other container keeps serving while the broken one comes back up.
In your case:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: development-api
spec:
  replicas: 2
  selector:
    matchLabels:
      app: development-api
  template:
    metadata:
      labels:
        app: development-api
    spec:
      containers:
        - name: development-api-server
          image: <my-server-image>
          ports:
            - containerPort: 8080
              protocol: TCP
          livenessProbe:
            tcpSocket:
              port: 8080
        - name: development-events-server
          image: <my-events-image>
          ports:
            - containerPort: 3000
              protocol: TCP
          livenessProbe:
            tcpSocket:
              port: 3000
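If your servers take a moment to start listening, bare probes like the above can kill a container before it has finished booting. A sketch of the tuning fields you would typically add (the timing values here are illustrative assumptions, not measurements for your servers), shown for the events container:

        - name: development-events-server
          image: <my-events-image>
          ports:
            - containerPort: 3000
              protocol: TCP
          livenessProbe:
            tcpSocket:
              port: 3000
            initialDelaySeconds: 15   # assumed startup time; wait before the first probe
            periodSeconds: 10         # probe every 10 seconds
            failureThreshold: 3       # restart only after 3 consecutive failures
          readinessProbe:
            tcpSocket:
              port: 3000
            periodSeconds: 5

The readiness probe serves a different purpose from the liveness probe: while it fails, the pod is removed from Service endpoints (so your LoadBalancer stops sending it traffic) but the container is not restarted.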