I have an Angular Universal application with the following Dockerfile:
FROM node:14-alpine
WORKDIR /app
COPY package.json /app
COPY dist/webapp /app/dist/webapp
ENV NODE_ENV "production"
ENV PORT 80
EXPOSE 80
CMD ["npm", "run", "serve:ssr"]
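For context, serve:ssr is the script the Angular CLI generates for Universal builds; in my case it boils down to just starting the bundled server (path matches the COPY above):

```shell
# What "npm run serve:ssr" effectively runs: the bundled Universal server
node dist/webapp/server/main.js
```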
I can deploy it to a Kubernetes cluster just fine, but the pod keeps getting restarted every 10 minutes or so:
NAME                      READY   STATUS    RESTARTS   AGE
api-xxxxxxxxx-xxxxx       1/1     Running   0          48m
webapp-xxxxxxxxxx-xxxxx   1/1     Running   232        5d19h
The pod logs are clean, and when I describe the pod I just see:
Last State:     Terminated
  Reason:       Completed
  Exit Code:    0
  Started:      Tue, 22 Sep 2020 15:58:27 -0300
  Finished:     Tue, 22 Sep 2020 16:20:31 -0300
Events:
  Type    Reason   Age                      From                           Message
  ----    ------   ----                     ----                           -------
  Normal  Created  3m31s (x233 over 5d19h)  kubelet, pool-xxxxxxxxx-xxxxx  Created container webapp
  Normal  Started  3m31s (x233 over 5d19h)  kubelet, pool-xxxxxxxxx-xxxxx  Started container webapp
  Normal  Pulled   3m31s (x232 over 5d18h)  kubelet, pool-xxxxxxxxx-xxxxx  Container image "registry.gitlab.com/..." already present on machine
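For reference, this is how I have been inspecting the pod (standard kubectl, with the pod name elided as above); the --previous flag fetches the logs of the terminated container instance rather than the current one:

```shell
# Restart counts for all pods in the namespace
kubectl get pods -n "$KUBE_NAMESPACE"

# Last termination state and recent events for the pod
kubectl describe pod webapp-xxxxxxxxxx-xxxxx -n "$KUBE_NAMESPACE"

# Logs of the previous (terminated) container instance
kubectl logs webapp-xxxxxxxxxx-xxxxx -n "$KUBE_NAMESPACE" --previous
```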
This is my deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: $CI_PROJECT_NAME
  namespace: $KUBE_NAMESPACE
  labels:
    app: webapp
    tier: frontend
spec:
  replicas: 1
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 0
      maxUnavailable: 1
  selector:
    matchLabels:
      app: webapp
      tier: frontend
  template:
    metadata:
      labels:
        app: webapp
        tier: frontend
    spec:
      imagePullSecrets:
        - name: gitlab-registry
      containers:
        - name: $CI_PROJECT_NAME
          image: $IMAGE_TAG
          ports:
            - containerPort: 80
Note that the exit code is 0 (Completed), not a crash, and I have not configured any liveness or readiness probes. How can I find out why it keeps restarting? Thanks!