I'm hosting an Angular website that connects to a C# backend inside a Kubernetes cluster. When I use a certain function on the website that I can't describe in more detail, the pod shows status "Completed", then goes into "CrashLoopBackOff" and then restarts. The problem is, there are no Jobs set up for this Pod (in fact, I didn't even know Jobs were a thing until one hour ago). So my main question would be: how can a Pod go into the "Completed" status without running any Jobs?
My .yaml file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-demo
  namespace: my-namespace
  labels:
    app: my-demo
spec:
  replicas: 1
  template:
    metadata:
      name: my-demo-pod
      labels:
        app: my-demo
    spec:
      nodeSelector:
        "beta.kubernetes.io/os": windows
      containers:
      - name: my-demo-container
        image: myregistry.azurecr.io/my.demo:latest
        imagePullPolicy: Always
        resources:
          limits:
            cpu: 1
            memory: 800M
          requests:
            cpu: .1
            memory: 300M
        ports:
        - containerPort: 80
      imagePullSecrets:
      - name: my-registry-secret
  selector:
    matchLabels:
      app: my-demo
---
apiVersion: v1
kind: Service
metadata:
  name: my-demo-service
  namespace: my-namespace
spec:
  ports:
  - protocol: TCP
    port: 80
    name: my-demo-port
  selector:
    app: my-demo
The Completed status indicates that the application started by the CMD or ENTRYPOINT exited with a non-error (i.e. 0) status code. This Completed -> CrashLoopBackOff -> Running cycle usually means that the process launched when the container starts does not keep running as a long-lived foreground process; it exits, which Kubernetes interprets as the container having 'completed', hence the status.
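As an illustration, any container whose main command exits with code 0 will cycle exactly like this under the default restartPolicy: Always. A minimal sketch (the pod name, namespace, and image are hypothetical, not taken from the question):

apiVersion: v1
kind: Pod
metadata:
  name: exit-demo
  namespace: my-namespace
spec:
  containers:
  - name: exit-demo
    image: busybox:1.36
    # This command finishes immediately with exit code 0, so the pod shows
    # Completed, gets restarted, and eventually enters CrashLoopBackOff.
    command: ["sh", "-c", "echo 'pretending to do some work'; exit 0"]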
Check that the ENTRYPOINT in your Dockerfile, or the command in your pod template, starts the right process (with the appropriate flags) so that it keeps running. You can also check the logs of the previous container instance (e.g. with kubectl logs --previous) to see what output the application produced before it exited.
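If the image's ENTRYPOINT turns out not to start the backend as a long-running foreground process, one option is to set the command explicitly in the pod template. A minimal sketch of how the containers section of the Deployment above could look, assuming the backend is an ASP.NET Core app published as My.Demo.dll (the DLL name is an assumption, not taken from the question):

      containers:
      - name: my-demo-container
        image: myregistry.azurecr.io/my.demo:latest
        # Override the image's ENTRYPOINT/CMD: "dotnet My.Demo.dll" (assumed
        # name) runs the web server in the foreground and only exits on
        # failure, so the container no longer 'completes' right away.
        command: ["dotnet", "My.Demo.dll"]
        ports:
        - containerPort: 80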