I'm trying to deploy a simple Node.js application via Helm to Azure Kubernetes Service (AKS), but after pulling my image the pod goes into CrashLoopBackOff.
Here's what I have tried so far:
My Dockerfile:
FROM node:6
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 32000
CMD [ "npm", "start" ]
My server.js:
'use strict';
const express = require('express');
const PORT = 32000;
const HOST = '0.0.0.0';
const app = express();
app.get('/', (req, res) => {
res.send('Hello world from container.\n');
});
app.listen(PORT, HOST);
console.log(`Running on http://${HOST}:${PORT}`);
I have pushed this image to ACR.
New Update: Here's the complete output of kubectl describe pod POD_NAME:
Name: myrel02-mychart06-5dc9d4b86c-kqg4n
Namespace: default
Priority: 0
PriorityClassName: <none>
Node: aks-nodepool1-19665249-0/10.240.0.6
Start Time: Tue, 05 Feb 2019 11:31:27 +0500
Labels: app.kubernetes.io/instance=myrel02
app.kubernetes.io/name=mychart06
pod-template-hash=5dc9d4b86c
Annotations: <none>
Status: Running
IP: 10.244.2.5
Controlled By: ReplicaSet/myrel02-mychart06-5dc9d4b86c
Containers:
mychart06:
Container ID: docker://c239a2b9c38974098bbb1646a272504edd2d199afa50f61d02a0ce335fe60660
Image: registry-1.docker.io/arycloud/docker-web-app:0.5
Image ID: docker-pullable://registry-1.docker.io/arycloud/docker-web-app@sha256:4faab280d161b727e0a6a6d9dfb52b22cf9c6cd7dd07916d6fe164d9af5737a7
Port: 80/TCP
Host Port: 0/TCP
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Completed
Exit Code: 0
Started: Tue, 05 Feb 2019 11:39:56 +0500
Finished: Tue, 05 Feb 2019 11:40:22 +0500
Ready: False
Restart Count: 7
Liveness: http-get http://:http/ delay=0s timeout=1s period=10s #success=1 #failure=3
Readiness: http-get http://:http/ delay=0s timeout=1s period=10s #success=1 #failure=3
Environment:
KUBERNETES_PORT_443_TCP_ADDR: cluster06-ary-2a187a-dc393b82.hcp.centralus.azmk8s.io
KUBERNETES_PORT: tcp://cluster06-ary-2a187a-dc393b82.hcp.centralus.azmk8s.io:443
KUBERNETES_PORT_443_TCP: tcp://cluster06-ary-2a187a-dc393b82.hcp.centralus.azmk8s.io:443
KUBERNETES_SERVICE_HOST: cluster06-ary-2a187a-dc393b82.hcp.centralus.azmk8s.io
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-gm49w (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
default-token-gm49w:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-gm49w
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 10m default-scheduler Successfully assigned default/myrel02-mychart06-5dc9d4b86c-kqg4n to aks-nodepool1-19665249-0
Normal Pulling 10m kubelet, aks-nodepool1-19665249-0 pulling image "registry-1.docker.io/arycloud/docker-web-app:0.5"
Normal Pulled 10m kubelet, aks-nodepool1-19665249-0 Successfully pulled image "registry-1.docker.io/arycloud/docker-web-app:0.5"
Warning Unhealthy 9m30s (x6 over 10m) kubelet, aks-nodepool1-19665249-0 Liveness probe failed: Get http://10.244.2.5:80/: dial tcp 10.244.2.5:80: connect: connection refused
Normal Created 9m29s (x3 over 10m) kubelet, aks-nodepool1-19665249-0 Created container
Normal Started 9m29s (x3 over 10m) kubelet, aks-nodepool1-19665249-0 Started container
Normal Killing 9m29s (x2 over 9m59s) kubelet, aks-nodepool1-19665249-0 Killing container with id docker://mychart06:Container failed liveness probe.. Container will be killed and recreated.
Warning Unhealthy 9m23s (x7 over 10m) kubelet, aks-nodepool1-19665249-0 Readiness probe failed: Get http://10.244.2.5:80/: dial tcp 10.244.2.5:80: connect: connection refused
Normal Pulled 5m29s (x6 over 9m59s) kubelet, aks-nodepool1-19665249-0 Container image "registry-1.docker.io/arycloud/docker-web-app:0.5" already present on machine
Warning BackOff 22s (x33 over 7m59s) kubelet, aks-nodepool1-19665249-0 Back-off restarting failed container
Update: docker logs CONTAINER_ID output:
> nodejs@1.0.0 start /usr/src/app
> node server.js
Running on http://0.0.0.0:32000
How can I avoid this issue?
Thanks in advance!
As I can see from the kubectl describe pod output, the container inside your Pod has already completed with exit code 0 (@4c74356b41 mentioned this in the comments). Reason: Completed indicates a successful termination without any errors or problems. However, the lifecycle of the Pod was very short, so Kubernetes continuously restarts the container, while the liveness and readiness probes keep failing their health checks against it.
To keep the Pod running, you must run a long-lived task (process) inside the container. There are lots of discussions and solutions available on how to solve this kind of issue; more hints can be found here.
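Note also what the describe output shows: the container port and the probes target port 80 (Port: 80/TCP and http-get http://:http/), while server.js listens on 32000, which matches the connection refused errors in the events. Below is a minimal sketch of how the container section of the chart's deployment template could be aligned with the app. It assumes a layout like the one generated by helm create; the probe timings are illustrative, not taken from your chart:

containers:
  - name: mychart06                  # container name from the describe output
    image: registry-1.docker.io/arycloud/docker-web-app:0.5
    ports:
      - name: http
        containerPort: 32000         # must match the port server.js listens on
    livenessProbe:
      httpGet:
        path: /
        port: http                   # the named port now resolves to 32000
      initialDelaySeconds: 10        # illustrative: give npm start time to boot
    readinessProbe:
      httpGet:
        path: /
        port: http

With the named http port pointing at 32000, the probes reach the Express server instead of the closed port 80.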
The kubectl logs command only works if the pod is up and running. If it is not, you can use kubectl get events. It will give you a log of events and sometimes (in my experience) also clues about what is going on.
kubectl get events -n <your_app_namespace> --sort-by='.metadata.creationTimestamp'
By default the events are not sorted, hence the --sort-by flag.
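If the container has already crashed and been restarted, you can also still retrieve the logs of the previous instance with the --previous flag:

kubectl logs POD_NAME --previous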