Docker image works, but Kubernetes Pod fails on Ubuntu. Log: /bin/sh: [npm,start]: not found

7/23/2020

I'm taking a course that uses Kubernetes and am running into an error when I try to create a Pod.

I'm using Ubuntu on AMD64.

I installed microk8s.kubectl following these instructions (https://ubuntu.com/kubernetes/install)
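
(Roughly, the install command from that page; check the guide for the exact channel/version:)

sudo snap install microk8s --classic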

Here's my Dockerfile, which builds and runs correctly when I use Docker by itself (a rough sketch of the local build/run commands follows it).

FROM node:alpine

WORKDIR /app

COPY package.json ./
RUN npm install

COPY ./ ./

CMD ["npm", "start"]

Here's my posts.yaml file, copied verbatim from the course I'm taking:

apiVersion: v1
kind: Pod
metadata:
  name: posts
spec:
  containers:
    - name: posts
      image: emendoza1986/blog_posts:0.0.1
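
For completeness, I create the Pod from this file with something like:

kubectl apply -f posts.yaml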

Output from kubectl get pods:

NAME    READY   STATUS             RESTARTS   AGE
posts   0/1     CrashLoopBackOff   6          10m

Output from kubectl logs posts:

/bin/sh: [npm,start]: not found

Output from kubectl describe pod posts:

Name:         posts
Namespace:    default
Priority:     0
Node:         desktope/192.168.0.18
Start Time:   Thu, 23 Jul 2020 10:58:40 -0700
Labels:       <none>
Annotations:  Status:  Running
IP:           10.1.87.20
IPs:
  IP:  10.1.87.20
Containers:
  posts:
    Container ID:   containerd://acb403c53759670370959cfa2cc0939f53126aee889e1f6dc2e831bc4dc22c3c
    Image:          emendoza1986/blog_posts:0.0.1
    Image ID:       docker.io/emendoza1986/blog_posts@sha256:f69b30cf0382d4c273643ac11c505378854b966063974cc57d187718cc0b0fd5
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    127
      Started:      Thu, 23 Jul 2020 10:58:59 -0700
      Finished:     Thu, 23 Jul 2020 10:58:59 -0700
    Ready:          False
    Restart Count:  2
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-2fm2c (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  default-token-2fm2c:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-2fm2c
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason     Age                From               Message
  ----     ------     ----               ----               -------
  Normal   Scheduled  48s                default-scheduler  Successfully assigned default/posts to desktope
  Normal   Pulled     29s (x3 over 47s)  kubelet, desktope  Container image "emendoza1986/blog_posts:0.0.1" already present on machine
  Normal   Created    29s (x3 over 47s)  kubelet, desktope  Created container posts
  Normal   Started    29s (x3 over 47s)  kubelet, desktope  Started container posts
  Warning  BackOff    12s (x4 over 45s)  kubelet, desktope  Back-off restarting failed container

Output from microk8s.status:

microk8s is running
addons:
dashboard: enabled
dns: enabled
metrics-server: enabled
cilium: disabled
fluentd: disabled
gpu: disabled
helm: disabled
helm3: disabled
host-access: disabled
ingress: disabled
istio: disabled
jaeger: disabled
knative: disabled
kubeflow: disabled
linkerd: disabled
metallb: disabled
prometheus: disabled
rbac: disabled
registry: disabled
storage: disabled

Output from microk8s inspect:

Inspecting Certificates
Inspecting services
  Service snap.microk8s.daemon-cluster-agent is running
  Service snap.microk8s.daemon-containerd is running
  Service snap.microk8s.daemon-apiserver is running
  Service snap.microk8s.daemon-apiserver-kicker is running
  Service snap.microk8s.daemon-proxy is running
  Service snap.microk8s.daemon-kubelet is running
  Service snap.microk8s.daemon-scheduler is running
  Service snap.microk8s.daemon-controller-manager is running
  Service snap.microk8s.daemon-flanneld is running
  Service snap.microk8s.daemon-etcd is running
  Copy service arguments to the final report tarball
Inspecting AppArmor configuration
Gathering system information
  Copy processes list to the final report tarball
  Copy snap list to the final report tarball
  Copy VM name (or none) to the final report tarball
  Copy disk usage information to the final report tarball
  Copy memory usage information to the final report tarball
  Copy server uptime to the final report tarball
  Copy current linux distribution to the final report tarball
  Copy openSSL information to the final report tarball
  Copy network configuration to the final report tarball
Inspecting kubernetes cluster
  Inspect kubernetes cluster

Building the report tarball
  Report tarball is at /var/snap/microk8s/1503/inspection-report-20200723_112646.tar.gz

I can see the error in the log, but I haven't been able to find a solution. Thank you for your help!

-- Emmanuel Mendoza
docker
kubectl
kubernetes
microk8s
npm

1 Answer

7/23/2020

Thank you for the helpful comments. Originally my Dockerfile used single quotes in the CMD (CMD ['npm', 'start']). I had already fixed it locally to CMD ["npm", "start"], but I hadn't pushed the rebuilt image to Docker Hub, so Kubernetes was still running the old image. Rebuilding and pushing the new version fixed the problem.
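
For anyone who hits the same error, a rough sketch of the fix and the re-publish steps (image tag and file name taken from the question; adjust for your own setup):

# Exec-form CMD must be a JSON array with double quotes. With single quotes,
# Docker falls back to shell form and runs the literal string, which is what
# produces "/bin/sh: [npm,start]: not found".
CMD ["npm", "start"]

# Rebuild and push the corrected image, then recreate the Pod:
docker build -t emendoza1986/blog_posts:0.0.1 .
docker push emendoza1986/blog_posts:0.0.1
kubectl delete pod posts
kubectl apply -f posts.yaml

Note: for a versioned tag like 0.0.1 the default imagePullPolicy is IfNotPresent, so if the node has already cached the old image you may need to bump the tag or set imagePullPolicy: Always in posts.yaml.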

-- Emmanuel Mendoza
Source: StackOverflow