test image from azure container registry

4/19/2018

I created a simple Docker image from a "Hello World" java application.

This is my Dockerfile

FROM java:8
COPY . /var/www/java  
WORKDIR /var/www/java  
RUN javac HelloWorld.java  
CMD ["java", "HelloWorld"]

I pushed the image (java-app) to Azure Container Registry.

$ az acr repository list --name AContainerRegistry --output table
Result
----------------
java-app

I want to deploy it:

amhg$ kubectl run dockerproject --image=acontainerregistry.azurecr.io/java-app:v1 
    deployment.apps "dockerproject" created
amhg$ kubectl expose deployments dockerproject --port=80 --type=LoadBalancer
    service "dockerproject" exposed

but when I check the pods, the pod has crashed:

amhg$ kubectl get pods
    NAME                               READY     STATUS             RESTARTS   AGE
    dockerproject-b6799d879-pt5rx      0/1       CrashLoopBackOff   8          19m

Is there a way to "test"/run the image from the central registry? Why does it crash?

Here is the output of kubectl describe pod:

  amhg$ kubectl describe pod dockerproject-64fbf7649-spc7h 
    Name:           dockerproject-64fbf7649-spc7h
    Namespace:      default
    Node:           aks-nodepool1-39744669-0/10.240.0.4
    Start Time:     Thu, 19 Apr 2018 11:53:58 +0200
    Labels:         pod-template-hash=209693205
                    run=dockerproject
    Annotations:    kubernetes.io/created-by={"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicaSet","namespace":"default","name":"dockerproject-64fbf7649","uid":"946610e4-43b7-11e8-9537-0a58ac1...
    Status:         Running
    IP:             10.244.0.38
    Controlled By:  ReplicaSet/dockerproject-64fbf7649
    Containers:
      dockerproject:
        Container ID:   docker://1f2a7a6870a37e4d6b53fc834b0d4d3b681e9faaacc3772177a918e66357404e
        Image:          acontainerregistry.azurecr.io/java-app:v1
        Image ID:       docker-pullable://acontainerregistry.azurecr.io/java-app@sha256:eaf6fe53a59de287ad76a18de2c7f05580b1f25153624161aadcc7b8ef47b0c4
        Port:           <none>
        Host Port:      <none>
        State:          Waiting
          Reason:       CrashLoopBackOff
        Last State:     Terminated
          Reason:       Completed
          Exit Code:    0
          Started:      Thu, 19 Apr 2018 12:35:22 +0200
          Finished:     Thu, 19 Apr 2018 12:35:23 +0200
        Ready:          False
        Restart Count:  13
        Environment:    <none>
        Mounts:
          /var/run/secrets/kubernetes.io/serviceaccount from default-token-vkpjm (ro)
    Conditions:
      Type           Status
      Initialized    True 
      Ready          False 
      PodScheduled   True 
    Volumes:
      default-token-vkpjm:
        Type:        Secret (a volume populated by a Secret)
        SecretName:  default-token-vkpjm
        Optional:    false
    QoS Class:       BestEffort
    Node-Selectors:  <none>
    Tolerations:     node.alpha.kubernetes.io/notReady:NoExecute for 300s
                     node.alpha.kubernetes.io/unreachable:NoExecute for 300s
    Events:
      Type     Reason                 Age                 From                               Message
      ----     ------                 ----                ----                               -------
      Normal   Scheduled              43m                 default-scheduler                  Successfully assigned dockerproject2-64fbf7649-spc7h to aks-nodepool1-39744669-0
      Normal   SuccessfulMountVolume  43m                 kubelet, aks-nodepool1-39744669-0  MountVolume.SetUp succeeded for volume "default-token-vkpjm"
      Normal   Pulled                 43m (x4 over 43m)   kubelet, aks-nodepool1-39744669-0  Container image "acontainerregistry.azurecr.io/java-app:v1" already present on machine
      Normal   Created                43m (x4 over 43m)   kubelet, aks-nodepool1-39744669-0  Created container
      Normal   Started                43m (x4 over 43m)   kubelet, aks-nodepool1-39744669-0  Started container
      Warning  FailedSync             8m (x161 over 43m)  kubelet, aks-nodepool1-39744669-0  Error syncing pod
      Warning  BackOff                3m (x184 over 43m)  kubelet, aks-nodepool1-39744669-0  Back-off restarting failed container
-- andreahg
azure
docker
kubectl
kubernetes

1 Answer

4/19/2018

When you run an application in a Pod, Kubernetes expects it to keep running like a daemon until you stop it somehow.

In your pod details I see this:

State:          Waiting
  Reason:       CrashLoopBackOff
Last State:     Terminated
  Reason:       Completed
  Exit Code:    0
  Started:      Thu, 19 Apr 2018 12:35:22 +0200
  Finished:     Thu, 19 Apr 2018 12:35:23 +0200

This means that your application exited with code 0 (which means "all is OK") right after it started (note that Started and Finished are only one second apart). So the image was pulled successfully (the registry is fine) and the container ran, but the application itself exited.
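Judging by the Dockerfile in the question, the application is probably shaped roughly like this (a sketch, since its source isn't shown). A main method that just prints and returns is exactly what produces a "Completed" container that Kubernetes keeps restarting:

```java
// Presumed shape of the application (the actual source is not shown in the question).
public class HelloWorld {
    public static void main(String[] args) {
        System.out.println("Hello World");
        // main returns here, so the JVM exits with status 0.
        // Kubernetes records the container as Terminated/Completed and,
        // with the default restartPolicy of Always, restarts it again and
        // again, which escalates into CrashLoopBackOff.
    }
}
```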

That's why Kubernetes tries to restart the pod all the time.

The only thing I can suggest is to find out why the application stops and fix it. You can inspect the container's output with kubectl logs <pod-name>.
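If the intent is to serve traffic on the port 80 that the Service exposes, one possible fix is to turn the program into a long-running process. Here is a minimal sketch using the JDK's built-in com.sun.net.httpserver.HttpServer (the response text and the choice of port are assumptions based on the commands in the question):

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

public class HelloWorld {
    // Starts an HTTP server that answers every request with "Hello World".
    static HttpServer start(int port) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(port), 0);
        server.createContext("/", exchange -> {
            byte[] body = "Hello World".getBytes(StandardCharsets.UTF_8);
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        server.start(); // the dispatcher thread is non-daemon, so the JVM keeps running
        return server;
    }

    public static void main(String[] args) throws Exception {
        start(80); // the port that "kubectl expose ... --port=80" routes to
    }
}
```

With this shape the container stays in the Running state instead of Completed, and the LoadBalancer Service has something to route traffic to.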

-- Anton Kostenko
Source: StackOverflow