Kubernetes cluster on Azure Container Service routes to 404, while my Docker image works fine locally?

7/13/2018

I've created a Docker image based on CentOS with systemd services enabled. With a docker-compose.yml file and `docker-compose up -d`, the image builds and I can reach my application at localhost:8080/my/app.

I was using this tutorial - https://carlos.mendible.com/2017/12/01/deploy-your-first-service-to-azure-container-services-aks/.

Once the Docker image was ready, I pushed it to Azure Container Registry and created an Azure Container Service (AKS) cluster. After deploying that same working image to the AKS cluster, I get 404 page not found when I try to access the load balancer's public IP. I also exec'd into the Kubernetes node and tried `curl localhost:8080/my/app`; still 404.

I see my services are up and running without any issue inside the Kubernetes pod, and the configuration is pretty much the same as my Docker container.

Here is my Dockerfile:

#Dockerfile based on latest CentOS 7 image
FROM c7-systemd-httpd-local

RUN yum install -y epel-release # for nginx
RUN yum install -y initscripts  # for old "service"

ENV container docker

RUN yum install -y bind bind-utils
RUN systemctl enable named.service 

# webserver service
RUN yum install -y nginx
RUN systemctl enable nginx.service

# Without this, init won't start the enabled services and exec'ing and starting
# them reports "Failed to get D-Bus connection: Operation not permitted".
VOLUME /run /tmp

# Don't know if it's possible to run services without starting this
ENTRYPOINT [ "/usr/sbin/init" ] 

VOLUME ["/sys/fs/cgroup"]

RUN mkdir -p /myappfolder
COPY . myappfolder
WORKDIR ./myappfolder

RUN sh ./setup.sh

WORKDIR /

EXPOSE 8080

CMD ["/bin/startServices.sh"]

Here is my docker-compose.yml:

version: '3'

services:
  myapp:
    build: ./myappfolder
    container_name: myapp
    environment:
      - container=docker
    ports:
      - "8080:8080"
    privileged: true
    cap_add:
      - SYS_ADMIN
    security_opt:
      - seccomp:unconfined
    volumes:
      - /sys/fs/cgroup:/sys/fs/cgroup:ro
    command: "bash -c /usr/sbin/init"

Here is my Kubernetes YAML file:

apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - args:
        - bash
        - -c
        - /usr/sbin/init
        env:
        - name: container
          value: docker
        name: myapp
        image: myapp.azurecr.io/newinstalled_app:v1
        ports:
        - containerPort: 8080
        securityContext:
          capabilities:
            add: ["SYS_ADMIN"]
          privileged: true
        #command: ["bash", "-c", "/usr/sbin/init"] 
      imagePullSecrets:
      - name: myapp-test
---
apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  type: LoadBalancer
  ports:
  - port: 8080
  selector:
    app: myapp   

I used these commands -

1. az group create --name resource group --location eastus
2. az aks create --resource-group rename --name kubname --node-count 1 --generate-ssh-keys
3. az aks get-credentials --resource-group rename --name kubname
4. kubectl get cs
5. kubectl cluster-info
6. kubectl create -f yamlfile.yml
7. kubectl get po --watch
8. kubectl get svc --watch
9. kubectl get pods
10. kubectl exec -it myapp-66678f7645-2r58w -- bash

Entered the pod - it's still 404.

11. kubectl get svc -> External IP - 104.43.XX.XXX:8080/my/app -> goes to 404.

But `docker-compose up -d` locally serves my application fine.

Am I missing anything?

-- Siddhartha Thota
azure
azure-container-service
azure-kubernetes
dockerfile
kubernetes

1 Answer

7/27/2018

Figured it out. I needed the load balancer to listen on port 80 and point its destination (target) port at 8080.

That's the only change I made and things started working fine.
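In Service terms, that change is a sketch along these lines (reusing the `myapp` names from the question; `targetPort` is what Kubernetes calls the destination port, and it defaults to `port` when omitted - which is why the original Service forwarded port 8080 to port 8080):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  type: LoadBalancer
  ports:
  - port: 80         # port the load balancer's public IP listens on
    targetPort: 8080 # port the container actually serves the app on
  selector:
    app: myapp
```

With this mapping the app is reachable at http://<external-ip>/my/app, without appending :8080 to the public IP.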

Thanks!

-- Siddhartha Thota
Source: StackOverflow