Converting Docker-Compose to Kubernetes

1/7/2020

I am learning Kubernetes, and have great trouble understanding the use of names/labels/selectors, and whether pod names and container names should be aligned.

I have a setup with a .NET Core MVC App, a PostgreSQL database and an Nginx reverse proxy.

It is working great with this docker-compose.yml:

version: "3.7"

services:

  reverseproxy:
    build:
      context: ./Nginx
      dockerfile: ../Nginx.dockerfile
    ports:
      - "80:80"
      - "443:443"
    restart: always

  db:
    image: postgres:12.1-alpine
    environment:
      POSTGRES_PASSWORD: "mvcdbsecretpassword"

  mvc:
    depends_on:
      - reverseproxy
      - db
    build:
      context: .
      dockerfile: ./MyMvc.dockerfile
    environment:
      ConnectionStrings.MyMvc: "Host=db;Port=5432;Database=MyMvcDb;Username=postgres;Password=mvcdbsecretpassword"
    expose:
      - "5000"
    restart: always

The MVC app container is built, tagged, and pushed to my Docker Hub registry. At startup it logs the connection string, and it accepts the settings from the docker-compose file (obviously - it is working after all).
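
Roughly the commands used for that (the Dockerfile name and image tag follow the rest of this post; the Docker Hub repository is just a placeholder):

docker build -f MyMvc.dockerfile -t mymvc:v2 .
docker tag mymvc:v2 <dockerhub-user>/mymvc:v2
docker push <dockerhub-user>/mymvc:v2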

I have converted this into six Kubernetes YAML files:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres
  labels:
    name: postgres
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
      - name: postgres
        image: postgres:12.1-alpine
        env:
        - name: POSTGRES_PASSWORD
          value: mvcdbsecretpassword
      restartPolicy: Always
---
apiVersion: v1
kind: Service
metadata:
  name: postgres
  labels:
    app: postgres
spec:
  ports:
  - name: "postgres"
    port: 5432
    targetPort: 5432
  selector:
    app: postgres
  type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mvc
  labels:
    name: mymvc
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mymvc
  template:
    metadata:
      labels:
        app: mymvc
    spec:
      containers:
      - name: mvc
        image: mymvc:v2
        env:
          - name: ConnectionStrings.MyMvc
            value: "Host=postgres;Port=5432;Database=MyMvcDb;Username=postgres;Password=mvcdbsecretpassword"
      restartPolicy: Always
---
apiVersion: v1
kind: Service
metadata:
  name: mvc
  labels:
    app: mymvc
spec:
  ports:
  - name: "mvc"
    port: 5000
    targetPort: 5000
  selector:
    app: mymvc
  type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: reverseproxy-deployment
  labels:
    app: mymvc
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mymvc
  template:
    metadata:
      labels:
        app: mymvc
    spec:
      containers:
      - name: reverseproxy
        image: reverseproxy:v2
        ports:
        - containerPort: 80
        - containerPort: 443
---
apiVersion: v1
kind: Service
metadata:
  name: reverseproxy-service
  labels:
    app: mymvc
spec:
  ports:
  - name: "http"
    port: 80
    targetPort: 80
  - name: "https"
    port: 443
    targetPort: 443
  selector:
    app: mymvc
  type: LoadBalancer

As stated earlier, I am very confused about when to use names and when to use labels.

A little guidance will be greatly appreciated.

EDIT: David Maze helped me understand the relationship between names, labels, and selectors. YAML files are updated accordingly.

I also added a service for the mvc app exposing port 5000.

Now the pods are no longer crashing, but I still have no access to the MVC app.
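
One way to test the app in isolation, bypassing the reverse proxy, is to port-forward straight to the mvc service (the port numbers come from the mvc service definition above):

kubectl port-forward service/mvc 5000:5000

and then browse to http://localhost:5000.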

I guess I should mention that I'm trying to make this run on Docker Desktop on a Windows 10 box.

The reverse proxy made sense in the Compose stack, but I am no longer sure that it also makes sense in a Kubernetes cluster, or whether I should instead set up some kind of Ingress controller.

Could someone tell me if it is even possible to test this setup on Docker Desktop?

Running kubectl get nodes -o wide reveals that there is no external IP, but I'm also not sure whether the cluster is mirrored to localhost.
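
For reference, this is how I am inspecting the LoadBalancer service itself (I am not sure this is the right place to look for the external IP either):

kubectl get service reverseproxy-service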

-- TheRoadrunner
docker-compose
kubernetes

2 Answers

1/18/2020

What I needed to understand was the need for an Ingress controller and an Ingress.yaml.

The major cloud hosting providers supply their own Ingress controllers, but when testing on Docker Desktop you have to install one yourself.

The commands to install Nginx-Ingress for Docker Desktop:

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/nginx-0.26.2/deploy/static/mandatory.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/nginx-0.26.2/deploy/static/provider/
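
To verify that the controller actually came up, I check the pods in the ingress-nginx namespace (the namespace created by the mandatory.yaml above):

kubectl get pods -n ingress-nginx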

My sample Ingress.yaml:

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: mvc-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/ssl-passthrough: "false"
spec:
  tls:
  - hosts:
    - mymvc.local
    secretName: mvcsecret-tls
  rules:
    - host: mymvc.local
      http:
        paths:
        - path: /
          backend:
            serviceName: mvc
            servicePort: 5000
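
For the tls: section to work locally, the mvcsecret-tls secret has to exist in the cluster, and mymvc.local has to resolve to the local machine. Roughly (the certificate and key file names below are just placeholders for a self-signed pair):

kubectl create secret tls mvcsecret-tls --cert=mymvc.local.crt --key=mymvc.local.key

plus an entry in C:\Windows\System32\drivers\etc\hosts on the Windows 10 box:

127.0.0.1 mymvc.local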
-- TheRoadrunner
Source: StackOverflow

1/8/2020

There are three things that actually matter in this setup:

  1. When your "mvc" pod tries to connect to a host named postgres via the database connection string, that host name has to match the name of a Service.
  2. In your two Services, the spec.selector labels need to match the labels of a pod; that is, the spec.template.metadata.labels of a Deployment.
  3. In the Deployments, the spec.selector.matchLabels need to match the labels of the corresponding pod template (right next to them in the YAML file; see the sketch below).

The other parts (the names of the Deployments, and the labels on the Service and Deployment objects themselves) don't actually matter, but they can be useful for looking things up later.
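
As a minimal sketch of how these fields line up (names borrowed from the mvc manifests in the question, stripped down to just the relevant parts):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: mvc                 # object name; only used for kubectl get/describe
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mymvc            # (3) must match the pod template labels below
  template:
    metadata:
      labels:
        app: mymvc          # the labels actually applied to the pods
    spec:
      containers:
      - name: mvc
        image: mymvc:v2
---
apiVersion: v1
kind: Service
metadata:
  name: mvc                 # (1) other pods reach this Service by this DNS name
spec:
  selector:
    app: mymvc              # (2) must match the pod labels above
  ports:
  - port: 5000
    targetPort: 5000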

In the example you show, the Service selectors don't actually match the pod labels. You need to say e.g.

apiVersion: v1
kind: Service
spec:
  selector:
    app: reverseproxy # <-- include "app:" key

If you look at the output of kubectl describe service reverseproxy-service (using the object name at the command prompt), you should see a line like Endpoints: <none>; that indicates that the service isn't binding to any matching pods, and a label mismatch like this is a frequent cause.

-- David Maze
Source: StackOverflow