Issues when using AKS to manage containers

8/5/2018

I am using Docker + AKS to manage my containers. When I run my containers locally or on a VM using docker-compose, my services (which are containerized) can communicate with my databases, which are also in containers. The bridge between these containers is created using networks. After I converted the docker-compose file for all of my applications to the respective YAML counterparts and deployed my containers to AKS (single node), my containerized services are no longer able to reach the database.

Each of my applications has three YAML files:

  1. PVC
  2. Deployment (for the Pods)
  3. Service

I've gone through many of the getting-started-with-AKS examples and for some reason am not able to figure it out. All application services are exposed publicly using load balancers. My question is: how do I define which database the application services should connect to, now that the concept of networks no longer exists?

In the examples provided for AKS, all the front-end services do is set an environment variable containing the name of the back-end service. I tried that as well and my application still doesn't work. The sample I referred to in order to validate my setup is https://docs.microsoft.com/en-gb/azure/aks/kubernetes-walkthrough#run-the-application.
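For context, the pattern in those examples looks roughly like the following container spec fragment; all names here (web, myapp, DB_HOST) are placeholders for illustration, not taken from the walkthrough:

```yaml
# Hypothetical snippet of a front-end Deployment's container spec.
# The value "postgres" is the name of the database Service; cluster
# DNS resolves it, and the app reads DB_HOST/DB_PORT at startup.
containers:
- name: web            # placeholder container name
  image: myapp:latest  # placeholder image
  env:
  - name: DB_HOST      # whatever variable your app actually reads
    value: postgres
  - name: DB_PORT
    value: "5432"
```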

Any help would be great.

-- Ashish Chettri
azure-kubernetes
kubernetes

1 Answer

8/5/2018

If you need these services internally only, you should not expose them publicly using load balancers.

Kubernetes has two mechanisms for service discovery: DNS and environment variables. While the DNS add-on is technically optional, I have never seen a cluster without it, and I assume AKS ships with it as well.
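To illustrate the two mechanisms (the helper below is a sketch, not a standard API): for a Service named postgres, kubelet injects POSTGRES_SERVICE_HOST and POSTGRES_SERVICE_PORT into Pods created after the Service exists; the DNS name works regardless of creation order.

```python
import os

def service_addr(name: str, default_port: int, namespace: str = "default"):
    """Resolve a Kubernetes Service address.

    Prefers the environment variables kubelet injects
    (NAME_SERVICE_HOST / NAME_SERVICE_PORT); falls back to the
    cluster-DNS name, which also works for Pods created before
    the Service.
    """
    prefix = name.upper().replace("-", "_")
    host = os.environ.get(f"{prefix}_SERVICE_HOST")
    port = os.environ.get(f"{prefix}_SERVICE_PORT")
    if host and port:
        return host, int(port)
    return f"{name}.{namespace}.svc.cluster.local", default_port

# Example: resolve the "postgres" Service from the manifests below.
host, port = service_addr("postgres", 5432)
```

Because the env-var mechanism depends on Pod creation order, relying on DNS is usually the safer default.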

So, say for example you have a PostgreSQL database and want to use it from another service:

apiVersion: apps/v1
kind: Deployment

metadata:
  name: postgres
  labels:
    app: postgres

spec:
  replicas: 1

  selector:
    matchLabels:
      app: postgres

  template:
    metadata:
      labels:
        app: postgres

    spec:
      containers:
      - name: db
        image: postgres:11

        ports:
        - name: postgres
          containerPort: 5432

This creates a Deployment which exposes port 5432. The label app: postgres is also important here, since we need it later to identify the created Pods.

Now we need to create a service for it:

apiVersion: v1
kind: Service

metadata:
  name: postgres
  labels:
    app: postgres

spec:
  type: ClusterIP # default value

  selector:
    app: postgres

  ports:
  - port: 5432

This creates a virtual IP address and registers all ready Pods with the label app: postgres behind it. Since the name of the Service is postgres and it lives in the default namespace, it is now reachable at postgres.default.svc.cluster.local:5432 (from within the same namespace, the short name postgres works too). You can use this host and port in your other application (e.g. Python) to connect to the database.
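As a sketch of the application side, you might build the connection string like this; the driver choice (psycopg2) and the database name, user, and password are assumptions to replace with your own configuration:

```python
# Build the in-cluster address of the "postgres" Service created above.
HOST = "postgres.default.svc.cluster.local"  # or just "postgres"
PORT = 5432

# libpq-style connection string; dbname/user are placeholders.
dsn = f"host={HOST} port={PORT} dbname=postgres user=postgres"

# With a driver such as psycopg2 (an assumption, not part of the
# original answer), and the password supplied via a Secret:
# import psycopg2
# conn = psycopg2.connect(dsn + " password=...")
```

Keeping the host in an environment variable on the client Deployment (rather than hard-coding it) makes it easy to point the app at a different database per environment.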

-- svenwltr
Source: StackOverflow