Best practice for deploying a Kubernetes Deployment to different environments and handling build numbers in the config

6/27/2019

I have a deployment config stored alongside each microservice. It looks something like this:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    io.kompose.service: service_x
  name: service_x
spec:
  replicas: 2
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        io.kompose.service: service_x
    spec:
      containers:
      - env:
        - name: FLASK_ENV
          value: "production"
        image: somewhere/service_x:master_179
        name: service_x
        ports:
        - containerPort: 80
        resources: {}
        volumeMounts:
          - mountPath: /app/service_x/config/deployed
            name: volume-service_xproduction
      restartPolicy: Always
      volumes:
        - name: volume-service_xproduction
          configMap:
            name: service_xproduction
            items:
              - key: production.py
                path: production.py

We have the following environments: Dev, Stage, and Production. As you can see, the image parameter contains the service, branch, and build number. I have several ideas for making this dynamic so that I can deploy, for example, service_x:development_190 in the Dev environment and a different build on Stage. But before I start to (maybe) reinvent the wheel, I wonder how other people solve this challenge... Btw, we use CircleCI to build the Docker images.
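
One of those ideas, sketched here purely for illustration, is to keep a placeholder instead of the hard-coded tag and substitute it at deploy time; the file name deployment.tpl.yml, the IMAGE_TAG variable and the envsubst step are assumptions, not something we actually have in place:

# deployment.tpl.yml (excerpt) - ${IMAGE_TAG} is a placeholder filled in by CI
        image: somewhere/service_x:${IMAGE_TAG}

# in the CI deploy step, render and apply for the target environment
IMAGE_TAG=development_190 envsubst '${IMAGE_TAG}' < deployment.tpl.yml > deployment.yml
kubectl apply -f deployment.yml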

My question now is: what's the best practice for deploying builds to different environments?

  • Building the deployment.yml for each build?
  • Using variables/templates?
  • Any other solutions I'm not aware of?
  • Maybe it's not the best idea to keep the Kubernetes files with the microservice?
-- Thomas Spycher
amazon-eks
amazon-web-services
circleci
continuous-deployment
kubernetes

1 Answer

6/27/2019

There are a lot of ways to do what you want, such as Helm charts, updating templates, etc.
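
For example, with Helm you would template the image tag in the chart and override it per environment at install time. A minimal sketch (the chart layout, value names and release name below are made up for illustration):

# templates/deployment.yaml inside a hypothetical chart: the tag comes from values
        image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"

# deploy a specific build to a specific environment
helm upgrade --install service-x ./chart --namespace dev --set image.tag=development_190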

What I do is structure the code like this:

├── .git
├── .gitignore
├── .gitlab-ci.yml
├── LICENSE
├── Makefile
├── README.md
├── src
│   ├── Dockerfile
│   └── index.html
└── templates
    ├── autoscaler.yml
    ├── deployment.yml
    ├── ingress.yml
    ├── sa.yml
    ├── sm.yml
    └── svc.yml

The Kubernetes template files then look something like this:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-deployment
  namespace: __NAMESPACE__
  labels:
    app: app
    environment: __CI_COMMIT_REF_NAME__
    commit: __CI_COMMIT_SHORT_SHA__
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app
  template:
    metadata:
      labels:
        app: app
        environment: __CI_COMMIT_REF_NAME__
        commit: __CI_COMMIT_SHORT_SHA__
      annotations:
        "cluster-autoscaler.kubernetes.io/safe-to-evict": "true"
    spec:
      containers:
        - name: app
          image: <registry>/app:__CI_COMMIT_SHORT_SHA__
          ports:
            - containerPort: 80

So this template doesn't need to change as long as you only change the code under src.

Then, in the CircleCI configuration, you can add steps that substitute the placeholders in the templates before applying them:

- sed -i "s/__NAMESPACE__/${CI_COMMIT_REF_NAME}/" deployment.yml service.yml
- sed -i "s/__CI_COMMIT_SHORT_SHA__/${CI_COMMIT_SHORT_SHA}/" deployment.yml service.yml
- sed -i "s/__CI_COMMIT_REF_NAME__/${CI_COMMIT_REF_NAME}/" deployment.yml service.yml
- kubectl apply -f deployment.yml
- kubectl apply -f service.yml

These variables are either predefined by the CI system or set by you in the project settings. (The names used above are GitLab CI's predefined variables; CircleCI's equivalents are CIRCLE_BRANCH and CIRCLE_SHA1.)
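
A minimal CircleCI job wrapping those steps might look like the sketch below; the executor image is a placeholder (anything with git, sed and kubectl works), and it assumes kubectl is already authenticated against the cluster, for example via a kubeconfig stored in a CircleCI context:

version: 2.1

jobs:
  deploy:
    docker:
      # placeholder image: use any image that has git, sed and kubectl on the PATH
      - image: your-registry/deploy-tools:latest
    steps:
      - checkout
      - run:
          name: Render templates and deploy
          command: |
            SHORT_SHA=$(echo "${CIRCLE_SHA1}" | cut -c1-7)
            sed -i "s/__NAMESPACE__/${CIRCLE_BRANCH}/" templates/deployment.yml
            sed -i "s/__CI_COMMIT_REF_NAME__/${CIRCLE_BRANCH}/" templates/deployment.yml
            sed -i "s/__CI_COMMIT_SHORT_SHA__/${SHORT_SHA}/" templates/deployment.yml
            kubectl apply -f templates/deployment.yml

workflows:
  deploy:
    jobs:
      - deploy

In practice you would typically add branch filters to the workflow so that, for example, only the development branch deploys to the Dev namespace.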

-- Abhyudit Jain
Source: StackOverflow