Best practices for organizing Kubernetes YAML when dealing with multiple environments in a project

1/10/2020

It is unclear to me what the best practices are for organizing a tree structure with multiple environments. For example, we have 2 clusters, one for stage and one for production. I would like to create a development namespace in the stage cluster. We would like to use Helm for packaging and Jenkins for deployments.

As a result, I am not sure how the tree structure should look for k8s and Helm to deploy our application to multiple environments such as development -> test -> stage -> prod.

-- semural
kubernetes
kubernetes-helm

1 Answer

1/10/2020

At a mechanical level, you can make Jenkins run helm upgrade --install with arbitrary flags. helm upgrade and helm install will take the values.yaml file from the chart you're running, but any -f or --set options then override settings from that file. So, in some form, you need:

  • A Helm chart that can install the software, given the right settings
  • A default values.yaml file in that chart that has sensible defaults, perhaps for a local environment like kind or minikube
  • An override values.yaml file, possibly per service per environment, that specifies the locations of things like external databases
  • A way to specifically set the image tag, because Jenkins will generate a new image with a new unique tag on every build

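As a sketch of how that precedence plays out, here are hypothetical default and production values files (the service name, database host, and keys are made up for illustration):

```yaml
# charts/a-service/values.yaml -- sensible defaults, e.g. for minikube
databaseHost: postgres.default.svc.cluster.local
replicaCount: 1
tag: latest
---
# charts/a-service/values.prod.yaml -- production overrides
databaseHost: prod-db.example.com
replicaCount: 3
# tag is intentionally absent; Jenkins injects it with --set tag=...
```

Running helm upgrade --install with -f values.prod.yaml --set tag=20200110.1 would then use the production database host and replica count, fall back to values.yaml for anything the override file doesn't mention, and let the --set flag for the freshly built image tag win over both files.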
Where exactly these live depends on who "owns" the various components. I've seen layouts where services have enough information to build Docker images, but the actual Kubernetes deployment is owned by a separate operations team with its own repository of just Helm charts. An alternate model is one where each service has its own deployment setup, and the team that owns the service also owns the deployment. Generally the filesystem/repository layout will follow your organizational layout here (a single repository of just Helm charts is fine if the devops team owns all of them, and is frustrating if each team manages their own).


Without trying to present it as "the best", here's the general layout I tend to use. This is a setup where each team is ultimately responsible for its own deployments. That means, for each service repository, the source code, Jenkins pipeline code, Docker image build code, and Kubernetes manifests are all in the same repository. That roughly looks like:

a-service
+-- Dockerfile                # to build the Docker image
+-- Jenkinsfile               # scripted pipeline code
+-- Makefile                  # or something similar to build the program
+-- charts
|   \-- a-service
|       +-- Chart.yaml        # Helm chart metadata
|       +-- templates
|       |   \-- ...
|       +-- values.yaml       # default values
|       +-- values.dev.yaml   # development override values
|       +-- values.prod.yaml  # production override values
\-- src                       # actual source code lives here
    \-- ...
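To close the loop on the --set tag=... part, the deployment template inside charts/a-service/templates would consume that value. A minimal hypothetical fragment (the registry name is made up) might look like:

```yaml
# charts/a-service/templates/deployment.yaml (fragment)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app: {{ .Release.Name }}
  template:
    metadata:
      labels:
        app: {{ .Release.Name }}
    spec:
      containers:
        - name: a-service
          # the tag Jenkins passes with --set tag=$TAG lands here
          image: "registry.example.com/a-service:{{ .Values.tag }}"
```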

In the Jenkinsfile you'd launch the installation with something like:

dir('charts/a-service') {
  sh "helm upgrade --install a-service . -f values.$ENV.yaml --set tag=$TAG"
}
-- David Maze
Source: StackOverflow