This is more a request for advice than a specific technical question.
I did some searching, but it's hard to find this exact issue. If you think it's a duplicate of another question, please give me some links! :-)
Like many developers (I guess), I have one "Ali Baba's cave" server hosting my blog and multiple services: GitLab, Minio, the billing system for my freelance business, etc.
All services are set up on an Ubuntu server in whatever way each one allows: apt-get install, tar extraction, or Capistrano deployments for personal projects.
This works, but it's a maintenance hell for me. Some projects can't be upgraded because a system dependency conflicts with another one or simply isn't available on my OS, and an update can have side effects on other projects. For example, a PHP upgrade needed for a personal project completely broke a manually installed PHP service because the new version was not supported.
I'm currently learning Kubernetes and Helm charts. The goal is to set up a new CoreOS server and a Kubernetes ecosystem with all my apps and projects on it.
With that, I'll be able to:
I did a test by creating a basic chart with helm create my-network and a basic nginx app, perfect for adding my network homepage!
But now I would like to add and connect some applications; let's start with GitLab.
I found two ways to add it:
1. Run helm upgrade --install gitlab gitlab/gitlab with a YAML values file for configuration, outside my own chart.
2. Declare gitlab as a dependency of my own chart in its requirements.yaml file.
Both work and give me nearly the same result.
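For reference, the dependency approach looks roughly like the fragment below; the repository URL and version constraint are assumptions and depend on where the GitLab chart is actually published:

```
# requirements.yaml of my-network (Helm 2; in Helm 3 this list lives
# under "dependencies:" in Chart.yaml). Repo URL/version are examples.
dependencies:
  - name: gitlab
    version: "~1.0"
    repository: "https://charts.gitlab.io"
```

Subchart settings then go under a gitlab: key in my-network's values.yaml.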
The first solution seems more "independent", but I don't really know how to build/test it under CI (I would like to automate upgrades).
The second lets me configure everything with a single values.yaml file, but I don't know what happens during an upgrade (are GitLab's own upgrade processes run during my chart upgrade?), and everything is combined into one "project".
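To make the second option concrete, an upgrade of the umbrella chart would look roughly like this (the chart path and values file are hypothetical); whether GitLab's internal migrations run depends entirely on the hooks defined in the GitLab chart itself:

```
# Fetch/refresh the pinned subcharts (gitlab, ...) into charts/
helm dependency update ./my-network

# One upgrade for the whole umbrella; gitlab's settings live under the
# "gitlab:" key of the values file
helm upgrade --install my-network ./my-network -f my-values.yaml
```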
GitLab is just an example; I want to add more "ready-to-use" apps this way.
What would you advise: solution 1 or 2? And what should I really take care of with either solution, especially for upgrades/backups?
If you have a completely different third solution to propose using Helm, feel free! :-)
Thanks
My experience has generally been that using a separate helm install for each piece/service is better. If those services have dependencies ("microservice X needs a Redis cache"), those are good things to put in the requirements.yaml file.
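For example, a hypothetical service-x chart that needs its own Redis could declare it like this (Helm 2 layout; the repo URL and version constraint are illustrative):

```
# charts/service-x/requirements.yaml
dependencies:
  - name: redis
    version: "~10.0"
    repository: "https://charts.helm.sh/stable"
```

Running helm dependency update then vendors the Redis chart into charts/, and it gets installed and upgraded together with service-x.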
A big “chart of charts” runs into a couple of issues:
Helm will flatten dependencies, so if service X needs Redis and service Y also needs Redis, a chart-of-charts setup will install one Redis and let both share it; in practice that's often not what you want (each service may need its own version, sizing, or credentials).
Separating out "shared" vs. "per-service" configuration gets a little weird. With separate charts you can use helm install -f twice to provide two separate values files, but in a chart-of-charts it's harder to have a set of truly global settings and also a set of per-component settings without duplicating everything.
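With separate charts, that pattern might look like the following; the file layout is made up, and later -f flags override earlier ones where keys overlap:

```
# One shared values file plus one per-service file; the rightmost
# -f wins on conflicts (Helm 2 syntax with --name).
helm install --name service-x ./charts/service-x \
  -f values/global.yaml -f values/service-x.yaml

helm install --name service-y ./charts/service-y \
  -f values/global.yaml -f values/service-y.yaml
```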
There’s a standard naming convention that combines the release name from helm install --name with the specific component name. This looks normal if it’s service-x-redis, a little weird if it’s service-x-service-x, and kind of strange if you have one global release name: the-world-service-x.
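That convention comes from the _helpers.tpl file that helm create scaffolds, which joins the release name and chart name, roughly:

```
{{/* templates/_helpers.tpl (approximately what "helm create" generates) */}}
{{- define "service-x.fullname" -}}
{{- printf "%s-%s" .Release.Name .Chart.Name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
```

So with release name service-x, a redis subchart renders its resource names as service-x-redis.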
There can be good reasons to want to launch multiple independent copies of something, or to test out just the deployment scripting for one specific service, and that’s harder if your only deployment is “absolutely everything”.
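With one chart per service, spinning up a second independent copy is just another release name (the chart path and values files here are hypothetical):

```
# Two isolated GitLab instances from the same chart
helm install --name gitlab-test ./charts/gitlab -f values/gitlab-test.yaml
helm install --name gitlab-prod ./charts/gitlab -f values/gitlab-prod.yaml
```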
For your use case, you might also consider whether non-Docker systems-management tools (Ansible, Chef, SaltStack) could reproduce your existing hand deployment without totally rebuilding your system architecture; Kubernetes is pretty exciting, but the old ways work very well too.