Helm pod installs under Windows node instead of Linux

7/24/2019

I am trying to set up Helm Tiller to talk to our Azure Kubernetes cluster.

The cluster has two nodes: the default Linux node, and a Windows node that we added, which is what we are going to use.

Here is our issue: when I add an existing Kubernetes cluster via the GitLab portal, it seems to link, and then it tells me to click a button to install Helm Tiller.

I can get the Tiller server to install on the Linux node. But for some reason, under the gitlab-managed-apps namespace, it tries to install a pod called install-Helm on the Windows node instead of the Linux node.

How can I tell GitLab to install this pod on our Linux node instead of our Windows node?

-- Mark Richardson
azure-devops
gitlab
kubernetes

1 Answer

7/26/2019

What you can do is:

After adding the cluster to GitLab, don't immediately initiate the Helm Tiller installation. You have to prepare the nodes first.

The tools we will use are:

kubectl drain

Drain node in preparation for maintenance.

The given node will be marked unschedulable to prevent new pods from arriving. 'drain' evicts the pods if the API server supports eviction (http://kubernetes.io/docs/admin/disruptions/); otherwise, it will use normal DELETE to delete the pods. The 'drain' evicts or deletes all pods except mirror pods (which cannot be deleted through the API server). If there are DaemonSet-managed pods, drain will not proceed without --ignore-daemonsets, and regardless it will not delete any DaemonSet-managed pods, because those pods would be immediately replaced by the DaemonSet controller, which ignores unschedulable markings. If there are any pods that are neither mirror pods nor managed by a ReplicationController, ReplicaSet, DaemonSet, StatefulSet or Job, then drain will not delete any pods unless you use --force. --force will also allow deletion to proceed if the managing resource of one or more pods is missing.

'drain' waits for graceful termination. You should not operate on the machine until the command completes.

When you are ready to put the node back into service, use kubectl uncordon, which will make the node schedulable again.

kubectl uncordon

Mark node as schedulable.

So what you need to do is drain the Windows node to make it unschedulable during the Tiller installation (--ignore-daemonsets is needed because, as described above, drain will not proceed if DaemonSet-managed pods such as kube-proxy are running on the node):

kubectl drain <your-windows-node> --force --ignore-daemonsets

kubectl describe node <your-windows-node> should then show:
Taints:             node.kubernetes.io/unschedulable:NoSchedule
Unschedulable:      true

Next, go to GitLab and install Tiller: it will be installed on your second (Linux) node.

You can verify that Tiller was installed on the correct node with:

kubectl get pods -o wide -n gitlab-managed-apps
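If the drain worked, the NODE column of that output should show the Linux node. A sketch of what to expect (the pod and node names below are illustrative placeholders, not output from a real cluster):

```shell
# Illustrative output of "kubectl get pods -o wide -n gitlab-managed-apps".
# The important part is the NODE column pointing at the Linux node:
#
# NAME                            READY   STATUS    RESTARTS   AGE   IP           NODE
# tiller-deploy-xxxxxxxxx-xxxxx   1/1     Running   0          1m    10.244.0.5   <your-linux-node>
```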

After the Tiller installation is completed, simply uncordon the Windows node, which will again allow Kubernetes to schedule new pods onto it:

kubectl uncordon <your-windows-node>
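To confirm the uncordon took effect, you can check the node list; while cordoned, the Windows node's STATUS includes SchedulingDisabled, and after uncordon it should report plain Ready again:

```shell
# While drained/cordoned, the node's STATUS column shows "Ready,SchedulingDisabled";
# after "kubectl uncordon" it should be back to "Ready".
kubectl get nodes
```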

You can read more about drains, with an example, in the official documentation.

Yet another solution is to apply labels to your nodes, e.g. os=win and os=linux, and install Tiller manually using node selectors. To do that you have to:

  • apply labels to the nodes
  • manually create the gitlab-managed-apps namespace
  • create a serviceaccount in that namespace
  • create the clusterrolebinding
  • install Tiller in the needed namespace (the version of the installed Tiller is important; as far as I know, GitLab does not install the newest one)
  • check whether GitLab sees this Tiller installation from the UI. In my experience I was able to do that, but I had SSL warnings and went back to the first solution.

For reference:

curl https://raw.githubusercontent.com/helm/helm/master/scripts/get | bash
kubectl create namespace gitlab-managed-apps
kubectl create serviceaccount --namespace gitlab-managed-apps tiller
kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=gitlab-managed-apps:tiller
helm init --node-selectors "os=linux" --tiller-namespace gitlab-managed-apps --service-account tiller
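The reference commands above skip the labeling step from the list. A minimal sketch of it, assuming the os=win / os=linux label names suggested earlier (the node names are placeholders you must replace with your own):

```shell
# Label each node so the --node-selectors flag of "helm init"
# can steer Tiller onto the Linux node.
kubectl label node <your-linux-node> os=linux
kubectl label node <your-windows-node> os=win
```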
-- VKR
Source: StackOverflow