k3s cleanup of HelmChart?

7/4/2019

I have followed the instructions from this blog post to set up a k3s cluster on a couple of Raspberry Pi 4s.

I'm now trying to get my hands dirty with Traefik as the front end, but I'm having issues with the way it has been deployed as a 'HelmChart' custom resource.

From the k3s docs

It is also possible to deploy Helm charts. k3s supports a CRD controller for installing charts. A YAML file specification can look as following (example taken from /var/lib/rancher/k3s/server/manifests/traefik.yaml):

So I have been starting k3s with the --no-deploy traefik option so that I can add Traefik manually with my own settings. I then apply a YAML file like this:

apiVersion: helm.cattle.io/v1
kind: HelmChart
metadata:
  name: traefik
  namespace: kube-system
spec:
  chart: https://%{KUBERNETES_API}%/static/charts/traefik-1.64.0.tgz
  # 'set' is a flat map of dotted value paths, so nested keys
  # like dashboard.enabled must be written in dotted form
  set:
    rbac.enabled: "true"
    ssl.enabled: "true"
    kubernetes.ingressEndpoint.useDefaultPublishedService: "true"
    dashboard.enabled: "true"
    dashboard.domain: "traefik.k3s1.local"

But when iterating over settings to get it working the way I want, I'm having trouble tearing it down. If I try kubectl delete -f on this YAML, it just hangs indefinitely. And I can't seem to find a clean way to delete all the resources manually either (my best guess is sketched below).
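The closest thing to manual cleanup I can think of is deleting by the labels the chart applies, something along these lines (the app and release label values are assumptions based on the chart's defaults), though that still wouldn't cover cluster-scoped resources like the RBAC objects:

kubectl -n kube-system delete all -l app=traefik,release=traefik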

I've now been resorting to reinstalling my entire cluster over and over because I can't seem to clean up properly.

Is there a way to delete all the resources created by a chart like this without the Helm CLI (which I don't even have installed)?

-- Viktor Hedefalk
k3s
kubernetes

2 Answers

7/15/2019

I see two options here:

  1. Use the --now flag to delete the resources in your YAML file with minimal delay.

  2. Use the --grace-period=0 --force flags to force-delete the resources.

Both are sketched below.
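For example (assuming your manifest is saved as traefik.yaml):

kubectl delete -f traefik.yaml --now                     # short grace period
kubectl delete -f traefik.yaml --grace-period=0 --force  # skip graceful deletion entirely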

There are other options, but you'll need the Helm CLI for them.

Please let me know if that helped.

-- OhHiMark
Source: StackOverflow

11/3/2019

Are you sure that kubectl delete -f is hanging?

I had the same issue as you and it seemed like kubectl delete -f was hanging, but it was really just taking a long time.

As far as I can tell, when you issue kubectl delete -f, a pod in the kube-system namespace named helm-delete-* should spin up and try to delete the resources deployed via Helm. You can get the full name of that pod by running kubectl -n kube-system get pods and finding the one named helm-delete-<name of yaml>-<id>. Then use the pod name to look at the logs with kubectl -n kube-system logs helm-delete-<name of yaml>-<id>.

An example of what I did was:

kubectl delete -f jenkins.yaml # seems to hang
kubectl -n kube-system get pods # look at pods in kube-system namespace
kubectl -n kube-system logs helm-delete-jenkins-wkjct # look at the delete logs
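If you'd rather block until the cleanup finishes instead of tailing logs, the helm-delete pod is run by a Job, so something like this should work (the job name is an assumption, following the same helm-delete-<name> pattern):

kubectl -n kube-system get jobs   # find the helm-delete-<name> job
kubectl -n kube-system wait --for=condition=complete \
    job/helm-delete-jenkins --timeout=300s   # wait for the cleanup job to finish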
-- cwiggs
Source: StackOverflow