How to evict or delete pods from Kubernetes using golang client

7/8/2020

I want to evict all pods from a Kubernetes node by using the client-go package. Similar to kubectl drain <Node>. Possibly ignoring the kube-system namespace pods.

I've obtained the list of pods from a node by:

func evictNodePods(nodeInstance string, client *kubernetes.Clientset) {

	pods, err := client.CoreV1().Pods("").List(metav1.ListOptions{
		FieldSelector: "spec.nodeName=" + nodeInstance,
	})

	if err != nil {
		log.Fatal(err)
	}
	for _, i := range pods.Items {
		if i.Namespace == "kube-system" {
			continue
		} else {
			//evict
		}
	}
}

But I'm not clear on how to send a POST request to evict the pods on a given node instance.

-- popopanda
go
kubernetes

2 Answers

7/8/2020

To delete a pod (older client-go versions, where the delete options are passed as a pointer):

err := client.CoreV1().Pods(i.Namespace).Delete(i.Name, &metav1.DeleteOptions{})
if err != nil {
  log.Fatal(err)
}

If you upgrade client-go to a recent version (v0.18+), you also need to pass a context, and the options are passed by value:

err := client.CoreV1().Pods(i.Namespace).Delete(context.TODO(), i.Name, metav1.DeleteOptions{})
if err != nil {
  log.Fatal(err)
}
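For example, dropped into the loop from the question (a minimal sketch assuming client-go v0.18+; the log handling is only illustrative):

for _, pod := range pods.Items {
  if pod.Namespace == "kube-system" {
    continue
  }
  // Delete each remaining pod on the node; the namespace comes from the pod itself
  err := client.CoreV1().Pods(pod.Namespace).Delete(context.TODO(), pod.Name, metav1.DeleteOptions{})
  if err != nil {
    log.Printf("failed to delete pod %s/%s: %v", pod.Namespace, pod.Name, err)
  }
}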
-- Kamol Hasan
Source: StackOverflow

7/12/2021

Although Delete may work most of the time, it does not guarantee that the replacement pod will not be scheduled onto the same node again. Here is how to handle this properly:

First, cordon the node (mark it unschedulable) so that it is taken out of the scheduling pool.

import (
  "context"

  "k8s.io/client-go/kubernetes"
  meta_v1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func CordonNode(client *kubernetes.Clientset, name string) error {
  // Fetch the node object
  node, err := client.CoreV1().Nodes().Get(context.TODO(), name, meta_v1.GetOptions{})

  if err != nil {
    return err
  }

  node.Spec.Unschedulable = true

  // Update the node
  _, err = client.CoreV1().Nodes().Update(context.TODO(), node, meta_v1.UpdateOptions{})

  return err
}

Now you have two options:

  • Add a NoExecute taint to the node; the kubelet will then evict all workloads from the node for you. However, pods that tolerate this taint will keep running on it.
import (
  "context"

  "k8s.io/client-go/kubernetes"
  meta_v1 "k8s.io/apimachinery/pkg/apis/meta/v1"
  v1 "k8s.io/api/core/v1"
)

func TaintNode(client *kubernetes.Clientset, name string) error {
  // Fetch the node object
  node, err := client.CoreV1().Nodes().Get(context.TODO(), name, meta_v1.GetOptions{})
  if err != nil {
    return err
  }

  node.Spec.Taints = append(node.Spec.Taints, v1.Taint{
    Key:    "someKey",
    Value:  "someValue",
    Effect: v1.TaintEffectNoExecute,
  })

  // Update the node
  _, err = client.CoreV1().Nodes().Update(context.TODO(), node, meta_v1.UpdateOptions{})

  return err
}
  • Individually evict workload pods running on that node.
import (
  "context"

  "k8s.io/client-go/kubernetes"
  meta_v1 "k8s.io/apimachinery/pkg/apis/meta/v1"
  policy "k8s.io/api/policy/v1beta1"
)

func EvictPod(client *kubernetes.Clientset, name, namespace string) error {
  // Create an Eviction for the pod via the policy API (the POST to the pod's
  // eviction subresource that kubectl drain uses)
  return client.PolicyV1beta1().Evictions(namespace).Evict(context.TODO(), &policy.Eviction{
    ObjectMeta: meta_v1.ObjectMeta{
      Name:      name,
      Namespace: namespace,
    },
  })
}
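A minimal sketch of how these pieces could be combined into a kubectl drain-like helper (the DrainNode name and the kube-system filter are assumptions for illustration, not part of the answer above):

// DrainNode is a hypothetical helper: cordon the node, then evict every
// pod running on it except those in the kube-system namespace.
func DrainNode(client *kubernetes.Clientset, nodeName string) error {
  if err := CordonNode(client, nodeName); err != nil {
    return err
  }

  // List pods scheduled on this node across all namespaces
  pods, err := client.CoreV1().Pods("").List(context.TODO(), meta_v1.ListOptions{
    FieldSelector: "spec.nodeName=" + nodeName,
  })
  if err != nil {
    return err
  }

  for _, pod := range pods.Items {
    if pod.Namespace == "kube-system" {
      continue
    }
    if err := EvictPod(client, pod.Name, pod.Namespace); err != nil {
      return err
    }
  }
  return nil
}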
-- Raghwendra Singh
Source: StackOverflow