I am trying to use both the Kubernetes HPA and the cluster-autoscaler in production, but I want to watch when scaling is triggered so that I can notify ops and developers. Are there any webhooks I can customize in the scaling lifecycle, or which events should I watch, since we already collect cluster events into a separate Elasticsearch cluster?
Kubernetes generates events when autoscaling is triggered; check the output of kubectl get events. You can watch for those events and notify the relevant people.
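For example, the HPA controller records its scaling decisions as events on the HorizontalPodAutoscaler object, so a field selector narrows the stream down to just those. A minimal sketch; SuccessfulRescale is the reason current HPA versions typically emit, so verify it against your cluster:

# Watch only events attached to HPA objects, across all namespaces
$ kubectl get events --all-namespaces --watch \
    --field-selector involvedObject.kind=HorizontalPodAutoscaler

# Or narrow further to successful rescale decisions
$ kubectl get events --all-namespaces --watch \
    --field-selector involvedObject.kind=HorizontalPodAutoscaler,reason=SuccessfulRescale

Since you already ship events to Elasticsearch, filtering on the same involvedObject.kind and reason fields there gives you an alerting source without adding anything to the cluster.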
I would use Container Lifecycle Hooks, more specifically the PostStart and PreStop hooks.
In the Kubernetes documentation we can read the following:
There are two hooks that are exposed to Containers:
PostStart
This hook executes immediately after a container is created. However, there is no guarantee that the hook will execute before the container ENTRYPOINT. No parameters are passed to the handler.
PreStop
This hook is called immediately before a container is terminated due to an API request or management event such as liveness probe failure, preemption, resource contention and others. A call to the preStop hook fails if the container is already in terminated or completed state. It is blocking, meaning it is synchronous, so it must complete before the call to delete the container can be sent. No parameters are passed to the handler.
A more detailed description of the termination behavior can be found in Termination of Pods.
You could use those to execute a specific command, like run.sh, or to execute an HTTP request against a specific endpoint on the Container. An example Pod might look like the following:
apiVersion: v1
kind: Pod
metadata:
  name: lifecycle-demo
spec:
  containers:
  - name: lifecycle-demo-container
    image: nginx
    lifecycle:
      postStart:
        exec:
          command: ["/bin/sh", "-c", "echo Hello from the postStart handler > /usr/share/message"]
      preStop:
        exec:
          command: ["/bin/sh", "-c", "nginx -s quit; while killall -0 nginx; do sleep 1; done"]
As for cluster scaling, you can monitor the events from the API server or from the command line. If you run kubectl get event --watch while the nodes are scaling, you might see something like:
$ kubectl get event --watch
LAST SEEN TYPE REASON KIND MESSAGE
5m26s Normal RegisteredNode Node Node gke-standard-cluster-1-default-pool-a663f7f4-8dxk event: Registered Node gke-standard-cluster-1-default-pool-a663f7f4-8dxk in Controller
5m26s Normal RegisteredNode Node Node gke-standard-cluster-1-default-pool-a663f7f4-pk6v event: Registered Node gke-standard-cluster-1-default-pool-a663f7f4-pk6v in Controller
5m26s Normal RegisteredNode Node Node gke-standard-cluster-1-default-pool-a663f7f4-wt5m event: Registered Node gke-standard-cluster-1-default-pool-a663f7f4-wt5m in Controller
13m Normal Starting Node Starting kubelet.
13m Normal NodeHasSufficientMemory Node Node gke-standard-cluster-1-pool-1-557d58bb-73x0 status is now: NodeHasSufficientMemory
13m Normal NodeHasNoDiskPressure Node Node gke-standard-cluster-1-pool-1-557d58bb-73x0 status is now: NodeHasNoDiskPressure
13m Normal NodeHasSufficientPID Node Node gke-standard-cluster-1-pool-1-557d58bb-73x0 status is now: NodeHasSufficientPID
13m Normal NodeAllocatableEnforced Node Updated Node Allocatable limit across pods
13m Normal NodeReady Node Node gke-standard-cluster-1-pool-1-557d58bb-73x0 status is now: NodeReady
13m Normal Starting Node Starting kube-proxy.
13m Normal RegisteredNode Node Node gke-standard-cluster-1-pool-1-557d58bb-73x0 event: Registered Node gke-standard-cluster-1-pool-1-557d58bb-73x0 in Controller
5m26s Normal RegisteredNode Node Node gke-standard-cluster-1-pool-1-557d58bb-73x0 event: Registered Node gke-standard-cluster-1-pool-1-557d58bb-73x0 in Controller
13m Normal Starting Node Starting kubelet.
13m Normal NodeHasSufficientMemory Node Node gke-standard-cluster-1-pool-1-557d58bb-fbpz status is now: NodeHasSufficientMemory
13m Normal NodeHasNoDiskPressure Node Node gke-standard-cluster-1-pool-1-557d58bb-fbpz status is now: NodeHasNoDiskPressure
13m Normal NodeHasSufficientPID Node Node gke-standard-cluster-1-pool-1-557d58bb-fbpz status is now: NodeHasSufficientPID
13m Normal NodeAllocatableEnforced Node Updated Node Allocatable limit across pods
13m Normal NodeReady Node Node gke-standard-cluster-1-pool-1-557d58bb-fbpz status is now: NodeReady
13m Normal RegisteredNode Node Node gke-standard-cluster-1-pool-1-557d58bb-fbpz event: Registered Node gke-standard-cluster-1-pool-1-557d58bb-fbpz in Controller
13m Normal Starting Node Starting kube-proxy.
5m26s Normal RegisteredNode Node Node gke-standard-cluster-1-pool-1-557d58bb-fbpz event: Registered Node gke-standard-cluster-1-pool-1-557d58bb-fbpz in Controller
13m Normal Starting Node Starting kubelet.
13m Normal NodeHasSufficientMemory Node Node gke-standard-cluster-1-pool-1-557d58bb-v5v5 status is now: NodeHasSufficientMemory
13m Normal NodeHasNoDiskPressure Node Node gke-standard-cluster-1-pool-1-557d58bb-v5v5 status is now: NodeHasNoDiskPressure
13m Normal NodeHasSufficientPID Node Node gke-standard-cluster-1-pool-1-557d58bb-v5v5 status is now: NodeHasSufficientPID
13m Normal NodeAllocatableEnforced Node Updated Node Allocatable limit across pods
13m Normal NodeReady Node Node gke-standard-cluster-1-pool-1-557d58bb-v5v5 status is now: NodeReady
13m Normal Starting Node Starting kube-proxy.
13m Normal RegisteredNode Node Node gke-standard-cluster-1-pool-1-557d58bb-v5v5 event: Registered Node gke-standard-cluster-1-pool-1-557d58bb-v5v5 in Controller
5m26s Normal RegisteredNode Node Node gke-standard-cluster-1-pool-1-557d58bb-v5v5 event: Registered Node gke-standard-cluster-1-pool-1-557d58bb-v5v5 in Controller
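The node-level events above are fairly generic, so for notifications it is usually easier to watch what the cluster-autoscaler itself emits. Depending on the cluster-autoscaler version, it publishes events such as TriggeredScaleUp on pending Pods, ScaleDown on Nodes, and ScaledUpGroup / ScaleDownEmpty on the cluster-autoscaler-status ConfigMap in kube-system, so something along these lines should catch them (a sketch; verify the reasons against your autoscaler version):

# Scale-up / scale-down decisions recorded on the status ConfigMap
$ kubectl -n kube-system get events --watch \
    --field-selector involvedObject.name=cluster-autoscaler-status

# Human-readable summary of the current autoscaler state
$ kubectl -n kube-system describe configmap cluster-autoscaler-status

Since your events already land in an isolated Elasticsearch cluster, alerting there on these reasons (plus SuccessfulRescale for the HPA) is probably the least invasive option.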