I have deployed an HPA to monitor memory and CPU. Our application is not robust enough to handle failures: it terminates its tasks when a pod crashes or is killed during scale-down, which causes data loss and requires a lot of manual effort to restart the tasks. I am looking for a way to exchange a command (send and receive) before the kill signal is triggered. I have seen the preStop hook, but I am not sure exactly how to get it working. Is it possible to trigger the preStop hook before the kill signal is sent during scale-down, so that a script running in the pod monitors CPU or memory, signals back to Kubernetes when they hit a certain threshold, and only then does Kubernetes send the kill signal to start the shutdown process? Any help/suggestions?
NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE
horizontalpodautoscaler.autoscaling/hpa Deployment/task-deployment1 545%/85%, 1%/75% 2 5 5 36h
Below is the HPA manifest file:
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: hpa
  namespace: namespace-CapAm
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: task-deployment1
  minReplicas: 2
  maxReplicas: 5
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 75
  - type: Resource
    resource:
      name: memory
      target:
        type: Utilization
        averageUtilization: 85
We won't be able to use Prometheus as it is not supported by our firm; we were advised to use the HPA with memory and CPU only.
The HPA does not kill (delete) the Pod; it scales the Deployment, which in turn scales the underlying ReplicaSet. So the Pod deletion is triggered by the ReplicaSet scale change, which makes the process unaware of whether the scaling was in any way related to the HPA. You should write your app so that it tolerates deletion of any Pod in the Deployment and handles normal shutdown gracefully.
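For the preStop part of your question: the hook is declared on the container in the Deployment's pod template, not on the HPA, and it runs before Kubernetes sends SIGTERM to the container whenever the Pod is terminated for any reason (scale-down included). Here is a minimal sketch, assuming a hypothetical drain script /app/drain.sh baked into your image; the container name, image, and labels are placeholders:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: task-deployment1
spec:
  replicas: 2
  selector:
    matchLabels:
      app: task
  template:
    metadata:
      labels:
        app: task
    spec:
      # Allow enough time for the preStop hook and the app's own
      # SIGTERM handling to finish; the default is 30 seconds.
      terminationGracePeriodSeconds: 120
      containers:
      - name: task                                  # placeholder container name
        image: registry.example.com/task:latest     # placeholder image
        lifecycle:
          preStop:
            exec:
              # Hypothetical script that checkpoints or drains in-flight
              # tasks before the kill signal is delivered.
              command: ["/bin/sh", "-c", "/app/drain.sh"]

Note that the hook cannot veto the termination: once it finishes (or the grace period expires), SIGTERM and eventually SIGKILL are still sent, so the script should persist or hand off work rather than try to signal Kubernetes to keep the Pod alive.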