I have deployed an app and exposed it as a LoadBalancer service. I added the resources field to the deployment YAML to request 100m of CPU, and defined an HPA to scale the app when CPU utilization goes above 50%. The app fails to autoscale and the CPU utilization always shows as <unknown>. kubectl describe hpa
gives the following result:
Name: storyexporter-hpa
Namespace: default
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"autoscaling/v2beta1","kind":"HorizontalPodAutoscaler","metadata":{"annotations":{},"name":"storyexporter-hpa","namespace":"...
CreationTimestamp: Sat, 24 Oct 2020 18:23:46 +0530
Reference: Deployment/storyexporter-deployment
Metrics: ( current / target )
resource cpu on pods (as a percentage of request): <unknown> / 50%
Min replicas: 1
Max replicas: 3
Deployment pods: 1 current / 0 desired
Conditions:
Type Status Reason Message
---- ------ ------ -------
AbleToScale True SucceededGetScale the HPA controller was able to get the target's current scale
ScalingActive False FailedGetResourceMetric the HPA was unable to compute the replica count: missing request for cpu
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedGetResourceMetric 11s (x7 over 103s) horizontal-pod-autoscaler missing request for cpu
Warning FailedComputeMetricsReplicas 11s (x7 over 103s) horizontal-pod-autoscaler invalid metrics (1 invalid out of 1), first error is: failed to get cpu utilization: missing request for cpu
kubectl top node works. I deployed a demo WordPress app with an HPA attached to it, and that one shows CPU utilization instead of <unknown>.
Attaching my YAML for the deployment and the HPA.
apiVersion: v1
kind: Service
metadata:
  name: storyexporter
  labels:
    app: storyexporter
spec:
  ports:
  - port: 8080
  selector:
    app: storyexporter
  type: LoadBalancer
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: storyexporter-deployment
spec:
  selector:
    matchLabels:
      app: storyexporter
  replicas: 1
  template:
    metadata:
      labels:
        app: storyexporter
    spec:
      containers:
      - name: storyexporter
        image: <ImagePath>
        env:
        - name: STORYEXPORTER_MONGO_HOST
          value: storyexporter-mongodb
        - name: STORYEXPORTER_MONGO_USERNAME
          value: admin
        - name: STORYEXPORTER_MONGO_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mongodb-password
              key: password
        - name: STORYEXPORTER_RABBIT_HOST
          value: storyexporter-rabbitmq
        - name: STORYEXPORTER_RABBIT_USERNAME
          value: guest
        - name: STORYEXPORTER_RABBIT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: rabbitmq-password
              key: password
        - name: EXPIRED_RESOURCES_TTL
          value: '3600000'
        - name: CHROMIUM_TIMEOUT_IN_SECONDS
          value: '900'
        - name: CHROMIUM_WINDOW_SIZE
          value: '1920,1020'
        - name: AVG_MB_PER_STORY
          value: '1000'
        - name: CHROMIUM_ATTEMPTS_BEFORE_FAIL
          value: '0'
        - name: JAVA_OPTS
          value: ''
        - name: SKIP_EQS_ROUTING
          value: 'false'
        - name: CHROMIUM_POOL_SIZE
          value: '4'
        - name: DEV
          value: 'true'
        - name: LOCAL
          value: 'true'
        ports:
        - containerPort: 8080
        resources:
          requests:
            cpu: "100m"
          limits:
            cpu: "200m"
        imagePullPolicy: Always
      imagePullSecrets:
      - name: regcred
HPA YAML:
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: storyexporter-hpa
  namespace: default
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: storyexporter-deployment
  minReplicas: 1
  maxReplicas: 3
  metrics:
  - type: Resource
    resource:
      name: cpu
      targetAverageUtilization: 50
If you are using a multi-container Pod, you must set resources (i.e. requests and limits for CPU and memory) on every container, not just one of them. The HPA computes utilization as a percentage of the summed CPU requests of all containers in the Pod, so a single container without a CPU request makes the metric impossible to compute and produces exactly the "missing request for cpu" error shown above. For each container that is missing them, add something like:
+        resources:
+          requests:
+            cpu: 100m
+            memory: "256Mi"
+          limits:
+            cpu: 200m
+            memory: "512Mi"