I just created a cluster on GKE with two n1-standard-2 nodes and installed the Prometheus Operator using the official Helm chart.
Prometheus seems to be working fine, but I'm getting alerts like these:
message: 33% throttling of CPU in namespace kube-system for container metrics-server in pod metrics-server-v0.3.1-8d4c5db46-zddql. (22 minutes ago)
message: 35% throttling of CPU in namespace kube-system for container heapster-nanny in pod heapster-v1.6.1-554bfbc7d-tg6fm. (an hour ago)
message: 77% throttling of CPU in namespace kube-system for container prometheus-to-sd in pod prometheus-to-sd-789b2. (20 hours ago)
message: 45% throttling of CPU in namespace kube-system for container heapster in pod heapster-v1.6.1-554bfbc7d-tg6fm. (20 hours ago)
message: 38% throttling of CPU in namespace kube-system for container default-http-backend in pod l7-default-backend-8f479dd9-9n77b.
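These alerts seem to come from the CPUThrottlingHigh rule bundled with the chart. As far as I can tell, the rule looks roughly like the sketch below (the exact threshold, duration, and label names may differ depending on the chart version); it fires on the ratio of throttled CFS periods to total CFS periods per container:

- alert: CPUThrottlingHigh
  expr: |
    100 * sum(increase(container_cpu_cfs_throttled_periods_total{container_name!=""}[5m])) by (container_name, pod_name, namespace)
      /
    sum(increase(container_cpu_cfs_periods_total[5m])) by (container_name, pod_name, namespace)
      > 25
  for: 15m
  labels:
    severity: warning
  annotations:
    message: '{{ printf "%0.0f" $value }}% throttling of CPU in namespace {{ $labels.namespace }} for container {{ $labels.container_name }} in pod {{ $labels.pod_name }}.'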
All of those pods are part of the default GKE installation and I haven't made any modifications to them. I believe they belong to some Google Cloud tooling that I haven't really tried yet.
My nodes aren't under heavy load:
kubectl top node
NAME                                      CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
gke-psi-cluster-01-pool-1-d5650403-cl4g   230m         11%    2973Mi          52%
gke-psi-cluster-01-pool-1-d5650403-xn35   146m         7%     2345Mi          41%
Here is my Prometheus Helm config:
alertmanager:
  alertmanagerSpec:
    storage:
      volumeClaimTemplate:
        spec:
          accessModes:
          - ReadWriteOnce
          resources:
            requests:
              storage: 5Gi
  config:
    global:
      resolve_timeout: 5m
    receivers:
    - name: "null"
    - name: slack_k8s
      slack_configs:
      - api_url: REDACTED
        channel: '#k8s'
        send_resolved: true
        text: |-
          {{ range .Alerts }}
          {{- if .Annotations.summary }}
          *{{ .Annotations.summary }}*
          {{- end }}
          *Severity* : {{ .Labels.severity }}
          {{- if .Labels.namespace }}
          *Namespace* : {{ .Labels.namespace }}
          {{- end }}
          {{- if .Annotations.description }}
          {{ .Annotations.description }}
          {{- end }}
          {{- if .Annotations.message }}
          {{ .Annotations.message }}
          {{- end }}
          {{ end }}
        title: '{{ (index .Alerts 0).Labels.alertname }}'
        title_link: https://karma.REDACTED?q=alertname%3D{{ (index .Alerts 0).Labels.alertname
          }}
    route:
      group_by:
      - alertname
      - job
      group_interval: 5m
      group_wait: 30s
      receiver: slack_k8s
      repeat_interval: 6h
      routes:
      - match:
          alertname: Watchdog
        receiver: "null"
      - match:
          alertname: KubeAPILatencyHigh
        receiver: "null"
  ingress:
    enabled: false
    hosts:
    - alertmanager.REDACTED
coreDns:
  enabled: false
grafana:
  adminPassword: REDACTED
  ingress:
    annotations:
      kubernetes.io/tls-acme: "true"
    enabled: true
    hosts:
    - grafana.REDACTED
    tls:
    - hosts:
      - grafana.REDACTED
      secretName: grafana-crt-secret
  persistence:
    enabled: true
    size: 5Gi
kubeControllerManager:
  enabled: true
kubeDns:
  enabled: true
kubeScheduler:
  enabled: true
nodeExporter:
  enabled: true
prometheus:
  ingress:
    enabled: false
    hosts:
    - prometheus.REDACTED
  prometheusSpec:
    additionalScrapeConfigs:
    - basic_auth:
        password: REDACTED
        username: prometheus
    retention: 30d
    ruleSelectorNilUsesHelmValues: false
    serviceMonitorSelectorNilUsesHelmValues: false
    storageSpec:
      volumeClaimTemplate:
        spec:
          accessModes:
          - ReadWriteOnce
          resources:
            requests:
              storage: 30Gi
prometheusOperator:
  createCustomResource: false
I've found this GitHub issue https://github.com/kubernetes-monitoring/kubernetes-mixin/issues/108, but I'm not sure whether it applies to my case, since these are default GKE pods. I want to make sure everything is running smoothly and that Stackdriver is able to retrieve all my logs properly, even though I haven't really looked into how to use it yet.
Should I modify the limits on the default GKE deployments in kube-system? Is there any problem with deploying the Prometheus Operator on GKE?
After looking through many links, I think I understand the issue here.
I believe this is the k8s issue you're experiencing. [1]
There seems to be an issue with CFS quotas in the Linux kernel that affects all containerized platforms, including Kubernetes. You can work around it by setting a higher CPU limit on the affected containers or by removing their CPU limits entirely. Please test this in a staging environment first rather than straight in production.
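As a sketch of that second workaround, this is roughly what raising or dropping the CPU limit on one of your own deployments could look like; the names, image, and values below are made up for illustration and are not one of the GKE-managed addons:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app              # hypothetical workload, for illustration only
spec:
  replicas: 1
  selector:
    matchLabels:
      app: example-app
  template:
    metadata:
      labels:
        app: example-app
    spec:
      containers:
      - name: example-app
        image: nginx:1.17        # placeholder image
        resources:
          requests:
            cpu: 100m            # keep a request so the scheduler still reserves CPU
          limits:
            cpu: "1"             # raise this well above the observed usage, or remove the
                                 # limits block entirely so CFS never throttles the container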
Best of Luck!