To address the default AWS VPC CNI max pods limit in my EKS cluster (Max Pods = (Maximum supported Network Interfaces for instance type) × (IPv4 Addresses per Interface − 1) + 2, which gives 3 × (6 − 1) + 2 = 17 on a t3.medium), I started using the Weave CNI plugin to overcome that limitation.
This brings a caveat: if you have an application or container running in the overlay network and the Kubernetes master node / API server needs to talk to it, it won't work. For instance, the APIService `v1beta1.metrics.k8s.io` tries to connect to the `metrics-server` pods running in the overlay network and never succeeds. A proposed solution is to run `metrics-server` with `hostNetwork: true`, which works just fine.
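For reference, here is a minimal sketch of that change on the metrics-server Deployment. The field names are standard Kubernetes; the image tag and args are illustrative placeholders, not my exact values:

```yaml
# metrics-server Deployment (excerpt) with host networking enabled, so the
# API server can reach the pod directly instead of through the overlay.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: metrics-server
  namespace: kube-system
spec:
  selector:
    matchLabels:
      k8s-app: metrics-server
  template:
    metadata:
      labels:
        k8s-app: metrics-server
    spec:
      hostNetwork: true                   # run in the node's network namespace
      dnsPolicy: ClusterFirstWithHostNet  # keep cluster DNS working on the host network
      containers:
        - name: metrics-server
          image: k8s.gcr.io/metrics-server/metrics-server:v0.3.7  # illustrative tag
          args:
            - --kubelet-preferred-address-types=InternalIP
```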
My problems started when we decided to add custom metrics with `prometheus-adapter`, so that we could use Kafka consumer group lag metrics for horizontal autoscaling of the consumer pods. To have the APIService `v1beta1.custom.metrics.k8s.io` talking to the `prometheus-adapter` pod we also have to set `hostNetwork: true`, but this time `prometheus-adapter` cannot access the Prometheus running in the overlay network anymore, and we cannot move everything to the host network!
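To make the failure mode concrete, here is a sketch of the `prometheus-adapter` Deployment in this state (the namespace, image tag, and Prometheus service address are placeholders for illustration). With `hostNetwork: true` the API server can reach the adapter, but the `--prometheus-url` below points into the overlay network, which the adapter can no longer reach:

```yaml
# prometheus-adapter on the host network: reachable by the API server for
# v1beta1.custom.metrics.k8s.io, but cut off from the overlay-network Prometheus.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: prometheus-adapter
  namespace: monitoring                   # placeholder namespace
spec:
  selector:
    matchLabels:
      app: prometheus-adapter
  template:
    metadata:
      labels:
        app: prometheus-adapter
    spec:
      hostNetwork: true
      dnsPolicy: ClusterFirstWithHostNet
      containers:
        - name: prometheus-adapter
          image: directxman12/k8s-prometheus-adapter:v0.5.0  # illustrative tag
          args:
            - --prometheus-url=http://prometheus.monitoring.svc:9090  # placeholder service in the overlay
            - --metrics-relist-interval=1m
```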
I'm kind of at a dead end here. I guess I could use some "tool" to forward the metrics I need from the overlay-network Prometheus to another Prometheus on the host network, which `prometheus-adapter` would then query?
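One concrete shape that "tool" could take is Prometheus federation: a second Prometheus running with `hostNetwork: true` scrapes the `/federate` endpoint of the overlay Prometheus for just the series the adapter needs. This assumes the host-network Prometheus can still reach the overlay one in the pull direction (if not, pushing with `remote_write` from the overlay side would be the alternative). A minimal sketch of the host-network Prometheus scrape config, with an illustrative job name, metric selector, and target address:

```yaml
# prometheus.yml (host-network Prometheus): federate selected series
# from the overlay Prometheus via its /federate endpoint.
scrape_configs:
  - job_name: federate-overlay              # illustrative job name
    honor_labels: true                      # keep original job/instance labels
    metrics_path: /federate
    params:
      match[]:
        - '{__name__=~"kafka_consumergroup_lag.*"}'  # placeholder metric selector
    static_configs:
      - targets:
          - prometheus.monitoring.svc:9090           # overlay Prometheus (placeholder)
```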
Problem solved. I was testing connectivity with telnet, but somehow wget works just fine.