Kubernetes pods using too much memory on larger machines

4/5/2017

I am still getting my feet under me with Kubernetes. We have a Spring Boot based app with ~17 microservices running on Kubernetes 1.4.2 on AWS. When I run this app on an AWS cluster of 4 m3.medium workers, my containers are all in the 200-300MB range of memory usage at rest (with a couple of exceptions). For production I installed the same set of services on 4 m4.large workers, and the memory instantly moved up to 700-1000MB on the same containers with virtually identical specs. I am trying to figure out who the offending party is here - Spring Boot, Docker or Kubernetes.

Has anyone seen behavior like this before?

I know I can cap the resources using Kubernetes limits, but I really don't want to do that given that I know the application can run just fine on smaller machines and have a smaller footprint. Just looking for some advice on where the problem might be.
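For reference, this is the kind of limit I mean - just a sketch with placeholder names and values, not my actual manifest:

    apiVersion: v1
    kind: Pod
    metadata:
      name: example-service                      # placeholder name
    spec:
      containers:
      - name: example-service
        image: example/example-service:latest    # placeholder image
        resources:
          requests:
            memory: "256Mi"    # what the scheduler reserves on the node
          limits:
            memory: "512Mi"    # container is OOM-killed if it exceeds this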

EDIT: One more piece of pertinent information. I am using CoreOS stable 1298.6.0 as the host OS image.

-- Mark
amazon-web-services
docker
kubernetes
spring-boot

1 Answer

4/5/2017

In my opinion, the problem is that the processes inside your containers see the host's total RAM as the memory available to them.

If you use a bigger instance, the JVM will try to use even more RAM: Java 8 (at the time of writing) does not look at the container's cgroup memory limit, so by default it sizes its maximum heap as a fraction of the host's physical memory. You should limit your JVM heap with -Xmx300m (adjust this value to what your app needs). I recommend reading this article, where it's explained in an easy and clear way.
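For example, if your image starts the app with something like java $JAVA_OPTS -jar app.jar (check your Dockerfile - this is just an assumption about how your image is built), you can pass the flag through your Deployment without rebuilding the image. Names and values here are placeholders:

    apiVersion: extensions/v1beta1    # Deployment API group on Kubernetes 1.4
    kind: Deployment
    metadata:
      name: example-service
    spec:
      replicas: 1
      template:
        metadata:
          labels:
            app: example-service
        spec:
          containers:
          - name: example-service
            image: example/example-service:latest
            env:
            - name: JAVA_OPTS
              value: "-Xmx300m"    # cap the heap regardless of host RAM
            resources:
              limits:
                memory: "512Mi"    # leave headroom for metaspace, threads, etc.

Keep the memory limit a good bit above -Xmx, since the JVM also uses memory outside the heap.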

-- aespejel
Source: StackOverflow