Current Setup:
Container memory is set through deployment.yml:
  limits:
    cpu: 350m
    memory: 700Mi
  requests:
    cpu: 200m
    memory: 128Mi
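For reference, a minimal sketch of where that block sits in the Deployment manifest. Only the requests/limits values come from the setup above; the names, image, and the JAVA_OPTS wiring are assumptions for illustration (whether the JVM actually picks up JAVA_OPTS depends on the image's entrypoint):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: image-downloader                 # hypothetical name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: image-downloader
  template:
    metadata:
      labels:
        app: image-downloader
    spec:
      containers:
        - name: image-downloader
          image: example-registry/image-downloader:latest  # hypothetical image
          env:
            - name: JAVA_OPTS            # hypothetical; a 300 MB heap would be
              value: "-Xmx300m"          # consistent with the NMT output below
          resources:
            requests:
              cpu: 200m
              memory: 128Mi
            limits:
              cpu: 350m
              memory: 700Mi
```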
Committed native memory stays below roughly 550 MB (the NMT total below shows ~546 MB committed) even when the pod is very close to the 700 Mi container limit. This leads me to believe that something other than the memory reported by Native Memory Tracking is causing the issue.
Total: reserved=1788920KB, committed=546092KB
- Java Heap (reserved=307200KB, committed=307200KB)
(mmap: reserved=307200KB, committed=307200KB)
- Class (reserved=1116111KB, committed=75431KB)
(classes #12048)
(malloc=1999KB #25934)
(mmap: reserved=1114112KB, committed=73432KB)
- Thread (reserved=46444KB, committed=46444KB)
(thread #46)
(stack: reserved=46240KB, committed=46240KB)
(malloc=153KB #270)
(arena=51KB #88)
- Code (reserved=259362KB, committed=57214KB)
(malloc=9762KB #14451)
(mmap: reserved=249600KB, committed=47452KB)
- GC (reserved=1034KB, committed=1034KB)
(malloc=26KB #180)
(mmap: reserved=1008KB, committed=1008KB)
- Compiler (reserved=321KB, committed=321KB)
(malloc=189KB #982)
(arena=133KB #5)
- Internal (reserved=39461KB, committed=39461KB)
(malloc=39429KB #16005)
(mmap: reserved=32KB, committed=32KB)
- Symbol (reserved=15588KB, committed=15588KB)
(malloc=13214KB #128579)
(arena=2374KB #1)
- Native Memory Tracking (reserved=3220KB, committed=3220KB)
(malloc=249KB #3553)
(tracking overhead=2971KB)
- Arena Chunk (reserved=178KB, committed=178KB)
(malloc=178KB)
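For completeness, this kind of summary is normally collected by starting the JVM with NMT enabled and then querying the running process with jcmd; the jar name below is just a placeholder, and enabling NMT is what produces the small "Native Memory Tracking" overhead visible above:

```
# start the JVM with native memory tracking enabled (app.jar is a placeholder)
java -XX:NativeMemoryTracking=summary -jar app.jar

# from inside the container, print the summary for the Java process id
jcmd <pid> VM.native_memory summary
```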
Issue:
The pod has a single container, and that container runs only this Java application with the configuration above. The application essentially downloads images in parallel using threads and performs validations such as image size and dimensions. The pod's memory consumption keeps increasing until it almost hits the limit, at which point the application restarts. I have read through many posts and articles and I can't figure out what is causing the memory to keep going up.
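For context, a rough sketch of the kind of download-and-validate worker described above; this is purely illustrative (the URLs, pool size, and limits are assumptions, not the actual code). The fixed pool size is what bounds the "Thread" line in the NMT summary, since each worker thread reserves roughly a 1 MB stack by default on 64-bit JVMs:

```java
import javax.imageio.ImageIO;
import java.awt.image.BufferedImage;
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.InputStream;
import java.net.URL;
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class ImageDownloadSketch {

    private static final int MAX_BYTES = 5 * 1024 * 1024; // hypothetical size limit
    private static final int MAX_DIMENSION = 4096;        // hypothetical dimension limit

    public static void main(String[] args) throws InterruptedException {
        List<String> urls = Arrays.asList(
                "https://example.com/a.jpg",   // placeholder URLs
                "https://example.com/b.jpg");

        // Bounded pool: the thread count drives the "Thread" entry in the NMT summary.
        ExecutorService pool = Executors.newFixedThreadPool(8);
        for (String url : urls) {
            pool.submit(() -> downloadAndValidate(url));
        }
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.MINUTES);
    }

    private static void downloadAndValidate(String url) {
        // try-with-resources ensures the HTTP stream is always closed; leaked streams
        // and long-lived image buffers are classic causes of slowly growing memory.
        try (InputStream in = new URL(url).openStream()) {
            ByteArrayOutputStream buffer = new ByteArrayOutputStream();
            byte[] chunk = new byte[8192];
            int read;
            while ((read = in.read(chunk)) != -1) {
                buffer.write(chunk, 0, read);
                if (buffer.size() > MAX_BYTES) {
                    System.err.println("Rejected (too many bytes): " + url);
                    return;
                }
            }
            BufferedImage img = ImageIO.read(new ByteArrayInputStream(buffer.toByteArray()));
            if (img == null) {
                System.err.println("Rejected (not a decodable image): " + url);
                return;
            }
            if (img.getWidth() > MAX_DIMENSION || img.getHeight() > MAX_DIMENSION) {
                System.err.println("Rejected (dimensions too large): " + url);
                return;
            }
            // ... store the image / record metadata ...
        } catch (Exception e) {
            System.err.println("Failed: " + url + " -> " + e.getMessage());
        }
    }
}
```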
Edit 1: The Docker image is based on openjdk:8-jdk-slim.
Edit 2: Some screenshots of the monitoring: