When trying to deploy the JHipster Console on Kubernetes, the jhipster-elasticsearch-client pod fails to start. The pod is terminated with reason OOMKilled and exit code 137.
Increasing the container's memory limit from the default 512Mi to 1Gi did not solve the issue, and the node has plenty of memory available:
Non-terminated Pods:         (9 in total)
  Namespace    Name                                            CPU Requests  CPU Limits  Memory Requests  Memory Limits
  ---------    ----                                            ------------  ----------  ---------------  -------------
  default      gateway-mysql-5c66b69cb6-r84xb                  0 (0%)        0 (0%)      0 (0%)           0 (0%)
  default      jhipster-console-84c54fbd79-k8hjt               0 (0%)        0 (0%)      0 (0%)           0 (0%)
  default      jhipster-elasticsearch-client-7cb576d5d7-s48mn  200m (10%)    400m (20%)  512Mi (6%)       1Gi (13%)
  default      jhipster-import-dashboards-s9k2g                0 (0%)        0 (0%)      0 (0%)           0 (0%)
  default      jhipster-registry-0                             0 (0%)        0 (0%)      0 (0%)           0 (0%)
  default      jhipster-zipkin-6df799f5d8-7fhz9                0 (0%)        0 (0%)      0 (0%)           0 (0%)
  kube-system  calico-node-hc5p9                               250m (12%)    0 (0%)      0 (0%)           0 (0%)
  kube-system  kube-proxy-cgmqj                                0 (0%)        0 (0%)      0 (0%)           0 (0%)
  kube-system  tiller-deploy-5c688d5f9b-zxnnp                  0 (0%)        0 (0%)      0 (0%)           0 (0%)
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  CPU Requests  CPU Limits  Memory Requests  Memory Limits
  ------------  ----------  ---------------  -------------
  450m (22%)    400m (20%)  512Mi (6%)       1Gi (13%)
The default installation of Elasticsearch is configured with a 1 GB heap, so the heap alone already fills a 1Gi container limit and leaves no headroom for off-heap JVM memory; that is why the kernel OOM-kills the container even after the limit was raised. You can reduce Elasticsearch's memory footprint by passing an environment variable to the container:
- "ES_JAVA_OPTS=-Xms512m -Xmx512m"