I use the Distributed JMeter Helm Chart for distributed benchmarking in Kubernetes.
Depending on the test plan, I get the following exception:
Uncaught Exception java.lang.OutOfMemoryError: Java heap space in thread Thread[Thread Group 1-45,5,main]. See log file for details.
And in the jmeter.log file:
2020-06-20 08:57:17,602 ERROR o.a.k.c.u.KafkaThread: Uncaught exception in thread 'kafka-producer-network-thread | producer-24':
java.lang.OutOfMemoryError: Java heap space
at org.apache.kafka.common.requests.MetadataResponse.lambda$brokersMap$0(MetadataResponse.java:68) ~[kloadgen-1.5.0.jar:?]
at org.apache.kafka.common.requests.MetadataResponse$$Lambda$222/804455373.apply(Unknown Source) ~[?:?]
at java.util.stream.Collectors.lambda$toMap$58(Collectors.java:1321) ~[?:1.8.0_252]
at java.util.stream.Collectors$$Lambda$51/1566067112.accept(Unknown Source) ~[?:?]
at java.util.stream.ReduceOps$3ReducingSink.accept(ReduceOps.java:169) ~[?:1.8.0_252]
at java.util.Iterator.forEachRemaining(Iterator.java:116) ~[?:1.8.0_252]
at java.util.Spliterators$IteratorSpliterator.forEachRemaining(Spliterators.java:1801) ~[?:1.8.0_252]
at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:482) ~[?:1.8.0_252]
at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:472) ~[?:1.8.0_252]
at java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:708) ~[?:1.8.0_252]
at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234) ~[?:1.8.0_252]
at java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:566) ~[?:1.8.0_252]
at org.apache.kafka.common.requests.MetadataResponse.brokersMap(MetadataResponse.java:67) ~[kloadgen-1.5.0.jar:?]
at org.apache.kafka.common.requests.MetadataResponse.topicMetadata(MetadataResponse.java:202) ~[kloadgen-1.5.0.jar:?]
at org.apache.kafka.clients.NetworkClient$DefaultMetadataUpdater.handleCompletedMetadataResponse(NetworkClient.java:1037) ~[kloadgen-1.5.0.jar:?]
at org.apache.kafka.clients.NetworkClient.handleCompletedReceives(NetworkClient.java:845) ~[kloadgen-1.5.0.jar:?]
at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:548) ~[kloadgen-1.5.0.jar:?]
at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:331) ~[kloadgen-1.5.0.jar:?]
at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:238) ~[kloadgen-1.5.0.jar:?]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_252]
I think I need to increase the heap size via JVM_ARGS. The JMeter User Manual describes how to set this when starting JMeter, but how can I set the environment variable via Kubernetes so that all JMeter servers get more memory?
Thanks!
According to the "Define an environment variable for a container" chapter of the Kubernetes documentation, it should be as simple as changing this section of jmeter-server-deployment.yaml as follows:
spec:
  containers:
    - name: {{ .Chart.Name }}
      image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
      imagePullPolicy: {{ .Values.image.pullPolicy }}
      args: ["server"]
      ports:
        - containerPort: 50000
        - containerPort: 1099
      env:
        - name: HEAP
          value: "-Xms1G -Xmx2G"
This doubles the maximum JVM heap available to each JMeter server.
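If you prefer not to hard-code the heap settings in the template, you can expose them through the chart's values instead. The sketch below assumes a hypothetical jmeterServer.heap key invented for this example; it is not something the chart ships with, so adapt the name to your own values.yaml:

# values.yaml (hypothetical key, added only for illustration)
jmeterServer:
  heap: "-Xms1G -Xmx2G"

# jmeter-server-deployment.yaml
      env:
        - name: HEAP
          value: {{ .Values.jmeterServer.heap | quote }}

With that in place you can change the heap per environment via helm upgrade --set jmeterServer.heap="-Xms2G -Xmx4G" without editing the template, and the Deployment rollout will restart the server pods with the new value.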
More information: 9 Easy Solutions for a JMeter Load Test “Out of Memory” Failure