I'm new to spark-operator and I'm confused about how to set up resource requests and limits in the YAML file. For example, in my case I have requested 512m of memory for the driver pod, but what about the limit? Is it unbounded?
spec:
  driver:
    cores: 1
    coreLimit: 200m
    memory: 512m
    labels:
      version: 2.4.5
    serviceAccount: spark
It is good practice to set limits when defining your YAML file. If you do not, you run the risk of a pod consuming all the resources on the node, as per this doc, since there is no upper bound.
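For CPU, the SparkApplication spec already gives you a knob: cores is the request and coreLimit is the limit. A minimal sketch (the field names come from the spark-operator CRD; the values are illustrative, and note that Kubernetes rejects a container whose limit is lower than its request, so coreLimit should be at least as large as cores):

spec:
  driver:
    cores: 1              # CPU request for the driver pod
    coreLimit: 1200m      # CPU limit for the driver pod
    memory: 512m
    serviceAccount: spark
  executor:
    instances: 2
    cores: 1
    coreLimit: 1200m      # CPU limit for each executor pod
    memory: 512m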
Memory limits for the driver and executor pods are set internally by Spark's Kubernetes scheduler backend, calculated as the value of spark.{driver|executor}.memory plus the memory overhead.
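To make that concrete for the 512m request above: Spark's default overhead is the larger of 384MiB and a fraction of the container memory (10% by default for JVM workloads), so the resulting limit works out to roughly 512Mi + 384Mi = 896Mi. If you want to control the overhead explicitly, the spark-operator CRD exposes a memoryOverhead field, which to my understanding maps to spark.{driver|executor}.memoryOverhead; a sketch:

spec:
  driver:
    memory: 512m          # spark.driver.memory (the request)
    memoryOverhead: 512m  # spark.driver.memoryOverhead; limit becomes 512Mi + 512Mi = 1024Mi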