I have set up a Spark cluster on Kubernetes and created a multi-threaded application in which I initialize the SparkContext. I created four executors, each with 4 GB of RAM and 4 cores. When I submit my Spark job, the executors consume some memory, but they do not release it after the job completes.

I have tried sqlContext.clearCache() and spark.catalog.clearCache(), but the executors still hold on to the memory. Shutting down the executors does free it, but I am looking for a solution that releases the executors' memory without stopping them.