I'm running some scheduled PHP commands on Kubernetes.
When I run the same command in local Docker, PHP's peak RAM usage is at least 70% lower than on Kubernetes.
I'm using the same Docker image in both environments.
An example script:
<?php

use Symfony\Component\Process\Process;

require __DIR__ . '/vendor/autoload.php';

$remotePath = 'https://server/file.csv'; // Around 150MB

function downloadViaCurl($remotePath)
{
    $commandline = sprintf(
        'curl -o %s %s',
        '/tmp/file.csv',
        $remotePath
    );

    // The download is handled by curl in a child process,
    // so the file itself never passes through PHP's memory.
    $process = new Process($commandline);
    $process->disableOutput();
    $process->setTimeout(null);
    $process->run();
}

downloadViaCurl($remotePath);

// Peak memory in (decimal) megabytes.
$memory = memory_get_peak_usage(true) / 1000000;

echo sprintf("Used %.2fMB of RAM" . PHP_EOL, $memory);
The output on local Docker:
Used 2.10MB of RAM
The output on K8S:
Used 6.29MB of RAM
Dockerfile and job.yaml can be found on https://github.com/InFog/memory_issues
I found the issues.

1 - The K8S deployment was setting the application to development mode. This is a Symfony application using Doctrine ORM, and Doctrine's profiling was active, so it collected every executed query in memory, which grew the required memory a lot. For long-running processes issuing up to a million queries it was using around 500MB of RAM; after disabling profiling it uses less than 30MB.
Lesson learned: always check the production parameters.
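For illustration, here is a minimal sketch of how query logging can be switched off inside a long-running command. The function name and the EntityManager wiring are assumptions for the example, not code from my application:

<?php
// Sketch only: stop Doctrine from collecting executed queries in memory
// during a batch job. Profiling is also off by default when APP_ENV=prod.

use Doctrine\ORM\EntityManagerInterface;

function processLargeFile(EntityManagerInterface $em): void
{
    // Drop the SQL logger so queries are not kept in memory.
    // Note: setSQLLogger() is deprecated in DBAL 3 and removed in DBAL 4.
    $em->getConnection()->getConfiguration()->setSQLLogger(null);

    // ... process rows in batches and clear the unit of work regularly,
    // so managed entities don't accumulate either:
    // $em->clear();
}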
2 - The second problem: every time the CronJob runs, K8S starts a new container with an empty Symfony cache, which also increases memory usage (and startup time). I solved this by warming up the Symfony cache before pushing the image to the registry.
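Roughly, the build step looks like the sketch below. The base image, paths, and Composer flags are illustrative assumptions; the actual Dockerfile is in the repository linked above.

# Sketch of a build that warms the Symfony cache at image build time,
# so each CronJob pod starts with a prebuilt cache instead of rebuilding it.
FROM php:8.2-cli
WORKDIR /app

# Composer is not bundled with the official PHP image; copy it in.
COPY --from=composer:2 /usr/bin/composer /usr/bin/composer

COPY . /app

# Install production dependencies and warm up the cache for the prod env.
RUN composer install --no-dev --optimize-autoloader \
    && APP_ENV=prod php bin/console cache:warmup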