I am running a Laravel application on Kubernetes and currently have a requirement to mount a storage/log folder from multiple pods, where all pods will write to the same `laravel.log` file. To achieve this, I used AWS EFS with the EFS-Provisioner for Kubernetes to mount the storage.
While troubleshooting a logging issue in my application, I noticed that when I log an entry to the `laravel.log` file from Pod A or B, it shows up in both Pod A and B when I tail the log, but it does not show up in Pod C. If I log from Pod C, it only shows up in Pod C. I am working inside the container using `php artisan tinker` and, for example, `Log::error('php-fpm');`, together with `tail -f /var/www/api/storage/logs/laravel.log`. The same behaviour happens if I `echo "php-fpm" >> /var/www/api/storage/logs/laravel.log`.
At first, I thought I might be mounting the wrong storage; however, if I `touch test` in the folder, I can see it across Pods A, B, and C. I can also see the other log files that are created, with identical timestamps.
Any ideas on how to fix this?
Edit: I noticed that Pods A and B, which both see each other's log entries, are in the same AZ, while Pod C (and another Pod D running Nginx) is in a different AZ. I just want to point this out, but I feel like it really shouldn't matter; it should be the same storage no matter where you are connecting from.
AWS EFS is accessed using the NFS protocol, and according to this Stack Exchange answer, simultaneous writes from multiple NFS clients to the same file will be corrupted.

I'm not sure there is a way of "fixing" NFS itself, but you can always just log to separate files.
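For example, you could give each pod its own log file on the shared EFS mount by putting the pod name into the log path. Here is a minimal sketch of what that might look like in `config/logging.php`, assuming a standard Laravel setup and that the `HOSTNAME` environment variable is available (Kubernetes sets it to the pod name by default):

```php
<?php

// config/logging.php (sketch): each pod writes to its own file on the shared
// EFS volume, so no two NFS clients ever write to the same file.
return [
    'default' => env('LOG_CHANNEL', 'single'),

    'channels' => [
        'single' => [
            'driver' => 'single',
            // e.g. storage/logs/laravel-api-7d9f6c5b4-xk2lm.log
            'path'  => storage_path('logs/laravel-' . env('HOSTNAME', 'unknown') . '.log'),
            'level' => env('LOG_LEVEL', 'debug'),
        ],
    ],
];
```

You could then tail all of them at once with `tail -f /var/www/api/storage/logs/laravel-*.log`, or skip the shared file entirely and aggregate logs at the cluster level instead.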