I am working on integrating a few Laravel PHP applications into a new Kubernetes architecture, and I am still struggling with how to run php artisan schedule:run
in a clean manner.
In the official Laravel manual, we are advised to set up a cron entry like this:
* * * * * cd /path-to-your-project && php artisan schedule:run >> /dev/null 2>&1
ref. https://readouble.com/laravel/5.7/en/scheduling.html
Cronjob
Initially, I came up with the idea of using a Kubernetes CronJob. It works fine for now, but I have started to worry about the current architecture.
(One Deployment for the web service, and one CronJob for the task scheduling.)
---
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: cron
  namespace: my-laravel-app
spec:
  concurrencyPolicy: Replace
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - image: my_laravel_app_image:latest
              name: cron
              command: ["php", "artisan", "schedule:run"]
              imagePullPolicy: Always
              envFrom:
                - configMapRef:
                    name: laravel-app-config
                - secretRef:
                    name: laravel-app-secret
          restartPolicy: Never
However, since I use concurrencyPolicy: Replace
here, the pod itself might be terminated even if the job is still running (for more than 1 minute). To avoid this issue, I could use the default value, concurrencyPolicy: Allow,
but that introduces another issue: even with failedJobsHistoryLimit
set to 1 and successfulJobsHistoryLimit
set to 1, the pods associated with the jobs are not properly terminated in the in-house Kubernetes cluster we currently run, and we hit the quota limit.
NAME                 READY   STATUS      RESTARTS   AGE
pod/test-cronjob-a   0/1     Completed   0          4m30s
pod/test-cronjob-b   0/1     Completed   0          3m30s
pod/test-cronjob-c   0/1     Completed   0          2m29s
pod/test-cronjob-d   0/1     Completed   0          88s
pod/test-cronjob-e   0/1     Completed   0          28s
ref. https://github.com/kubernetes/kubernetes/issues/74741#issuecomment-712707783
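One workaround I am considering for the leftover pods (assuming our cluster version supports it; ttlSecondsAfterFinished requires the TTLAfterFinished feature, beta since Kubernetes 1.21 and stable in 1.23, which I have not verified on our in-house cluster) is letting the TTL controller garbage-collect finished Jobs and their pods:

```yaml
# Fragment of the CronJob spec above: ask the TTL controller to delete
# each Job (and its pods) shortly after it finishes, so completed pods
# no longer pile up against the namespace quota.
jobTemplate:
  spec:
    ttlSecondsAfterFinished: 300  # clean up 5 minutes after completion
    template:
      spec:
        containers:
          - image: my_laravel_app_image:latest
            name: cron
            command: ["php", "artisan", "schedule:run"]
        restartPolicy: Never
```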
Also, I feel it's a bit tricky to configure the monitoring and logging stack for those one-off jobs.
Deployment
Instead of using a CronJob, I'm thinking of deploying the scheduler as another pod, with the cron configuration baked into the container, using a Deployment resource.
(One Deployment for the web service, and one Deployment for the task scheduling.)
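A minimal sketch of what I have in mind (assuming the image ships with a POSIX shell, and reusing the hypothetical my_laravel_app_image:latest from above; the sleep loop stands in for running crond inside the container):

```yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: scheduler
  namespace: my-laravel-app
spec:
  replicas: 1  # must stay at 1, or scheduled tasks would run in duplicate
  selector:
    matchLabels:
      app: scheduler
  template:
    metadata:
      labels:
        app: scheduler
    spec:
      containers:
        - name: scheduler
          image: my_laravel_app_image:latest
          imagePullPolicy: Always
          # Poor man's cron: invoke the Laravel scheduler once per minute,
          # mirroring the crontab entry from the Laravel manual.
          command: ["/bin/sh", "-c"]
          args:
            - while true; do php artisan schedule:run; sleep 60; done
          envFrom:
            - configMapRef:
                name: laravel-app-config
            - secretRef:
                name: laravel-app-secret
```

This keeps the scheduler as a long-running pod, so logs and monitoring work the same way as for the web Deployment, at the cost of having to keep replicas pinned to 1.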
I just wonder how you guys normally work around this issue in a scalable manner.