How to stop the Laravel queue worker gracefully when running as a Docker image?

6/18/2020

We are deploying a Laravel app on Kubernetes. The app itself is not a problem, but the queue workers are. We've read from multiple sources that running the queue workers as a separate Deployment is recommended. So, below is the part of the Kubernetes config that runs the queue worker with the command php artisan queue:work.

I understand that it's running as PID 1, so when the process crashes, Kubernetes will automatically restart the pod. The thing is, however, that when we delete the pod, it takes a while (about 20 seconds) before it stops, and it exits with exit code 137 instead of 0. I can see this when I exec into the pod: on deletion it reports terminated with exit code 137.
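From what I understand, exit code 137 is 128 + 9, i.e. the container was killed with SIGKILL: Kubernetes sends SIGTERM first and only falls back to SIGKILL once the termination grace period (30 seconds by default) has expired. Here is a minimal sketch of how that grace period could be raised if jobs need more time to finish after SIGTERM; the 60-second value is only an illustration, not part of our actual manifest (which follows further down):

spec:
  template:
    spec:
      # Illustrative value: allow up to 60s after SIGTERM for the worker
      # to finish its current job before Kubernetes sends SIGKILL.
      terminationGracePeriodSeconds: 60
      containers:
        - name: worker
          image: my-laravel-image/app:latest
          command: ["php", "artisan", "queue:work"]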

In this article I read that Laravel (we are using 7.x) is asynchronous and should react to SIGTERM signals. So, shouldn't it follow that when we stop the pod, Kubernetes sends a SIGTERM signal and the pod stops gracefully? And graceful should mean exit code 0, right?

I hope someone can explain what I'm doing wrong here.

apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: hip-prod
  name: worker
  labels:
    worker: worker
spec:
  minReadySeconds: 5
  replicas: 1
  revisionHistoryLimit: 1
  selector:
    matchLabels:
      worker: worker
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 50%
    type: RollingUpdate
  template:
    metadata:
      labels:
        worker: worker
    spec:
      nodeSelector:
        "beta.kubernetes.io/os": linux
      containers:
        - name: worker
          image: my-laravel-image/app:latest
          command: ["php", "artisan", "queue:work"]
          imagePullPolicy: Always
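
One thing worth noting about the manifest above: the exec-form command means php itself is PID 1 and receives the SIGTERM directly. For contrast, a shell-form command like the hypothetical sketch below can leave /bin/sh as PID 1, and depending on the shell the signal may never reach the worker process:

      containers:
        - name: worker
          image: my-laravel-image/app:latest
          # Hypothetical anti-pattern, not what we use: the shell, rather than
          # PHP, becomes PID 1 and may not forward SIGTERM to the worker.
          command: ["/bin/sh", "-c", "php artisan queue:work"]

The same concern applies if the image's ENTRYPOINT wraps the command in a shell script that doesn't exec php. Also worth checking (an assumption on my part, since the Dockerfile isn't shown) is that the pcntl extension is present in the image, because Laravel's queue worker only registers its SIGTERM handler when pcntl is loaded.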
-- Milkmannetje
containers
docker
kubernetes
laravel
linux

1 Answer

9/18/2020

Have you tried building your Docker image with this?

STOPSIGNAL SIGTERM
-- Bulat Valiev
Source: StackOverflow