How to execute shell scripts on multiple containers at once in AWS EKS

12/4/2019

I want to deploy a Laravel application on AWS EKS.

My application uses Laravel Jobs and Queues. The artisan utility that ships with Laravel can be used to manage the queue worker:

php artisan queue:work
php artisan queue:restart

I will be using Supervisord to monitor the queue process.

[program:laravel-worker]
process_name=%(program_name)s_%(process_num)02d
command=php /home/forge/app.com/artisan queue:work sqs --sleep=3 --tries=3
autostart=true
autorestart=true
user=forge
numprocs=8
redirect_stderr=true
stdout_logfile=/home/forge/app.com/worker.log
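
Within a single container, the worker processes can then be controlled through supervisorctl. A minimal sketch, assuming the program is named laravel-worker as in the config above:

# Pick up the program definition and start all worker processes
supervisorctl reread
supervisorctl update
supervisorctl start laravel-worker:*

# Stop the workers again when needed
supervisorctl stop laravel-worker:*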

To start the queue when the container is deployed, I am using an ENTRYPOINT script in the Dockerfile.

#!/usr/bin/env bash

##
# Ensure /.composer exists and is writable
#
if [ ! -d /.composer ]; then
    mkdir /.composer
fi

chmod -R ugo+rw /.composer

##
# Run a command or start supervisord
#
if [ $# -gt 0 ]; then
    # If we passed a command, run it
    exec "$@"
else
    # Otherwise start supervisord in the foreground so it stays PID 1
    # and the container keeps running
    exec /usr/bin/supervisord --nodaemon
fi
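
For reference, this is roughly how the script could be wired into the image; a sketch, assuming it is saved as docker/entrypoint.sh (the path and base image are placeholders):

FROM php:7.3-fpm

# ... install Supervisord, copy the application code and supervisord config, etc.

# Copy the entrypoint script and make it executable
COPY docker/entrypoint.sh /usr/local/bin/entrypoint.sh
RUN chmod +x /usr/local/bin/entrypoint.sh

ENTRYPOINT ["/usr/local/bin/entrypoint.sh"]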

What I am not able to understand is this: if I have multiple replicas of my application running, how do I remotely stop and start the queue process on the running containers?

On EC2, I can use AWS SSM to run shell commands on multiple instances at the same time.
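
For illustration, this is the kind of SSM invocation I mean (the tag value is just a placeholder):

aws ssm send-command \
  --document-name "AWS-RunShellScript" \
  --targets "Key=tag:Name,Values=my-app-servers" \
  --parameters 'commands=["php artisan queue:restart"]'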

Is there something similar available for AWS EKS as well?

Or in general, how do you manage the queue process running on multiple containers in AWS EKS?

-- Vikas Roy
amazon-web-services
aws-eks
kubernetes

1 Answer

12/4/2019

In general, if you want to execute a command in multiple containers at once, you can do this, for example, with:

for pod in $(kubectl get pods -o jsonpath='{.items[*].metadata.name}' -l app=myapp); do
  kubectl exec "$pod" -- mycommand
done

This executes mycommand in the first container of every Pod with the app=myapp label. It works the same whether your cluster runs on EKS or anywhere else.
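
Applied to this question, the same pattern can restart the Laravel queue workers in every replica. A sketch, assuming the Pods carry the label app=laravel-app and artisan lives at /var/www/html/artisan (adjust the label, container, and path to your setup):

for pod in $(kubectl get pods -l app=laravel-app -o jsonpath='{.items[*].metadata.name}'); do
  # Tell the workers in this Pod to restart after finishing their current job
  kubectl exec "$pod" -- php /var/www/html/artisan queue:restart
done

If a Pod runs more than one container, add -c <container-name> to kubectl exec to target the one running the workers.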

-- weibeld
Source: StackOverflow