My aim is to have an autoscaled set of pods, defined as below, that process a dynamic Redis queue. The container process is currently just a Python script that takes the next value from the queue and exits.
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: job
spec:
  selector:
    matchLabels:
      tier: job
  replicas: 1
  template:
    metadata:
      name: job
      labels:
        tier: job
    spec:
      containers:
      - name: job
        image: job
      terminationGracePeriodSeconds: 20
I am having a hard time understanding the restart behaviour. Here is the output from watching the pod for a while:
NAME                   READY   STATUS             RESTARTS   AGE
job-6465767d94-vh667   0/1     Completed          0          13s
job-6465767d94-vh667   1/1     Running            1          16s
job-6465767d94-vh667   0/1     Completed          1          27s
job-6465767d94-vh667   0/1     CrashLoopBackOff   1          38s
job-6465767d94-vh667   1/1     Running            2          40s
job-6465767d94-vh667   0/1     Completed          2          50s
job-6465767d94-vh667   0/1     CrashLoopBackOff   2          63s
As you can see, it restarts fine after the first run, then enters a CrashLoopBackOff state before restarting again. The problem is that with each "crash" the back-off delay before the pod is restarted increases.
To run it, I use:
kubectl apply -f job.yaml
kubectl autoscale deployment job-wq-2 --min=3 --max=10 --cpu-percent=20
Some further details:
kubectl describe
I think you should either rewrite the script so that it does not exit but instead waits for the next item to process, or use kind: Job.
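To illustrate the first option, here is a minimal sketch of the long-running worker pattern. The run_worker helper, the in-memory queue, and the "STOP" sentinel are all hypothetical names for this demo; with redis-py you would replace fake_pop with something like lambda: r.blpop("myqueue", timeout=1) against a real Redis connection.

```python
import time

def run_worker(pop_next, handle, idle_sleep=1.0):
    """Long-running worker loop: instead of exiting after one item,
    keep polling the queue so the container never terminates.
    pop_next() returns the next item, or None when the queue is empty;
    handle(item) processes one item. The loop returns only if handle
    raises StopIteration (used here so the loop is demonstrable)."""
    while True:
        item = pop_next()
        if item is None:
            time.sleep(idle_sleep)  # queue empty: wait, don't exit
            continue
        try:
            handle(item)
        except StopIteration:
            return

# In-memory stand-in for the Redis list, just for illustration.
queue = ["a", "b", "STOP"]
processed = []

def fake_pop():
    return queue.pop(0) if queue else None

def handle(item):
    if item == "STOP":  # hypothetical sentinel, only for this demo
        raise StopIteration
    processed.append(item)

run_worker(fake_pop, handle)
print(processed)  # the two real items were handled in order
```

Because the process stays alive between items, the Deployment's restartPolicy never kicks in and you avoid the CrashLoopBackOff entirely.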
You can follow the Kubernetes documentation on Jobs - Run to Completion:
apiVersion: batch/v1
kind: Job
metadata:
  name: pi
spec:
  template:
    spec:
      containers:
      - name: pi
        image: perl
        command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
      restartPolicy: Never
  backoffLimit: 4
EDIT:
I'm assuming you have already seen Fine Parallel Processing Using a Work Queue?