I am using the Kubernetes Job system to run a stateful job that needs to keep syncing its state to a DB. But when a node performs an auto-upgrade or a Pod gets rescheduled, the new Pod is created immediately, before the old one has finished deleting. By "finished deleting" I mean all cleanup has completed within the graceful shutdown period.
I've tried using a Deployment instead, with .spec.strategy.type set to Recreate, but that only seems to apply when the image is updated, not when a Pod is forcibly deleted.
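For reference, this is roughly the Deployment variant I tried (a minimal sketch; the labels are placeholders, the image is the same one used in the Job below):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: testjob
spec:
  replicas: 1
  strategy:
    type: Recreate        # old Pod is torn down before the new one starts, but only during rollouts
  selector:
    matchLabels:
      app: testjob
  template:
    metadata:
      labels:
        app: testjob
    spec:
      containers:
      - name: testjob
        image: mybuiltimage
        imagePullPolicy: Always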
Here's the Node.js script I used to test the Kubernetes behavior; on SIGTERM it keeps running for 20 more seconds to simulate cleanup, then exits.
process.on('SIGTERM', () => {
  console.log('Got SIGTERM!');
  // Simulate slow cleanup: keep running for 20 seconds, then exit.
  setTimeout(() => {
    console.log('really exit');
    process.exit(0);
  }, 20 * 1000);
  // Show that the process is still alive while "cleaning up".
  setInterval(() => {
    console.log('still alive!');
  }, 100);
});

setInterval(() => {
  console.log('alive');
}, 5000);
And the YAML file:
apiVersion: batch/v1
kind: Job
metadata:
  name: testjob
spec:
  template:
    spec:
      containers:
      - name: testjob
        image: mybuiltimage
        imagePullPolicy: Always
      restartPolicy: Never
  backoffLimit: 2
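The 20-second cleanup in the test script only works because it fits within the Pod's termination grace period, which defaults to 30 seconds. If the cleanup took longer I would raise it explicitly in the Pod spec, something like this (sketch; the value is an assumption):

  template:
    spec:
      terminationGracePeriodSeconds: 60   # must be longer than the slowest cleanup
      containers:
      - name: testjob
        image: mybuiltimage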
Deleting the Pod created by the Job controller and then running kubectl get pods shows two Pods (the new one and the old one) at the same time: one is Terminating while the other is already Running.
testjob-f88449f78-k2spt 1/1 Terminating 0 2m
testjob-f88449f78-rwnwn 1/1 Running 0 14s
How can I make the controller ensure the old Pod has been completely deleted before creating a new one?