That slave-agent pod always seems to die and go away very quickly after an error in my Jenkinsfile. Is there a way to exec into it and keep it alive while I'm in it? I'm running Jenkins on Kubernetes using Helm.
If the pod is already dead you can't kubectl exec into the container.
However, you can ssh directly into the node that ran your pod and inspect the (now stopped) container directly. (You can't docker exec into it once it has stopped.)
Something like this:
# this pod will die pretty quickly
$ kubectl run --restart=Never --image=busybox deadpod -- sh -c "echo quick death | tee /artifact"
pod "deadpod" created
$ kubectl describe pod deadpod
Name: deadpod
Namespace: default
Node: nodexxx/10.240.0.yyy
Containers:
deadpod:
Container ID: docker://zzzzzzzzz
[...]
$ ssh nodexxx
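If you'd rather pull the node name and container ID out in one step instead of reading them off kubectl describe, a jsonpath query works (a sketch, using the same deadpod name as above):

```shell
# Node that ran the pod
kubectl get pod deadpod -o jsonpath='{.spec.nodeName}'
# Container ID of the first container (includes the docker:// prefix)
kubectl get pod deadpod -o jsonpath='{.status.containerStatuses[0].containerID}'
```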
Once you have ssh'd into the node you have several debugging options.
Get the output:
nodexxx:~# docker logs zzzz
quick death
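For the output specifically you often don't need to SSH at all: as long as the pod object still exists in the API server, kubectl logs can read the logs of a terminated container:

```shell
# Works even after the container has exited,
# provided the pod object hasn't been deleted
kubectl logs deadpod
```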
Examine the filesystem:
nodexxx:~# mkdir debug; cd debug
nodexxx:~/debug# docker export zzz | tar xv
[...]
nodexxx:~/debug# ls -l; cat artifact
[...]
quick death
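If you only need one file rather than the whole filesystem, docker cp also works on stopped containers and is lighter than a full export (a sketch, reusing the container ID from above):

```shell
# Copy a single file out of the stopped container
docker cp zzzz:/artifact ./artifact
cat ./artifact
```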
Commit the container to an image, start a new container from that image, and get a shell:
nodexxx:~# docker commit zzzz debug
nodexxx:~# docker run -it debug sh
/ # cat /artifact
quick death