I have a Docker image with a few Python functions that I would like to execute using a Kubernetes CronJob.
I added the Docker image to the CronJob spec so the functions are executed on a schedule.
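For illustration, a minimal version of such a spec might look like this (the name, image, schedule, and command are placeholders, and the batch/v1 CronJob API assumes Kubernetes 1.21 or newer):

    apiVersion: batch/v1
    kind: CronJob
    metadata:
      name: python-tasks              # placeholder name
    spec:
      schedule: "*/15 * * * *"        # placeholder schedule: every 15 minutes
      jobTemplate:
        spec:
          template:
            spec:
              containers:
              - name: python-tasks
                # placeholder image and entrypoint
                image: registry.example.com/python-tasks:latest
                command: ["python", "run_tasks.py"]
              restartPolicy: OnFailure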
Would it be necessary or beneficial to create a container from the same Docker image via a Deployment spec that follows the logs?
That depends on whether you want to use the logs for live debugging.
A CronJob creates a Job for each scheduled run, and every Job in turn creates a Pod, so you can simply look at the logs with:
kubectl logs <pod-name>
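If you want to stream the output live while a run is in progress, kubectl logs supports follow mode, and since the Pod name includes generated suffixes you can also select the Pod by the job-name label that Kubernetes puts on Job Pods (the pod and job names below are placeholders):

kubectl get pods
kubectl logs -f <pod-name>
kubectl logs -l job-name=<job-name>

Keep in mind that finished Pods only stick around as long as the CronJob's history limits allow (successfulJobsHistoryLimit defaults to 3 and failedJobsHistoryLimit to 1), so logs from older runs disappear once those Jobs are cleaned up.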
Another alternative is to stream your Pod logs to a logging tool such as an EFK or ELK stack. There are also many paid vendors that let you send logs to the cloud, so that's another option.
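For the EFK route, the usual pattern is a log collector running as a DaemonSet on each node that tails the container log files and ships them to Elasticsearch. As a rough sketch, a Fluent Bit configuration for that could look like the following (the Elasticsearch host is a placeholder, and a real deployment also needs the DaemonSet and RBAC manifests):

    [INPUT]
        Name    tail
        Path    /var/log/containers/*.log
        Tag     kube.*

    [FILTER]
        # enrich records with Pod metadata so you can filter by CronJob
        Name    kubernetes
        Match   kube.*

    [OUTPUT]
        Name    es
        Match   *
        # placeholder: point at your Elasticsearch Service
        Host    elasticsearch.logging.svc
        Port    9200

With something like this in place, the logs from every CronJob run persist in Elasticsearch even after the Pods themselves are cleaned up.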