As far as I understand the KubernetesPodOperator code in Airflow, it uses the Python client API to launch "naked" pods, monitors the status of each pod and the container task running in it, and feeds the logs/results back to Airflow.
In that case, failures and cleanup are handled by Airflow itself. I'm wondering if it would make more sense to have a KubernetesJobOperator that runs an Airflow task as a Kubernetes Job, since Kubernetes would then handle failures and cleanup for us, and we'd get more control and parallelism. I guess the question boils down to whether to put more of the control in Airflow or in Kubernetes. I don't know the answer, so I'm asking what you all think.
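To make the idea concrete, here is a rough sketch of the kind of `batch/v1` Job manifest such an operator might submit (names and values are just illustrative, not an actual Airflow API). The point is that `backoffLimit` lets Kubernetes retry failed pods and `ttlSecondsAfterFinished` lets Kubernetes garbage-collect the finished Job, instead of Airflow doing both:

```python
def build_job_manifest(name, image, command, backoff_limit=3):
    """Build a plain-dict batch/v1 Job manifest (hypothetical example).

    A KubernetesJobOperator could submit this via the Python client's
    BatchV1Api().create_namespaced_job(...), delegating retries and
    cleanup to Kubernetes rather than to Airflow.
    """
    return {
        "apiVersion": "batch/v1",
        "kind": "Job",
        "metadata": {"name": name},
        "spec": {
            # Kubernetes retries the pod up to backoff_limit times on failure
            "backoffLimit": backoff_limit,
            # Kubernetes deletes the Job (and its pods) after it finishes
            "ttlSecondsAfterFinished": 600,
            "template": {
                "spec": {
                    "restartPolicy": "Never",
                    "containers": [
                        {"name": name, "image": image, "command": command}
                    ],
                }
            },
        },
    }

manifest = build_job_manifest("example-task", "busybox", ["echo", "hello"])
```

With this approach Airflow would mostly just watch the Job's status conditions instead of babysitting individual pods.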