I have a daemonset that deploys containers to check the status of a few mount points. The daemonset is deployed from a Python script, and immediately after the deployment command I collect the logs created by the daemonset pods. The logs are copied as soon as the daemonset creation is triggered, and for that reason they are incomplete.
After some investigation, I found that the daemonset pods are still executing their script and writing logs, while the main script has already moved on to the next step of copying the logs using the kubectl cp command.
Is there a way to make the script wait and copy the logs only after the script execution inside the pods has completed?
This is the relevant part of the code I have:
# Create diag pod
cmd = "kubectl apply -f diagnostic_daemon.yaml"
(rc, cmd_out, cmd_err) = cmdHandle.cmd_run(cmd)
if cmd_err.strip():
    print "ERROR: Unable to create diag POD. Exiting!!!"
    print "> " + REDC + cmd_err.strip() + ENDC
    sys.exit(1)
# wait for the daemonset pod to reach running state
check_daemonset_state("cos-plugin-diag")
print "\n*****Collecting cos-plugin-diag logs*****"
# Collect diag pod logs
global nodeQdigl
You also need to check that the pods are in the Running state, with all containers in each pod ready:
# wait for the pod
check_pod_state("cos-plugin-diag")
So if the pod has 2 containers, for example, you should check for READY 2/2, as in the example output below (a sketch of such a check follows it):
NAME                READY   STATUS    RESTARTS   AGE
calico-node-9wnst   2/2     Running   0          6d
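Your check_pod_state function isn't shown, but a minimal sketch of such a wait could look like the following. It polls kubectl get pods and returns once every matching pod is Running with all containers ready; the label app=cos-plugin-diag, the namespace, and the timeout values are assumptions, so adjust them to your setup:

import subprocess
import time

def wait_for_pods_ready(label, namespace="default", timeout=300, interval=5):
    """Poll `kubectl get pods` until every matching pod is Running with READY n/n."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        out = subprocess.check_output(
            ["kubectl", "get", "pods", "-n", namespace,
             "-l", label, "--no-headers"])
        rows = out.decode().strip().splitlines()
        if rows and all(_pod_ready(row) for row in rows):
            return True
        time.sleep(interval)
    return False

def _pod_ready(row):
    # kubectl columns: NAME READY STATUS RESTARTS AGE
    fields = row.split()
    ready, status = fields[1], fields[2]
    done, total = ready.split("/")
    return status == "Running" and done == total

# Usage -- the label is an assumption about how the daemonset pods are labelled:
if not wait_for_pods_ready("app=cos-plugin-diag"):
    raise RuntimeError("diag pods did not become ready in time")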
The other aspect is that you need a string in the log that marks the end of your log output, and you can check for it with:
kubectl logs <pod-name> -c <container-if-multiple-containers> -n <your-namespace>
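For example, if the diagnostic script prints a final marker line when it finishes, the main script can poll the logs for that marker before running kubectl cp. In this sketch the pod name, the marker string "DIAG COLLECTION COMPLETE", and the log file path are all placeholders, not values from your setup:

import subprocess
import time

def wait_for_log_marker(pod, marker, container=None, namespace="default",
                        timeout=600, interval=10):
    """Poll `kubectl logs` until the end-of-run marker appears in the pod log."""
    cmd = ["kubectl", "logs", pod, "-n", namespace]
    if container:
        cmd += ["-c", container]
    deadline = time.time() + timeout
    while time.time() < deadline:
        logs = subprocess.check_output(cmd).decode()
        if marker in logs:
            return True
        time.sleep(interval)
    return False

# Usage -- copy the logs only once the marker has been seen:
if wait_for_log_marker("cos-plugin-diag-xxxxx", "DIAG COLLECTION COMPLETE"):
    subprocess.check_call(["kubectl", "cp",
                           "cos-plugin-diag-xxxxx:/var/log/diag.log",
                           "./diag.log"])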
Hope it helps