Trouble recording videos

12/30/2019

We have cameras that continuously stream video, and for every user session we record the video. I have a video streaming service (using node-media-server) to which the cameras stream all the time, and a separate recording service. Whenever a user performs an auth operation (logs in), I spawn a process from the recording service and create a write stream. When the user logs out, I kill the spawned process in which the recording was happening and upload the video to a Google Storage bucket.
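
To make that concrete, here is a simplified sketch of the flow (assuming ffmpeg is used to capture the RTMP stream; the URL, bucket name, and helper names are placeholders, not my actual service code):

const { spawn } = require('child_process');
const { Storage } = require('@google-cloud/storage');

const storage = new Storage();
const activeRecordings = new Map(); // sessionId -> { proc, filePath }

function startRecording(sessionId, cameraId) {
  // On login: pull the camera's stream from node-media-server and write it to disk.
  const filePath = `/tmp/${sessionId}.mp4`;
  const proc = spawn('ffmpeg', ['-i', `rtmp://localhost/live/${cameraId}`, '-c', 'copy', filePath]);
  activeRecordings.set(sessionId, { proc, filePath });
}

async function stopRecording(sessionId) {
  // On logout: stop ffmpeg cleanly, then upload the finished file to the bucket.
  const rec = activeRecordings.get(sessionId);
  if (!rec) return;
  rec.proc.kill('SIGINT'); // SIGINT lets ffmpeg finalize the output file
  await new Promise((resolve) => rec.proc.on('exit', resolve));
  await storage.bucket('my-recordings-bucket').upload(rec.filePath);
  activeRecordings.delete(sessionId);
}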

My problem is with videos of 0 bytes. It seems to happen about 3 times out of 50, on average.

The problem starts when the pod restarts. I have just a single pod (which is enough for my CPU and memory requirements; I currently stream from only 20 cameras). There is also a constraint: if I were to have two pods, one on VM1 and the other on VM2, I would need to know which VM (or pod) a recording process was allocated to in order to kill it. I will have to increase the number of pods in the near future, so I'll face this problem very soon.
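
For illustration, one hypothetical way to know which pod a recording process lives in would be to tag each session with the pod's hostname (Kubernetes sets a pod's hostname to the pod name); this is not something the service does today:

const os = require('os');

// Hypothetical: remember which pod started each recording, so a logout
// event can be matched with (or routed to) the pod that owns the process.
function registerSession(sessionId, sessions) {
  sessions.set(sessionId, { pod: os.hostname() }); // hostname == pod name on Kubernetes
}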

Whenever the pod restarts (for reasons that are still unknown to me; I have checked the container's audit logs to find out why, but that didn't help much, and the resources I have requested are sufficient for my load), any processes in which recording is actively happening are lost and those recordings fail. I believe this is why I see the 0-byte videos.

How do I ensure that, if the pod restarts, the active processes are not killed immediately? Or is there a way to delay the pod restart until the current recordings finish (I highly doubt this is possible, though)?
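
For context, Kubernetes sends SIGTERM to the container and waits terminationGracePeriodSeconds (30 seconds by default) before sending SIGKILL, so one hypothetical approach (reusing the activeRecordings map and stopRecording helper from the sketch above) would be to finish in-flight recordings in a SIGTERM handler:

// Hypothetical: on SIGTERM, finish and upload in-flight recordings before exiting.
// This only helps if recordings can be finalized and uploaded within the grace period.
process.on('SIGTERM', async () => {
  const pending = [...activeRecordings.keys()].map((sessionId) => stopRecording(sessionId));
  await Promise.all(pending);
  process.exit(0);
});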

-- farhan
google-kubernetes-engine
kubernetes-pod
video-streaming

1 Answer

12/31/2019

If you have multiple pods and want to know which nodes the pods belong to, you can use the command below:

$ kubectl get pods -o wide

To resolve the issue, you first need to determine the reason for the pod failure. You can describe the pod by issuing the command below:

$ kubectl describe pod your-pod-name

This will show you the events sent by the kubelet to the apiserver about the lifecycle of the pod. You will get the reasons for the pod failure from here, so that you can take action accordingly. Additionally, you can use the command below to get the events of pods in the default namespace:

$ kubectl get events -n default

For more information, please follow the linked article.

-- Mahboob
Source: StackOverflow