I'm using Kubernetes, and I want to create a tool that can run tcpdump on some interfaces.
The problem is that this tool is controlled from container_A in POD_A, which doesn't have tcpdump installed (and I can't change that at this point), while tcpdump can be executed from container_B in POD_B (a different pod).
So the solution I chose is to run tcpdump in container_B in POD_B and then move the dump file from container_B to container_A.
For this I used something close to the solution provided here:
https://stackoverflow.com/questions/59703610/copy-file-from-pod-to-host-by-using-kubernetes-python-client
The problem now is that, to use this solution, the dump file has to be closed in the origin container before I can copy it to the destination container. This can cause high disk usage on container_A, which has little disk space attached to it, while container_B has plenty of disk space attached.
I will paste the code from the link here for reference:
<!-- language: python -->

    import tarfile
    from tempfile import TemporaryFile

    from kubernetes.stream import stream
    # WSFileManager, logger and manage_kubernetes_exception come from the
    # linked answer and its surrounding project.

    def stream_copy_from_pod(self, pod_name, name_space, source_path, destination_path):
        """
        Copy file from pod to the host.

        :param pod_name: String. Pod name
        :param name_space: String. Namespace
        :param source_path: String. Source file path in the pod
        :param destination_path: String. Host destination file path
        :return: bool
        """
        command_copy = ['tar', 'cf', '-', source_path]
        with TemporaryFile() as tar_buffer:
            exec_stream = stream(self.coreClient.connect_get_namespaced_pod_exec,
                                 pod_name, name_space,
                                 command=command_copy, stderr=True, stdin=True,
                                 stdout=True, tty=False,
                                 _preload_content=False)
            # Copy the tar stream from the pod into the local buffer
            try:
                reader = WSFileManager(exec_stream)
                while True:
                    out, err, closed = reader.read_bytes()
                    if out:
                        tar_buffer.write(out)
                    elif err:
                        logger.debug("Error copying file {0}".format(
                            err.decode("utf-8", "replace")))
                    if closed:
                        break
                exec_stream.close()
                tar_buffer.flush()
                tar_buffer.seek(0)
                with tarfile.open(fileobj=tar_buffer, mode='r:') as tar:
                    # tar strips the leading '/' from member names
                    member = tar.getmember(source_path.lstrip('/'))
                    tar.makefile(member, destination_path)
                return True
            except Exception as e:
                raise manage_kubernetes_exception(e)
My question is: can I somehow use the exec_stream, with an adaptation of the example above (or another kubernetes-client Python API), and a tcpdump command on container_B to stream the capture straight into a tarball (or another type of file) on container_A?
My assumption is that I just need to open a stream to container_B and send command="tcpdump ..." in a way that streams the capture to that container's stdout, then "fetch" and "handle" that stdout from the stream into a file on container_A. But is there a better way to do it?
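A minimal sketch of that assumption, using only calls I believe exist in the official kubernetes Python client (`stream(...)` with `_preload_content=False`, plus `is_open()`, `update()` and `read_stdout()` on the returned WebSocket client). The function and parameter names here are my own invention, not part of any API; note the caveat in the comments that `read_stdout()` returns decoded text, so a real binary pcap stream would need a raw-bytes reader like the `WSFileManager` helper from the linked answer:

```python
def build_capture_command(interface, packet_count=None):
    """Build the remote tcpdump invocation: -U flushes each packet as it
    is captured, and '-w -' writes the raw pcap stream to stdout."""
    cmd = ['tcpdump', '-i', interface, '-U', '-w', '-']
    if packet_count is not None:
        cmd += ['-c', str(packet_count)]  # stop after N packets
    return cmd

def stream_capture_to_local_file(core_v1, pod_name, namespace,
                                 interface, destination_path):
    """Run tcpdump in the remote container and write its stdout to a
    local file as it arrives, so nothing accumulates remotely."""
    # Imported here so build_capture_command stays importable even
    # without the kubernetes package installed.
    from kubernetes.stream import stream

    resp = stream(core_v1.connect_get_namespaced_pod_exec,
                  pod_name, namespace,
                  command=build_capture_command(interface),
                  stderr=True, stdin=False, stdout=True, tty=False,
                  _preload_content=False)
    with open(destination_path, 'wb') as capture_file:
        while resp.is_open():
            resp.update(timeout=1)
            out = resp.read_stdout()
            if out:
                # CAVEAT: read_stdout() returns decoded text; for a
                # binary pcap stream you would instead read raw frames,
                # e.g. with the WSFileManager helper from the link.
                capture_file.write(out.encode('utf-8', 'replace'))
    resp.close()
```

Since the data is written locally as it is read from the WebSocket, the capture never needs to be closed (or even stored) on container_B first.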
From what I get, you want a container that runs tcpdump alongside a single application container. That means you want two tightly coupled containers.
You are in luck, because this is exactly the use case of a multi-container Pod!
Put your two containers in the same Pod with an EmptyDir volume and both containers will be able to access it. This type of volume stores data on the node and is ephemeral. In my opinion this is simpler than transferring data between two separate pods that may land on two very distant nodes.
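A sketch of what that Pod could look like, built as a plain-dict manifest in Python. The names (`app-with-capture`, the `/captures` mount path, the image arguments) are all illustrative, not prescribed; the `NET_RAW` capability is added because tcpdump needs raw-socket access, and since containers in a Pod share the network namespace, the sidecar sees the application's traffic directly:

```python
def tcpdump_sidecar_pod(app_image, capture_image, interface='eth0'):
    """Return a Pod manifest (plain dict) with an application container
    and a tcpdump sidecar sharing an emptyDir volume at /captures."""
    shared_mount = {'name': 'captures', 'mountPath': '/captures'}
    return {
        'apiVersion': 'v1',
        'kind': 'Pod',
        'metadata': {'name': 'app-with-capture'},
        'spec': {
            'containers': [
                {
                    'name': 'app',
                    'image': app_image,
                    'volumeMounts': [shared_mount],
                },
                {
                    'name': 'tcpdump',
                    'image': capture_image,  # any image that ships tcpdump
                    'command': ['tcpdump', '-i', interface,
                                '-w', '/captures/dump.pcap'],
                    # tcpdump needs raw-socket access
                    'securityContext': {'capabilities': {'add': ['NET_RAW']}},
                    'volumeMounts': [shared_mount],
                },
            ],
            # Ephemeral, node-local storage shared by both containers
            'volumes': [{'name': 'captures', 'emptyDir': {}}],
        },
    }
```

The app container can then read `/captures/dump.pcap` from the same filesystem, with no cross-pod copy at all.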
Eventually I decided to use Async Thrift for the "management" of this operation (I created a client-server setup) rather than the Python kubernetes library.