How to access local machine from a pod

2/18/2019

I have a pod created on the local machine. I also have a script file on the local machine. I want to run that script file from the pod (I will be inside the pod and run the script present on the local host).

That script will update the /etc/hosts of another pod. Is there a way I can update the /etc/hosts of one pod from another pod? The pods are created from two different deployments.

-- jaya rohith
docker
kubernetes

2 Answers

2/18/2019

As an addition to David's answer: you can copy a script from your host to a pod using kubectl cp:

kubectl cp [file-path] [pod-name]:/[path]
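For example, assuming a pod named my-pod and a script script.sh in the current directory (both names are hypothetical), the copy and a subsequent run could look like this:

# copy the script from the host into the pod's filesystem
kubectl cp ./script.sh my-pod:/tmp/script.sh

# run the copied script inside the pod
kubectl exec my-pod -- sh /tmp/script.sh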

As for your question in the comment: you can do it by exposing a deployment:

kubectl expose deployment/name

This will create a Service; you can find more practical examples and approaches in the Kubernetes documentation on connecting applications with Services. Thus, even after a specific Pod terminates, you can still reach the new Pods through the same Service and port.

In the example from the documentation, an nginx Pod is created with container port 80, and the expose command has the following effect:

This specification will create a Service which targets TCP port 80 on any Pod with the run: my-nginx label, and expose it on an abstracted Service port (targetPort: is the port the container accepts traffic on, port: is the abstracted Service port, which can be any port other pods use to access the Service). View Service API object to see the list of supported fields in service definition
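As a small sketch of the same idea, assuming a deployment named my-nginx whose container listens on port 80 (the names and ports here follow the documentation example), the expose command and a quick check might look like this:

# expose container port 80 (targetPort) on Service port 8080 (port)
kubectl expose deployment/my-nginx --port=8080 --target-port=80

# confirm the Service and its cluster IP were created
kubectl get service my-nginx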

Other than that, it seems like David provided a really good explanation here, and it would be worth finding out more about FQDNs and DNS, which also ties in with Services.

-- aurelius
Source: StackOverflow

2/18/2019

I want to run that script file from the pod (I will be inside the pod and run the script present on the local host).

You can't do that. In a plain Docker context, one of Docker's key benefits is filesystem isolation, so the container can't see the host's filesystem at all unless parts of it are explicitly published into the container. In Kubernetes not only is there this restriction, but you also have limited control over which node you're running on, and there's potential trouble if one node has a given script and another doesn't.
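To illustrate that in plain Docker terms: the only way for a container to see a host file is to explicitly publish it, for example with a bind mount (the paths and image below are hypothetical):

# without the -v bind mount, the container cannot see the host's script.sh at all
docker run --rm -v "$(pwd)/script.sh:/script.sh" busybox sh /script.sh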

Is there a way I can update the /etc/hosts of one pod from another pod?

As a general rule, you should avoid using /etc/hosts for anything. Setting up a DNS service keeps things consistent and avoids having to manually edit files in a bunch of places.

Kubernetes provides a DNS service for you. In particular, if you define a Service, then the name of that Service will be visible as a DNS name (within the cluster); one pod can reach the other via first-service-name.default.svc.cluster.local. That's probably the answer you're actually looking for.
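As a minimal sketch, assuming a Service named first-service-name exists in the default namespace and you have some other pod with nslookup available (the pod name here is hypothetical), you can verify the DNS name like this:

# resolve the first Service's cluster DNS name from inside another pod
kubectl exec my-other-pod -- nslookup first-service-name.default.svc.cluster.local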

(If you really only have a single-node environment then Kubernetes adds a lot of complexity and not much benefit; consider plain Docker and Docker Compose instead.)

-- David Maze
Source: StackOverflow