What is the relationship between EXPOSE in the Dockerfile, targetPort in the Service YAML, and the actual running port in the Pod?

8/19/2019

In my dockerfile

EXPOSE 8080

in my deployment

ports:
  - containerPort: 8080

In my service

apiVersion: v1
kind: Service
metadata:
  name: xtys-web-admin
spec:
  type: NodePort
  ports:
    - port: 8080
      targetPort: 8080
  selector:
    app: xtys-web-admin

In my pod

kubectl exec xtys-web-admin-7b79647c8d-n6rhk -- ss -tnl
State      Recv-Q Send-Q        Local Address:Port          Peer Address:Port 
LISTEN     0      100                       *:8332                     *:*     

So the pod is actually listening on 8332 (set by some config file). My question is: how does it still work? It does work, but I'm doubtful; can someone clarify?

-- adrian ding
kubernetes

3 Answers

8/20/2019

In the Dockerfile, EXPOSE is documentation from the image creator to those running the image about how the image has been configured. It sets metadata in the image that you can inspect, but otherwise does not impact how Docker configures networking between containers. (Many confuse this with publishing a port on the host, which is very different from exposing the port. Publishing a port in Docker actually creates a mapping to allow the container to be accessed externally.)
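
A rough sketch of that distinction (the image name here is made up):

# Dockerfile: EXPOSE only records metadata; nothing is opened on the host
EXPOSE 8080

# at runtime, -p is what actually publishes the port (host 8080 -> container 8080)
docker run -d -p 8080:8080 my-image:latest

# without -p (or -P), the exposed port is metadata only and is not reachable from the host
docker run -d my-image:latest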

The value of containerPort is a runtime equivalent of EXPOSE, used to expose a port that was not specified in the image. This, again, is documentation only, but may be used by other tooling that inspects running images to self-configure. I've mostly seen this used by reverse proxies that default to the exposed port if you do not specify a port to connect to.

It is possible for someone to configure an image to listen on a different port number than the image creator documented in their EXPOSE. For example, the nginx image documents that it listens on port 80 with its default configuration, but you could provide your own nginx.conf file and reconfigure it to listen on port 8080 inside the container instead (e.g. if you did not want to run nginx as root).
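
A sketch of that reconfiguration, assuming you have a complete nginx.conf whose server block listens on 8080:

# run nginx with your own config that listens on 8080 instead of the documented 80
docker run -d -p 8080:8080 \
  -v "$PWD/nginx.conf:/etc/nginx/nginx.conf:ro" \
  nginx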


Now for the service side:

The value of targetPort in a Kubernetes service needs to refer to the port the running container is actually listening on. Typically this is the same as the exposed port, but if you reconfigure your application like in the example above, you would set targetPort to 8080 instead of 80.

The value of port in a Kubernetes service is the port the service itself listens on. For inter-container communication, you need to connect on this port, and it will often be the same as the targetPort to reduce confusion.

Lastly, the value of nodePort in a Kubernetes service is the port published on the nodes for you to externally access your container. By default, it is allocated from the node port range, 30000-32767.
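
Putting those three together, a sketch for the reconfigured-nginx example above (the name and the numbers are illustrative):

apiVersion: v1
kind: Service
metadata:
  name: web                # illustrative name
spec:
  type: NodePort
  selector:
    app: web
  ports:
    - port: 80             # what other pods connect to, via the service
      targetPort: 8080     # the port the container is actually listening on
      nodePort: 30080      # published on every node; must fall in the node port range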

-- BMitch
Source: StackOverflow

8/19/2019

Docker does not publish any ports by default, for obvious security reasons, so by default you cannot access an app running in a Docker container from outside.

By exposing a port in Docker, you are giving the user (whoever will use your image) the ability to access your application through the exposed port.

Let's say you build a Docker image with your application running on port 8080 and a MySQL database running on port 3306. You don't want users to access the MySQL database directly, so you expose only port 8080.

Then a user can map a local port to the exposed port with docker run -p 80:8080 your-image:tag

This maps local port 80 to the container's port 8080 (on which your application is running), so any request made to localhost:80 will be served by your application.
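
For example (the image name is assumed):

# publish host port 80 -> container port 8080
docker run -d -p 80:8080 your-image:tag

# this request reaches the app listening on 8080 inside the container
curl http://localhost/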

When you set port and targetPort in the service YAML, it does much the same thing as the docker run -p command above: traffic sent to service:port is forwarded to container:targetPort.
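
A loose analogy, not an exact equivalence (the numbers are just examples):

# roughly analogous to "docker run -p 80:8080 your-image:tag":
# traffic to <service-ip>:80 is forwarded to the pod on port 8080
ports:
  - port: 80
    targetPort: 8080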

Please check the Docker docs to understand port forwarding.

This Connecting Applications with Services page might be a good resource to read.

PS: When you use type: NodePort, the nodePort value must fall within the cluster's node port range, 30000-32767 by default.

HTH.

-- Nirav
Source: StackOverflow

8/20/2019

All of these things must agree, and refer to the same port:

  • The actual port the server process inside the container is listening on
  • The containerPort: in the pod spec
  • Port numbers in readiness and liveness probes in the pod spec (can use the name: of the port)
  • The targetPort: in the service spec (can use the name: of the port in the pod spec)

The Dockerfile's EXPOSE line should name the same port as well, but it's not strictly required.
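
A sketch of how these can line up, using a named port (the names, image, and numbers are illustrative):

# in the pod spec (Deployment template) -- fragment only
containers:
  - name: web
    image: your-image:tag
    ports:
      - name: http
        containerPort: 8080      # must match the port the process actually listens on
    readinessProbe:
      httpGet:
        path: /
        port: http               # probes can refer to the port by its name

# in the Service spec -- fragment only
ports:
  - port: 80
    targetPort: http             # targetPort can also refer to the containerPort by name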

In the service spec, the port: is the port number that other pods can use to reach this service. (I like setting port: 80 for all HTTP-type services, even if the pod uses port 8000 or 8080 or 3000 or whatever else.) For a NodePort-type service there's a third nodePort: number, usually in the 30000-32767 range, that is opened on every node in the cluster and also routes to the service.

In the example you show, if the process inside the container is listening on port 8332 but the pod spec lists containerPort: 8080, I'd expect calls through the service to fail, but maybe not until you actually make a network call (the Kubernetes-level setup would work). If you had a readiness probe that targeted the port, the pod would never show as "ready"; if you had a liveness probe, it would get restarted and eventually reach CrashLoopBackOff state.

-- David Maze
Source: StackOverflow