I have a multi-container pod deployment that exposes port 8080. The port is accessible inside the container through localhost, but not through the pod IP: when I telnet to localhost from inside the pod I can connect, but when I telnet to the pod IP listed in /etc/hosts I get "connection refused".
deployment.yaml

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: test
  namespace: yara
  labels:
    component: test-multi-container-pod
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: test
    spec:
      serviceAccountName: test
      containers:
      - name: container-1
        image: "gcr.io/projectID/my-image1:v1.9.3"
        pullPolicy: "IfNotPresent"
        resources:
          limits:
            cpu: 1000m
            memory: 2Gi
          requests:
            cpu: 500m
            memory: 2Gi
      - name: container2
        image: "gcr.io/projectID/my-image2:0.0.107"
        pullPolicy: "IfNotPresent"
        securityContext:
          runAsUser: 0
        resources:
          limits:
            cpu: 1000m
            memory: 2Gi
          requests:
            cpu: 500m
            memory: 2Gi
      - name: "app-container"
        ## nodejs image that exposes ports 3000 & 8080
        image: "gcr.io/projectID/node:8.9.4_1804082101"
        workingDir: "/usr/src/app"
        pullPolicy: "Always"
        command: ["tail", "-f", "/dev/null"]
        ports:
        - name: http
          containerPort: 3000
        - name: graphql
          containerPort: 8080
        resources:
          limits:
            cpu: 1500m
            memory: 2Gi
          requests:
            cpu: 1500m
            memory: 2Gi
service.yaml

apiVersion: v1
kind: Service
metadata:
  name: test-app
  namespace: "yara"
  labels:
    component: test-multi-container-pod
spec:
  type: NodePort
  ports:
  - protocol: TCP
    name: http
    port: 3000
    targetPort: http
  - protocol: TCP
    name: graphql
    port: 8080
    targetPort: graphql
  selector:
    component: test-multi-container-pod
The command option in the Pod spec overrides the ENTRYPOINT of the Docker container, which is why you are actually running tail instead of your application:

  - name: "app-container"
    ...
    command: ["tail", "-f", "/dev/null"]
According to the documentation, command in Kubernetes overrides the Docker container's ENTRYPOINT with the following rules: if you supply command but no args, only command runs and the image's ENTRYPOINT and CMD are both ignored; if you supply only args, they are passed as arguments to the image's ENTRYPOINT; if you supply both, the image's ENTRYPOINT and CMD are ignored and your command runs with your args.
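Given that override behaviour, a minimal fix is to drop the command override so the image's own ENTRYPOINT/CMD starts the Node process, or to point command at the real start command. A sketch of the corrected container spec, assuming the fields from your deployment.yaml; "server.js" below is a placeholder for your actual entry file:

```yaml
- name: "app-container"
  image: "gcr.io/projectID/node:8.9.4_1804082101"
  workingDir: "/usr/src/app"
  # Either remove `command:` entirely to fall back to the image's
  # ENTRYPOINT/CMD, or set it to the real start command
  # ("server.js" is a placeholder for your app's entry file):
  command: ["node", "server.js"]
  ports:
  - name: http
    containerPort: 3000
  - name: graphql
    containerPort: 8080
```

Once the app itself is the process listening on 8080, the port becomes reachable on the Pod IP as well as on localhost.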
All containers in a Pod share the same network namespace. It is as if the processes from all containers in the Pod ran on the same host, each able to bind only to ports not already occupied by another process in the Pod. In practice, if you configure two containers that bind to the same port, one of them fails to start with an error like: "[emerg] 1#1: bind() to 0.0.0.0:80 failed (98: Address already in use)".
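That "Address already in use" failure is easy to reproduce with two plain sockets sharing one network namespace; a minimal Python sketch, not Kubernetes-specific:

```python
import errno
import socket

# The first socket grabs a port (port 0 = let the OS pick a free one),
# playing the role of the first container's server process.
first = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
first.bind(("0.0.0.0", 0))
first.listen(1)
port = first.getsockname()[1]

# A second socket in the same network namespace tries the same port:
# it fails exactly like the second container in the Pod would.
second = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
in_use = False
try:
    second.bind(("0.0.0.0", port))
except OSError as e:
    in_use = e.errno == errno.EADDRINUSE  # errno 98 on Linux
finally:
    second.close()
    first.close()

print(in_use)  # -> True
```

This is the same constraint that makes two containers in one Pod unable to listen on the same port.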
If you need a particular container's process to be found and accessed by other Pods and Services, you can describe it with the ports: directive in the Pod spec. This gives the system additional information about the network connections a container uses, but it is primarily informational: not specifying a port there does not prevent that port from being exposed. Any port on which a process listens on the default "0.0.0.0" address inside a container is reachable from the network via the Pod IP, and from the other containers in the Pod via localhost.
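The bind-address distinction is easy to check with a plain socket, independent of Kubernetes: a server bound to 0.0.0.0 answers on every interface (the Pod IP included), while one bound to 127.0.0.1 answers only on loopback, which produces exactly the "works on localhost, connection refused on the Pod IP" symptom. A Python sketch:

```python
import socket

# A server that binds to 127.0.0.1 is reachable only via loopback;
# binding to 0.0.0.0 instead would make it reachable on every address
# of the host (or Pod), including the Pod IP.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)
port = server.getsockname()[1]

# Connecting via loopback succeeds (the handshake completes against
# the listen backlog even before accept() is called)...
client = socket.create_connection(("127.0.0.1", port), timeout=2)
connected = True
client.close()

# ...but the same port on any non-loopback address of this host would
# be refused, which is the behaviour described in the question.
server.close()
print(connected)  # -> True
```

So when debugging, check which address your process actually listens on, not just the port.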
So, the response you received from localhost:8080 could have been delivered by another container in the Pod that binds to that port.
You can find a good explanation of the Pod networking in this article.