How do you configure a TCP liveness probe with container-to-container networking in a k8s pod?

9/20/2017

I have noticed that containers in a pod can use localhost to talk to each other, as advertised. For example, one container starts a server socket on localhost:9999 and a second container can connect to that address. This fails if I expose the server container's port, and it also fails if I create a TCP liveness probe on that port. It appears that the liveness probe uses the pod IP address and cannot connect to localhost:9999 unless the port is exposed. If both containers use the pod IP, i.e. $HOSTNAME:9999, and the port is exposed, then everything works. Does anyone have an example that works where each container uses localhost and the TCP probe works?
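
For reference, this is the shape of the failing setup; my-server and its --bind flag are hypothetical stand-ins for any server that binds to loopback only:

containers:
- name: server
  image: my-server                                      # hypothetical image
  command: ['my-server', '--bind', '127.0.0.1:9999']    # loopback only
  livenessProbe:
    tcpSocket:
      port: 9999    # the kubelet dials the pod IP, so this connection fails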

-- dturanski
kubernetes
sockets
tcp

1 Answer

9/21/2017

Here is an example Deployment that uses a TCP liveness probe, a TCP readiness probe, and container-to-container networking inside the pod, with the server container's port exposed. The important detail is that the server listens on all interfaces rather than on 127.0.0.1 only (nc does this by default): the kubelet runs TCP probes against the pod IP, while the client container can still reach the same socket via localhost.

test.yml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: test
spec:
  selector:
    matchLabels:
      app: test
  template:
    metadata:
      labels:
        app: test
    spec:
      containers:
      - name: server
        image: alpine
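        # keep listening on TCP 8080 and answer each connection with "pong"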
        command:
        - '/bin/sh'
        - '-c'
        - 'nc -p 8080 -kle echo pong'
        livenessProbe:
          tcpSocket:
            port: 8080
        readinessProbe:
          tcpSocket:
            port: 8080
        ports:
        - containerPort: 8080
      - name: client
        image: alpine
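        # connect to the server over localhost inside the pod, once per second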
        command:
        - '/bin/sh'
        - '-c'
        - 'while true; do echo -e | nc localhost 8080; sleep 1; done'

Creating and verifying the deployment:

> kubectl create -f test.yml
> kubectl get pod -l app=test
NAME                   READY     STATUS    RESTARTS   AGE
test-620943134-fzm05   2/2       Running   0          1m
> kubectl logs test-620943134-fzm05 client
pong
pong
pong
[…]
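
The probe succeeds because nc, given no bind address, listens on all interfaces: the kubelet dials the pod IP while the client dials localhost, and both reach the same socket. You can approximate the kubelet's TCP probe by hand, assuming you can reach the pod network (for example from one of the nodes):

> POD_IP=$(kubectl get pod test-620943134-fzm05 -o jsonpath='{.status.podIP}')
> echo | nc -w 1 "$POD_IP" 8080

This should answer with pong, just like the in-pod client. If a server insists on binding one explicit address instead of 0.0.0.0, the pod IP can be injected through the Downward API instead of relying on $HOSTNAME; a minimal sketch for the server container:

env:
- name: POD_IP
  valueFrom:
    fieldRef:
      fieldPath: status.podIP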
-- Simon Tesar
Source: StackOverflow