Kubernetes talking to specific ports externally

2/13/2020

From a pod inside a namespace, I can make certain external connections, but some appear to be blocked. Due to restrictions I cannot run Wireshark or tcpdump inside my namespace, so I set up some tests against the destinations instead.

Here is a sample of the tests:
nc -v -z -w 2 machinename 445 - works like a champ
nc -v -z -w 2 machinename 80 - works like a champ
nc -v -z -w 2 machinename 8080 - works
nc -v -z -w 2 machinename 5985 - fails with a timeout

Outside of Kubernetes, from a bare-metal machine, nc -v -z -w 2 machinename 5985 works.
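
For comparison, the same checks can be repeated from inside the cluster; this is a rough sketch, and the pod and namespace names here are placeholders:

# Hypothetical pod/namespace names - substitute your own.
# Run the same nc checks from inside the pod to compare with the bare-metal results.
kubectl exec -n my-namespace my-pod -- nc -v -z -w 2 machinename 445
kubectl exec -n my-namespace my-pod -- nc -v -z -w 2 machinename 5985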

So I looked at my egress policy:

apiVersion: extensions/v1beta1
kind: NetworkPolicy
metadata:
  name: allow-all
spec:
  podSelector: {}
  egress:
  - {}
  policyTypes:
  - Egress
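
To confirm this is the only policy applying to the pod, something like the following should list and describe every NetworkPolicy in the namespace (the namespace name is a placeholder):

# List all NetworkPolicies in the namespace and inspect the allow-all policy.
kubectl get networkpolicy -n my-namespace
kubectl describe networkpolicy allow-all -n my-namespace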

When I run a network monitor on the target machine, I see the working calls fine, but the call to 5985 never shows up. This port is just an example: a Python service calls 5985, and I have a few others in the same state, but I am picking on a well-known port (WinRM). My belief is that the traffic is blocked outright from the pod and never escapes the namespace.

I can rule out routing, since other ports work. I can rule out the egress policy, since it is wide open. And since this is all TCP traffic, I can rule out a UDP issue.

What I don't know is why only specific TCP ports are blocked. I have two other ports, used by another application, that are in the same state.

So it's not all ports, just some, with no pattern that I can see.

If anyone has ideas on what to look at, please let me know. I have searched and searched, and so far all I can find are ingress solutions for exposing ports, not cases of a service in a pod having trouble calling out.

-- Kmac
kubernetes

2 Answers

2/14/2020

If your cluster uses the Calico CNI, check whether any GlobalNetworkPolicies (https://docs.projectcalico.org/v3.11/reference/resources/globalnetworkpolicy) with higher priority (lower order) have been created that restrict these specific ports.
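
For illustration, assuming calicoctl is available, the existing global policies can be listed with:

calicoctl get globalnetworkpolicy -o yaml

A global policy that would produce exactly this symptom might look roughly like the sketch below (the name, order, and selector are made up); a Deny rule at a lower order wins over the namespace-level allow-all policy:

apiVersion: projectcalico.org/v3
kind: GlobalNetworkPolicy
metadata:
  name: deny-winrm-egress   # hypothetical name
spec:
  order: 10                 # lower order = evaluated first
  selector: all()
  types:
  - Egress
  egress:
  - action: Deny            # drop outbound TCP 5985 from all workloads
    protocol: TCP
    destination:
      ports:
      - 5985
  - action: Allow            # everything else passes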

-- anmol agrawal
Source: StackOverflow

2/26/2020

The answer turned out to be a router rule on those ports. I ran tcpdump on the destination side and captured the traces coming in. The admins also placed a root-accessible netshoot pod into the namespace, which let me capture the outbound traffic. Using Wireshark I was able to identify a rule in the router that blocked those ports from getting through.
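
For anyone hitting the same thing, a rough sketch of that debugging approach (the image, namespace, and file names are examples, not necessarily what the admins used):

# Start a throwaway netshoot pod in the affected namespace.
kubectl run tmp-shell -n my-namespace --rm -it --image=nicolaka/netshoot -- /bin/bash

# Inside the pod: capture outbound traffic to the suspect port.
tcpdump -ni eth0 -w /tmp/out.pcap tcp port 5985

# From another terminal: copy the capture out and open it in Wireshark.
kubectl cp my-namespace/tmp-shell:/tmp/out.pcap ./out.pcap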

-- Kmac
Source: StackOverflow