Kubernetes Service does not forward to ports other than 80 and 443

7/2/2019

Cluster setup:

  • OS: Ubuntu 18.04, w/ Kubernetes recommended install settings
  • Cluster is bootstrapped with Kubespray
  • CNI is Calico

Quick Facts (when the redis Service IP is 10.233.90.37; the pod-side checks can be reproduced with the throwaway debug pod shown below):

  • Host machine: psql 10.233.90.37:6379 => success
  • Host machine: psql 10.233.90.37:80 => success
  • Pods (in any namespace): psql 10.233.90.37:6379 => timeout
  • Pods (in any namespace): psql redis:6379 => timeout
  • Pods (in any namespace): psql redis.namespace.svc.cluster.local => timeout
  • Pods (in any namespace): psql redis:80 => success
  • Pods (in any namespace): psql redis.namespace.svc.cluster.local:80 => success
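
For reference, the pod-side checks can be reproduced with a throwaway debug pod; a minimal sketch (the image and flags are just one option, any image with curl or nc works):

```bash
# Start a disposable pod with a shell (curlimages/curl is one option)
kubectl run -it --rm nettest --image=curlimages/curl --restart=Never --command -- sh

# Inside the pod: curl's telnet:// scheme gives a plain TCP connect test
curl -v --max-time 5 telnet://10.233.90.37:6379   # hangs / times out
curl -v --max-time 5 telnet://10.233.90.37:80     # connects
curl -v --max-time 5 telnet://redis.namespace.svc.cluster.local:6379
```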

A Kubernetes Service (NodePort, LoadBalancer, or ClusterIP) will not forward ports other than 80 and 443 to pods. The target (pod) ports can be anything, but requests to the Service time out if the Service port is not 80 or 443.

Requests from the host machine to a Kubernetes Service on ports other than 80 and 443 work. BUT requests from pods to these other ports fail.

Requests from pods to services on ports 80 and 443 do work.

```bash
# On the host machine
user@host: curl 10.233.90.37:80
200 OK
user@host: curl 10.233.90.37:5432
200 OK
```

```bash
# ... exec into a Pod
bash-4.4# curl 10.233.90.37:80
200 OK
bash-4.4# curl 10.233.90.37:5432
Error ... timeout ...
```

There are no NetworkPolicy or PodSecurityPolicy resources in the cluster:

```bash
user@host: kubectl get NetworkPolicy -A
No resources found.
user@host: kubectl get PodSecurityPolicy -A
No resources found.
```
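
A couple of other data points that might narrow this down (commands shown for reference; the namespace name matches the example Service below, and `<redis-pod-ip>` is a placeholder):

```bash
# Did kube-proxy program rules for the 6379 Service port at all?
sudo iptables-save | grep 6379

# Is the redis pod reachable directly on its pod IP (bypassing the Service)?
kubectl get pod -n namespace -o wide        # note the redis pod IP
# then, from inside another pod:
curl -v --max-time 5 telnet://<redis-pod-ip>:6379
```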

Example service:

```yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    app: redis
  name: redis
  namespace: namespace
spec:
  ports:
  - port: 6379
    protocol: TCP
    targetPort: 6379
    name: redis
  - port: 80
    protocol: TCP
    targetPort: 6379
    name: http
  selector:
    app: redis
  type: NodePort
```

I've tried ClusterIP, NodePort, and LoadBalancer.
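
For what it's worth, the Service wiring (ports and endpoints) can be double-checked against this spec with:

```bash
kubectl describe svc redis -n namespace     # shows Port, TargetPort, NodePort, Endpoints
kubectl get endpoints redis -n namespace    # should list the redis pod IP on port 6379
```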

What's going on with this crazy Kubernetes Service port behavior!?

After debugging, I've found that it may be related to ufw and iptables config.

ufw settings (very permissive):

```
Status: enabled
80                         ALLOW       Anywhere
443                        ALLOW       Anywhere
6443                       ALLOW       Anywhere
2379                       ALLOW       Anywhere
2380                       ALLOW       Anywhere
10250/tcp                  ALLOW       Anywhere
10251/tcp                  ALLOW       Anywhere
10252/tcp                  ALLOW       Anywhere
10255/tcp                  ALLOW       Anywhere
179                        ALLOW       Anywhere
5473                       ALLOW       Anywhere
4789                       ALLOW       Anywhere
10248                      ALLOW       Anywhere
22                         ALLOW       Anywhere
80 (v6)                    ALLOW       Anywhere (v6)
443 (v6)                   ALLOW       Anywhere (v6)
6443 (v6)                  ALLOW       Anywhere (v6)
2379 (v6)                  ALLOW       Anywhere (v6)
2380 (v6)                  ALLOW       Anywhere (v6)
10250/tcp (v6)             ALLOW       Anywhere (v6)
10251/tcp (v6)             ALLOW       Anywhere (v6)
10252/tcp (v6)             ALLOW       Anywhere (v6)
10255/tcp (v6)             ALLOW       Anywhere (v6)
179 (v6)                   ALLOW       Anywhere (v6)
5473 (v6)                  ALLOW       Anywhere (v6)
4789 (v6)                  ALLOW       Anywhere (v6)
10248 (v6)                 ALLOW       Anywhere (v6)
22 (v6)                    ALLOW       Anywhere (v6)
```

Kubespray deployment fails with ufw disabled, and succeeds with ufw enabled.

Once the cluster is deployed, disabling ufw allows pods to connect on ports other than 80 and 443. However, the cluster crashes when ufw is disabled.

Any idea what's going on? Am I missing a port in the ufw config? It seems weird that ufw would be required for the Kubespray install to succeed.
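
In case it's useful, this is the kind of ufw change I'm considering trying next. The CIDRs below are Kubespray's defaults (kube_pods_subnet / kube_service_addresses) and may not match this cluster, so treat it as a guess rather than a confirmed fix:

```bash
# Assumed Kubespray defaults -- verify against the inventory group_vars
POD_CIDR=10.233.64.0/18
SVC_CIDR=10.233.0.0/18

# Allow traffic originating from the pod and service networks
sudo ufw allow from "$POD_CIDR"
sudo ufw allow from "$SVC_CIDR"

# ufw's default FORWARD policy is DROP, which can silently drop routed
# pod <-> service traffic; 'ufw route' rules (or DEFAULT_FORWARD_POLICY
# in /etc/default/ufw) control that chain
sudo ufw route allow from "$POD_CIDR"
sudo ufw route allow from "$SVC_CIDR"
sudo ufw reload
```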

-- Shain Lafazan
cni
coredns
devops
kubernetes
project-calico

1 Answer

7/2/2019

A LoadBalancer Service exposes one external IP that external clients use to connect to your app. In most cases you would expect your LoadBalancer Service to listen on port 80 for HTTP traffic and port 443 for HTTPS, because you would want your users to type http://yourapp.com or https://yourapp.com instead of http://yourapp.com:3000.

It looks like you are mixing different Service types in your example Service YAML; for example, nodePort is only used when the Service is of type NodePort. You may try:

```yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    app: redis
    role: master
    tier: backend
  name: redis
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 6379    # the Service targets containers on port 6379
    name: some-name
  selector:
    app: redis
    role: master
    tier: backend
  type: LoadBalancer
```
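
With this Service, clients connect to the Service port (80), and kube-proxy forwards to the container port (6379). For example (redis-cli is just an illustration, any client works; `<EXTERNAL-IP>` is a placeholder):

```bash
# From another pod in the same namespace: use the Service port 80, not 6379
redis-cli -h redis -p 80 ping

# From outside the cluster, once the LoadBalancer has an external IP:
kubectl get svc redis                 # note EXTERNAL-IP
redis-cli -h <EXTERNAL-IP> -p 80 ping
```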
-- Rajesh Gupta
Source: StackOverflow