Kubernetes service responding on different port than assigned port

12/16/2020

I've deployed a few services and found that one behaves differently from the others. I configured it to listen on port 8090 (which maps to 8443 internally), but requests only work if I send them to port 8080. Here's my yaml file for the service (stripped down to essentials); there is a deployment that encapsulates the service and container.

apiVersion: v1
kind: Service
metadata:
  name: uisvc
  namespace: default
  labels:
    helm.sh/chart: foo-1
    app.kubernetes.io/name: foo
    app.kubernetes.io/instance: rb-foo
spec:
  clusterIP: None
  ports:
    - name: http
      port: 8090
      targetPort: 8080
  selector:
    app.kubernetes.io/component: uisvc

After installing the Helm chart, when I run kubectl get svc, I get the following output:

NAME        TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)    AGE
fooaccess   ClusterIP   None         <none>        8888/TCP   119m
fooset      ClusterIP   None         <none>        8080/TCP   119m
foobus      ClusterIP   None         <none>        6379/TCP   119m
uisvc       ClusterIP   None         <none>        8090/TCP   119m

However, when I ssh into one of the other running containers and issue a curl request to port 8090, I get "Connection refused". If I curl "http://uisvc:8080" instead, I get the right response. The container runs a Spring Boot application, which listens on 8080 by default. The only explanation I can come up with is that the port/targetPort is somehow being ignored in this config and other pods are reaching the Spring service inside directly.

Is this behaviour correct? Why is it not listening on 8090? How can I make it work that way?

Edit: Output for kubectl describe svc uisvc

Name:              uisvc
Namespace:         default
Labels:            app.kubernetes.io/instance=foo-rba
                   app.kubernetes.io/managed-by=Helm
                   app.kubernetes.io/name=rba
                   helm.sh/chart=rba-1
Annotations:       meta.helm.sh/release-name: foo
                   meta.helm.sh/release-namespace: default
Selector:          app.kubernetes.io/component=uisvc
Type:              ClusterIP
IP:                None
Port:              http  8090/TCP
TargetPort:        8080/TCP
Endpoints:         172.17.0.8:8080
Session Affinity:  None
Events:            <none>
-- Wander3r
kubernetes
minikube

1 Answer

12/16/2020

This is expected behavior, because you used a headless service (clusterIP: None).

Headless Services are a service-discovery mechanism: instead of returning a single DNS A record for a cluster IP, the DNS server returns multiple A records for your service, each pointing to the IP of an individual pod that backs the service. A plain DNS A-record lookup therefore gives you the IPs of all the pods that are part of the service.
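You can verify this from inside the cluster with a throwaway pod (the name dns-test here is arbitrary, and busybox:1.28 is just a convenient image whose nslookup works well for this):

kubectl run dns-test -it --rm --restart=Never --image=busybox:1.28 -- nslookup uisvc

For your service this should return the pod IP directly (172.17.0.8 in your describe output) rather than a cluster IP.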

Since a headless service gets no kube-proxy iptables rules, only DNS records, you interact with the pod directly instead of going through a proxy, and no port translation happens: if you resolve <servicename>:<port> you get <podN_IP>:<port>, and the connection goes straight to the pod on whatever port you dialed. That is why uisvc:8080 works (the pod's Spring Boot app listens on 8080) while 8090 is refused. As long as all of this is in the same namespace, you don't have to resolve the service by its full DNS name.
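If you actually want the 8090 -> 8080 translation, i.e. connect to uisvc:8090 and land on the container's 8080, the service needs a cluster IP so kube-proxy can program that mapping. A minimal sketch, assuming you don't otherwise rely on the headless behavior, is simply your manifest with the clusterIP: None line removed:

apiVersion: v1
kind: Service
metadata:
  name: uisvc
  namespace: default
spec:
  # no "clusterIP: None" here: a regular ClusterIP is allocated
  # and kube-proxy translates service port 8090 to targetPort 8080
  ports:
    - name: http
      port: 8090
      targetPort: 8080
  selector:
    app.kubernetes.io/component: uisvc

After applying that, curl http://uisvc:8090 from another pod should reach the Spring Boot application on 8080.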

With several pods backing the service, DNS gives you all of them, in random or round-robin order; the exact ordering depends on the DNS server implementation and its settings.
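As an aside, the port: 8090 you declared is not completely unused on a headless service: Kubernetes publishes it in DNS SRV records of the form _port-name._port-protocol.service.namespace.svc.cluster.local. From a pod whose image ships a full nslookup or dig (busybox's applet may not support SRV queries), something like

nslookup -type=srv _http._tcp.uisvc.default.svc.cluster.local

should show port 8090 alongside the pod addresses, but nothing rewrites that port on the wire: the pod still has to listen on whatever port the client actually connects to.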

For more reading, see the Kubernetes documentation on headless Services.

-- acid_fuji
Source: StackOverflow