Kubernetes pods are communicating with other pods over a load balancer instead of internally

10/4/2021

I have a Kubernetes cluster in AWS GovCloud. I have deployed the Kong Ingress Controller (private NLB) along with the kong-proxy service in namespace "kong". In namespace "A" I have deployed my applications along with an Ingress resource: Keycloak (an authentication/authorization app), Statuses (a custom Ruby on Rails app that returns the status of an operation, e.g. 10%, 20%, 50%, 100% complete), and Operation A (a custom-built Java app that performs a calculation).

My flow:

Client --> Load Balancer DNS --> kong-proxy --> Ingress --> Keycloak service --> authenticate with Keycloak --> an auth bearer token is returned (console output).

Client (me) passes the token --> Load Balancer DNS --> kong-proxy --> Ingress --> Operation A service --> authenticate and initialize Operation A

Operation A service --> sends a status update to the Statuses service --> connection refused error
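
(For reference, Statuses is exposed inside the cluster with a plain ClusterIP Service roughly like the sketch below; the name, port, and labels are placeholders rather than my exact manifest.)

    apiVersion: v1
    kind: Service
    metadata:
      name: statuses
      namespace: a              # namespace "A" above, lowercased per K8s naming rules
    spec:
      type: ClusterIP           # internal-only; no load balancer involved
      selector:
        app: statuses           # placeholder pod label
      ports:
      - name: http
        port: 3000              # placeholder port
        targetPort: 3000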

When troubleshooting the network flow, I see that Operation A is trying to connect to Statuses via the load balancer's DNS name: Operation A service --> Load Balancer DNS --> kong-proxy --> Ingress --> Statuses service

But this is a very strange network flow. A pod shouldn't have to go through an external load balancer to reach another pod in the same cluster and namespace. It should just connect via the Kubernetes internal service DNS name: service-name.namespace.svc.cluster.local:port/path
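
What I would expect is for Operation A to point at that internal Service name directly, e.g. via an environment variable in its Deployment. A minimal sketch of what I mean (the variable name STATUSES_URL, the image, and the port are assumptions for illustration, not my actual config):

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: operation-a
      namespace: a
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: operation-a
      template:
        metadata:
          labels:
            app: operation-a
        spec:
          containers:
          - name: operation-a
            image: registry.example.com/operation-a:latest    # placeholder image
            env:
            # Call the Statuses ClusterIP Service directly instead of going back
            # out through the load balancer DNS name (name/port are placeholders).
            - name: STATUSES_URL
              value: "http://statuses.a.svc.cluster.local:3000"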

Is this an issue with the Kong Ingress Controller, or should I be looking at my application config? Are there any annotations or parameters I can add to the ingress controller or Ingress manifests to correct this network pathing?
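
For reference, the Ingress in namespace "A" is set up roughly along these lines (the paths, service names, and ports below are simplified placeholders, not my real values):

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: apps
      namespace: a
      annotations:
        kubernetes.io/ingress.class: kong    # routed by the Kong Ingress Controller
    spec:
      rules:
      - http:
          paths:
          - path: /statuses
            pathType: Prefix
            backend:
              service:
                name: statuses
                port:
                  number: 3000
          - path: /operation-a
            pathType: Prefix
            backend:
              service:
                name: operation-a
                port:
                  number: 8080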

-- Naman Rawal
amazon-eks
aws-load-balancer
kong-ingress
kubernetes
kubernetes-ingress

0 Answers