What is the correct URL to use for GET requests to another Kubernetes container?

8/17/2021

I'm trying to create a simple microservice, where a jQuery app in one Docker container uses this code to get a JSON object from another (analytics) app running in a different container:

<script type="text/javascript">
$(document).ready(function(){
  $('#get-info-btn').click(function(){
    $.get("http://localhost:8084/productinfo",
      function(data, status){
        $.each(data, function(i, obj) {
          //some code
        });
      });
  });
});
</script>

The other app uses this for the Deployment containerPort.

  ports:
    - containerPort: 8082

and these for the Service ports.

  type: ClusterIP
  ports:
    - targetPort: 8082
      port: 8084   

The 'analytics' app is a golang program that listens on 8082.

func main() {
	http.HandleFunc("/productinfo", getInfoJSON)	
	log.Fatal(http.ListenAndServe(":8082", nil))
}

When running this on Minikube, I encountered issues with CORS, which was resolved by using this in the golang code when returning a JSON object as a response:

w.Header().Set("Access-Control-Allow-Origin", "*")
w.Header().Set("Access-Control-Allow-Headers", "Content-Type") 

All this worked fine on Minikube (though in Minikube I was using localhost:8082). The first app would send a GET request to http://localhost:8084/productinfo and the second app would return a JSON object.
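For context, here is a self-contained sketch of what such a handler looks like with those headers in place (the handler name getInfoJSON and the /productinfo path come from the code above; the JSON payload and the httptest wrapper are made up for illustration):

```go
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"net/http/httptest"
)

// getInfoJSON mirrors the handler from the question: it sets the two
// CORS headers and writes a small JSON body (payload is illustrative).
func getInfoJSON(w http.ResponseWriter, r *http.Request) {
	w.Header().Set("Access-Control-Allow-Origin", "*")
	w.Header().Set("Access-Control-Allow-Headers", "Content-Type")
	w.Header().Set("Content-Type", "application/json")
	json.NewEncoder(w).Encode(map[string]string{"product": "example"})
}

// corsHeader spins up a throwaway test server and returns the
// Access-Control-Allow-Origin header the handler sends back.
func corsHeader() string {
	srv := httptest.NewServer(http.HandlerFunc(getInfoJSON))
	defer srv.Close()
	resp, err := http.Get(srv.URL + "/productinfo")
	if err != nil {
		return ""
	}
	defer resp.Body.Close()
	return resp.Header.Get("Access-Control-Allow-Origin")
}

func main() {
	fmt.Println(corsHeader()) // prints: *
}
```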

But when I tried it on a cloud Kubernetes setup, accessing the first app via <IP address of node>:<nodePortNumber>, the browser console keeps showing the error Cross-Origin Request Blocked: The Same Origin Policy disallows reading the remote resource at http://localhost:8084/productinfo.

Question: Why is it working on Minikube but not on the cloud Kubernetes worker nodes? Is using localhost the right way to access another container? How can I get this to work? How do people who implement microservices use their GET and POST requests across containers? All the microservice examples I found are built for simple demos on Minikube, so it's difficult to get a handle on this nuance.

-- Nav
docker
kubernetes
minikube

1 Answer

8/18/2021

@P.... is absolutely right; I just want to provide some more details about DNS for Services and communication between containers in the same Pod.

DNS for Services

As we can find in the documentation, Kubernetes Services are assigned a DNS A (or AAAA) record, for a name of the form <serviceName>.<namespaceName>.svc.<cluster-domain>. This resolves to the cluster IP of the Service.

"Normal" (not headless) Services are assigned a DNS A or AAAA record, depending on the IP family of the service, for a name of the form my-svc.my-namespace.svc.cluster-domain.example. This resolves to the cluster IP of the Service.

Let's break down the form <serviceName>.<namespaceName>.svc.<cluster-domain> into individual parts:

  • <serviceName> - The name of the Service you want to connect to.

  • <namespaceName> - The name of the Namespace in which the Service to which you want to connect resides.

  • svc - This should not be changed - svc stands for Service.

  • <cluster-domain> - cluster domain, by default it's cluster.local.

We can use just <serviceName> to access a Service in the same Namespace; we can also use <serviceName>.<namespaceName>, <serviceName>.<namespaceName>.svc, or the FQDN <serviceName>.<namespaceName>.svc.<cluster-domain>.

If the Service is in a different Namespace, a single <serviceName> is not enough and we need to use <serviceName>.<namespaceName> (we can also use: <serviceName>.<namespaceName>.svc or <serviceName>.<namespaceName>.svc.<cluster-domain>).
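These name forms can be enumerated mechanically; a small sketch in Go (the names app-2, default, and the default cluster domain cluster.local are taken from the examples in this answer):

```go
package main

import "fmt"

// nameForms returns every DNS name form that resolves to a Service,
// from the shortest (usable only in the same Namespace) to the FQDN.
func nameForms(service, namespace, clusterDomain string) []string {
	return []string{
		service,                   // same Namespace only
		service + "." + namespace, // works from any Namespace
		service + "." + namespace + ".svc",
		service + "." + namespace + ".svc." + clusterDomain, // FQDN
	}
}

func main() {
	for _, name := range nameForms("app-2", "default", "cluster.local") {
		fmt.Println(name)
	}
}
```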

In the following example, app-1 and app-2 are in the same Namespace and app-2 is exposed with ClusterIP on port 8084 (as in your case):

$ kubectl run app-1 --image=nginx
pod/app-1 created

$ kubectl run app-2 --image=nginx
pod/app-2 created

$ kubectl expose pod app-2 --target-port=80 --port=8084
service/app-2 exposed

$ kubectl get pod,svc
NAME        READY   STATUS    RESTARTS   AGE
pod/app-1   1/1     Running   0          45s
pod/app-2   1/1     Running   0          41s

NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)    AGE
service/app-2        ClusterIP   10.8.12.83   <none>        8084/TCP   36s

NOTE: app-2 is in the same Namespace as app-1, so we can use <serviceName> to access it from app-1. You can also see that we got the FQDN for app-2 (app-2.default.svc.cluster.local):

$ kubectl exec -it app-1 -- bash
root@app-1:/# nslookup app-2
Server:         10.8.0.10
Address:        10.8.0.10#53

Name:   app-2.default.svc.cluster.local
Address: 10.8.12.83

NOTE: We need to provide the port number because the app-2 Service is exposed on port 8084:

root@app-1:/# curl app-2.default.svc.cluster.local:8084
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
...

Let's create app-3 in a different Namespace and see how to connect to it from app-1:

$ kubectl create ns test-namespace
namespace/test-namespace created

$ kubectl run app-3 --image=nginx -n test-namespace
pod/app-3 created

$ kubectl expose pod app-3 --target-port=80 --port=8084 -n test-namespace
service/app-3 exposed

NOTE: Using app-3 (<serviceName>) alone is not enough; we also need to provide the name of the Namespace in which app-3 resides (<serviceName>.<namespaceName>):

# nslookup app-3
Server:         10.8.0.10
Address:        10.8.0.10#53

** server can't find app-3: NXDOMAIN

# nslookup app-3.test-namespace
Server:         10.8.0.10
Address:        10.8.0.10#53

Name:   app-3.test-namespace.svc.cluster.local
Address: 10.8.12.250

# curl app-3.test-namespace.svc.cluster.local:8084
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
...
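In application code, this means replacing localhost with the Service DNS name when building the request URL. A sketch of how a client running inside the cluster could build that URL (the Service name analytics is an assumption; the /productinfo path and port 8084 come from the question):

```go
package main

import (
	"fmt"
	"net/url"
)

// productInfoURL builds the in-cluster URL for the analytics Service.
// The result could then be passed to http.Get from any Pod in the cluster.
func productInfoURL(service, namespace string, port int) string {
	u := url.URL{
		Scheme: "http",
		Host:   fmt.Sprintf("%s.%s.svc.cluster.local:%d", service, namespace, port),
		Path:   "/productinfo",
	}
	return u.String()
}

func main() {
	fmt.Println(productInfoURL("analytics", "default", 8084))
	// prints: http://analytics.default.svc.cluster.local:8084/productinfo
}
```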

Communication Between Containers in the Same Pod

We can use localhost to communicate with other containers, but only within the same Pod (Multi-container pods).

I've created a simple multi-container Pod with two containers: nginx-container and alpine-container:

$ cat multi-container-app.yml
apiVersion: v1
kind: Pod
metadata:
  name: multi-container-app
spec:
  containers:
  - image: nginx
    name: nginx-container
  - image: alpine
    name: alpine-container
    command: ["sleep", "3600"]

$ kubectl apply -f multi-container-app.yml
pod/multi-container-app created

We can connect to the alpine-container container and check whether we can reach the nginx web server in nginx-container via localhost:

$ kubectl exec -it multi-container-app -c alpine-container -- sh

/ # netstat -tulpn
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 0.0.0.0:80              0.0.0.0:*               LISTEN      -
tcp        0      0 :::80                   :::*                    LISTEN      -

/ # curl localhost
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
...

More information on communication between containers in the same Pod can be found here.

-- matt_j
Source: StackOverflow