Kubernetes Pod not able to contact another pod when Namespace specified

11/18/2019

I'm using Kubernetes on CentOS 7.

Client Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.3", GitCommit:"b3cbbae08ec52a7fc73d334838e18d17e8512749", GitTreeState:"clean", BuildDate:"2019-11-13T11:23:11Z", GoVersion:"go1.12.12", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.3", GitCommit:"b3cbbae08ec52a7fc73d334838e18d17e8512749", GitTreeState:"clean", BuildDate:"2019-11-13T11:13:49Z", GoVersion:"go1.12.12", Compiler:"gc", Platform:"linux/amd64"}

I deployed an Nginx pod in a "production" namespace using the following prod_www_pod.yaml:

apiVersion: v1
kind: Pod
metadata:
  name: www
  namespace: production
  labels:
    app: www
spec:
  containers:
  - name: nginx
    image: myrepo:5001/nginx

I deployed a second pod (prod_debug.yaml) to check that everything is running properly.

apiVersion: v1
kind: Pod
metadata:
  name: debug
  namespace: production
spec:
  containers:
  - name: debug
    image: myrepo:5001/debug:latest
    command:
    - "sleep"
    - "10000"

I can see both pods in the correct namespace,

[plaurent@kubmaster deployment]$ kubectl get po
No resources found in default namespace.
[plaurent@kubmaster deployment]$ kubectl get po -n production
NAME    READY   STATUS    RESTARTS   AGE
debug   1/1     Running   0          94s
www     1/1     Running   0          3m57s

but when trying to curl www from debug, I get:

[plaurent@kubmaster deployment]$ kubectl exec -it -n production debug -- sh
/ # curl www
Nothing there
/ # 

After a kubectl delete po debug www -n production, I deployed the same pods again, except with the namespace removed from metadata.

[plaurent@kubmaster deployment]$ kubectl get po
NAME    READY   STATUS    RESTARTS   AGE
debug   1/1     Running   0          10s
www     1/1     Running   0          18s

and when trying to contact www from debug, it is working well.

[plaurent@kubmaster deployment]$ kubectl exec -it debug -- sh
/ # curl www
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
/ # 

Can someone please point me in the right direction?

Regards,

Pierre

-- Tanc
kubernetes
kubernetes-pod

2 Answers

11/18/2019

You can't reach a pod directly through DNS unless you have configured a hostname and subdomain for it (backed by a matching headless Service), and even then the name to use is not simply the pod name as you are doing.
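For reference, that pod-hostname route would look roughly like this. This is only a sketch: the subdomain/Service name www-sub and port 80 are assumptions, not from the question.

```yaml
# Hypothetical example: giving a pod a stable DNS name via hostname/subdomain.
# Requires a headless Service whose name matches the pod's subdomain.
apiVersion: v1
kind: Service
metadata:
  name: www-sub          # assumed name; must equal the pod's subdomain
  namespace: production
spec:
  clusterIP: None        # headless
  selector:
    app: www
  ports:
  - port: 80
---
apiVersion: v1
kind: Pod
metadata:
  name: www
  namespace: production
  labels:
    app: www
spec:
  hostname: www
  subdomain: www-sub     # must match the headless Service name above
  containers:
  - name: nginx
    image: myrepo:5001/nginx
```

With this in place, the pod would be reachable at www.www-sub.production.svc.cluster.local, not at plain www as a pod name.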

You probably have a Service called www in the default namespace that forwards requests to the pod, but there is no such Service in your production namespace.

To confirm what I'm saying, run kubectl get svc in your default and production namespaces.

If I'm right, expose your pod with kubectl expose pod ..., or create a Service through a YAML file.
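A sketch of such a Service, using the names from the question (port 80 for nginx is an assumption):

```yaml
# Service routing traffic to the www pod inside the production namespace.
apiVersion: v1
kind: Service
metadata:
  name: www
  namespace: production
spec:
  selector:
    app: www        # matches the pod's label from prod_www_pod.yaml
  ports:
  - port: 80        # assumed nginx port
    targetPort: 80
```

Once this Service exists, curl www from the debug pod in production should resolve, because www.production.svc.cluster.local is in the pod's DNS search path.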

Now, note that creating a bare pod is a bad idea. It is better to create a Deployment with 1 replica.
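A minimal Deployment equivalent of the original pod could look like this (image and labels taken from the question; replicas: 1 as suggested):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: www
  namespace: production
spec:
  replicas: 1
  selector:
    matchLabels:
      app: www        # must match the pod template labels below
  template:
    metadata:
      labels:
        app: www
    spec:
      containers:
      - name: nginx
        image: myrepo:5001/nginx
```

The Deployment recreates the pod if it dies, and the same app: www selector still works for any Service in front of it.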

-- suren
Source: StackOverflow

11/18/2019

This works in the default namespace and not in the production namespace because of how cluster DNS resolves names. A pod's DNS search path includes its own namespace (e.g. production.svc.cluster.local), but DNS records are only created for Services (and for pods behind a headless Service), not for bare pods. In the default namespace a Service named www apparently exists, so the lookup for www succeeds; in production there is no such Service, so name resolution fails.

You'll need to configure a Service to expose the www pod in the production namespace (a headless Service would also work).
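A headless variant of such a Service differs only in clusterIP: None (a sketch; port 80 is assumed):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: www
  namespace: production
spec:
  clusterIP: None   # headless: DNS returns the pod IP(s) directly, no virtual IP
  selector:
    app: www
  ports:
  - port: 80
```

With a headless Service, www.production.svc.cluster.local resolves straight to the pod's IP instead of a cluster IP, which is useful when clients should talk to pods directly.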

-- Patrick W
Source: StackOverflow