Kubernetes pods cannot communicate internally

3/5/2021

In my Kubernetes cluster, I have two Deployments in the same namespace. One Deployment is for a Postgres database and the other is for Tomcat. Tomcat should be accessible from outside the cluster, so I have configured a "NodePort" Service; for internal communication I have created a "ClusterIP" Service exposing the Postgres port (5432). Once everything is deployed, I want the Tomcat pod to communicate with the Postgres pod. But when I run "curl postgres-service:5432" from the Tomcat pod, I get a "Connection refused" message. Is there any misconfiguration?
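For reference, the connectivity check was run from inside the Tomcat pod along these lines (the pod name is taken from the resource listing further below; the exact invocation is an assumption):

$ kubectl exec -n application application-tomcat-deployment-6db75ffb6d-ds8fr -- curl -v postgres-service:5432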

apiVersion: v1
kind: Service
metadata:
  name: tomcat-service
  namespace: application
  labels:
    app: application-tomcat
spec:
  type: NodePort
  ports:
  - name: tomcat-port
    targetPort: 80
    port: 80
  selector:
    app: application-tomcat

---

apiVersion: v1
kind: Service
metadata:
  name: postgres-service
  namespace: application
  labels:
    app: application-postgres
spec:
  ports:
  - port: 5432
    name: postgres-port
  selector:
    app: application-postgres

---

apiVersion: apps/v1
kind: Deployment
metadata:
  name: application-tomcat-deployment
  namespace: application
  labels:
    app: application-tomcat
spec:
  replicas: 1
  selector:
    matchLabels:
      app: application-tomcat
  template:
    metadata:
      labels:
        app: application-tomcat
    spec:
      containers:
      - name: application-container
        image: tomcat
        command:
          - sleep
          - "infinity"
        ports:
        - containerPort: 80
     
---

apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: application
  name: application-postgres-deployment
  labels:
    app: application-postgres
spec:
  replicas: 1
  selector:
    matchLabels:
      app: application-postgres
  template:
    metadata:
      labels:
        app: application-postgres
    spec:
      containers:
      - name: postgres
        image: postgres
        command:
          - sleep
          - "infinity"
        ports:
        - containerPort: 5432
          name: postgredb

The Postgres pod is listening on port 5432 and the database is running:

Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 0.0.0.0:5432            0.0.0.0:*               LISTEN      -
tcp6       0      0 :::5432                 :::*                    LISTEN      -

Resources in the Namespace

$ kubectl get all -n application
NAME                                                   READY   STATUS    RESTARTS   AGE
pod/application-postgres-deployment-694869cd5d-wrhzr   1/1     Running   0          9m9s
pod/application-tomcat-deployment-6db75ffb6d-ds8fr     1/1     Running   0          9m9s

NAME                       TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)        AGE
service/postgres-service   ClusterIP   10.32.0.207   <none>        5432/TCP       9m9s
service/tomcat-service     NodePort    10.32.0.59    <none>        80:31216/TCP   9m9s

NAME                                              READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/application-postgres-deployment   1/1     1            1           9m9s
deployment.apps/application-tomcat-deployment     1/1     1            1           9m9s

NAME                                                         DESIRED   CURRENT   READY   AGE
replicaset.apps/application-postgres-deployment-694869cd5d   1         1         1       9m9s
replicaset.apps/application-tomcat-deployment-6db75ffb6d     1         1         1       9m9s
-- Dusty
kubernetes
kubernetes-service
postgresql
tomcat

2 Answers

3/5/2021

Looking at your YAMLs, it seems that your tomcat Deployment resides in a namespace called "test", whereas your postgres Deployment lives in the "default" namespace.

So, for cross-namespace communication, you have to append the namespace to the service name. The following, if executed from the tomcat Pod, should work:

curl http://postgres-service.default:5432

Alternatively, simply deploy your tomcat to the "default" namespace.
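For completeness, Kubernetes DNS also resolves the fully qualified form <service>.<namespace>.svc.cluster.local; both commands below reach the same ClusterIP (assuming the default cluster domain cluster.local):

# short form: service name plus namespace
curl http://postgres-service.default:5432
# fully qualified form
curl http://postgres-service.default.svc.cluster.local:5432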

-- Fritz Duchardt
Source: StackOverflow

3/5/2021

You have overridden the default ENTRYPOINT of the postgres image by specifying spec.template.spec.containers[0].command in the Deployment. As a result, the only process running inside the pod of application-postgres-deployment is sleep infinity, and postgres itself never starts, which is why the connection is refused. Removing the command field and adding either a POSTGRES_PASSWORD or a POSTGRES_HOST_AUTH_METHOD=trust environment variable should fix the issue. Use the following manifest for the postgres Deployment:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: application-postgres-deployment
  namespace: application
  labels:
    app: application-postgres
spec:
  replicas: 1
  selector:
    matchLabels:
      app: application-postgres
  template:
    metadata:
      labels:
        app: application-postgres
    spec:
      containers:
      - name: postgres
        image: postgres
        env:
        - name: POSTGRES_PASSWORD
          value: admin
        ports:
        - containerPort: 5432
          name: postgredb

You have to set the environment variable POSTGRES_PASSWORD or POSTGRES_HOST_AUTH_METHOD=trust. Without one of them, the pod will crash-loop with the following error message:

Error: Database is uninitialized and superuser password is not specified.
       You must specify POSTGRES_PASSWORD to a non-empty value for the
       superuser. For example, "-e POSTGRES_PASSWORD=password" on "docker run".

       You may also use "POSTGRES_HOST_AUTH_METHOD=trust" to allow all
       connections without a password. This is *not* recommended.

       See PostgreSQL documentation about "trust":
       https://www.postgresql.org/docs/current/auth-trust.html
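
Once the corrected manifest is applied, the connection can be re-tested from the tomcat pod. A minimal sketch, assuming the manifest is saved as postgres-deployment.yaml and that curl is available in the tomcat image (the original test used it); note that postgres speaks its own wire protocol rather than HTTP, so the point is simply to see the TCP connection succeed instead of being refused:

kubectl apply -f postgres-deployment.yaml
kubectl exec -n application deploy/application-tomcat-deployment -- curl -v telnet://postgres-service:5432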
-- livinston
Source: StackOverflow