Connection string that a pod should use to connect to a PostgreSQL pod in the same cluster?

2/10/2020

I am currently working on an application which will be running in a Kubernetes pod. It is supposed to connect to a PostgreSQL pod that runs within the same cluster.

But for some reason I cannot deduce what the connection string should be.

For now I have defined the PostgreSQL deployment as follows:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: postgres
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - name: postgres
          image: postgres:10.4
          imagePullPolicy: "IfNotPresent"
          ports:
            - containerPort: 5432
          volumeMounts:
            - mountPath: /var/lib/postgresql/data
              name: postgredb
      volumes:
        - name: postgredb
          persistentVolumeClaim:
            claimName: postgres-pv-claim   

---
apiVersion: v1
kind: Service
metadata:
  name: postgres-service
  labels:
    app: postgres
spec:
  ports:
   - port: 5432
     targetPort: 5432 
  selector:
   app: postgres
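
As an aside, the extensions/v1beta1 API group for Deployments is deprecated (and removed in Kubernetes 1.16+), and apps/v1 additionally requires an explicit spec.selector. A minimal sketch of the same Deployment on apps/v1 might look like this (volume mounts omitted for brevity):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgres       # must match the pod template labels below
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - name: postgres
          image: postgres:10.4
          imagePullPolicy: "IfNotPresent"
          ports:
            - containerPort: 5432
```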

But this connection string:

            x.UseNpgsql("Host=postgres-service:5432;Database=postgres;Username=postgres;Password=postgres"));

does not seem to work.
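
(One thing worth checking, though I cannot be sure it is the cause here: Npgsql conventionally takes the port as a separate Port keyword rather than as part of Host, so the usual form of this connection string would be:

```
Host=postgres-service;Port=5432;Database=postgres;Username=postgres;Password=postgres
```

With the port embedded in Host, the client may try to resolve the literal name postgres-service:5432.)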

Something as simple as

    using System;
    using System.Net.NetworkInformation;

    namespace pingMe
    {
        class Program
        {
            static void Main(string[] args)
            {
                Console.WriteLine("Hello World!");
                Ping ping = new Ping();
                PingReply pingresult = ping.Send("postgres-service.default.svc.cluster.local");
                if (pingresult.Status.ToString() == "Success")
                {
                    Console.WriteLine("I can reach");
                }
            }
        }
    }
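
(Note that a ClusterIP service will typically not answer ICMP ping even when DNS resolution works, because the service only forwards its declared TCP port. A TCP-connect probe is a more reliable reachability test; a sketch in Python, since the probe logic is language-agnostic, using the service name from this question:

```python
import socket

def can_connect(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Inside the cluster, probe the service's declared port instead of pinging it, e.g.:
# can_connect("postgres-service.default.svc.cluster.local", 5432)
```

The equivalent in .NET would be opening a TcpClient to the host and port rather than using the Ping class.)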

fails to resolve the name within the cluster and triggers this error:

    System.Net.NetworkInformation.PingException: An exception occurred during a Ping request.
 ---> System.Net.Internals.SocketExceptionFactory+ExtendedSocketException (00000005, 0xFFFDFFFF): Name or service not known
   at System.Net.Dns.InternalGetHostByName(String hostName)
   at System.Net.Dns.GetHostAddresses(String hostNameOrAddress)
   at System.Net.NetworkInformation.Ping.GetAddressAndSend(String hostNameOrAddress, Int32 timeout, Byte[] buffer, PingOptions options)
   --- End of inner exception stack trace ---
   at System.Net.NetworkInformation.Ping.GetAddressAndSend(String hostNameOrAddress, Int32 timeout, Byte[] buffer, PingOptions options)
   at System.Net.NetworkInformation.Ping.Send(String hostNameOrAddress, Int32 timeout, Byte[] buffer, PingOptions options)
   at System.Net.NetworkInformation.Ping.Send(String hostNameOrAddress)
   at API.Startup.Configure(IApplicationBuilder app, IWebHostEnvironment env, SchemaContext schemaContext) in /src/API/Startup.cs:line 42
   at System.RuntimeMethodHandle.InvokeMethod(Object target, Object[] arguments, Signature sig, Boolean constructor, Boolean wrapExceptions)
   at System.Reflection.RuntimeMethodInfo.Invoke(Object obj, BindingFlags invokeAttr, Binder binder, Object[] parameters, CultureInfo culture)
   at Microsoft.AspNetCore.Hosting.ConfigureBuilder.Invoke(Object instance, IApplicationBuilder builder)
   at Microsoft.AspNetCore.Hosting.ConfigureBuilder.<>c__DisplayClass4_0.b__0(IApplicationBuilder builder)
   at Microsoft.AspNetCore.Hosting.GenericWebHostBuilder.<>c__DisplayClass13_0.b__2(IApplicationBuilder app)
   at Microsoft.AspNetCore.Mvc.Filters.MiddlewareFilterBuilderStartupFilter.<>c__DisplayClass0_0.g__MiddlewareFilterBuilder|0(IApplicationBuilder builder)
   at Microsoft.AspNetCore.HostFilteringStartupFilter.<>c__DisplayClass0_0.b__0(IApplicationBuilder app)
   at Microsoft.AspNetCore.Hosting.GenericWebHostService.StartAsync(CancellationToken cancellationToken)
Unhandled exception. System.Net.NetworkInformation.PingException: An exception occurred during a Ping request.

Kubernetes service

kubectl get svc postgres-service
NAME               TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)    AGE
postgres-service   ClusterIP   10.106.91.9   <none>        5432/TCP   74m

Dockerfile:

#See https://aka.ms/containerfastmode to understand how Visual Studio uses this Dockerfile to build your images for faster debugging.

FROM mcr.microsoft.com/dotnet/core/runtime:3.1-buster-slim AS base
WORKDIR /app

FROM mcr.microsoft.com/dotnet/core/sdk:3.1-buster AS build
WORKDIR /src
COPY ["pingMe/pingMe.csproj", "pingMe/"]
RUN dotnet restore "pingMe/pingMe.csproj"
COPY . .
WORKDIR "/src/pingMe"
RUN dotnet build "pingMe.csproj" -c Release -o /app/build

FROM build AS publish
RUN dotnet publish "pingMe.csproj" -c Release -o /app/publish

FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "pingMe.dll"]

Local pod:

kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: local-deployment
spec:
  replicas: 1
  revisionHistoryLimit: 3
  template:
    metadata:
      labels:
        app: local-pod
    spec:
      containers:
      - name: local-deployment
        image: api:dev5
        imagePullPolicy: Never
        ports:
        - containerPort: 80
        livenessProbe:
          httpGet:
            path: /WeatherForecast
            port: 80
          initialDelaySeconds: 3
          periodSeconds: 3
-- kafka
kubernetes
npgsql
postgresql

4 Answers

2/11/2020

I am not sure I understand why, but my containerized ASP.NET Core applications were not able to resolve the service name.

The name had to be resolved in a separate step using https://docs.microsoft.com/en-us/dotnet/api/system.net.dns.gethostname?view=netframework-4.8 , and the resulting IP address was then passed to the connection string.

I based this on how nslookup responded: it initially fails because the DNS record is not cached, and then resolves the host name once it can be found.

I guess that since the initial DNS lookup fails, it triggers an exception, which is where my application kept failing.
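
The resolve-first workaround described above can be sketched like this (in Python for brevity; in .NET the equivalent call is Dns.GetHostAddresses, with the resulting address handed to the Host keyword):

```python
import socket

def resolve_service_ip(hostname: str) -> str:
    """Resolve a service DNS name to an IPv4 address up front."""
    return socket.gethostbyname(hostname)

# Then build the connection string from the resolved address, e.g.:
# ip = resolve_service_ip("postgres-service.default.svc.cluster.local")
# conn = f"Host={ip};Port=5432;Database=postgres;Username=postgres;Password=postgres"
```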

-- kafka
Source: StackOverflow

2/10/2020

tl;dr: postgres-service.default.svc

See the explanation in the docs: default is your namespace name, and the cluster domain part can be omitted.
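
In other words, from a pod in the default namespace, any of these names should resolve to the same service:

```
postgres-service                            # same namespace only
postgres-service.default                    # namespace-qualified
postgres-service.default.svc               # namespace + svc
postgres-service.default.svc.cluster.local # FQDN (assuming the default cluster domain)
```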

-- morgwai
Source: StackOverflow

2/11/2020

I've read all answers and comments, let's start over so we can have a new point of view.

I am currently working on an application which will be running in a Kubernetes pod. It is supposed to connect to a PostgreSQL pod that runs within the same cluster.

In order to help you, we need to test each step of your environment separately.

First, one clarification:

  • Services do not accept ping; to test a service you have to test the exposed application port. It's designed this way.

Step 1 - We need to ensure the PostgreSQL service is functioning properly.

This is my deployed PostgreSQL:

$ kubectl get all 
NAME                                 READY   STATUS    RESTARTS   AGE
pod/postgresql-0   1/1     Running   0          99m

NAME                          TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)    AGE
service/postgresql            ClusterIP   10.0.8.179   <none>        5432/TCP   99m
service/postgresql-headless   ClusterIP   None         <none>        5432/TCP   99m
service/kubernetes            ClusterIP   10.0.0.1     <none>        443/TCP    8d

NAME                          READY   AGE
statefulset.apps/postgresql   1/1     99m

  • Section A - Run a postgresql-client interactive shell pod:
$ kubectl run postgresql-client --rm --tty -i --restart='Never' \
--namespace default \
--image docker.io/bitnami/postgresql:11.6.0-debian-10-r0 \
--env="PGPASSWORD=postgres" \
--command -- psql --host postgresql \
-U postgres -d postgres -p 5432

If the credentials are correct, you will see the postgres=# prompt. Try running a command like \du:

postgres=# \du
                                   List of roles
 Role name |                         Attributes                         | Member of 
-----------+------------------------------------------------------------+-----------
 postgres  | Superuser, Create role, Create DB, Replication, Bypass RLS | {}

IF it works: Go to Step 2.

IF NOT:

  • Section B - Test manually with postgresql-client.

Start a single-use Ubuntu shell pod:

kubectl run -i --tty --rm --image ubuntu test-shell -- /bin/bash

Then install the PostgreSQL client and dnsutils (which provides nslookup):

apt update && apt install postgresql-client -y && apt install dnsutils -y

Run an nslookup against the service:

root@test-shell-845c969686-h9gz2:/# nslookup postgresql
Server:         10.0.0.10
Address:        10.0.0.10#53

Name:   postgresql.default.svc.cluster.local
Address: 10.0.8.179

As you can see, since we are working in the same cluster and namespace, calls made to postgresql are correctly resolved without specifying the FQDN.

Run pg_isready (remember the host is the service, not the pod):

root@test-shell-845c969686-h9gz2:/# pg_isready --host=postgresql --port=5432 --username=postgres --dbname=postgres
postgresql:5432 - accepting connections

You can also test as a connection string:

The structure is export my_conn='postgresql://user:password@FQDN/DATABASE'

root@test-shell-845c969686-h9gz2:/# export my_conn='postgresql://postgres:postgres@postgresql/postgres'
root@test-shell-845c969686-h9gz2:/# pg_isready -d $my_conn
postgresql:5432 - accepting connections

Lastly, let's log in again, just like we did with the postgresql-client pod:

root@test-shell-845c969686-vh9zh:/# psql --host=postgresql --port=5432 --username=postgres --dbname=postgres  
Password for user postgres: 
psql (10.10 (Ubuntu 10.10-0ubuntu0.18.04.1), server 11.6)
Type "help" for help.

postgres=# \du
                                   List of roles
 Role name |                         Attributes                         | Member of 
-----------+------------------------------------------------------------+-----------
 postgres  | Superuser, Create role, Create DB, Replication, Bypass RLS | {}

IF it works: Go to Step 2.

IF NOT:

  • There may be some problem with your PostgreSQL.
  • At this point I'd suggest you try these tests again with a clean database, like the one from the Helm chart stable/postgresql (documentation here). It's really easy to install and remove when needed.

Step 2 - Narrowing down to the client:

At this point I assume your database connection is working properly. So we must review a few things:

  • If you can connect to the database from other pods, I suggest you try to deploy and run an instance of your app from the Ubuntu pod we used in Section B.

If it still does not work, you have now narrowed the problem down to something inside your app.

If you have any difficulty reproducing this solution, let me know in the comments.

-- willrof
Source: StackOverflow

2/11/2020

Does the /etc/resolv.conf file inside the pod have the IP of the cluster DNS service? It should look like this:

u@pod$ cat /etc/resolv.conf
nameserver 10.0.0.10 # IP of the cluster DNS (CoreDNS) service
search default.svc.cluster.local svc.cluster.local cluster.local example.com
options ndots:5
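
With that search list and ndots:5, a short name such as postgres-service (fewer than five dots) is tried against each search domain in order before being tried literally, so the candidates queried would be:

```
postgres-service.default.svc.cluster.local   # matches the service
postgres-service.svc.cluster.local
postgres-service.cluster.local
postgres-service.example.com
postgres-service                             # finally, the literal name
```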

Also check whether you are able to look up any other service:

nslookup kubernetes.default.svc

Check this guide on how to debug issues with services in Kubernetes.

-- Arghya Sadhu
Source: StackOverflow