How to connect to GKE postgresql svc in GCP?

4/3/2019

I'm trying to connect to the PostgreSQL service (pod) in my Kubernetes deployment, but GCP does not give me a port I can use (so I cannot run something like: $ psql -h localhost -U postgresadmin1 --password -p 31070 postgresdb to connect to PostgreSQL and see my database).

I'm using a LoadBalancer in my service:

@cloudshell:~ (academic-veld-230622)$ psql -h 35.239.52.68 -U jhipsterpress --password -p 30728 jhipsterpress-postgresql
Password for user jhipsterpress:
psql: could not connect to server: Connection timed out
        Is the server running on host "35.239.52.68" and accepting
        TCP/IP connections on port 30728?
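As a side note, a way to reach the pod from Cloud Shell without exposing it publicly at all is kubectl port-forward (a sketch, not from the question; the service name is taken from the `kubectl get svc` output below):

```shell
# Forward local port 5432 to the postgresql service, then connect via localhost.
kubectl port-forward svc/jhipsterpress-postgresql 5432:5432 &
psql -h localhost -U jhipsterpress --password -p 5432 jhipsterpress-postgresql
```

This tunnels through the Kubernetes API server, so no firewall rule or external IP is needed.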

apiVersion: v1
kind: Service
metadata:
  name: jhipsterpress
  namespace: default
  labels:
    app: jhipsterpress
spec:
  selector:
    app: jhipsterpress
  type: LoadBalancer
  ports:
  - name: http
    port: 8080
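For comparison, the PostgreSQL service would need a manifest along these lines (a sketch reconstructed from the `kubectl get svc` output below; the labels and selector are assumptions):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: jhipsterpress-postgresql
  namespace: default
  labels:
    app: jhipsterpress-postgresql
spec:
  selector:
    app: jhipsterpress-postgresql
  type: LoadBalancer
  ports:
  - name: postgresql
    port: 5432   # service port; GKE also assigns a NodePort (30728 here)
```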



NAME                                            READY     STATUS    RESTARTS   AGE
pod/jhipsterpress-84886f5cdf-mpwgb              1/1       Running   0          31m
pod/jhipsterpress-postgresql-5956df9557-fg8cn   1/1       Running   0          31m

NAME                               TYPE           CLUSTER-IP     EXTERNAL-IP      PORT(S)          AGE
service/jhipsterpress              LoadBalancer   10.11.243.22   35.184.135.134   8080:32670/TCP   31m
service/jhipsterpress-postgresql   LoadBalancer   10.11.255.64   35.239.52.68     5432:30728/TCP   31m
service/kubernetes                 ClusterIP      10.11.240.1    <none>           443/TCP          35m

NAME                                       DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/jhipsterpress              1         1         1            1           31m
deployment.apps/jhipsterpress-postgresql   1         1         1            1           31m

NAME                                                  DESIRED   CURRENT   READY     AGE
replicaset.apps/jhipsterpress-84886f5cdf              1         1         1         31m
replicaset.apps/jhipsterpress-postgresql-5956df9557   1         1         1         31m


@cloudshell:~ (academic-veld-230622)$ kubectl describe pod jhipsterpress-postgresql
Name:               jhipsterpress-postgresql-5956df9557-fg8cn
Namespace:          default
Priority:           0
PriorityClassName:  <none>
Node:               gke-standard-cluster-1-default-pool-bf9f446d-9hsq/10.128.0.58
Start Time:         Sat, 06 Apr 2019 13:39:08 +0200
Labels:             app=jhipsterpress-postgresql
                    pod-template-hash=1512895113
Annotations:        kubernetes.io/limit-ranger=LimitRanger plugin set: cpu request for container postgres
Status:             Running
IP:                 10.8.0.14
Controlled By:      ReplicaSet/jhipsterpress-postgresql-5956df9557
Containers:
  postgres:
    Container ID:   docker://55475d369c63da4d9bdc208e9d43c457f74845846fb4914c88c286ff96d0e45a
    Image:          postgres:10.4
    Image ID:       docker-pullable://postgres@sha256:9625c2fb34986a49cbf2f5aa225d8eb07346f89f7312f7c0ea19d82c3829fdaa
    Port:           5432/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Sat, 06 Apr 2019 13:39:29 +0200
    Ready:          True
    Restart Count:  0
    Requests:
      cpu:  100m
    Environment:
      POSTGRES_USER:      jhipsterpress
      POSTGRES_PASSWORD:  <set to the key 'postgres-password' in secret 'jhipsterpress-postgresql'>  Optional: false
    Mounts:
      /var/lib/pgsql/data from data (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-mlmm5 (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  data:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  spingular-bucket
    ReadOnly:   false
  default-token-mlmm5:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-mlmm5
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason                  Age                From                                                        Message
  ----     ------                  ----               ----                                                        -------
  Warning  FailedScheduling        33m (x3 over 33m)  default-scheduler                                           persistentvolumeclaim "spingular-bucket" not found
  Warning  FailedScheduling        33m (x3 over 33m)  default-scheduler                                           pod has unbound immediate PersistentVolumeClaims
  Normal   Scheduled               33m                default-scheduler                                           Successfully assigned default/jhipsterpress-postgresql-5956df9557-fg8cn to gke-standard-cluster-1-default-pool-bf9f446d-9hsq
  Normal   SuccessfulAttachVolume  33m                attachdetach-controller                                     AttachVolume.Attach succeeded for volume "pvc-95ba1737-5860-11e9-ae59-42010a8000a8"
  Normal   Pulling                 33m                kubelet, gke-standard-cluster-1-default-pool-bf9f446d-9hsq  pulling image "postgres:10.4"
  Normal   Pulled                  32m                kubelet, gke-standard-cluster-1-default-pool-bf9f446d-9hsq  Successfully pulled image "postgres:10.4"
  Normal   Created                 32m                kubelet, gke-standard-cluster-1-default-pool-bf9f446d-9hsq  Created container
  Normal   Started                 32m                kubelet, gke-standard-cluster-1-default-pool-bf9f446d-9hsq  Started container

With the firewall rule open:

posgresql-jhipster | Ingress | Apply to all | IP ranges: 0.0.0.0/0 | tcp:30728 | Allow | Priority: 999 | Network: default
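For reference, a rule like that can be created as follows (a sketch; rule name and port are from the question, and 0.0.0.0/0 is wide open, so this is only suitable for testing):

```shell
# Allow inbound TCP traffic to the NodePort from any source IP.
gcloud compute firewall-rules create posgresql-jhipster \
  --direction=INGRESS \
  --action=ALLOW \
  --rules=tcp:30728 \
  --source-ranges=0.0.0.0/0
```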

Thanks for your help. Any documentation is really appreciated.

-- Mike
google-cloud-platform
google-kubernetes-engine

1 Answer

4/4/2019

Your service is currently of type ClusterIP. This does not expose the service or the pods outside the cluster. You can't connect to the pod from the Cloud Shell like this, since Cloud Shell is not on your VPC and the pods are not exposed.

Update your service using kubectl edit svc jhipsterpress-postgresql and change the spec.type field to 'LoadBalancer'.

You will then have an external IP that you can connect to.
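Once the service is of type LoadBalancer and has an external IP, connecting could look like this (a sketch; the IP is taken from the question's `kubectl get svc` output). Note that a LoadBalancer forwards its service port, so you connect to 5432 on the external IP rather than the NodePort:

```shell
# Connect to the service port (5432) on the LoadBalancer's external IP.
# The NodePort (30728) is only open on the node IPs themselves.
psql -h 35.239.52.68 -U jhipsterpress --password -p 5432 jhipsterpress-postgresql
```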

-- Patrick W
Source: StackOverflow