I want to set up connections from a Kubernetes cluster (created via az acs create with mostly default settings) to an Azure PostgreSQL instance, and I'd like to know what source-IP range to enter in the Postgres HBA configuration (this is the thing Azure calls a firewall-rule under az postgres server).
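For reference, I mean the kind of rule you would create like this; the resource group, server name, and rule name here are placeholders:

$ az postgres server firewall-rule create \
    --resource-group my-resource-group \
    --server-name my-pg-server \
    --name allow-k8s-cluster \
    --start-ip-address x.x.x.x \
    --end-ip-address x.x.x.x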
The thing is, although I can see from the console errors (when using psql to test) what the current IP is that the cluster requests come from:

FATAL: no pg_hba.conf entry for host "x.x.x.x" [...]
... I just don't see this IP address anywhere in the cluster properties - and anyway, it would seem a very fragile configuration to just whitelist this one IP address without knowing how it's assigned.
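For completeness, the psql test was roughly this (server and user names are placeholders; Azure expects the user@server login format):

$ psql "host=my-pg-server.postgres.database.azure.com port=5432 dbname=postgres user=me@my-pg-server sslmode=require"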
(In the Azure Portal, I do see one "Public IP" associated with the cluster master, but that's not the same as the IP seen by postgres, and, I assume, mainly for ingress.)
So ideally, does ACS let me control the outbound IP addresses for the cluster? And if not, can I figure out programmatically what IP or range of IPs to allow?
Based on my knowledge, Azure Container Service exposes Docker applications to the public via an Azure load balancer, and the load balancer gets a public IP address. Note that we cannot specify which public IP address will be associated with the load balancer.
Once the application is exposed to the internet, you can add that public IP address to your PostgreSQL firewall rules (the Postgres HBA).
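For example, once a Service of type LoadBalancer has been provisioned, the public IP shows up in the EXTERNAL-IP column (the service name is a placeholder):

$ kubectl get svc my-service
NAME         CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
my-service   <cluster IP>   <public IP>   80:31000/TCP   5m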
It should be the external IP of the node that the pod is scheduled on, e.g. on Google Container Engine:
$ kubectl get no -o wide
NAME                   STATUS    AGE       VERSION   EXTERNAL-IP         OS-IMAGE                             KERNEL-VERSION
gke-cluster-1-node-1   Ready     58d       v1.5.4    <example node IP>   Container-Optimized OS from Google   4.4.21+
$ ssh gke-cluster-1-node-1
$ curl icanhazip.com
<example node IP>
$ kubectl get po -o wide | grep node-1
example-pod-1   1/1       Running   0         11d       <pod IP>   gke-cluster-1-node-1
$ kubectl exec -it example-pod-1 -- curl icanhazip.com
<example node IP>
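To get at this programmatically (for the "what range of IPs to allow" part of the question), one sketch is to list every node's external IP and whitelist each one; the resource group and server name below are placeholders, and keep in mind these node IPs can change when the cluster scales or nodes are recreated:

$ kubectl get nodes -o jsonpath='{.items[*].status.addresses[?(@.type=="ExternalIP")].address}'
<example node IP> ...

# Create one firewall rule per node IP (bash; dots in the IP are
# mapped to hyphens to form a valid rule name).
for ip in $(kubectl get nodes -o jsonpath='{.items[*].status.addresses[?(@.type=="ExternalIP")].address}'); do
  az postgres server firewall-rule create \
    --resource-group my-resource-group \
    --server-name my-pg-server \
    --name "allow-node-${ip//./-}" \
    --start-ip-address "$ip" \
    --end-ip-address "$ip"
done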