I have a Kubernetes cluster that comprises my application, and a database that I host on a separate VM instance (outside the cluster). The database should be listening on port 27017 (I confirmed this with netstat). I want the services in my cluster to be able to connect to this database, but I receive errors about the connection timing out. This leads me to believe that either a) service discovery in my cluster is not working correctly (most likely a configuration issue), or b) a firewall is blocking my requests.
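To tell these two apart, I can run a throwaway pod that tries the VM's IP directly, bypassing the Service. A minimal sketch (the pod name and the busybox image are just what I'd reach for; <internal-ip> is the same placeholder used in the Endpoints object below):

apiVersion: v1
kind: Pod
metadata:
  # throwaway pod used only for this connectivity test
  name: db-connect-test
  namespace: production
spec:
  restartPolicy: Never
  containers:
  - name: nc
    image: busybox
    # attempt a TCP connection to the database VM with a 5-second timeout;
    # swapping the IP for "db-service" tests the Service/DNS path instead
    command: ["nc", "-w", "5", "<internal-ip>", "27017"]

If the direct-IP check also times out, the problem is in the network path (or the database itself) rather than in Kubernetes service discovery.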
To allow my external database to be discovered, I have defined a Service and a custom Endpoints object for it. Here is the Service definition:
kind: Service
apiVersion: v1
metadata:
  name: db-service
  namespace: production
spec:
  ports:
  - protocol: TCP
    port: 27017
    targetPort: 27017
Here is the Endpoints definition:
kind: Endpoints
apiVersion: v1
metadata:
  name: db-service
  namespace: production
subsets:
- addresses:
  - ip: <internal-ip>
  ports:
  - port: 27017
I have tried using both the external and the internal IP in the Endpoints definition (using the internal IP makes the most sense to me).
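Either way, my intent is that application pods reach the database through the Service name rather than the VM's IP. A sketch of what I mean, with a made-up Deployment, image, and environment variable names (from another namespace the name would be db-service.production.svc.cluster.local):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app              # placeholder name
  namespace: production
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: my-app:latest    # placeholder image
        env:
        # "db-service" resolves through cluster DNS to the Endpoints IP above
        - name: DB_HOST
          value: db-service
        - name: DB_PORT
          value: "27017"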
The only network tags on the VM instance hosting my database are 'http-server' and 'https-server' (the default tags allowing HTTP/HTTPS traffic).
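If this is on GCP, those tags only open ports 80 and 443, so I suspect I would also need a rule along these lines to let the cluster's nodes reach 27017 (a sketch; the rule name, source range, and target tag are placeholders, and the tag would have to be added to the VM):

gcloud compute firewall-rules create allow-db-from-cluster \
  --allow=tcp:27017 \
  --source-ranges=<cluster-node-cidr> \
  --target-tags=<db-vm-tag>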
What in my approach is preventing requests from nodes in my cluster from reaching the database hosted on the VM instance?
Update: it turns out my database was not listening for remote connections (for PostgreSQL, check pg_hba.conf and postgresql.conf).
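In case it helps anyone else, the settings to look at are roughly these (a sketch: <cluster-node-cidr> is a placeholder for the cluster's node range, and md5 is just one possible auth method):

# postgresql.conf: accept connections on non-loopback interfaces
# (changing listen_addresses requires a restart of PostgreSQL)
listen_addresses = '*'

# pg_hba.conf: allow clients from the cluster's nodes to authenticate
# (a reload is enough for this file)
host    all    all    <cluster-node-cidr>    md5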