I have set up a Kubernetes cluster. The cluster contains, among other things, a service and deployment surfacing an API webservice (based on the subway-explorer-gmaps-proxy container).
I've exposed the service externally, using the LoadBalancer service type (this is on GCP):
$ kubectl get svc subway-explorer-gmaps-proxy-service
NAME                                   TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)          AGE
subway-explorer-gmaps-proxy-service    LoadBalancer   10.35.252.232   35.224.78.225   9000:31396/TCP   19h
My understanding (and correct me if I'm wrong!) is that this service should now be queryable outside of the cluster, by visiting http://35.224.78.225 in the browser.
When running the Docker container locally, I can verify things are working correctly by navigating to the following URL:
http://localhost:49161/starting_x=-73.954527&starting_y=40.587243&ending_x=-73.977756&ending_y=40.687163
Looking at the kubectl get output, I expect that visiting the following URL in the browser will serve me the content I'm looking for:
http://35.224.78.225:31396/starting_x=-73.954527&starting_y=40.587243&ending_x=-73.977756&ending_y=40.687163
But when I visit this URL, nothing gets served.
I suspect there is a non-fatal error in the deployment configuration. What is an effective way of debugging this problem? Are there access logs or a stdout stream somewhere I can check to see what's wrong?
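For the record, pod stdout turns out to be available via kubectl logs, and pod-level events via kubectl describe. A minimal sketch (the deployment name here is my assumption; check kubectl get deploy for the real one):

kubectl get pods
kubectl logs deployment/subway-explorer-gmaps-proxy
kubectl describe pod <pod-name>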
What I didn't realize is that Google Cloud has a separate firewall system, distinct from the connection settings managed by Kubernetes. In order to expose the application to the outside world (e.g. a web browser), I need to also modify the Google Cloud firewall rules (see for example this answer as to how).
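A sketch of the kind of rule I mean, using gcloud (the rule name is arbitrary, and tcp:31396 is the NodePort from the kubectl get svc output above; adjust to your own service):

gcloud compute firewall-rules create allow-gmaps-proxy-nodeport --allow tcp:31396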
To test that the application is working on the Kubernetes side, you need not modify cloud firewall rules. Instead, run wget, curl, or some similar data retrieval command from a different pod on the cluster, pointed at the internal IP address and port number of the pod of interest.
For example, the "hello world" pod used by the Kubernetes documentation is the busybox pod (defined here). By creating this pod in my cluster, and then running the following:
kubectl exec busybox -c busybox -- wget "10.35.249.23:9000"
I was able to confirm that the service is functioning correctly within Kubernetes. You can also use any other pod whose image includes wget; I just used busybox because all of my other pods are based on Google's Container-Optimized OS, which doesn't include it.
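If you'd rather not copy the manifest from the docs, something like the following also works to stand up a throwaway busybox pod (a sketch; the sleep just keeps the container alive long enough to exec into it):

kubectl run busybox --image=busybox --restart=Never --command -- sleep 3600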
Finally, for the purposes of debugging, I went ahead and added a /status endpoint to my API application service, which serves {"status": "OK"} when the core service is working. I recommend following this pattern with other applications as well, as it gives you a simple endpoint to test to make sure that, at a minimum, the webserver is responding to input. In my case, I discovered that the /status page is OK but the API calls are failing, which allowed me to narrow the issue down to unresolved Promises caused by a bad credentials secret.
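Putting the two together, the in-cluster check becomes something like this (a sketch; 10.35.249.23:9000 is the pod IP and port from the earlier example). If /status answers but the real API call hangs, the webserver is fine and the problem is upstream:

kubectl exec busybox -c busybox -- wget -qO- "10.35.249.23:9000/status"
kubectl exec busybox -c busybox -- wget -qO- "10.35.249.23:9000/starting_x=-73.954527&starting_y=40.587243&ending_x=-73.977756&ending_y=40.687163"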
You can try running through the official docs on debugging services: https://kubernetes.io/docs/tasks/debug-application-cluster/debug-service/
Beyond that, have you confirmed you're querying the load balancer on the right port? While I don't deploy on GCP, when launching a load balancer for a Kubernetes service on AWS it'll accept traffic on port 80/443 and forward it to the NodePort of the service, which I'm guessing is 31396 in your case. What are the ports listed in kubectl get svc subway-explorer-gmaps-proxy-service -o yaml?
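For example, to pull out just the port mappings (port is what the load balancer listens on; nodePort is what each node exposes):

kubectl get svc subway-explorer-gmaps-proxy-service -o jsonpath='{.spec.ports}'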