Given the following K8s resources (deployment/pods, service, ingress), I expect to see the request echoed back to me when I visit https://staging-micro.local/ in my browser. What I get instead is 502 Bad Gateway.
# describe deployment (trunc. to show only containers)
Containers:
  cloudsql-proxy:
    Image:      gcr.io/cloudsql-docker/gce-proxy:1.11
    Port:       <none>
    Host Port:  <none>
    Command:
      /cloud_sql_proxy
      -instances=myproject:us-central1:project-staging=tcp:5432
      -credential_file=/secrets/cloudsql/credentials.json
    Environment:  <none>
    Mounts:
      /secrets/cloudsql from cloudsql-instance-credentials-volume (ro)
  adv-api-django:
    Image:      gcr.io/google_containers/echoserver:1.9
    Port:       8000/TCP
    Host Port:  0/TCP
    Environment:
# describe service
Name:                     staging-adv-api-service
Namespace:                staging
Labels:                   app=adv-api
                          platformRole=api
                          tier=backend
Annotations:              kubectl.kubernetes.io/last-applied-configuration:
                            {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"app":"adv-api","platformRole":"api","tie...
Selector:                 app=adv-api-backend,platformRole=api,tier=backend
Type:                     LoadBalancer
IP:                       10.103.67.61
Port:                     http  80/TCP
TargetPort:               8000/TCP
NodePort:                 http  32689/TCP
Endpoints:                172.17.0.14:8000,172.17.0.6:8000,172.17.0.7:8000
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
# describe ingress
Name:             staging-api-ingress
Namespace:        staging
Address:          10.0.2.15
Default backend:  default-http-backend:80 (172.17.0.12:8080)
Rules:
  Host                 Path  Backends
  ----                 ----  --------
  staging-micro.local
                       /     staging-adv-api-service:http (172.17.0.14:8000,172.17.0.6:8000,172.17.0.7:8000)
Note that I have the entry 192.168.99.100 staging-micro.local in /etc/hosts on the host machine (running minikube), and that is the correct minikube IP. If I remove the service, hitting staging-micro.local/ gives the 404 Not Found response of the default backend.
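For what it's worth, this is roughly how the hosts entry and the ingress can be exercised from the host machine (the hostname is the one from my /etc/hosts entry above):

# confirm the hosts entry still matches the current minikube IP
minikube ip
grep staging-micro.local /etc/hosts

# hit the ingress directly, setting the Host header explicitly to rule out name resolution
curl -v -H "Host: staging-micro.local" http://$(minikube ip)/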
My expectation is that the Ingress maps the hostname staging-micro.local and the path / to the service, which is listening on port 80. The service then forwards the request on to one of the 3 selected containers on port 8000. The echoserver container is listening on port 8000 and returns an HTTP response with the request as its body. This is, of course, not what actually happens.
Finally, the cloudsql-proxy container: it should not be involved at this point, but I'm including it because I wanted to validate that the service works while the sidecar container is present; then I can swap out the echoserver for my main application container. I have tested with the echoserver removed and get the same results.

Logs show the echoserver starts up without error. I haven't been able to locate any more comprehensive documentation for echoserver, so I'm not 100% sure about the ports it's listening on.
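One way to check which port the container is actually listening on would be something like the following, with <pod-name> standing in for one of the pods behind the service:

# list the pods the service selects
kubectl -n staging get pods -l app=adv-api-backend

# forward both candidate ports and see which one answers
kubectl -n staging port-forward <pod-name> 8000:8000 8080:8080 &
curl -v http://localhost:8000/   # the port the Service targetPort assumes
curl -v http://localhost:8080/   # the port echoserver may actually use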
My guess is that you've used the wrong target container port for echoserver:1.9, as it responds on port 8080 by default. Look at this example. I have tested it in my environment, and the container responds successfully on port 8080.
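A minimal sketch of the change, assuming 8080 really is the port the container listens on (resource names are taken from your describe output; a patch is just one way to apply it, editing the original manifests works equally well):

# point the Service targetPort at 8080 instead of 8000
kubectl -n staging patch service staging-adv-api-service --type merge \
  -p '{"spec":{"ports":[{"name":"http","port":80,"targetPort":8080}]}}'

# and in the Deployment, the echoserver container should declare containerPort: 8080
# (containerPort is informational; the Service targetPort is what actually matters here)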