I am kubernetizing (if I can use that term) this demo, and I am getting a 503 from the front service.
What I have done is create three services: green, blue and red, and they all work fine. If I hit them directly, I get a 200, but when I go through the front service, I get a 503.
The idea of the demo is to hit a front service, which proxies the request, depending on the path, to one of the services behind it (green, blue or red). This is the front service's Envoy YAML file:
static_resources:
  listeners:
  - address:
      socket_address:
        address: 0.0.0.0
        port_value: 80
    filter_chains:
    - filters:
      - name: envoy.http_connection_manager
        config:
          codec_type: auto
          stat_prefix: ingress_http
          route_config:
            name: local_route
            virtual_hosts:
            - name: backend
              domains:
              - "*"
              routes:
              - match:
                  prefix: "/service/blue"
                route:
                  cluster: service_blue
              - match:
                  prefix: "/service/green"
                route:
                  cluster: service_green
              - match:
                  prefix: "/service/red"
                route:
                  cluster: service_red
          http_filters:
          - name: envoy.router
            config: {}
  clusters:
  - name: service_blue
    connect_timeout: 0.25s
    type: strict_dns
    lb_policy: round_robin
    http2_protocol_options: {}
    hosts:
    - socket_address:
        address: blue
        port_value: 80
  - name: service_green
    connect_timeout: 0.25s
    type: strict_dns
    lb_policy: round_robin
    http2_protocol_options: {}
    hosts:
    - socket_address:
        address: green
        port_value: 80
  - name: service_red
    connect_timeout: 0.25s
    type: strict_dns
    lb_policy: round_robin
    http2_protocol_options: {}
    hosts:
    - socket_address:
        address: red
        port_value: 80
admin:
  access_log_path: "/dev/null"
  address:
    socket_address:
      address: 0.0.0.0
      port_value: 8001
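In case the Kubernetes side of it matters: the front pod is basically just an Envoy container with that file mounted from a ConfigMap, roughly like this (a simplified sketch; the image tag, labels and ConfigMap name are placeholders, not my exact manifest):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: envoy-front
  namespace: envoy-demo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: envoy-front
  template:
    metadata:
      labels:
        app: envoy-front
    spec:
      containers:
      - name: envoy
        image: envoyproxy/envoy:v1.11.1   # placeholder tag
        ports:
        - containerPort: 80     # HTTP listener from the config above
        - containerPort: 8001   # admin interface
        volumeMounts:
        - name: envoy-config
          mountPath: /etc/envoy  # envoy.yaml is read from here
      volumes:
      - name: envoy-config
        configMap:
          name: envoy-front-config   # placeholder name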
These are my services in the cluster:
# kubectl get svc -n envoy-demo
NAME    TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
blue    ClusterIP   10.98.101.176    <none>        80/TCP    5h31m
front   ClusterIP   10.98.119.222    <none>        80/TCP    73m
green   ClusterIP   10.107.136.62    <none>        80/TCP    54m
red     ClusterIP   10.101.240.162   <none>        80/TCP    160m
If I exec into the front pod and curl any of the services directly, I get a 200:
root@envoy-front-54856466dc-59jdw:/# curl blue/service/blue -I
HTTP/1.0 200 OK
Content-Type: text/html; charset=utf-8
Content-Length: 166
Server: Werkzeug/0.16.0 Python/3.7.4
Date: Mon, 14 Oct 2019 15:53:14 GMT
root@envoy-front-54856466dc-59jdw:/# curl green/service/green -I
HTTP/1.0 200 OK
Content-Type: text/html; charset=utf-8
Content-Length: 170
Server: Werkzeug/0.16.0 Python/3.7.4
Date: Mon, 14 Oct 2019 15:53:31 GMT
root@envoy-front-54856466dc-59jdw:/# curl red/service/red -I
HTTP/1.0 200 OK
Content-Type: text/html; charset=utf-8
Content-Length: 163
Server: Werkzeug/0.16.0 Python/3.7.4
Date: Mon, 14 Oct 2019 15:53:38 GMT
These services actually return a 200 on /service/whatever.
If I curl localhost from inside the front pod, or curl the front service from outside the pod, I get a 503:
# curl localhost/service/red
upstream connect error or disconnect/reset before headers. reset reason: connection termination
Same thing if I curl the front service's IP or its service name from outside the front pod.
The service is indeed running:
# netstat -tlpn
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 0.0.0.0:8001            0.0.0.0:*               LISTEN      6/envoy
tcp        0      0 0.0.0.0:80              0.0.0.0:*               LISTEN      6/envoy
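If it helps, I can also dump how the front Envoy sees its upstream clusters from the admin interface on 8001 (I have not pasted the output here; these are just the commands I would run from inside the front pod):

# per-cluster upstream hosts and their health/connection state
curl -s localhost:8001/clusters | grep -E 'service_(blue|green|red)'
# connection counters towards the upstreams
curl -s localhost:8001/stats | grep upstream_cx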
In case it is relevant, this is the Envoy YAML file used by each of the backend services:
static_resources:
  listeners:
  - address:
      socket_address:
        address: 0.0.0.0
        port_value: 80
    filter_chains:
    - filters:
      - name: envoy.http_connection_manager
        config:
          codec_type: auto
          stat_prefix: ingress_http
          route_config:
            name: local_route
            virtual_hosts:
            - name: service
              domains:
              - "*"
              routes:
              - match:
                  prefix: "/service"
                route:
                  cluster: local_service
          http_filters:
          - name: envoy.router
            config: {}
  clusters:
  - name: local_service
    connect_timeout: 0.25s
    type: strict_dns
    lb_policy: round_robin
    hosts:
    - socket_address:
        address: 0.0.0.0
        port_value: 8080
admin:
  access_log_path: "/dev/null"
  address:
    socket_address:
      address: 0.0.0.0
      port_value: 8081
All the services are very similar: one of them is a normal service, one adds a delay, and one aborts 50% of the requests.
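For context, each colour pod runs that sidecar Envoy next to the actual Flask app (the Werkzeug server visible in the headers above), e.g. as two containers in the same pod, roughly like this sketch (image names are placeholders):

containers:
- name: envoy
  image: envoyproxy/envoy:v1.11.1   # placeholder tag; exposes the listener on 80 from the config above
  ports:
  - containerPort: 80
- name: app
  image: demo-colour-app            # placeholder; the Flask/Werkzeug app
  ports:
  - containerPort: 8080             # what the sidecar's local_service cluster dials locally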
Any ideas what I am doing wrong?