I have a GKE cluster (v1.13.6) running Istio (v1.1.7) with several services deployed and working successfully, except for one that always responds with HTTP 503 when called through the gateway: upstream connect error or disconnect/reset before headers. reset reason: connection failure.
I've also tried calling the service directly from another pod with curl installed, and it ends up in a 503 as well:
$ kubectl exec sleep-754684654f-4mccn -c sleep -- curl -v d-vine-machine-dev:8080/d-vine-machine/swagger-ui.html
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0* Trying 10.3.254.3...
* TCP_NODELAY set
* Connected to d-vine-machine-dev (10.3.254.3) port 8080 (#0)
> GET /d-vine-machine/swagger-ui.html HTTP/1.1
> Host: d-vine-machine-dev:8080
> User-Agent: curl/7.60.0
> Accept: */*
>
upstream connect error or disconnect/reset before headers. reset reason: connection failure< HTTP/1.1 503 Service Unavailable
< content-length: 91
< content-type: text/plain
< date: Thu, 04 Jul 2019 08:13:52 GMT
< server: envoy
< x-envoy-upstream-service-time: 60
<
{ [91 bytes data]
100 91 100 91 0 0 1338 0 --:--:-- --:--:-- --:--:-- 1378
* Connection #0 to host d-vine-machine-dev left intact
Setting the log level to TRACE on the istio-proxy sidecar:
$ kubectl exec -it -c istio-proxy d-vine-machine-dev-b8df755d6-bpjwl -- curl -X POST http://localhost:15000/logging?level=trace
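(For reference, the sidecar output below was read with kubectl logs on the istio-proxy container; the pod name is the one from the command above:)
$ kubectl logs -f d-vine-machine-dev-b8df755d6-bpjwl -c istio-proxy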
I looked into the logs of the injected istio-proxy sidecar and found this:
[2019-07-04 07:30:41.353][24][debug][router] [external/envoy/source/common/router/router.cc:381] [C119][S9661729384515860777] router decoding headers:
':authority', 'api-dev.d-vine.tech'
':path', '/d-vine-machine/swagger-ui.html'
':method', 'GET'
':scheme', 'http'
'cache-control', 'max-age=0'
'upgrade-insecure-requests', '1'
'user-agent', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/75.0.3770.100 Safari/537.36'
'accept', 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3'
'accept-encoding', 'gzip, deflate'
'accept-language', 'fr-FR,fr;q=0.9,en-US;q=0.8,en;q=0.7'
'x-forwarded-for', '10.0.0.1'
'x-forwarded-proto', 'http'
'x-request-id', 'e38a257a-1356-4545-984a-109500cb71c4'
'content-length', '0'
'x-envoy-internal', 'true'
'x-forwarded-client-cert', 'By=spiffe://cluster.local/ns/default/sa/default;Hash=8b6afba64efe1035daa23b004cc255e0772a8bd23c8d6ed49ebc8dabde05d8cf;Subject="O=";URI=spiffe://cluster.local/ns/istio-system/sa/istio-ingressgateway-service-account;DNS=istio-ingressgateway.istio-system'
'x-b3-traceid', 'f749afe8b0a76435192332bfe2f769df'
'x-b3-spanid', 'bfc4618c5cda978c'
'x-b3-parentspanid', '192332bfe2f769df'
'x-b3-sampled', '0'
[2019-07-04 07:30:41.353][24][debug][pool] [external/envoy/source/common/http/http1/conn_pool.cc:88] creating a new connection
[2019-07-04 07:30:41.353][24][debug][client] [external/envoy/source/common/http/codec_client.cc:26] [C121] connecting
[2019-07-04 07:30:41.353][24][debug][connection] [external/envoy/source/common/network/connection_impl.cc:644] [C121] connecting to 127.0.0.1:8080
[2019-07-04 07:30:41.353][24][debug][connection] [external/envoy/source/common/network/connection_impl.cc:653] [C121] connection in progress
[2019-07-04 07:30:41.353][24][debug][pool] [external/envoy/source/common/http/conn_pool_base.cc:20] queueing request due to no available connections
[2019-07-04 07:30:41.353][24][trace][http] [external/envoy/source/common/http/conn_manager_impl.cc:811] [C119][S9661729384515860777] decode headers called: filter=0x4f118b0 status=1
[2019-07-04 07:30:41.353][24][trace][http] [external/envoy/source/common/http/http1/codec_impl.cc:384] [C119] parsed 1272 bytes
[2019-07-04 07:30:41.353][24][trace][connection] [external/envoy/source/common/network/connection_impl.cc:282] [C119] readDisable: enabled=true disable=true
[2019-07-04 07:30:41.353][24][trace][connection] [external/envoy/source/common/network/connection_impl.cc:440] [C121] socket event: 3
[2019-07-04 07:30:41.353][24][trace][connection] [external/envoy/source/common/network/connection_impl.cc:508] [C121] write ready
[2019-07-04 07:30:41.353][24][debug][connection] [external/envoy/source/common/network/connection_impl.cc:526] [C121] delayed connection error: 111
[2019-07-04 07:30:41.353][24][debug][connection] [external/envoy/source/common/network/connection_impl.cc:183] [C121] closing socket: 0
[2019-07-04 07:30:41.353][24][debug][client] [external/envoy/source/common/http/codec_client.cc:82] [C121] disconnect. resetting 0 pending requests
[2019-07-04 07:30:41.353][24][debug][pool] [external/envoy/source/common/http/http1/conn_pool.cc:129] [C121] client disconnected, failure reason:
[2019-07-04 07:30:41.353][24][debug][pool] [external/envoy/source/common/http/http1/conn_pool.cc:164] [C121] purge pending, failure reason:
[2019-07-04 07:30:41.353][24][debug][router] [external/envoy/source/common/router/router.cc:644] [C119][S9661729384515860777] upstream reset: reset reason connection failure
[2019-07-04 07:30:41.353][24][trace][http] [external/envoy/source/common/http/conn_manager_impl.cc:1200] [C119][S9661729384515860777] encode headers called: filter=0x4f0e5f0 status=0
[2019-07-04 07:30:41.353][24][trace][http] [external/envoy/source/common/http/conn_manager_impl.cc:1200] [C119][S9661729384515860777] encode headers called: filter=0x4f0edc0 status=0
[2019-07-04 07:30:41.353][24][debug][filter] [src/envoy/http/mixer/filter.cc:133] Called Mixer::Filter : encodeHeaders 2
[2019-07-04 07:30:41.353][24][trace][http] [external/envoy/source/common/http/conn_manager_impl.cc:1200] [C119][S9661729384515860777] encode headers called: filter=0x4f0f0e0 status=0
[2019-07-04 07:30:41.353][24][debug][http] [external/envoy/source/common/http/conn_manager_impl.cc:1305] [C119][S9661729384515860777] encoding headers via codec (end_stream=false):
':status', '503'
'content-length', '91'
'content-type', 'text/plain'
'date', 'Thu, 04 Jul 2019 07:30:41 GMT'
'server', 'istio-envoy'
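The interesting line is the delayed connection error: 111 on [C121]: errno 111 is ECONNREFUSED, so the sidecar does reach 127.0.0.1:8080 but nothing accepts the connection on that port. A quick way to compare what the Service targets against what the pod declares (a minimal sketch, only kubectl is assumed; whether the application really listens on that port still has to be checked inside the container):
$ kubectl get svc d-vine-machine-dev -o yaml                          # check spec.ports[].port and targetPort
$ kubectl get pod d-vine-machine-dev-b8df755d6-bpjwl -o yaml          # check the declared containerPort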
Has anyone encountered such an issue? If you need more info about the configuration, I can provide it.
Thanks for your answer, Manvar. There was no problem with the curl-enabled pod, but thanks for the insight. The root cause was a misconfiguration of our Tomcat port, which did not match the Service/VirtualService config.
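For anyone hitting the same thing, a minimal sketch of what the aligned configuration should look like (the Service name, port 8080, host and path come from the question; the app label, gateway name and the exact Tomcat port are assumptions for illustration). The point is that the Service's targetPort must be the port Tomcat actually listens on, and the VirtualService route must use the Service port:
apiVersion: v1
kind: Service
metadata:
  name: d-vine-machine-dev
spec:
  selector:
    app: d-vine-machine-dev          # assumed pod label
  ports:
    - name: http                     # protocol-prefixed port name, as required by Istio 1.1
      port: 8080                     # Service port, referenced by the VirtualService
      targetPort: 8080               # must be the port Tomcat really listens on
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: d-vine-machine-dev
spec:
  hosts:
    - api-dev.d-vine.tech
  gateways:
    - d-vine-gateway                 # hypothetical gateway name
  http:
    - match:
        - uri:
            prefix: /d-vine-machine
      route:
        - destination:
            host: d-vine-machine-dev
            port:
              number: 8080           # must match the Service port above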
When a pod with an Istio sidecar is started, the following things happen:
an init container changes the iptables rules so that all outgoing TCP traffic is routed to the istio-proxy sidecar on port 15001;
the containers of the pod are started in parallel (curl and istio-proxy).
If your curl container runs before istio-proxy is listening on port 15001, you get this error (a minimal workaround sketch follows below).
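A minimal sketch of that workaround, assuming the image ships a shell and nc (this is not from the question, it only illustrates the race described above): block until something is listening on the sidecar's port 15001, then issue the request.
# wait until the istio-proxy sidecar accepts connections on 15001
until nc -z localhost 15001; do
  echo "waiting for istio-proxy..."
  sleep 1
done
# only now can outbound traffic be proxied through the sidecar
curl -v d-vine-machine-dev:8080/d-vine-machine/swagger-ui.html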