Exposing virtual service with istio and mTLS globally enabled

10/11/2019

I have this configuration on my service mesh:

  • mTLS globally enabled via the default MeshPolicy
  • a simple-web deployment exposed as a ClusterIP service on port 8080
  • an HTTP Gateway for port 80 and a VirtualService routing to my service

Here are the Gateway and VirtualService YAML:

apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: http-gateway
spec:
  selector:
    istio: ingressgateway # Specify the ingressgateway created for us
  servers:
  - port:
      number: 80 # Service port to watch
      name: http-gateway
      protocol: HTTP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: simple-web
spec:
  gateways:
  - http-gateway
  hosts:
  - '*'
  http:
  - match:
    - uri:
        prefix: /simple-web
    rewrite:
      uri: /
    route:
    - destination:
        host: simple-web
        port:
          number: 8080

Both the VirtualService and the Gateway are in the same namespace. The deployment was created and exposed with these commands (k is an alias for kubectl):

k create deployment --image=yeasy/simple-web:latest simple-web
k expose deployment simple-web --port=8080 --target-port=80 --name=simple-web

and with k get pods I see this:

pod/simple-web-9ffc59b4b-n9f85   2/2     Running

What happens is that from outside, pointing at the ingress-gateway load balancer, I receive an HTTP 503 error. If I curl from the ingressgateway pod, I can reach the simple-web service. Why can't I reach the website with mTLS enabled? What's the correct configuration?

-- Manuel Castro
envoyproxy
istio
kubernetes

2 Answers

10/11/2019

I just installed istio-1.3.2 and k8s 1.15.1 to reproduce your issue, and it worked without any modifications. This is what I did:

0.- Create a namespace called istio and enable automatic sidecar injection.

1.- $ kubectl run nginx --image nginx -n istio

2.- $ kubectl expose deploy nginx --port 8080 --target-port 80 --name simple-web -n istio

3.- $ kubectl create -f gw.yaml -f vs.yaml

Note: these are your files.

The test:

$ curl a.b.c.d:31380/simple-web -I
HTTP/1.1 200 OK
server: istio-envoy
date: Fri, 11 Oct 2019 10:04:26 GMT
content-type: text/html
content-length: 612
last-modified: Tue, 24 Sep 2019 14:49:10 GMT
etag: "5d8a2ce6-264"
accept-ranges: bytes
x-envoy-upstream-service-time: 4


[2019-10-11T10:04:26.101Z] "HEAD /simple-web HTTP/1.1" 200 - "-" "-" 0 0 6 4 "10.132.0.36" "curl/7.52.1" "4bbc2609-a928-9f79-9ae8-d6a3e32217d7" "a.b.c.d:31380" "192.168.171.73:80" outbound|8080||simple-web.istio.svc.cluster.local - 192.168.171.86:80 10.132.0.36:37078 - -

And to be sure mTLS was enabled, this is from the ingress-gateway describe command:

--controlPlaneAuthPolicy MUTUAL_TLS

So, I don't know what is wrong, but you might want to go through these steps and rule things out.

Note: the reason I am hitting the Istio gateway on port 31380 is that my k8s cluster is running on VMs right now, and I didn't want to spin up a GKE cluster just for a test.

EDIT

Just deployed another deployment with your image, exposed it as simple-web-2, and it worked again. Maybe I'm lucky with Istio:

$ curl a.b.c.d:31380/simple-web -I
HTTP/1.1 200 OK
server: istio-envoy
date: Fri, 11 Oct 2019 10:28:45 GMT
content-type: text/html
content-length: 354
last-modified: Fri, 11 Oct 2019 10:28:46 GMT
x-envoy-upstream-service-time: 4

[2019-10-11T10:28:46.400Z] "HEAD /simple-web HTTP/1.1" 200 - "-" "-" 0 0 5 4 "10.132.0.36" "curl/7.52.1" "df0dd00a-875a-9ae6-bd48-acd8be1cc784" "a.b.c.d:31380" "192.168.171.65:80" outbound|8080||simple-web-2.istio.svc.cluster.local - 192.168.171.86:80 10.132.0.36:42980 - -

What's your k8s environment?

EDIT2

# istioctl authn tls-check curler-6885d9fd97-vzszs simple-web.istio.svc.cluster.local -n istio
HOST:PORT                                   STATUS     SERVER     CLIENT     AUTHN POLICY     DESTINATION RULE
simple-web.istio.svc.cluster.local:8080     OK         mTLS       mTLS       default/         default/istio-system
-- suren
Source: StackOverflow

10/15/2019

As @suren mentioned in his answer, this issue is not present in Istio version 1.3.2, so one solution is to upgrade to a newer version.

If you choose to upgrade Istio to a newer version, please review the 1.3 Upgrade Notice and Upgrade Steps documentation, as Istio is still in active development and changes drastically with each version.

Also, as mentioned in the comments by @Manuel Castro, this is most likely the issue addressed in Avoid 503 errors while reconfiguring service routes, and newer versions simply handle it better:

Creating both the VirtualServices and the DestinationRules that define the corresponding subsets using a single kubectl call (e.g., kubectl apply -f myVirtualServiceAndDestinationRule.yaml) is not sufficient because the resources propagate (from the configuration server, i.e., the Kubernetes API server) to the Pilot instances in an eventually consistent manner. If the VirtualService using the subsets arrives before the DestinationRule where the subsets are defined, the Envoy configuration generated by Pilot would refer to non-existent upstream pools. This results in HTTP 503 errors until all configuration objects are available to Pilot.
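On older versions, a common workaround for gateway-to-service 503s under global mTLS is to make sure a DestinationRule exists that tells clients to originate Istio mTLS toward the upstream. A minimal sketch for the service in this question (the resource name is arbitrary; adjust host and namespace to your setup):

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: simple-web   # hypothetical name; any name works
spec:
  host: simple-web   # the ClusterIP service from the question
  trafficPolicy:
    tls:
      mode: ISTIO_MUTUAL  # gateway and sidecars originate Istio mTLS to this upstream
```

Applying the DestinationRule before (or together with) the VirtualService that references the host reduces the window in which Pilot pushes routes to an upstream without a matching TLS configuration.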

It should be possible to avoid this issue by temporarily disabling mTLS or by using permissive mode during the deployment.
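For example, with the v1alpha1 authentication API used around Istio 1.3, the global MeshPolicy can be switched to permissive mode so workloads accept both plaintext and mTLS traffic while the configuration propagates (a sketch; verify the API version against your installation):

```yaml
apiVersion: authentication.istio.io/v1alpha1
kind: MeshPolicy
metadata:
  name: default
spec:
  peers:
  - mtls:
      mode: PERMISSIVE  # accept both plaintext and mTLS during the rollout
```

Once all routes and destination rules are in place, the policy can be switched back to strict mTLS.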

-- Piotr Malec
Source: StackOverflow