I am trying to deploy a service in a Kubernetes cluster. Everything works fine as long as I do not use TLS.
My setup is: an Azure Kubernetes cluster running Kubernetes 1.15.7 with Istio 1.4.2.
Here is what I did so far. I created the cluster and installed Istio with the following command:
istioctl manifest apply --set values.grafana.enabled=true \
--set values.tracing.enabled=true \
--set values.tracing.provider=jaeger \
--set values.global.mtls.enabled=false \
--set values.global.imagePullPolicy=Always \
--set values.kiali.enabled=true \
--set "values.kiali.dashboard.jaegerURL=http://jaeger-query:16686" \
--set "values.kiali.dashboard.grafanaURL=http://grafana:3000"
Everything starts up and all pods are running. Then I create a Gateway:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: ddhub-ingressgateway
  namespace: config
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*.example.de"
    # tls:
    #   httpsRedirect: true # sends 301 redirect for http requests
  - port:
      number: 443
      name: https
      protocol: HTTPS
    tls:
      mode: SIMPLE
      serverCertificate: /etc/istio/ingressgateway-certs/tls.crt
      privateKey: /etc/istio/ingressgateway-certs/tls.key
    hosts:
    - "*.example.de"
  - port:
      number: 31400
      name: tcp
      protocol: TCP
    hosts:
    - "*.example.de"
I then import my custom certificates, which I assume also work, since they are mounted correctly and when I access my service in the browser I can see the secure-connection details with all the expected values.
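For reference, with the file-mount approach in Istio 1.4 such certificates are typically provided as a TLS secret named istio-ingressgateway-certs in istio-system, which is what ends up mounted at /etc/istio/ingressgateway-certs. The file names below are placeholders, not my actual files:
kubectl create -n istio-system secret tls istio-ingressgateway-certs \
  --key=example.de.key --cert=example.de.crt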
This is my deployed service:
kind: Service
apiVersion: v1
metadata:
  name: hellohub-frontend
  labels:
    app: hellohub-frontend
  namespace: dev
spec:
  ports:
    - protocol: TCP
      port: 8080
      targetPort: 8080
  type: ClusterIP
  selector:
    app: hellohub-frontend
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: hellohub-frontend
  namespace: dev
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: hellohub-frontend
    spec:
      containers:
      - image: ddhubregistry.azurecr.io/hellohub-frontend:latest
        imagePullPolicy: Always
        name: hellohub-frontend
        volumeMounts:
        - name: azure
          mountPath: /cloudshare
        ports:
        - name: http
          containerPort: 8080
      volumes:
      - name: azure
        azureFile:
          secretName: cloudshare-dev
          shareName: ddhub-share-dev
          readOnly: true
and the Virtual Service:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: hellohub-frontend
  namespace: dev
spec:
  hosts:
  - "dev-hellohub.example.de"
  gateways:
  - config/ddhub-ingressgateway
  http:
  - match:
    - uri:
        prefix: /
    route:
    - destination:
        host: hellohub-frontend.dev.svc.cluster.local
        port:
          number: 8080
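For testing, the HTTPS endpoint can also be hit directly with curl; <ingress-ip> below is a placeholder for the external IP of the istio-ingressgateway service:
curl -v --resolve dev-hellohub.example.de:443:<ingress-ip> https://dev-hellohub.example.de/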
When I access the service over HTTP, the page of my service shows up. When I use HTTPS, I always get "upstream connect error or disconnect/reset before headers. reset reason: connection termination".
What am I missing or doing wrong? What is the difference that keeps Kubernetes from finding my service? I understood that my config terminates TLS at the gateway and that the traffic inside the cluster is the same in both cases, but that does not seem to be the case here.
Another question: how do I enable debug logs for the sidecars? I could not find a working way.
Thanks in advance!
It seems the gateway tried to reach your upstream in mTLS mode through the Envoy proxy, but no Envoy sidecar was found next to your container "hellohub-frontend". Have you enabled istio-injection for your namespace "dev" (or for the pod), and also defined the mTLS policy?
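For example, namespace-level injection can be enabled with a label; pods created before the label was set have to be recreated so the sidecar gets injected, and afterwards the pod should show 2/2 containers:
kubectl label namespace dev istio-injection=enabled
kubectl -n dev rollout restart deployment hellohub-frontend
kubectl get pods -n dev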
apiVersion: "authentication.istio.io/v1alpha1"
kind: "Policy"
metadata:
  name: "default"
spec:
  peers:
  - mtls:
      mode: STRICT
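If you enable STRICT mTLS this way, clients normally also need a matching DestinationRule so they initiate mutual TLS towards the service. A sketch based on the service from the question (adjust host and namespace as needed):
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: hellohub-frontend
  namespace: dev
spec:
  host: hellohub-frontend.dev.svc.cluster.local
  trafficPolicy:
    tls:
      mode: ISTIO_MUTUAL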
Have you tried using istioctl to change the log level of istio-proxy?
istioctl proxy-config log <pod-name[.namespace]> --level all:warning,http:debug,redis:debug
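Once the level is raised, the sidecar logs can be read from the istio-proxy container, for example:
kubectl logs <pod-name> -c istio-proxy -n dev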