I am using Kubernetes with Minikube on a Windows 10 Home machine to "host" a gRPC service. I am working on getting Istio running in the cluster and keep hitting the same issue, and I cannot figure out why: once everything is up and running, the Istio gateway listens on IPv6, seemingly for no reason. IPv6 is even disabled on my machine (via regedit) and on my network adapters. My other services are accessible over IPv4. Here are the steps I use to set up my environment:
minikube start
kubectl create namespace abc
kubectl apply -f service.yml -n abc
kubectl apply -f gateway.yml
istioctl install --set profile=default -y
kubectl label namespace abc istio-injection=enabled
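(The label command in the last step is equivalent to declaring the label on the namespace manifest itself; a sketch, assuming the namespace were created from YAML instead:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: abc
  labels:
    istio-injection: enabled   # enables automatic sidecar injection for this namespace
```
)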
Nothing is accessible over the network at this point until I run the following in its own terminal:
minikube tunnel
Now I can access the gRPC service directly over IPv4 at 127.0.0.1:5000. However, the gateway is not reachable at 127.0.0.1:443; it is only reachable at [::1]:443.
Here is the service.yml:
apiVersion: v1
kind: Service
metadata:
  name: account-grpc
spec:
  ports:
  - name: grpc
    port: 5000
    protocol: TCP
    targetPort: 5000
  selector:
    service: account
    ipc: grpc
  type: LoadBalancer
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    service: account
    ipc: grpc
  name: account-grpc
spec:
  replicas: 1
  selector:
    matchLabels:
      service: account
      ipc: grpc
  template:
    metadata:
      labels:
        service: account
        ipc: grpc
    spec:
      containers:
      - image: account-grpc
        name: account-grpc
        imagePullPolicy: Never
        ports:
        - containerPort: 5000
Here is the gateway.yml:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: gateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 443
      name: grpc
      protocol: GRPC
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: virtual-service
spec:
  hosts:
  - "*"
  gateways:
  - gateway
  http:
  - match:
    - uri:
        prefix: /account
    route:
    - destination:
        host: account-grpc
        port:
          number: 5000
And here is the output of kubectl get service istio-ingressgateway -n istio-system -o yaml:
apiVersion: v1
kind: Service
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: ...
  creationTimestamp: "2021-08-27T01:21:21Z"
  labels:
    app: istio-ingressgateway
    install.operator.istio.io/owning-resource: unknown
    install.operator.istio.io/owning-resource-namespace: istio-system
    istio: ingressgateway
    istio.io/rev: default
    operator.istio.io/component: IngressGateways
    operator.istio.io/managed: Reconcile
    operator.istio.io/version: 1.11.1
    release: istio
  name: istio-ingressgateway
  namespace: istio-system
  resourceVersion: "4379"
  uid: b4db0e2f-0f45-4814-b187-287acb28d0c6
spec:
  clusterIP: 10.97.4.216
  clusterIPs:
  - 10.97.4.216
  externalTrafficPolicy: Cluster
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  ports:
  - name: status-port
    nodePort: 32329
    port: 15021
    protocol: TCP
    targetPort: 15021
  - name: http2
    nodePort: 31913
    port: 80
    protocol: TCP
    targetPort: 8080
  - name: https
    nodePort: 32382
    port: 443
    protocol: TCP
    targetPort: 8443
  selector:
    app: istio-ingressgateway
    istio: ingressgateway
  sessionAffinity: None
  type: LoadBalancer
status:
  loadBalancer:
    ingress:
    - ip: 127.0.0.1
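Note that the ipFamilies and ipFamilyPolicy fields in the output above show the gateway Service itself is already IPv4 single-stack. For what it's worth, those fields can also be set explicitly at install time via an IstioOperator overlay; a sketch (the file name and the overlay values are assumptions, mirroring what the Service already reports):

```yaml
# ipv4-gateway.yaml: pin the ingress gateway Service to IPv4 single-stack.
# Applied with: istioctl install -f ipv4-gateway.yaml --set profile=default
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  components:
    ingressGateways:
    - name: istio-ingressgateway
      enabled: true
      k8s:
        service:            # overlays fields onto the generated Service spec
          ipFamilies:
          - IPv4
          ipFamilyPolicy: SingleStack
```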
Changing the Gateway's port number from 443 to 80 resolved my issue. The problem was that my gRPC service was not using HTTPS. I will return if I run into trouble once I change the service to use HTTPS.
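For reference, this is what the working server block looks like with that change applied (a sketch; only the port number differs from the gateway.yml above):

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: gateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80       # was 443; the backend speaks plaintext gRPC, not TLS
      name: grpc
      protocol: GRPC
    hosts:
    - "*"
```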