Sorry in advance if my terminology isn't perfect; I'm learning Kubernetes right now.
I have a self-managed Kubernetes cluster on a set of AWS instances, with one master node and five worker nodes, all running Ubuntu 18.04. The nodes are in a VPC and I SSH into them through a bastion host. For the time being, I've also given all of the nodes external IPs, just to make testing easier. I also have a domain, let's call it xxx.example.org, pointed at the current external IP of the master node.
I set up Kubernetes using Kubespray and then proceeded to install Istio (using istioctl) and set up the Ingress Gateway per the official docs here and here.
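(For reference, a typical Istio 1.5 install with istioctl looks roughly like the sketch below; the demo profile is just an example, not necessarily the profile I actually used.)
# Download Istio 1.5.2 and install it with a profile that enables istio-ingressgateway.
# The "demo" profile here is only an example.
curl -L https://istio.io/downloadIstio | ISTIO_VERSION=1.5.2 sh -
cd istio-1.5.2
bin/istioctl manifest apply --set profile=demo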
When I run kubectl get svc -n istio-system istio-ingressgateway, the EXTERNAL-IP is always <pending>:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
istio-ingressgateway LoadBalancer 10.233.3.209 <pending> 15020:30051/TCP,80:32231/TCP,443:30399/TCP,15029:31406/TCP,15030:32078/TCP,15031:30050/TCP,15032:30204/TCP,31400:31912/TCP,15443:31071/TCP 3m1s
I am able to access the services in a browser using IP:32231/headers or xxx.example.org:32231/headers.
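(In other words, a request like the following against the HTTP NodePort shown in the output above works:)
# httpbin answers on the HTTP NodePort (32231 here), via node IP or the domain.
curl http://xxx.example.org:32231/headers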
I used the following command to configure my Gateway and VirtualService for the httpbin and Bookinfo projects referenced in the Istio docs:
kubectl apply -f - <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: httpbin-gateway
spec:
  selector:
    istio: ingressgateway # use Istio default gateway implementation
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: httpbin
spec:
  hosts:
  - "*"
  gateways:
  - httpbin-gateway
  http:
  - match:
    - uri:
        prefix: /status
    - uri:
        prefix: /delay
    - uri:
        prefix: /headers
    route:
    - destination:
        port:
          number: 8000
        host: httpbin
  - match:
    - uri:
        exact: /productpage
    - uri:
        prefix: /static
    - uri:
        exact: /login
    - uri:
        exact: /logout
    - uri:
        prefix: /api/v1/products
    route:
    - destination:
        host: productpage
        port:
          number: 9080
EOF
Seeing as this is a self-managed cluster, is there any way to get an external IP for the cluster? If not, how would I go about modifying my current configuration so that the pages are accessible from xxx.example.org rather than xxx.example.org:32231?
EDIT #1
I did try to set up an NLB on AWS by following this documentation and this guide. Unfortunately, this didn't change anything: the EXTERNAL-IP is still <pending>. After doing that, I deployed a new ingress gateway, which looked like this:
apiVersion: v1
kind: Service
metadata:
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
  labels:
    app: istio-ingressgateway-2
    istio: ingressgateway-2
    operator.istio.io/component: IngressGateways
    operator.istio.io/managed: Reconcile
    operator.istio.io/version: 1.5.2
    release: istio
  name: istio-ingressgateway-2
  namespace: istio-system
spec:
  ports:
  - name: status-port
    nodePort: 30625
    port: 15020
    protocol: TCP
    targetPort: 15020
  - name: http2
    nodePort: 32491
    port: 80
    protocol: TCP
    targetPort: 80
  - name: https
    nodePort: 30466
    port: 443
    protocol: TCP
    targetPort: 443
  - name: kiali
    nodePort: 32034
    port: 15029
    protocol: TCP
    targetPort: 15029
  - name: prometheus
    nodePort: 30463
    port: 15030
    protocol: TCP
    targetPort: 15030
  - name: grafana
    nodePort: 31176
    port: 15031
    protocol: TCP
    targetPort: 15031
  - name: tracing
    nodePort: 32040
    port: 15032
    protocol: TCP
    targetPort: 15032
  - name: tcp
    nodePort: 32412
    port: 31400
    protocol: TCP
    targetPort: 31400
  - name: tls
    nodePort: 30411
    port: 15443
    protocol: TCP
    targetPort: 15443
  selector:
    app: istio-ingressgateway-2
    istio: ingressgateway-2
  type: LoadBalancer
I also changed my httpbin-gateway to use ingressgateway-2. This failed to load anything, even on port 32231.
This issue can be fixed by adding annotations to your LoadBalancer service manifest.
According to the Amazon documentation:
Amazon EKS supports the Network Load Balancer and the Classic Load Balancer for pods running on Amazon EC2 instance worker nodes through the Kubernetes service of type LoadBalancer. Classic Load Balancers and Network Load Balancers are not supported for pods running on AWS Fargate (Fargate). For Fargate ingress, we recommend that you use the ALB Ingress Controller on Amazon EKS (minimum version v1.1.4). The configuration of your load balancer is controlled by annotations that are added to the manifest for your service. By default, Classic Load Balancers are used for LoadBalancer type services. To use the Network Load Balancer instead, apply the following annotation to your service:
service.beta.kubernetes.io/aws-load-balancer-type: nlb
For an example service manifest that specifies a load balancer, see Type LoadBalancer in the Kubernetes documentation. For more information about using Network Load Balancer with Kubernetes, see Network Load Balancer support on AWS in the Kubernetes documentation.
By default, services of type LoadBalancer create public-facing load balancers. To use an internal load balancer, apply the following annotation to your service:
service.beta.kubernetes.io/aws-load-balancer-internal: "true"
For internal load balancers, your Amazon EKS cluster must be configured to use at least one private subnet in your VPC. Kubernetes examines the route table for your subnets to identify whether they are public or private. Public subnets have a route directly to the internet using an internet gateway, but private subnets do not.
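As a quick test on your existing setup, the annotation can also be added directly to the ingress gateway service that is already running. A minimal sketch, assuming the default istio-ingressgateway service in istio-system from your first output, and assuming the cluster's AWS cloud provider integration is what ends up provisioning the load balancer:
# Add the NLB annotation to the existing ingress gateway service.
kubectl -n istio-system annotate service istio-ingressgateway \
  service.beta.kubernetes.io/aws-load-balancer-type=nlb --overwrite
# Watch whether an external hostname/IP gets assigned to the service.
kubectl -n istio-system get svc istio-ingressgateway -w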
To add one or more annotations like that to your Istio ingress configuration, you can follow an example from this article.
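Since your gateway service carries operator.istio.io labels, another option is to set the annotation in an IstioOperator overlay so it persists across reinstalls. This is only a sketch, assuming Istio 1.5 and the default gateway name; the file name is arbitrary:
# ingress-nlb.yaml
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  components:
    ingressGateways:
    - name: istio-ingressgateway
      enabled: true
      k8s:
        serviceAnnotations:
          service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
Applied with istioctl manifest apply -f ingress-nlb.yaml on Istio 1.5.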
Hope it helps.