Hello and thank you for taking the time to read my question.
First, I have an EKS cluster set up to use public and private subnets.
I generated the cluster using CloudFormation as described at https://docs.aws.amazon.com/eks/latest/userguide/getting-started-console.html#vpc-create
I then initialized Helm by creating a service account for Tiller via kubectl apply -f (file below):
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: tiller
  namespace: kube-system
and then helm init --service-account=tiller
followed by helm repo update
I then used helm to install the nginx-ingress controller via:
helm install --name nginx-ingress \
--namespace nginx-project \
stable/nginx-ingress \
-f nginx-ingress-values.yaml
where my nginx-ingress-values.yaml is:
controller:
  service:
    annotations:
      service.beta.kubernetes.io/aws-load-balancer-access-log-emit-interval: "60"
      service.beta.kubernetes.io/aws-load-balancer-access-log-enabled: "true"
      service.beta.kubernetes.io/aws-load-balancer-access-log-s3-bucket-name: "abc-us-west-2-elb-access-logs"
      service.beta.kubernetes.io/aws-load-balancer-access-log-s3-bucket-prefix: "vault-cluster/nginx"
      service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "arn:aws:acm:us-west-2:123456789:certificate/bb35b4c4-..."
      service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "http"
      service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "https"
      service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: "60"
So far everything looks great: I see the ELB get created and hooked up to use ACM for HTTPS.
I then install kubernetes-dashboard via:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml
and I can access it via kubectl proxy
But when I add an ingress rule for dashboard via: kubectl apply -f dashboard-ingress.yaml
Where dashboard-ingress.yaml is:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: dashboard
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
    nginx.ingress.kubernetes.io/force-ssl-redirect: "false"
  namespace: kube-system
spec:
  # tls:
  # - hosts:
  #   - abc.def.com
  rules:
  - host: abc.def.com
    http:
      paths:
      - path: /dashboard
        backend:
          serviceName: kubernetes-dashboard
          servicePort: 8443
then when I try to go to http://abc.def.com/ I get stuck in an infinite redirect loop.
The same happens for https://abc.def.com/ and http://abc.def.com/dashboard.
I am new to Kubernetes and very stuck on this one. Any help would be GREATLY appreciated.
UPDATE - 9/5/2019: When I take out the tls block from the ingress.yaml, I then get
to the nginx backend, but http://abc.def.com forwards me to https://abc.def.com, where I get a 502 Bad Gateway from openresty/1.15.8.1.
When I then try to go to https://abc.def.com/dashboard,
I get "404 page not found", which, as I understand it, is a response from the nginx-ingress controller.
UPDATE - 9/6/2019: Thanks so much to mk_sta for the answer below, which helped me understand what I was missing.
For anyone reading this in the future: my nginx-ingress install via helm works as expected, but my kubernetes-dashboard install was missing some key annotations. In the end I was able to configure helm to install kubernetes-dashboard via:
helm install --name kubernetes-dashboard \
--namespace kube-system \
stable/kubernetes-dashboard \
-f kubernetes-dashboard-values.yaml
where kubernetes-dashboard-values.yaml is:
ingress:
  enabled: true
  hosts: [abc.def.com]
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
    nginx.ingress.kubernetes.io/force-ssl-redirect: "false"
    nginx.ingress.kubernetes.io/rewrite-target: /$2
  paths: [/dashboard(/|$)(.*)]
I can then access dashboard at http://abc.def.com/dashboard/ and https://abc.def.com/dashboard/
For some reason it does not work if I leave off the trailing slash, however.
This is good enough for me at the moment.
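My guess is that the trailing-slash quirk comes from how browsers resolve the dashboard's relative asset URLs rather than from the ingress itself; Python's urljoin follows the same resolution rules as a browser and illustrates the difference (the asset filename below is made up for the example):

```python
from urllib.parse import urljoin

# With the trailing slash, relative assets stay under /dashboard/ and
# still match the ingress path /dashboard(/|$)(.*):
print(urljoin("https://abc.def.com/dashboard/", "static/app.js"))
# -> https://abc.def.com/dashboard/static/app.js

# Without it, the last path segment is replaced, so assets escape the
# /dashboard prefix and fall through to the controller's 404 backend:
print(urljoin("https://abc.def.com/dashboard", "static/app.js"))
# -> https://abc.def.com/static/app.js
```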
This can happen if your cluster doesn't have nginx as the default ingress class and your ingress manifest doesn't specify one. You can try one of the following:
- Upgrade your NGINX-ingress installation with controller.ingressClass set to nginx (with this, all ingresses created will use NGINX-ingress by default).
- Add the kubernetes.io/ingress.class: nginx annotation to your ingress YAML to specify that you want NGINX-ingress to handle it.
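For example, the second option would look roughly like this in the dashboard ingress metadata (a sketch; the name, namespace, and other annotations are taken from the question):

```yaml
metadata:
  name: dashboard
  namespace: kube-system
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
    nginx.ingress.kubernetes.io/force-ssl-redirect: "false"
```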
It seems to me that you've used the wrong location path /dashboard in your original Ingress configuration. Moreover, the K8s dashboard UI endpoint is exposed on port 443 by default through the corresponding K8s Service resource, unless you've customized this setting:
ports:
- port: 443
  protocol: TCP
  targetPort: 8443
In order to get proper path-based routing, override the existing parameters with the following arguments:
paths:
- path: /
  backend:
    serviceName: kubernetes-dashboard
    servicePort: 443
If you decide to access the K8s dashboard UI through an indirect path URL (https://abc.def.com/dashboard), you can apply rewrite rules to transparently change part of the original URL and forward requests to the real target path. The NGINX Ingress controller adds this functionality via the nginx.ingress.kubernetes.io/rewrite-target annotation:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: dashboard
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
    nginx.ingress.kubernetes.io/force-ssl-redirect: "false"
    nginx.ingress.kubernetes.io/rewrite-target: /$2
  namespace: kube-system
spec:
  # tls:
  # - hosts:
  #   - abc.def.com
  rules:
  - host: abc.def.com
    http:
      paths:
      - path: /dashboard(/|$)(.*)
        backend:
          serviceName: kubernetes-dashboard
          servicePort: 443
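To make the rewrite-target annotation above concrete: the controller substitutes the numbered capture groups from the path regex into the rewrite target. Here is a rough sketch of that substitution using plain Python re (purely illustrative, not the controller's actual implementation):

```python
import re

# Path regex and rewrite target from the Ingress manifest above.
PATH = re.compile(r"/dashboard(/|$)(.*)")
REWRITE_TARGET = "/$2"  # i.e. keep only the second capture group

def rewrite(request_path: str) -> str:
    m = PATH.match(request_path)
    if not m:
        return request_path  # unmatched paths are not rewritten
    return REWRITE_TARGET.replace("$2", m.group(2))

print(rewrite("/dashboard"))                # -> "/"
print(rewrite("/dashboard/"))               # -> "/"
print(rewrite("/dashboard/assets/app.js"))  # -> "/assets/app.js"
```

So any request under /dashboard/ reaches the dashboard Service with the prefix stripped, which is exactly what the dashboard backend expects.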