I have a development cluster on AWS deployed with Kops (3 worker nodes and 3 master nodes in the eu-central-1 region), and I'm trying to set up an Ingress to expose my app to the outside world.
I followed this documentation:
Basically, I deployed the Skipper ingress, kube-ingress-aws-controller, and ExternalDNS to my cluster, and everything was working. Below are the manifests for the deployments mentioned above.
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: skipper-ingress
  namespace: kube-system
  labels:
    component: ingress
spec:
  selector:
    matchLabels:
      component: ingress
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      name: skipper-ingress
      labels:
        component: ingress
        application: skipper
    spec:
      hostNetwork: true
      serviceAccountName: skipper-ingress
      containers:
      - name: skipper-ingress
        image: registry.opensource.zalan.do/pathfinder/skipper:v0.11.17
        ports:
        - name: ingress-port
          containerPort: 9999
          hostPort: 9999
        - name: metrics-port
          containerPort: 9911
        args:
        - "skipper"
        - "-kubernetes"
        - "-kubernetes-in-cluster"
        - "-address=:9999"
        - "-proxy-preserve-host"
        - "-serve-host-metrics"
        - "-enable-ratelimits"
        - "-experimental-upgrade"
        - "-metrics-exp-decay-sample"
        - "-lb-healthcheck-interval=3s"
        - "-metrics-flavour=codahale,prometheus"
        - "-enable-connection-metrics"
        resources:
          requests:
            cpu: 200m
            memory: 200Mi
        readinessProbe:
          httpGet:
            path: /kube-system/healthz
            port: 9999
          initialDelaySeconds: 5
          timeoutSeconds: 5
And regarding the Skipper RBAC:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: skipper-ingress
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: skipper-ingress
rules:
- apiGroups: ["extensions"]
  resources: ["ingresses"]
  verbs: ["get", "list"]
- apiGroups: [""]
  resources: ["namespaces", "services", "endpoints", "pods"]
  verbs: ["get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: skipper-ingress
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: skipper-ingress
subjects:
- kind: ServiceAccount
  name: skipper-ingress
  namespace: kube-system
Then the kube-ingress-aws-controller deployment:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: kube-ingress-aws-controller
  namespace: kube-system
  annotations:
    kubernetes.io/ingress.class: "skipper"
  labels:
    application: kube-ingress-aws-controller
    component: ingress
spec:
  replicas: 1
  selector:
    matchLabels:
      application: kube-ingress-aws-controller
      component: ingress
  template:
    metadata:
      labels:
        application: kube-ingress-aws-controller
        component: ingress
    spec:
      serviceAccountName: kube-ingress-aws
      containers:
      - name: controller
        image: registry.opensource.zalan.do/teapot/kube-ingress-aws-controller:latest
        env:
        - name: AWS_REGION
          value: eu-central-1
        args:
        - "--redirect-http-to-https"
And regarding the kube-ingress-aws-controller RBAC:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: kube-ingress-aws
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: ingress-controller
rules:
- apiGroups:
  - extensions
  resources:
  - ingresses
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - extensions
  resources:
  - ingresses/status
  verbs:
  - patch
  - update
- apiGroups:
  - ""
  resources:
  - configmaps
  verbs:
  - get
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: ingress-controller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: ingress-controller
subjects:
- kind: ServiceAccount
  name: default
  namespace: kube-system
- kind: ServiceAccount
  name: default
  namespace: default
- kind: ServiceAccount
  name: kube-ingress-aws
  namespace: kube-system
And the ExternalDNS deployment:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: external-dns
  namespace: kube-system
spec:
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: external-dns
  template:
    metadata:
      labels:
        app: external-dns
    spec:
      serviceAccountName: external-dns
      containers:
      - name: external-dns
        image: registry.opensource.zalan.do/teapot/external-dns:latest
        args:
        - --source=service
        - --source=ingress
        - --domain-filter=alchemyone.eu # makes ExternalDNS see only hosted zones matching this domain; omit to process all available hosted zones
        - --provider=aws
        - --policy=upsert-only # prevents ExternalDNS from deleting any records; omit to enable full synchronization
        - --aws-zone-type=public # only look at public hosted zones (valid values: public, private, or empty for both)
        - --registry=txt
        - --txt-owner-id=my-hostedzone-identifier
      securityContext:
        fsGroup: 65534 # so ExternalDNS can read Kubernetes and AWS token files
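For clarity on what the --domain-filter flag does above: ExternalDNS only manages records whose hosts fall under the given domain. The matching logic can be sketched roughly like this (an illustration with a hypothetical helper, not ExternalDNS's actual code):

```python
# Sketch of how a domain filter narrows which hosts get DNS records.
# matches_domain_filter is a hypothetical helper for illustration only.

def matches_domain_filter(record_host: str, domain_filter: str) -> bool:
    """A host matches if it equals the filter domain or is a subdomain of it."""
    return record_host == domain_filter or record_host.endswith("." + domain_filter)

# Example hosts discovered from Ingress/Service resources:
hosts = ["my.example.com", "app.alchemyone.eu", "alchemyone.eu"]

managed = [h for h in hosts if matches_domain_filter(h, "alchemyone.eu")]
print(managed)  # only the hosts under alchemyone.eu would be synced to Route 53
```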
And regarding the external DNS RBAC:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: external-dns
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: external-dns
rules:
- apiGroups: [""]
  resources: ["services"]
  verbs: ["get", "watch", "list"]
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "watch", "list"]
- apiGroups: ["extensions"]
  resources: ["ingresses"]
  verbs: ["get", "watch", "list"]
- apiGroups: [""]
  resources: ["nodes"]
  verbs: ["list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: external-dns-viewer
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: external-dns
subjects:
- kind: ServiceAccount
  name: external-dns
  namespace: kube-system
After that, I verified that everything was working, so I deployed the frontend of my app:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
  labels:
    app: frontend
    env: prod
spec:
  replicas: 1
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
      - name: frontend
        image: my-react-frontend:latest
        imagePullPolicy: Always
        ports:
        - containerPort: 8080
        readinessProbe:
          httpGet:
            path: /
            port: 8080
          initialDelaySeconds: 30
          periodSeconds: 60
          timeoutSeconds: 5
          successThreshold: 2
        livenessProbe:
          httpGet:
            path: /
            port: 8080
          initialDelaySeconds: 90
          periodSeconds: 60
          timeoutSeconds: 5
          failureThreshold: 2
      hostname: frontend
      imagePullSecrets:
      - name: regcred
---
apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: frontend
spec:
  type: ClusterIP
  ports:
  - port: 8080
    name: "frontend-service"
  selector:
    app: frontend
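As I understand it, the Service only routes to Pods whose labels satisfy its selector; if no ready Pod matches, the Service has no endpoints and the ingress would answer 503. The matching rule is just a subset check, roughly (an illustrative sketch, not Kubernetes source code):

```python
# Sketch of Service-to-Pod matching: a Pod backs a Service when every
# selector key/value pair is present in the Pod's labels. Illustration only.

def selector_matches(selector: dict, pod_labels: dict) -> bool:
    return all(pod_labels.get(k) == v for k, v in selector.items())

service_selector = {"app": "frontend"}      # from the Service above
pod_labels = {"app": "frontend"}            # from the Deployment's pod template
print(selector_matches(service_selector, pod_labels))  # True -> Pod becomes an endpoint
```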
Finally, I deployed the Ingress resource:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: "test-cluster-ingress"
  annotations:
    kubernetes.io/ingress.class: "skipper"
  labels:
    app: foo-app
spec:
  rules:
  - host: my.example.com
    http:
      paths:
      - backend:
          serviceName: frontend
          servicePort: 8080
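My understanding is that Skipper matches the request's Host header against the Ingress rules and forwards to the named Service and port; very roughly (a hypothetical sketch, not Skipper's actual routing code):

```python
# Rough sketch of host-based ingress routing: match the Host header against
# the configured rules, then forward to the named service/port. Illustration only.

routes = {"my.example.com": ("frontend", 8080)}  # derived from the Ingress rule above

def resolve(host: str):
    """Return the (service, port) backend for a host, or None for no match."""
    return routes.get(host)

print(resolve("my.example.com"))  # ('frontend', 8080)
```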
As expected, a record set for "my.example.com" was created in AWS Route 53 as an alias of my ALB. However, when I go to the host (my.example.com), my frontend is not displayed; instead I get "503 Service Temporarily Unavailable".
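From what I've read, a 503 from the proxy usually means the route has no reachable backends. These are the kinds of checks that should narrow it down (a sketch; it assumes the frontend manifests above are deployed in the default namespace):

```shell
# Check that the frontend Pod is Running and Ready (readiness gates endpoints).
kubectl get pods -l app=frontend -o wide

# Check that the Service actually has endpoints; an empty list would explain a 503.
kubectl get endpoints frontend

# Inspect the Ingress status and the skipper logs for routing errors.
kubectl describe ingress test-cluster-ingress
kubectl -n kube-system logs ds/skipper-ingress --tail=50
```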
Can you tell me what I am doing wrong? Thanks in advance for the help!