I cannot access Kibana via nginx ingress and Route53

4/26/2020

I have deployed the nginx ingress controller with an internal load balancer and ExternalDNS on my EKS cluster, and I tried to expose Kibana with a hostname registered in a Route53 private hosted zone (my-hostname.com). But when I access it in the browser over the VPN, it shows "site can't be reached". I need to know what I did wrong.
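
A first step in narrowing this down is to separate a DNS failure from a connectivity failure. A sketch of checks to run from the VPN client (the 10.0.0.2 resolver IP and the ELB address are placeholders, not values taken from this setup):

# Does the private record resolve at all over the VPN?
dig +short kibana.my-hostname.com

# Does it resolve when queried directly against the VPC resolver?
# (VPC base CIDR + 2; 10.0.0.2 is a placeholder)
dig +short kibana.my-hostname.com @10.0.0.2

# Bypass DNS entirely: can the internal ELB be reached?
# Find the ELB hostname with: kubectl get svc internal-ingress
# <ELB_PRIVATE_IP> is a placeholder resolved from inside the VPC.
curl -v --resolve kibana.my-hostname.com:80:<ELB_PRIVATE_IP> \
  http://kibana.my-hostname.com/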

Here are all the resources:

Ingress controller:

apiVersion: v1
kind: Service
metadata:
  name: default-http-backend
spec:
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app: default-http-backend

---

apiVersion: apps/v1
kind: Deployment
metadata:
  name: default-http-backend
spec:
  selector:
    matchLabels:
      app: default-http-backend
  template:
    metadata:
      labels:
        app: default-http-backend
    spec:
      containers:
      - name: default-http-backend
        image: gcr.io/google_containers/defaultbackend:1.3

---

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: internal-ingress
  name: internal-ingress-controller
spec:
  replicas: 1
  selector:
    matchLabels:
      app: internal-ingress
  template:
    metadata:
      labels:
        app: internal-ingress
    spec:
      containers:
      - args:
        - /nginx-ingress-controller
        - --default-backend-service=$(POD_NAMESPACE)/default-http-backend
        - --configmap=$(POD_NAMESPACE)/internal-ingress-configuration
        - --tcp-services-configmap=$(POD_NAMESPACE)/internal-tcp-services
        - --udp-services-configmap=$(POD_NAMESPACE)/internal-udp-services
        - --annotations-prefix=nginx.ingress.kubernetes.io
        - --ingress-class=internal-ingress
        - --publish-service=$(POD_NAMESPACE)/internal-ingress
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.namespace
        image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.11.0
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
          initialDelaySeconds: 10
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
        name: internal-ingress-controller
        ports:
        - containerPort: 80
          name: http
          protocol: TCP
        - containerPort: 443
          name: https
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1

---

apiVersion: v1
kind: Service
metadata:
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: "3600"
    service.beta.kubernetes.io/aws-load-balancer-internal: "0.0.0.0/0"
    service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: '*'
  labels:
    app: internal-ingress
  name: internal-ingress
spec:
  externalTrafficPolicy: Cluster
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: http
  - name: https
    port: 443
    protocol: TCP
    targetPort: https
  selector:
    app: internal-ingress
  sessionAffinity: None
  type: LoadBalancer
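
One thing worth checking: the Service above enables PROXY protocol on the ELB (aws-load-balancer-proxy-protocol: '*'), and nginx only understands that if the internal-ingress-configuration ConfigMap referenced in the controller args enables it as well; otherwise every connection from the ELB is misparsed, which also presents as "site can't be reached". That ConfigMap is not shown here, so this is a minimal sketch of what it would need, assuming it lives in the controller's namespace:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  # must match --configmap=$(POD_NAMESPACE)/internal-ingress-configuration
  name: internal-ingress-configuration
data:
  # required because the Service annotation makes the ELB send the
  # PROXY protocol header on every connection
  use-proxy-protocol: "true"
EOF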

ExternalDNS:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: external-dns

---

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: external-dns
rules:
- apiGroups: [""]
  resources: ["services","endpoints","pods"]
  verbs: ["get","watch","list"]
- apiGroups: ["extensions"]
  resources: ["ingresses"]
  verbs: ["get","watch","list"]
- apiGroups: [""]
  resources: ["nodes"]
  verbs: ["list","watch"]

---

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: external-dns-viewer
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: external-dns
subjects:
- kind: ServiceAccount
  name: external-dns
  namespace: default

---

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: external-dns-private
  name: external-dns-private
spec:
  replicas: 1
  selector:
    matchLabels:
      app: external-dns-private
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: external-dns-private
    spec:
      serviceAccountName: external-dns
      containers:
      - args:
        - --source=ingress
        - --domain-filter=my-hostname.com
        - --provider=aws
        - --registry=txt
        - --txt-owner-id=dev.k8s.nexus
        - --annotation-filter=kubernetes.io/ingress.class=internal-ingress
        - --aws-zone-type=private
        image: registry.opensource.zalan.do/teapot/external-dns:latest
        name: external-dns-private

Ingress resource:

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: "internal-ingress"
  labels:
    app: app
  name: app-private
spec:
  rules:
  - host: kibana.my-hostname.com
    http:
      paths:
      - backend:
          serviceName: kibana
          servicePort: 5601

Kibana service:

apiVersion: v1
kind: Service
metadata:
  name: kibana
spec:
  selector:
    app: kibana
  ports:
  - name: client
    port: 5601
    protocol: TCP
  type: ClusterIP

I have checked the record sets of my private hosted zone and confirmed that kibana.my-hostname.com has been added, but I still cannot access it.
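
For reference, a way to confirm the rest of the chain works from inside the VPC, which isolates the VPN leg (the hosted-zone ID below is a placeholder):

# What did ExternalDNS actually create?
aws route53 list-resource-record-sets \
  --hosted-zone-id Z0XXXXXXXXXXXX \
  --query "ResourceRecordSets[?Name=='kibana.my-hostname.com.']"

# From a pod inside the cluster (and therefore inside the VPC), does
# the name resolve and does the ingress answer?
kubectl run dns-test --rm -it --restart=Never --image=busybox:1.31 -- \
  nslookup kibana.my-hostname.com
kubectl run curl-test --rm -it --restart=Never --image=curlimages/curl -- \
  curl -sv http://kibana.my-hostname.com/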

-- touati ahmed
aws-eks
external-dns
kubernetes
kubernetes-ingress
nginx-ingress

1 Answer

4/27/2020

Route53 private hosted zones only answer queries that come from the VPCs associated with the zone. You cannot resolve the domain from outside those VPCs.
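
You can see this from the resolver's perspective (illustrative only):

# From outside the VPC (any public resolver), the private record
# simply does not exist:
dig +short kibana.my-hostname.com @8.8.8.8   # empty / NXDOMAIN

# From an instance inside an associated VPC, the VPC resolver answers
# with the internal ELB addresses:
dig +short kibana.my-hostname.com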

To solve the issue, change your zone to public, or keep the VPN and use Simple AD to forward DNS requests to your private zone, as described in the documentation referenced below.
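
If you keep the zone private, the key point is that the VPN's DNS queries must reach the VPC resolver. Besides Simple AD, a Route53 Resolver inbound endpoint achieves the same thing; a hypothetical sketch (all IDs are placeholders):

aws route53resolver create-resolver-endpoint \
  --name private-zone-inbound \
  --creator-request-id "$(date +%s)" \
  --direction INBOUND \
  --security-group-ids sg-0123456789abcdef0 \
  --ip-addresses SubnetId=subnet-aaaa1111 SubnetId=subnet-bbbb2222
# Then point the VPN's DNS forwarding for my-hostname.com at the
# endpoint's IP addresses so queries land on the private hosted zone.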

References:

Working with private hosted zones

-- KoopaKiller
Source: StackOverflow