Kubernetes ingress in AWS

12/18/2018

Please help me sort out access to my simple application. I created a YAML file with the application:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: myapp-test
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: gcr.io/kubernetes-e2e-test-images/echoserver:2.1
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: myapp-service
spec:
  selector:
    app: myapp
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: 8080
  type: NodePort
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginx-ingress
spec:
  rules:
  - host: myapp.com
    http:
      paths:
      - path: /
        backend:
          serviceName: myapp-service
          servicePort: 80
      - path: /hello
        backend:
          serviceName: myapp-service
          servicePort: 80
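
A minimal sketch of how these manifests would typically be applied and checked (myapp.yml is a hypothetical filename for the three objects above):

kubectl apply -f myapp.yml
kubectl get deployment,service,ingress -o wide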

Then I created a k8s cluster via kops like this; all the k8s services came up and I can SSH into the master:

kops create cluster \
--node-count=2 \
--node-size=t2.micro \
--master-size=t2.micro \
--master-count=1 \
--zones=us-east-1a \
--name=${KOPS_CLUSTER_NAME}
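
A quick way to confirm the cluster itself is healthy before debugging the app (a sketch using standard kops/kubectl commands):

kops validate cluster --name ${KOPS_CLUSTER_NAME}
kubectl get nodes -o wide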

In the end, I can't reach the application on port 80; the connection is refused. Can someone tell me what the problem is? The YAML above works fine, but only in a minikube environment.

-- Stefan
amazon-web-services
kops
kubernetes
kubernetes-ingress

1 Answer

12/19/2018

Indeed you have created an Ingress resource, but I presume you have not first deployed the NGINX Ingress Controller to your self-managed cluster on AWS. How to do this in general is explained here.
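
For example, a generic deployment sketch (the exact manifest URL comes from the kubernetes/ingress-nginx deployment guide and has changed across releases, so treat it as an assumption to verify):

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/mandatory.yaml
kubectl get pods -n ingress-nginx --watch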

In the case of a Kubernetes cluster bootstrapped with kops, things are more complex: it requires you to modify the existing cluster to use a dedicated kops add-on, kube-ingress-aws-controller, as explained on its GitHub project page here.
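
Roughly, the kops side of that change looks like this (a sketch only; the exact IAM actions to grant under spec.additionalPolicies.node and the controller manifest itself are listed in the kube-ingress-aws-controller README):

kops edit cluster ${KOPS_CLUSTER_NAME}
kops update cluster ${KOPS_CLUSTER_NAME} --yes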

In its current form your app can be reached only via a node's (AWS instance's) external IP on a port assigned from the default NodePort range (30000-32767). You can check the currently assigned port with kubectl get svc myapp-service, but reaching it requires opening that port in the firewall first (the default inbound rules deny all traffic except SSH). Based on your deployment/service manifest files:

NAME            TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
myapp-service   NodePort   100.64.187.80   <none>        80:32076/TCP   37m
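
One way to open that NodePort from the CLI (a sketch; sg-0123456789abcdef0 is a placeholder for the Security Group attached to the worker nodes, and 0.0.0.0/0 should only be used for testing):

aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp \
  --port 32076 \
  --cidr 0.0.0.0/0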

With port 32076 opened in the inbound rules of the Security Group assigned to my instance, I can now reach the app on its NodePort:

curl <node_external_ip>:32076

Hostname: myapp-test-f87bcbd44-8nxpn
Pod Information:
-no pod information available-
Server values:
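
Once an ingress controller is actually deployed and exposed, the Ingress rules above can be exercised by sending the expected Host header; <ingress-endpoint> is a placeholder for the controller's load balancer hostname or a node address:

curl -H "Host: myapp.com" http://<ingress-endpoint>/hello
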
-- Nepomucen
Source: StackOverflow