How to configure an AWS Elastic IP to point to an OpenShift Origin running pod?

2/22/2018

We have set up OpenShift Origin on AWS using this handy guide. Our eventual hope is to have some pods running REST or similar services that we can access for development purposes. Thus, we don't need DNS or anything like that at this point, just a public IP with open ports that points to one of our running pods. Our first proof of concept is trying to get a jenkins (or even just httpd!) pod that's running inside OpenShift to be exposed via an allocated Elastic IP.

I'm not a network engineer by any stretch, but I was able to successfully get an Elastic IP connected to one of my OpenShift "worker" instances, which I tested by sshing to the public IP allocated to the Elastic IP. At this point we're struggling to figure out how to make a pod visible via that allocated Elastic IP, however. We've tried a kubernetes LoadBalancer service, a kubernetes Ingress, and configuring an AWS Network Load Balancer, all without being able to successfully connect to 18.2XX.YYY.ZZZ:8080 (my public IP).
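For reference, associating the Elastic IP itself was the easy part; something along these lines did it (the instance and allocation IDs below are placeholders):

$ # Attach an existing Elastic IP allocation to one of the "worker" EC2 instances
$ aws ec2 associate-address \
    --instance-id i-0123456789abcdef0 \
    --allocation-id eipalloc-0123456789abcdef0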

The most promising attempt was oc port-forward, which seemed to get at least part way there but frustratingly hangs without returning:

$ oc port-forward --loglevel=7 jenkins-2-c1hq2 8080 -n my-project
I0222 19:20:47.708145   73184 loader.go:354] Config loaded from file /home/username/.kube/config
I0222 19:20:47.708979   73184 round_trippers.go:383] GET https://ec2-18-2AA-BBB-CCC.us-east-2.compute.amazonaws.com:8443/api/v1/namespaces/my-project/pods/jenkins-2-c1hq2
....
I0222 19:20:47.758306   73184 round_trippers.go:390] Request Headers:
I0222 19:20:47.758311   73184 round_trippers.go:393]     X-Stream-Protocol-Version: portforward.k8s.io
I0222 19:20:47.758316   73184 round_trippers.go:393]     User-Agent: oc/v1.6.1+5115d708d7 (linux/amd64) kubernetes/fff65cf
I0222 19:20:47.758321   73184 round_trippers.go:393]     Authorization: Bearer Pqg7xP_sawaeqB2ub17MyuWyFnwdFZC5Ny1f122iKh8
I0222 19:20:47.800941   73184 round_trippers.go:408] Response Status: 101 Switching Protocols in 42 milliseconds
I0222 19:20:47.800963   73184 round_trippers.go:408] Response Status: 101 Switching Protocols in 42 milliseconds
Forwarding from 127.0.0.1:8080 -> 8080
Forwarding from [::1]:8080 -> 8080

(oc port-forward hangs at this point and never returns)
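To be fair, while it blocks it does claim to be forwarding, so the pod should presumably answer on loopback from a second terminal on the same machine. But since port-forward only binds 127.0.0.1 on my local box, it's no help for exposing the pod on the Elastic IP:

$ # In another terminal on the same host, while oc port-forward is still running:
$ curl -s http://127.0.0.1:8080/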

We've found a lot of information about how to get this working under GKE, but nothing that's really helpful for getting this working for OpenShift Origin on AWS. Any ideas?

Update:

So we realized that sysdig.com's blog post on deploying OpenShift Origin on AWS was missing some key AWS setup information, so, based on OpenShift Origin's Configuring for AWS page, we set the following environment variables and re-ran the ansible playbook:

$ export AWS_ACCESS_KEY_ID='AKIASTUFF'
$ export AWS_SECRET_ACCESS_KEY='STUFF'
$ export ec2_vpc_subnet='my_vpc_subnet'
$ ansible-playbook -c paramiko -i hosts openshift-ansible/playbooks/byo/config.yml --key-file ~/.ssh/my-aws-stack
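For anyone else trying this: the same credentials can also be wired into the openshift-ansible inventory directly; these variable names come from the Configuring for AWS docs (adjust to your own inventory layout):

# In the [OSEv3:vars] section of the ansible hosts file:
openshift_cloudprovider_kind=aws
openshift_cloudprovider_aws_access_key="{{ lookup('env','AWS_ACCESS_KEY_ID') }}"
openshift_cloudprovider_aws_secret_key="{{ lookup('env','AWS_SECRET_ACCESS_KEY') }}"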

I think this gets us closer, but creating a LoadBalancer service now leaves the EXTERNAL-IP perpetually stuck at <pending>:

$ oc get services
NAME         CLUSTER-IP    EXTERNAL-IP   PORT(S)             AGE
jenkins-lb   172.30.XX.YYY <pending>     8080:31338/TCP      12h
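Describing the service at least shows whether the cloud provider is trying (and failing) to create an ELB, or not trying at all, which would point back to a cloud-provider configuration problem:

$ # Look for CreatingLoadBalancer events (or their absence) at the bottom
$ oc describe service jenkins-lb -n my-project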

The section on AWS Applying Configuration Changes seems to imply that I need to use AWS instance IDs rather than hostnames to identify my nodes, but when I tried that, OpenShift Origin failed to start. Still at a loss.
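One thing we're still double-checking: as I understand it, the in-tree AWS cloud provider also wants each node's name to match the EC2 instance's private DNS name, which can be compared like so (the metadata URL is the standard EC2 one; run the curl on the node itself):

$ oc get nodes                                                    # names as OpenShift sees them
$ curl -s http://169.254.169.254/latest/meta-data/local-hostname  # name as AWS sees it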

-- Ogre Psalm33
amazon-web-services
kubernetes
openshift
openshift-origin

2 Answers

2/26/2018

It may not satisfy the "Elastic IP" requirement, but how about using an AWS cloud-provider ELB to expose the pod's IP/port through a Service of type LoadBalancer?

  1. Make sure the AWS cloud provider is configured for the cluster (see References below).
  2. Create a Service for the pod(s) with type: LoadBalancer.

For instance, to expose the Kubernetes Dashboard via an AWS ELB:

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  type: LoadBalancer   # <----- the key line
  ports:
    - port: 443
      targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard

The Service will then be exposed as an ELB, and the pod can be accessed via the ELB's public DNS name, in this case a53e5811bf08011e7bae306bb783bb15-953748093.us-west-1.elb.amazonaws.com:

$ kubectl (oc) get svc kubernetes-dashboard -n kube-system -o wide
NAME                   TYPE           CLUSTER-IP      EXTERNAL-IP                                                              PORT(S)         AGE       SELECTOR
kubernetes-dashboard   LoadBalancer   10.100.96.203   a53e5811bf08011e7bae306bb783bb15-953748093.us-west-1.elb.amazonaws.com   443:31636/TCP   16m       k8s-app=kubernetes-dashboard
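Assuming the ELB's security group allows the port, the dashboard should then answer directly on the ELB DNS name (-k because the dashboard's certificate is self-signed):

$ curl -k https://a53e5811bf08011e7bae306bb783bb15-953748093.us-west-1.elb.amazonaws.com/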

References

  1. K8S AWS Cloud Provider Notes
  2. Reference Architecture OpenShift Container Platform on Amazon Web Services
  3. Deploying OpenShift Container Platform 3.5 on Amazon Web Services
  4. Configuring for AWS
-- mon
Source: StackOverflow

2/27/2018

Check this guide out: https://github.com/dwmkerr/terraform-aws-openshift

It has some significant advantages vs. the one you're referring to in your post. Additionally, it has a clear Terraform spec that you can modify to use an Elastic IP (I haven't tried that myself, but it should work).

Another way to "lock down" access to the installation is to re-code the assignment of the public URL for the master instance in the Terraform script, e.g. to a domain that you own (the default script sets it to an external IP-based value with "xip.io" appended, which works great for testing). Then set up a basic ALB that forwards HTTPS 443 and 8443 to the master instance that the install creates (you can do this manually after the install completes; you'll also need a second dummy subnet, and you'll have to dummy-up the health check as well), and link the ALB to your domain via Route53. You can even use free AWS wildcard certs (issued via ACM) with this approach.
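For illustration, a rough sketch of that manual ALB wiring with the AWS CLI (all IDs/ARNs below are placeholders; the Terraform equivalent is analogous):

$ # Target group for the master's console port; health check pointed at /healthz
$ aws elbv2 create-target-group --name os-master-8443 --protocol HTTPS --port 8443 \
    --vpc-id vpc-0123456789abcdef0 \
    --health-check-protocol HTTPS --health-check-path /healthz
$ aws elbv2 register-targets --target-group-arn <tg-arn> --targets Id=i-0123456789abcdef0
$ # An ALB requires at least two subnets, hence the dummy second subnet mentioned above
$ aws elbv2 create-load-balancer --name os-master-alb \
    --subnets subnet-0123456789abcdef0 subnet-0fedcba9876543210
$ aws elbv2 create-listener --load-balancer-arn <alb-arn> --protocol HTTPS --port 443 \
    --certificates CertificateArn=<acm-cert-arn> \
    --default-actions Type=forward,TargetGroupArn=<tg-arn>
$ # Repeat the listener for port 8443, then point your Route53 record at the ALB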

-- SVUser
Source: StackOverflow