Application load balancer distributing between Elastic Beanstalk and Kubernetes

9/28/2019

We need to be able to point requests at different applications based on the URL path. In our case, we have an Elastic Beanstalk application for one service and a Kubernetes cluster for another. We need to be able to route requests such as api.example.com/service1 to Elastic Beanstalk and api.example.com/service2 to Kubernetes.
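For illustration, the routing decision we want the load balancer to make can be sketched as a small shell function (the backend names are just labels we made up, not real AWS resources):

```shell
# Sketch of the path-based routing we want: pick a backend by path prefix.
# "elastic-beanstalk", "kubernetes" and "default" are illustrative labels.
route() {
  case "$1" in
    /service1*) echo elastic-beanstalk ;;  # EB environment's target group
    /service2*) echo kubernetes ;;         # k8s cluster's target group
    *)          echo default ;;            # fall through to a default action
  esac
}

route /service1/users   # prints: elastic-beanstalk
route /service2/jobs    # prints: kubernetes
```

In ALB terms, each branch of the `case` corresponds to a listener rule with a path-pattern condition forwarding to a different target group.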

We came across this question/answer on SO: Load balancing across different Elastic Beanstalk applications

After following the steps to attach the new application load balancer's target group to the EB environment's auto scaling group, requests to /service1 do work, but only about half the time. The rest of the time the requests simply time out and no response is received.

To rule out security group issues, we temporarily opened the Elastic Beanstalk instance security group to all traffic, but the issue still persists.
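One way to quantify the intermittency is to probe the endpoint in a loop with a timeout and count successes. The sketch below points at a throwaway local server instead of our real endpoint, purely so it runs anywhere; substitute the real URL when diagnosing:

```shell
# Stand-in for api.example.com/service1: a throwaway local server.
python3 -m http.server 8321 >/dev/null 2>&1 &
SRV=$!
sleep 1

# Probe repeatedly; a healthy target prints 200 every time, while an
# intermittently timing-out one mixes 200 with 000 (curl's code on timeout).
CODES=$(for i in 1 2 3 4 5; do
  curl -s -o /dev/null -w "%{http_code}\n" --max-time 5 http://localhost:8321/
done)
echo "$CODES"
echo "successes: $(echo "$CODES" | grep -c '^200$') / 5"

kill $SRV
```

Roughly half the probes failing would be consistent with one of several registered targets never responding, e.g. a target registered on the wrong port.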

Here are the application load balancer rules, showing a forward-all rule to the "everybody" target group. The "everybody" target group is the new target group attached to the EB environment's auto scaling group.

Here are the registered targets under the target group, showing 3 healthy instances.

Is anybody able to see something that we may be doing wrong to cause these intermittent issues?

-- SirCapsLock
amazon-elastic-beanstalk
amazon-web-services
kubernetes

1 Answer

9/29/2019

You need a global load balancer to manage the two clusters. You can use a proxy such as HAProxy or Envoy as the global load balancer. In this setup, your DNS points to the proxy, and the proxy routes traffic between the Elastic Beanstalk environment and the Kubernetes cluster.

envoy.yaml

static_resources:
  listeners:
  - address:
      socket_address:
        address: 0.0.0.0
        port_value: 80
    filter_chains:
    - filters:
      - name: envoy.http_connection_manager
        typed_config:
          "@type": type.googleapis.com/envoy.config.filter.network.http_connection_manager.v2.HttpConnectionManager
          codec_type: auto
          stat_prefix: ingress_http
          route_config:
            name: local_route
            virtual_hosts:
            - name: backend
              domains:
              - "*"
              routes:
              - match:
                  prefix: "/service1"
                route:
                  cluster: service1
              - match:
                  prefix: "/service2"
                route:
                  cluster: service2
          http_filters:
          - name: envoy.router
            typed_config: {}
  clusters:
  - name: service1
    connect_timeout: 0.25s
    type: strict_dns
    lb_policy: round_robin
    http2_protocol_options: {}
    load_assignment:
      cluster_name: service1
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address:
                address: service1
                port_value: 80
  - name: service2
    connect_timeout: 0.25s
    type: strict_dns
    lb_policy: round_robin
    http2_protocol_options: {}
    load_assignment:
      cluster_name: service2
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address:
                address: service2
                port_value: 80
admin:
  access_log_path: "/dev/null"
  address:
    socket_address:
      address: 0.0.0.0
      port_value: 8001

Dockerfile

FROM envoyproxy/envoy-dev:98c35eff10ad170d550fb5ecfc2c6b3637418c0c

COPY envoy.yaml /etc/envoy/envoy.yaml
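To try the proxy locally (assuming Docker is installed; the image tag is just a label we chose):

```shell
# Build the Envoy image with the config baked in, then run it,
# exposing the HTTP listener (80) and the admin interface (8001).
docker build -t edge-proxy .
docker run --rm -p 80:80 -p 8001:8001 edge-proxy
```

In production the `service1`/`service2` cluster addresses in envoy.yaml would resolve to the Elastic Beanstalk endpoint and the Kubernetes service, respectively.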

Google recently launched Traffic Director, which also works as a global load balancer. Watch this conference talk for Traffic Director.

-- evalsocket
Source: StackOverflow