How to deploy Kubernetes on AWS?

3/2/2017

I'm wondering how people are deploying a production-caliber Kubernetes cluster in AWS and, more importantly, how they chose their approach.

The k8s documentation points towards kops for Debian, Ubuntu, CentOS, and RHEL or kube-aws for CoreOS/Container Linux. Among these choices it's not clear how to pick one over the others. CoreOS seems like the most compelling option since it's designed for container workloads.
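For context, the kops path from the docs boils down to something like this (the cluster name and S3 bucket below are placeholders, and the HA/networking flags are just one reasonable combination):

```shell
# Placeholder DNS name and state-store bucket -- substitute your own.
export NAME=k8s.example.com
export KOPS_STATE_STORE=s3://example-kops-state-store

# One reasonable HA shape: three masters spread across three AZs,
# private topology, an overlay network.
kops create cluster \
  --zones=us-east-1a,us-east-1b,us-east-1c \
  --master-count=3 \
  --node-count=3 \
  --topology=private \
  --networking=weave \
  ${NAME}

# Review the generated spec, then apply it.
kops update cluster ${NAME} --yes
```
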

But wait, there's more.

bootkube seems to be the next iteration of the CoreOS deployment technology and is on the roadmap for inclusion within kube-aws. Should I wait until kube-aws uses bootkube?

Heptio recently announced a Quickstart architecture for deploying k8s in AWS. This is the newest and therefore probably the least mature approach, but it does seem to have gained traction within AWS.

Lastly kubeadm is a thing and I'm not really sure where it fits into all of this.

There are probably more approaches that I'm missing too.

Given the number of options with overlapping intent it's very difficult to choose a path forward. I'm not interested in a proof-of-concept. I want to be able to deploy a secure, highly-available cluster for production use and be able to upgrade the cluster (host OS, etcd, and k8s system components) over time.

What did you choose and how did you decide?

-- bfallik
amazon-web-services
kubernetes

1 Answer

3/2/2017

I'd say pick whatever fits your needs (see also Picking the right solution).

Which could be:

  • Speed of the cluster setup
  • Integration in your existing toolchain
    • e.g. kops integrates with Terraform, which might be a good fit for some people
  • Experience within your team/company/...
    • e.g. how comfortable are you with the related Linux distribution
  • Required maturity of the tool itself
    • some tools are still very alpha; are you willing to play the role of an early adopter?
  • Ability to upgrade between Kubernetes versions
    • kubeadm has this on its agenda; some others prefer to throw clusters away instead of upgrading them
  • Required integration into external tools (monitoring, logging, auth, ...)
  • Supported cloud providers

With your specific requirements I'd pick the Heptio or kubeadm approach.

  • Heptio if you can live with the given constraints (e.g. predefined OS)
  • kubeadm if you need more flexibility; everything done with kubeadm can be transferred to other cloud providers
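To give a feel for the kubeadm workflow: it is two commands, one on the control-plane node and one per worker (the IP address below is hypothetical, and the exact join flags vary between kubeadm versions):

```shell
# On the control-plane node (hypothetical private IP shown).
kubeadm init --apiserver-advertise-address=10.0.0.10

# kubeadm init prints a ready-made join command including a bootstrap
# token; run it on each worker node. With recent versions it looks like:
kubeadm join 10.0.0.10:6443 \
  --token <token> \
  --discovery-token-ca-cert-hash sha256:<hash>
```

Because none of this is AWS-specific, the same steps work on any provider (or bare metal), which is the portability argument above.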

Other options for AWS lower on my list:

  • Kubernetes the hard way - arguably the only true way to set up a production cluster, since it's the only way to fully understand each moving part of the system. It's lower on the list because the result from any of the tools above is often more than enough, even for production.
  • kube-up.sh - deprecated by the community, so I wouldn't use it for new projects
  • kops - my team had some strange experiences with it which seemed due to our (custom) needs back then (an existing VPC); that's why it's lower on my list. It would be #1 in an environment where Terraform is already used.
  • bootkube - lower on my list because of its limitation to CoreOS
  • Rancher - an interesting toolchain, but it seems like too much for a single cluster
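On the kops/Terraform point: kops doesn't have to call the AWS APIs directly; it can emit Terraform configuration that you review and apply yourself. A rough sketch (placeholder names, minimal flags):

```shell
# Placeholder state store and cluster name -- substitute your own.
export KOPS_STATE_STORE=s3://example-kops-state-store

# --target=terraform writes Terraform config into --out instead of
# creating AWS resources immediately.
kops create cluster \
  --zones=us-east-1a \
  --target=terraform \
  --out=out/terraform \
  k8s.example.com

# Then drive the actual provisioning through the normal Terraform cycle,
# alongside any existing Terraform-managed infrastructure.
cd out/terraform
terraform init
terraform plan
terraform apply
```
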

Off-topic: if you don't have to run on AWS, I'd also always consider running on GCE for production workloads instead, as it's a well-managed platform rather than something you have to build yourself.

-- pagid
Source: StackOverflow