kubeadm init on CentOS 7 using AWS as cloud provider enters a deadlock state

10/11/2016

I am trying to install Kubernetes 1.4 on a CentOS 7 cluster on AWS (the same happens with Ubuntu 16.04, though) using the new kubeadm tool.

Here's the output of the command kubeadm init --cloud-provider aws on the master node:

# kubeadm init --cloud-provider aws

<cmd/init> cloud provider "aws" initialized for the control plane. Remember to set the same cloud provider flag on the kubelet.
<master/tokens> generated token: "980532.888de26b1ef9caa3"
<master/pki> created keys and certificates in "/etc/kubernetes/pki"
<util/kubeconfig> created "/etc/kubernetes/kubelet.conf"
<util/kubeconfig> created "/etc/kubernetes/admin.conf"
<master/apiclient> created API client configuration
<master/apiclient> created API client, waiting for the control plane to become ready

The issue is that the control plane never becomes ready and the command appears to hang indefinitely. I also noticed that if the --cloud-provider flag is omitted, pulling images from Amazon EC2 Container Registry does not work, and creating a Service of type LoadBalancer does not provision an Elastic Load Balancer.
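
For completeness, this is roughly how I am passing the same flag to the kubelet on every node (the drop-in file name and the KUBELET_EXTRA_ARGS variable are assumptions based on the systemd unit the kubeadm packages installed for me, so your paths may differ):

# cat /etc/systemd/system/kubelet.service.d/20-cloud-provider.conf
[Service]
Environment="KUBELET_EXTRA_ARGS=--cloud-provider=aws"

# systemctl daemon-reload && systemctl restart kubelet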

Has anyone run kubeadm using aws as the cloud provider?

Let me know if any further information is needed.

Thanks!

-- renansdias
kubernetes

3 Answers

11/1/2016

There are a couple of possibilities I am aware of here:

1) In older kubeadm versions, SELinux blocks access at this point.

2) If you are behind a proxy, you will need to add the usual proxy variables to the kubeadm environment (a sketch of setting these follows the list below):

HTTP_PROXY
HTTPS_PROXY
NO_PROXY

Plus the following, which I have not seen documented anywhere:

KUBERNETES_HTTP_PROXY 
KUBERNETES_HTTPS_PROXY
KUBERNETES_NO_PROXY
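
A rough sketch of both workarounds before re-running kubeadm init (the proxy URL is just a placeholder, NO_PROXY should also include your node and service CIDRs, and setenforce 0 only switches SELinux to permissive mode temporarily to rule out case 1):

# setenforce 0
# export HTTP_PROXY=http://proxy.example.com:3128
# export HTTPS_PROXY=http://proxy.example.com:3128
# export NO_PROXY=127.0.0.1,localhost,169.254.169.254
# export KUBERNETES_HTTP_PROXY=$HTTP_PROXY
# export KUBERNETES_HTTPS_PROXY=$HTTPS_PROXY
# export KUBERNETES_NO_PROXY=$NO_PROXY
# kubeadm init --cloud-provider aws
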
-- msduk
Source: StackOverflow

12/18/2016

I launched a cluster with kubeadm on AWS recently (Kubernetes 1.5.1), and it was stuck on the same step as yours. To solve it I had to add "--api-advertise-addresses=LOCAL-EC2-IP"; it didn't work with the external IP (which kubeadm probably fetches itself when no other IP is specified). So it's either a network connectivity issue (try a temporary 0.0.0.0/0 security group rule on that master instance), or something else... In my case it was a network issue: the instance wasn't able to connect to itself using its own external IP :)
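
For reference, a minimal sketch of that, assuming you run it on the master itself and pull the private IP from the EC2 metadata service (the other flags are unchanged from your question):

# LOCAL_IP=$(curl -s http://169.254.169.254/latest/meta-data/local-ipv4)
# kubeadm init --cloud-provider aws --api-advertise-addresses=$LOCAL_IP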

Regarding the PV and ELB integrations, I actually did launch a PersistentVolumeClaim with my MongoDB cluster and it works (it created the volume and attached it to one of the slave nodes). Here it is, for example: (screenshot: PV created and attached to a slave node)
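
Not my exact manifest, but a rough sketch of an EBS-backed, dynamically provisioned claim on a 1.5.x cluster (the gp2 and mongo-data names and the 10Gi size are just placeholders, and the beta annotation/API versions may vary slightly between point releases):

# cat <<EOF | kubectl create -f -
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  name: gp2
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: mongo-data
  annotations:
    volume.beta.kubernetes.io/storage-class: gp2
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
EOF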

So the latest version of kubeadm, which ships with Kubernetes 1.5.1, should work for you too! One thing to note: you must have the proper IAM role permissions to create resources (assign your master node an IAM role with something like "EC2 full access" during testing; you can tune it later to allow only the few actions that are needed).
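
If you manage that role with the AWS CLI, attaching the broad managed policy for testing looks roughly like this (the role name kubernetes-master is a placeholder for whatever instance role your master actually uses):

# aws iam attach-role-policy --role-name kubernetes-master \
      --policy-arn arn:aws:iam::aws:policy/AmazonEC2FullAccess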

Hope it helps.

-- Dmitry Shmakov
Source: StackOverflow

10/13/2016

The documentation (as of now) clearly states the following in the limitations:

The cluster created here doesn’t have cloud-provider integrations, so for example won’t work with (for example) Load Balancers (LBs) or Persistent Volumes (PVs). To easily obtain a cluster which works with LBs and PVs Kubernetes, try the “hello world” GKE tutorial or one of the other cloud-specific installation tutorials.

http://kubernetes.io/docs/getting-started-guides/kubeadm/

-- manojlds
Source: StackOverflow