Scheduling pods onto nodes only after kube2iam is up and running

12/5/2018

On our AWS-based Kubernetes cluster, we use kube2iam to provide pod-level IAM roles.

We're dealing with an edge case where pods start before kube2iam is ready; those pods get the node's default instance role and are therefore unable to operate.

I can think of a few solutions, none of which I like:

  • Requiring app code to check its own role
  • Adding an init container to check that the expected role is being served (see the sketch after this list)
  • Adding podAffinity to each pod to make sure it's co-located with a running kube2iam instance
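
For illustration, the init-container option might look roughly like the following. This is only a minimal sketch: the role name my-app-role, the container names, and the images are placeholders. The init container just polls the EC2 metadata endpoint, which kube2iam intercepts via iptables, until the expected role is served:

```yaml
# Fragment of a pod spec; names and images are placeholders.
spec:
  initContainers:
    - name: wait-for-iam-role
      image: busybox:1.30
      command:
        - sh
        - -c
        - |
          # Poll the EC2 metadata endpoint (intercepted by kube2iam via
          # iptables) until it serves the role this pod expects.
          until wget -qO- http://169.254.169.254/latest/meta-data/iam/security-credentials/ \
              | grep -q my-app-role; do
            echo "waiting for kube2iam to serve the expected role..."
            sleep 2
          done
  containers:
    - name: app
      image: my-app:latest
```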

I don't want every application developer to have to remember to include some specific machinery to ensure their pods get the right role.

Is there a way to do this globally? I'm guessing it would involve something like marking the node as unschedulable by default and changing that status once kube2iam is up, but I'm not sure how to achieve that.
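
To make that guess a bit more concrete (just a sketch, using a made-up taint key rather than cordoning): nodes could register with a NoSchedule taint, for example via the kubelet's --register-with-taints flag, the kube2iam DaemonSet would tolerate it so it can still land on those nodes, and something would then remove the taint once kube2iam is healthy there. The toleration on kube2iam's pod spec would look like:

```yaml
# Fragment of the kube2iam DaemonSet pod spec. The taint key
# kube2iam-not-ready is made up; nodes would be registered with
# kube2iam-not-ready=true:NoSchedule so nothing else schedules onto them
# until the taint is removed.
spec:
  tolerations:
    - key: kube2iam-not-ready
      operator: Exists
      effect: NoSchedule
```

The missing piece is whatever removes the taint once kube2iam is actually serving on the node, which is exactly the part I don't know how to do cleanly.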

-- Rotem Tamir
amazon-iam
amazon-web-services
kubernetes

1 Answer

12/6/2018

How about adding a liveness probe to your Pod templates that targets the Pod created by kube2iam's DaemonSet? It should keep restarting newly created pods until kube2iam is ready (e.g. an HTTP probe against port 8181, kube2iam's default listening port).
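
A rough sketch of that idea follows; names, images, and timings are placeholders. It assumes kube2iam runs with hostNetwork on its default --app-port of 8181 and that the app image has a shell and wget, and it uses an exec probe because an httpGet probe cannot reference the node IP obtained from the downward API:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                                # placeholder name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
      annotations:
        iam.amazonaws.com/role: my-app-role   # role kube2iam should hand this pod
    spec:
      containers:
        - name: app
          image: my-app:latest                # placeholder image
          env:
            # Downward API: IP of the node this pod landed on, where the
            # hostNetwork kube2iam pod listens on port 8181 by default.
            - name: HOST_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.hostIP
          livenessProbe:
            exec:
              command:
                - sh
                - -c
                # Fails (and so restarts the container) until kube2iam
                # answers the metadata credentials path on this node.
                - "wget -qO- http://$HOST_IP:8181/latest/meta-data/iam/security-credentials/"
            initialDelaySeconds: 5
            periodSeconds: 10
            failureThreshold: 3
```

Until kube2iam answers on the node, the probe fails and the kubelet restarts the container; once it answers, the probe passes and the pod keeps running with its role available.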

-- Nepomucen
Source: StackOverflow