Kubernetes security in master-slave Salt mode

2/17/2016

I have a few questions about Kubernetes master-slave Salt mode (reposting from https://github.com/kubernetes/kubernetes/issues/21215):

  • How do you expect anyone with a large cluster in GCE to upgrade things in place when a new vulnerability is disclosed?
  • How does one do things like regular key rotation, etc., without a master-minion Salt setup in GCE? Doesn't that leave a GCE cluster more vulnerable in the long run?
  • I am not a security expert, so this is probably a naive question. Since a GCE cluster is already running inside a pretty locked-down network, is the communication between master and slave a major concern? I understand that in GKE the master is hidden and access is restricted to the GCP project owner, but in GCE the master is visible. So, is this a real concern for GCE-only setups?
-- codefx
kubernetes

1 Answer

2/17/2016

How do you expect anyone with a large cluster in GCE to upgrade things in place when a new vulnerability is disclosed?

By upgrading to a new version of k8s. If there is a kernel or Docker vulnerability, we would build a new base image (container-vm), send a PR to enable it in GCE, and then cut a new release referencing the new base image. If there is a k8s vulnerability, we would cut a new version of Kubernetes, and you could upgrade to it using the upgrade.sh script on GitHub.
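
As a rough illustration of that flow, here is a hedged sketch of an in-place GCE upgrade using the script mentioned above. The flags and the example version number are assumptions based on the cluster/gce/upgrade.sh usage of that era; check the script's usage text in your own checkout before running anything.

    # Run from the root of a kubernetes checkout configured for your GCE
    # project. Flags and version are illustrative, not authoritative.

    # Upgrade only the master, in place, to a hypothetical patched release.
    cluster/gce/upgrade.sh -M v1.2.0

    # Then upgrade the nodes: this updates the node instance template and
    # recreates the node VMs from it.
    cluster/gce/upgrade.sh -N v1.2.0

Upgrading the master first and the nodes second keeps the control plane at least as new as the kubelets, which is the supported direction of version skew.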

How does one do things like regular key rotation, etc., without a master-minion Salt setup in GCE? Doesn't that leave a GCE cluster more vulnerable in the long run?

By updating the keys on the master node, updating the keys in the node instance template, and rolling nodes from the old instance template to the new one. We don't want to distribute keys via Salt, because then you have to figure out how to secure Salt itself (which requires keys, which then also need to be rotated). Instead we "distribute" keys out of band using the GCE metadata server.
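
Here is a hedged sketch of that rotation loop with gcloud, where the template and group names are hypothetical placeholders and kube-env stands in for whatever metadata key your setup uses to carry key material:

    # 1. Create a new instance template whose metadata carries the new keys.
    #    (All resource names and the metadata file are placeholders.)
    gcloud compute instance-templates create kubernetes-minion-template-v2 \
        --metadata-from-file kube-env=kube-env-with-new-keys.yaml

    # 2. Point the node managed instance group at the new template.
    gcloud compute instance-groups managed set-instance-template \
        kubernetes-minion-group --template kubernetes-minion-template-v2

    # 3. Recreate the node VMs so they boot from the new template.
    gcloud compute instance-groups managed recreate-instances \
        kubernetes-minion-group --instances kubernetes-minion-1

    # On boot, a node reads its keys from the metadata server rather than
    # from a Salt master:
    curl -s -H "Metadata-Flavor: Google" \
        "http://metadata.google.internal/computeMetadata/v1/instance/attributes/kube-env"

The Metadata-Flavor header and the metadata.google.internal endpoint are the real GCE metadata-server interface; everything that names a cluster resource above is made up for the example.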

Since a GCE cluster is already running inside a pretty locked-down network, is the communication between master and slave a major concern?

For GKE, the master is running outside of the protected network, so it is a concern. GCE follows the same security model (even though it isn't strictly necessary) because it reduces the burden on the folks maintaining both systems if there is less drift in how they are configured.

So, is this a real concern for GCE-only setups?

For most folks it probably isn't a concern. But you could imagine a large company running multiple clusters (or other workloads) in the same network so that services maintained by different teams could easily communicate over the internal cloud network. In that case, you would still want to protect the communication between the master and nodes to reduce the impact an attacker (or malicious insider) could have by exploiting a single entry point into the network.
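
To make that concrete, here is a hypothetical firewall rule that narrows who can reach the master's API port in such a shared network, rather than trusting the network boundary alone; the network name, target tag, and CIDR ranges are all placeholders:

    # Allow HTTPS to the master only from a corporate range and the node
    # subnet, instead of from the whole shared network.
    gcloud compute firewall-rules create k8s-master-https \
        --network shared-net \
        --allow tcp:443 \
        --source-ranges 203.0.113.0/24,10.240.0.0/16 \
        --target-tags kubernetes-master

Firewall rules only limit who can open a connection; the TLS between master and nodes discussed above is still what protects the traffic itself from anyone already inside the network.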

-- Robert Bailey
Source: StackOverflow