When we set a new SSH key using kops for an existing Kubernetes cluster, would it break anything?

7/23/2018

We need to access the kubelet logs on our Kubernetes node (which is in AWS) to investigate a Kubernetes error we are facing (see "Even after adding additional Kubernetes node, I see new node unused while getting error 'No nodes are available that match all of the predicates'").

kubectl logs only retrieves logs from pods. To get kubelet logs, we need to SSH into the k8s node (an AWS EC2 instance). While doing so we get the error "Permission denied (publickey)", which means we need to set a new SSH public key, as we may no longer have access to the one that was set earlier.
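For illustration, the situation looks roughly like this (the pod name, key path, node address, and login user are placeholders; the error message is the one quoted above):

# Pod logs are available through the API server
kubectl logs <pod-name>

# But reaching the node itself over SSH fails
ssh -i ~/.ssh/old-key admin@<node-public-ip>
# Permission denied (publickey).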

The question is: if we set new keys using kops as described in https://github.com/kubernetes/kops/blob/master/docs/security.md, would we do any harm to the existing cluster? Would any of the existing services or access stop working? Or would this only affect manual SSH to the AWS EC2 machines?

-- mi10
amazon-web-services
kops
kubelet
kubernetes
ssh

1 Answer

7/23/2018

You would need to update the kops cluster configuration first, by replacing the SSH public key secret and running kops update cluster. However, this would not change the SSH key on any running nodes.
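A minimal sketch of that step, following the security doc linked in the question; the cluster name and key path are placeholders for your own:

# Replace the SSH public key secret stored by kops
kops create secret --name <cluster-name> sshpublickey admin -i ~/.ssh/newkey.pub

# Preview the resulting changes, then apply them
kops update cluster --name <cluster-name>
kops update cluster --name <cluster-name> --yes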

Modifying a cluster with kops update cluster only rewrites the Launch Configurations backing the cluster's Auto Scaling Groups. The change takes effect only when new nodes are provisioned.

In order to rectify this, you'll need to cycle your infrastructure. You can do this by deleting the worker and control plane nodes one by one from their ASGs.

Once you delete a node from the ASG, it will be replaced by a new node launched from the updated launch configuration, which includes the new SSH key.
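If you terminate the instance through the Auto Scaling API, the ASG launches the replacement for you; a rough sketch using the AWS CLI, with the instance ID as a placeholder:

# Terminate the instance without decrementing the ASG's desired capacity,
# so a replacement node is launched from the updated launch configuration
aws autoscaling terminate-instance-in-auto-scaling-group \
    --instance-id <instance-id> \
    --no-should-decrement-desired-capacity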

Before you delete a node from AWS, you should drain it first using kubectl drain:

kubectl drain <nodename> --ignore-daemonsets --force
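Once a replacement node has joined the cluster, you should be able to SSH in with the new key and read the kubelet logs directly. A rough sketch, assuming a systemd-based image; the login user depends on the AMI your nodes use (for example, admin on the default kops Debian images or ubuntu on Ubuntu):

# Find the new node and its external IP
kubectl get nodes -o wide

# SSH in with the new key and read the kubelet logs
ssh -i ~/.ssh/newkey admin@<node-external-ip>
journalctl -u kubelet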
-- jaxxstorm
Source: StackOverflow