SSH into Kubernetes nodes created through kops

3/6/2019

I created a Kubernetes cluster through kops. The configuration and the SSH keys were on a machine that I no longer have access to. Is it possible to SSH to the nodes through kops even though I have lost the key? I see there is a command -

kops get secrets

This gives me all the secrets. Can I use them to get SSH access to the nodes, and if so, how?

I see the cluster state is stored in S3. Does it store the secret key as well?

-- Anshul Tripathi
amazon-ec2
amazon-web-services
kops
kubernetes

4 Answers

4/27/2019

In my case, when I installed the cluster with kops, I ran ssh-keygen as below (which created the id_rsa public/private key pair) and registered the public key with kops:

ssh-keygen
kops create secret --name ${KOPS_CLUSTER_NAME} sshpublickey admin -i ~/.ssh/id_rsa.pub

and then applied it to the cluster, after which I could SSH straight in:

kops update cluster --name ${KOPS_CLUSTER_NAME} --yes
ssh admin@ec2-13-59-4-99.us-east-2.compute.amazonaws.com
-- Al Kannan
Source: StackOverflow

3/6/2019

This gives me all the secrets. Can I use this to get ssh access to the nodes and how to do it?

Not really. Those are the secrets for accessing the kube-apiserver in the cluster, for example so that you can run kubectl commands.
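
If kubectl access is what you lost along with that machine, one way to rebuild it from the S3 state store is sketched below (the cluster name and bucket are placeholders for your own values):

kops export kubecfg --name mycluster.example.com --state s3://your-kops-state-store
kubectl get nodes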

I see the cluster state is stored in S3. Does it store the secret key as well?

The cluster state is stored in S3, but not the SSH keys used to access the servers; those are stored in AWS EC2 under 'Key Pairs'.

Unfortunately, you can only download the private key once, when you create the key pair, so if you no longer have it you are out of luck. If you have access to the AWS console, you could snapshot the root volumes of your instances and recreate your nodes (or control plane) one by one with a different AWS key pair whose private key you do have.
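
If you go that route, a rough sketch of the AWS CLI side (the key name and volume ID below are placeholders; adapt them to your own account and instances):

# create a replacement key pair and keep the private key this time
aws ec2 create-key-pair --key-name recovered-key --query 'KeyMaterial' --output text > recovered-key.pem
chmod 400 recovered-key.pem
aws ec2 describe-key-pairs
# back up a node's root volume before replacing the instance
aws ec2 create-snapshot --volume-id vol-0123456789abcdef0 --description "root volume backup before node replacement"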

-- Rico
Source: StackOverflow

7/7/2019

You can run a new DaemonSet using the gcr.io/google-containers/startup-script image to update the public key on all your nodes. This also covers the case where a new node is spun up, and it will replace the public key on all existing nodes.

kind: DaemonSet
apiVersion: apps/v1
metadata:
  name: startup-script
  labels:
    app: startup-script
spec:
  selector:
    matchLabels:
      app: startup-script
  template:
    metadata:
      labels:
        app: startup-script
    spec:
      hostPID: true
      containers:
        - name: startup-script
          image: gcr.io/google-containers/startup-script:v1
          imagePullPolicy: Always
          securityContext:
            privileged: true
          env:
          - name: STARTUP_SCRIPT
            value: |
              #! /bin/bash
              # marker file showing the script ran on this node
              touch /tmp/foo
              # replace MYPUBLICKEY with your actual public key before applying
              echo "MYPUBLICKEY" > /home/admin/.ssh/authorized_keys
              echo done

Replace MYPUBLICKEY with your public key, and adjust the username after /home/ (admin here) depending on the OS your nodes run. This will let you access the nodes via SSH without changing or replacing your existing nodes.
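
A hedged usage sketch, assuming the manifest above is saved as startup-script.yaml and your matching private key is ~/.ssh/newkey (both names chosen here for illustration):

kubectl apply -f startup-script.yaml
kubectl get pods -l app=startup-script -o wide
ssh -i ~/.ssh/newkey admin@<node-public-ip>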

You can also add user data to the instance group (while running kops edit ig nodes) with a small one-liner that appends your public key, as sketched below.
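
A sketch of what that could look like using kops' additionalUserData field in the instance group spec (the username and key are placeholders; note this only takes effect on nodes launched after the change, e.g. after a rolling update):

spec:
  additionalUserData:
  - name: ssh-key.sh
    type: text/x-shellscript
    content: |
      #!/bin/sh
      # placeholder: substitute your actual public key and the right home directory
      echo "MYPUBLICKEY" >> /home/admin/.ssh/authorized_keys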

-- rebelution
Source: StackOverflow

5/9/2019

You can't recover the private key, but you should be able to install a new public key by following this procedure:

kops delete secret --name <clustername> sshpublickey admin
kops create secret --name <clustername> sshpublickey admin -i ~/.ssh/newkey.pub
kops update cluster --yes   # to reconfigure the auto-scaling groups
kops rolling-update cluster --name <clustername> --yes   # to immediately roll all the machines so they have the new key (optional)

Taken from this document:

https://github.com/kubernetes/kops/blob/master/docs/security.md#ssh-access
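
Since the question mentions the cluster state living in S3, here is the same procedure sketched with the state store and cluster name set explicitly (both values are placeholders for your own bucket and cluster):

export KOPS_STATE_STORE=s3://your-kops-state-store
export KOPS_CLUSTER_NAME=mycluster.example.com
kops delete secret --name ${KOPS_CLUSTER_NAME} sshpublickey admin
kops create secret --name ${KOPS_CLUSTER_NAME} sshpublickey admin -i ~/.ssh/newkey.pub
kops update cluster --name ${KOPS_CLUSTER_NAME} --yes
kops rolling-update cluster --name ${KOPS_CLUSTER_NAME} --yes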

-- Ben W.
Source: StackOverflow