I'm doing a little R&D on Kubernetes and deploying clusters on AWS. I'm using KOPS to do the heavy lifting and Terraform to provision the cluster.
My goal is to test how Kubernetes behaves when a port that is essential to a particular service is blocked on a node hosting the pods that provide that service. To do so, I wanted to SSH into my master node and manually block the port on a worker.
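For context, once SSH access works, blocking a port on a worker could look something like the sketch below. This is only an assumption about how the test might be done; port 10250 (the kubelet port) is purely an illustrative choice, not something specified above.
# Drop inbound traffic to the chosen port (10250 used here purely as an example)
sudo iptables -A INPUT -p tcp --dport 10250 -j DROP
# Remove the rule again once the test is finished
sudo iptables -D INPUT -p tcp --dport 10250 -j DROP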
I've been running the following command:
ssh -i ~/.ssh/id_rsa admin@<master_ip>
only to get the following response:
ssh: connect to host <master_ip> port 22: Connection refused
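For what it's worth, a quick way to check whether anything answers on port 22 at all (and to distinguish an active refusal from a security-group block, which usually shows up as a timeout rather than a refusal, since security groups drop packets silently) is a raw TCP probe, for example with netcat:
nc -vz <master_ip> 22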
This is the command that I've been using to create my cluster:
kops create cluster \
--name=${KOPS_NAME} \
--state=${KOPS_STATE_STORE} \
--zones=eu-central-1a,eu-central-1b,eu-central-1c \
--master-zones=eu-central-1a,eu-central-1b,eu-central-1c \
--node-count=5 \
--node-size=t2.micro \
--master-size=t2.micro \
--ssh-public-key=~/.ssh/id_rsa.pub \
--out=. \
--target=terraform
The AWS security groups that my masters belong to allow traffic on port 22, and running
kops describe secret admin
shows that there is a public key attributed to the admin user.
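In case it helps, one way to double-check the inbound port-22 rule from the CLI (assuming the AWS CLI is configured for the same account and region, which isn't stated above) would be something like:
aws ec2 describe-security-groups \
  --filters Name=ip-permission.from-port,Values=22 Name=ip-permission.to-port,Values=22 \
  --query 'SecurityGroups[].GroupName'
This lists the names of security groups that have an inbound rule for port 22.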
I do not think this is a bug, as nobody else seems to be having this problem on the KOPS GitHub, and, while I am far from an expert in AWS, it would seem odd to me that this is a problem with AWS.
EDIT
ssh -i ~/.ssh/id_rsa <address>.elb.amazonaws.com -vvv
OpenSSH_7.4p1, OpenSSL 1.0.2k-fips 26 Jan 2017
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: /etc/ssh/ssh_config line 58: Applying options for *
debug2: resolving "<address>.elb.amazonaws.com" port 22
debug2: ssh_connect_direct: needpriv 0
debug1: Connecting to <address>.elb.amazonaws.com [ip address] port 22.
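As a side note for reproducing this, one way to list the public IPs of the master instances directly, rather than going through a load-balancer DNS name, would be something like the following. This assumes kops applies its usual k8s.io/role/master instance tag, which isn't confirmed anywhere above:
aws ec2 describe-instances \
  --filters "Name=tag:k8s.io/role/master,Values=1" "Name=instance-state-name,Values=running" \
  --query 'Reservations[].Instances[].PublicIpAddress'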
You can use the --bastion
flag while provisioning the cluster, then use the bastion host to SSH into the master. This approach has been working for us.
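A rough sketch of what that could look like, under the assumption that the cluster uses a private topology (which --bastion requires, along with a CNI networking option such as calico) and that DNS is set up so the bastion gets the conventional bastion.<cluster-name> record; adjust the flags to your own setup and make sure your key is loaded into the SSH agent (ssh-add) so agent forwarding works:
kops create cluster \
  --name=${KOPS_NAME} \
  --state=${KOPS_STATE_STORE} \
  --zones=eu-central-1a,eu-central-1b,eu-central-1c \
  --master-zones=eu-central-1a,eu-central-1b,eu-central-1c \
  --node-count=5 \
  --node-size=t2.micro \
  --master-size=t2.micro \
  --ssh-public-key=~/.ssh/id_rsa.pub \
  --topology=private \
  --networking=calico \
  --bastion \
  --out=. \
  --target=terraform
# Once the cluster is up, hop through the bastion with agent forwarding,
# then on to a master's private IP
ssh -A admin@bastion.${KOPS_NAME}
ssh admin@<master_private_ip>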