I created a 2-node Kubernetes cluster with:
KUBERNETES_PROVIDER=aws NUM_NODES=2 kube-up.sh
This produced the following output:
Found 2 node(s).
NAME                                         STATUS    AGE
ip-172-20-0-226.us-west-2.compute.internal   Ready     57s
ip-172-20-0-227.us-west-2.compute.internal   Ready     55s
Validate output:
NAME                 STATUS    MESSAGE              ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-0               Healthy   {"health": "true"}
etcd-1               Healthy   {"health": "true"}
Cluster validation succeeded
Done, listing cluster services:
Kubernetes master is running at https://52.33.9.1
Elasticsearch is running at https://52.33.9.1/api/v1/proxy/namespaces/kube-system/services/elasticsearch-logging
Heapster is running at https://52.33.9.1/api/v1/proxy/namespaces/kube-system/services/heapster
Kibana is running at https://52.33.9.1/api/v1/proxy/namespaces/kube-system/services/kibana-logging
KubeDNS is running at https://52.33.9.1/api/v1/proxy/namespaces/kube-system/services/kube-dns
kubernetes-dashboard is running at https://52.33.9.1/api/v1/proxy/namespaces/kube-system/services/kubernetes-dashboard
Grafana is running at https://52.33.9.1/api/v1/proxy/namespaces/kube-system/services/monitoring-grafana
InfluxDB is running at https://52.33.9.1/api/v1/proxy/namespaces/kube-system/services/monitoring-influxdb
I can see the instances in the EC2 console. How do I SSH into the master node?
"Creates an AWS SSH key named kubernetes-. Fingerprint here is the OpenSSH key fingerprint, so that multiple users can run the script with different keys and their keys will not collide (with near-certainty). It will use an existing key if one is found at AWS_SSH_KEY, otherwise it will create one there. (With the default Ubuntu images, if you have to SSH in: the user is ubuntu and that user can sudo"
https://github.com/kubernetes/kubernetes/blob/master/docs/design/aws_under_the_hood.md
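If you want to check which key pair kube-up registered, you can compare your local key's fingerprint against the AWS key pair name. This is only a quick check, assuming the default key location ~/.ssh/kube_aws_rsa; newer OpenSSH prints SHA256 fingerprints by default, so you may need -E md5 to match the name shown in AWS:

# Print the fingerprint of the generated public key; per the doc above,
# the AWS key pair should be named kubernetes-<fingerprint>
ssh-keygen -E md5 -lf ~/.ssh/kube_aws_rsa.pub

With the default Ubuntu images mentioned in the quote, logging in would look something like this (key path assumed to be the default):

# ubuntu is the default user on the Ubuntu images, and it can sudo
ssh -i ~/.ssh/kube_aws_rsa ubuntu@<master-ip>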
You should find the SSH key with that fingerprint locally in your SSH configuration, or set the AWS_SSH_KEY environment variable and recreate the cluster.
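For example, to reuse a key you already have (the key path here is just a placeholder):

# Point kube-up at an existing private key instead of generating a new one
export AWS_SSH_KEY=$HOME/.ssh/my_existing_key   # hypothetical path
KUBERNETES_PROVIDER=aws NUM_NODES=2 kube-up.sh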
If you are bringing up your cluster on AWS with kops and using CoreOS as your image, then the login name would be "core".
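A sketch of what that looks like (the key path depends on what you passed to kops; ~/.ssh/id_rsa is just a common default):

# CoreOS images use the core login
ssh -i ~/.ssh/id_rsa core@<master-ip>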
Here is the exact command that worked for me:
ssh -i ~/.ssh/kube_aws_rsa admin@<masterip>
kube_aws_rsa is the default key that gets generated; it can be overridden with the AWS_SSH_KEY environment variable. For AWS, it is specified in the file cluster/aws/config-default.sh.
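If you want to see how that default is wired up, you can grep the provider config (assuming you are in a checkout of the kubernetes source tree):

# Show where the SSH key path is set for the AWS provider
grep -n AWS_SSH_KEY cluster/aws/config-default.sh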
More details about the cluster can be found using kubectl.sh config view.
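For example, to pull just the API server endpoint (the master address) out of your kubeconfig — a sketch, assuming your kubectl supports jsonpath output and the cluster is the first entry:

# Print the server URL of the first cluster in the kubeconfig
kubectl.sh config view -o jsonpath='{.clusters[0].cluster.server}'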