After successfully launching a Google Cloud or AWS cluster and then populating it with Kubernetes Service + Deployment commands like
kubectl create -f my-deployment.yaml
all is well, but only while I stay on that same machine. How do I continue interacting with the same deployed cluster from a different local host? I am trying to avoid the dreaded:
kubectl get pods
The connection to the server localhost:8080 was refused - did you specify the right host or port?
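That error simply means kubectl on the new host has no kubeconfig entry for the cluster, so it falls back to its default of localhost:8080. Once cluster credentials have been pulled down (as below), you can confirm kubectl now points at the remote cluster with:

kubectl config current-context
kubectl config view --minify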
For Amazon AWS, first authenticate on the new host using
export AWS_ACCESS_KEY_ID=$(cat ${AWS_ACCOUNT_CONFIGDIR}/id)
export AWS_SECRET_ACCESS_KEY=$(cat ${AWS_ACCOUNT_CONFIGDIR}/key)
$(aws ecr get-login --region ${AWS_REGION} )
then issue kubectl commands just as though you had deployed your cluster from this other host.
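One caveat: the exports above authenticate the aws CLI, and the last line executes the docker login command that aws ecr get-login prints, which only logs Docker into the ECR registry; kubectl itself reads cluster credentials from a kubeconfig. As a sketch, if your cluster is managed by EKS (EKS_CLUSTER here is a placeholder name, not part of the original setup), the AWS analogue of the gcloud command below writes that kubeconfig entry for you:

aws eks update-kubeconfig --region ${AWS_REGION} --name ${EKS_CLUSTER}

This adds the cluster endpoint and an auth entry to ~/.kube/config so kubectl can find it.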
For Google Cloud, just log in to gcloud on that different local host (gcloud auth login) and then retrieve the cluster credentials:
gcloud container --project ${PROJECT_ID} clusters get-credentials ${GKE_CLUSTER} --zone ${GKE_ZONE}
then kubectl commands will work, as per:
kubectl get pods
NAME                                   READY     STATUS    RESTARTS   AGE
chainsaw-deployment-2102970301-q5hyn   2/2       Running   0          2h
mongo-controller-81c3m                 1/1       Running   0          2h
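If the new host talks to more than one cluster, each get-credentials run simply adds another context to ~/.kube/config. You can list the contexts and switch between them; the context name shown here is only an example, following gcloud's usual gke_<project>_<zone>_<cluster> naming pattern:

kubectl config get-contexts
kubectl config use-context gke_${PROJECT_ID}_${GKE_ZONE}_${GKE_CLUSTER}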