How to run Kubernetes commands on a different host than the one originally used?

5/11/2016

After successfully launching a gcloud or AWS cluster and then populating it with Kubernetes Service + Deployment commands like

kubectl create -f my-deployment.yaml

all is well, but ONLY as long as I stay on that same machine. How do I continue interacting with the same deployed cluster from a different local host? I am trying to avoid the dreaded:

kubectl get pods
The connection to the server localhost:8080 was refused - did you specify the right host or port?
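
(Presumably kubectl is falling back to its localhost:8080 default because the new host has no kubeconfig at all; kubectl config view there shows only an empty skeleton:)

kubectl config view
apiVersion: v1
clusters: []
contexts: []
current-context: ""
kind: Config
preferences: {}
users: []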
-- Scott Stensland
amazon-web-services
gcloud
kubernetes

1 Answer

5/11/2016

For Amazon AWS, just authenticate using

export AWS_ACCESS_KEY_ID=$(cat ${AWS_ACCOUNT_CONFIGDIR}/id)
export AWS_SECRET_ACCESS_KEY=$(cat ${AWS_ACCOUNT_CONFIGDIR}/key)

$(aws ecr get-login --region ${AWS_REGION})

then issue kubectl commands as though you had deployed your cluster from this other host.
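
If kubectl on the new host still cannot reach the API server after authenticating, the usual missing piece is the kubeconfig that the original deployment wrote. A minimal sketch, assuming the original host is reachable over SSH and used the kube-up.sh defaults (the user, host, and context names here are placeholders, not values from the question):

# make sure the target directory exists on the new host
mkdir -p ~/.kube

# copy the kubeconfig written on the original deploy host
scp deploy-user@original-host:~/.kube/config ~/.kube/config

# kube-up.sh on AWS typically names its context aws_kubernetes;
# check with `kubectl config get-contexts` if yours differs
kubectl config use-context aws_kubernetes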


For Google Cloud: just log in to gcloud on that other local host to retrieve the cluster credentials

gcloud container --project ${PROJECT_ID} clusters get-credentials ${GKE_CLUSTER} --zone ${GKE_ZONE}
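
get-credentials merges a cluster entry into ~/.kube/config and makes it the active context; a quick sanity check before running anything (the gke_<project>_<zone>_<cluster> naming is GKE's usual convention):

# list the contexts kubectl now knows about; the active one is starred
kubectl config get-contexts

# GKE context names follow gke_${PROJECT_ID}_${GKE_ZONE}_${GKE_CLUSTER},
# so you can switch back to this cluster later with
kubectl config use-context gke_${PROJECT_ID}_${GKE_ZONE}_${GKE_CLUSTER}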

Kubernetes commands will then work, for example:

kubectl get pods
NAME                                    READY     STATUS    RESTARTS   AGE
chainsaw-deployment-2102970301-q5hyn    2/2       Running   0          2h
mongo-controller-81c3m                  1/1       Running   0          2h
-- Scott Stensland
Source: StackOverflow