I have followed the hello world tutorial at http://kubernetes.io/docs/hellonode/.
When I run:
kubectl run hello-node --image=gcr.io/PROJECT_ID/hello-node:v1 --port=8080
I get: The connection to the server localhost:8080 was refused - did you specify the right host or port?
Why does the command line try to connect to localhost?
Try running it with sudo; it may be a permission issue.
For example: sudo kubectl....
I had the same issue. In my scenario the Kubernetes API server was not responding, so check your Kubernetes API server and controller manager as well.
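If you are on a kubeadm-style cluster, a rough way to check whether the control plane is actually up (this assumes the control plane runs as Docker containers on the node) is:
# is the kubelet service running?
sudo systemctl status kubelet
# are the apiserver and controller-manager containers up?
sudo docker ps | grep -E 'kube-apiserver|kube-controller-manager'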
I was also getting the same error below:
Unable to connect to the server: dial tcp [::1]:8080: connectex: No connection could be made because the target machine actively refused it.
Then I just executed the command below and found everything working fine.
PS C:\> .\minikube.exe start
Starting local Kubernetes v1.10.0 cluster...
Starting VM...
Downloading Minikube ISO
 150.53 MB / 150.53 MB [============================================] 100.00% 0s
Getting VM IP address...
Moving files into cluster...
Downloading kubeadm v1.10.0
Downloading kubelet v1.10.0
Finished Downloading kubelet v1.10.0
Finished Downloading kubeadm v1.10.0
Setting up certs...
Connecting to cluster...
Setting up kubeconfig...
Starting cluster components...
Kubectl is now configured to use the cluster.
Loading cached images from config file.
PS C:\> .\minikube.exe start
Starting local Kubernetes v1.10.0 cluster...
Starting VM...
Getting VM IP address...
Moving files into cluster...
Setting up certs...
Connecting to cluster...
Setting up kubeconfig...
Starting cluster components...
Kubectl is now configured to use the cluster.
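Once minikube reports that kubectl is configured, a quick sanity check (assuming kubectl.exe is on your PATH, otherwise call it by its full path) is:
# should print "minikube" and list the single node
kubectl config current-context
kubectl get nodes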
The issue is that your kubeconfig is not right. To auto-generate it, run:
gcloud container clusters get-credentials "CLUSTER NAME"
This worked for me.
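To confirm the credentials were actually written, you can inspect the merged configuration kubectl will use:
# show the generated cluster/user/context entries and the active context
kubectl config view
kubectl config current-context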
Make sure your config is set to the project - gcloud config set project [PROJECT_ID]
List the clusters in the account: gcloud container clusters list
Check the output:
NAME           LOCATION       MASTER_VERSION  MASTER_IP      MACHINE_TYPE  NODE_VERSION  NUM_NODES  STATUS
alpha-cluster  asia-south1-a  1.9.7-gke.6     35.200.254.78  f1-micro      1.9.7-gke.6   3          RUNNING
Run the following command:
gcloud container clusters get-credentials your-cluster-name --zone your-zone --project your-project
Fetching cluster endpoint and auth data.
kubeconfig entry generated for alpha-cluster.
Then run kubectl commands, such as: kubectl get nodes -o wide
Should be good to go.
Just make sure to follow: https://cloud.google.com/container-engine/docs/before-you-begin before http://kubernetes.io/docs/hellonode/
I reproduced the same error while doing the Udacity tutorial Scalable Microservices with Kubernetes (https://classroom.udacity.com/courses/ud615), at the "Using Kubernetes" point in Part 3 of the lesson.
Launch a Single Instance:
kubectl run nginx --image=nginx:1.10.0
Error:
Unable to connect to the server: dial tcp [::1]:8080: connectex: No connection could be made because the target machine actively refused it.
How I resolved the Error:
Log in to Google Cloud Platform
Navigate to Container Engine (Google Cloud Platform > Container Engine)
Click CONNECT on the cluster
Use the login credentials to access cluster [NAME] in your terminal
Proceed with your work.
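For reference, the CONNECT button essentially hands you a gcloud command to paste into the terminal; it looks roughly like this (cluster name, zone, and project are placeholders):
gcloud container clusters get-credentials [CLUSTER_NAME] --zone [ZONE] --project [PROJECT_ID]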
After running "kubeinit" command, kubernetes asks you to run following as regular user
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
But if you run these as a regular user, you will get "The connection to the server localhost:8080 was refused - did you specify the right host or port?" when you then try to access the cluster as the root user, and vice versa. So run kubectl as the same user who executed the commands above.
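If you do need kubectl as root on a kubeadm cluster, one alternative (using the admin config that kubeadm writes by default) is:
# as root, point kubectl straight at the admin kubeconfig
export KUBECONFIG=/etc/kubernetes/admin.conf
kubectl get nodes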
Reinitialising gcloud with the proper account and project worked for me:
gcloud init
After this, retrying the command below was successful and a kubeconfig entry was generated:
gcloud container clusters get-credentials "cluster_name"
Check the cluster info with:
kubectl cluster-info
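To double-check which account and project gcloud is now using (for example after gcloud init), list the active configuration:
# shows the active account, project, and other defaults
gcloud config list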
I had this problem using a local Docker setup. The thing to do is check the logs of the containers it spins up to figure out what went wrong. For me, it transpired that etcd had fallen over:
$ docker logs <etcdContainerId>
<snip>
2016-06-15 09:02:32.868569 C | etcdmain: listen tcp 127.0.0.1:7001: bind: address already in use
Aha! I'd been playing with Cassandra in a Docker container and had forwarded all the ports, since I wasn't sure which ones it needed exposed, and 7001 is one of them. Stopping Cassandra, cleaning up the mess, and restarting it fixed things.
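If you hit a similar "address already in use" error, a quick way to find the offending process (assuming lsof is available) is:
# show which process is already listening on the port etcd wants
sudo lsof -i :7001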
If you created a cluster on AWS using kops, then kops creates ~/.kube/config for you, which is nice. But if someone else needs to connect to that cluster, then they also need to install kops so that it can create the kubeconfig for them:
export AWS_ACCESS_KEY_ID=$(aws configure get aws_access_key_id)
export AWS_SECRET_ACCESS_KEY=$(aws configure get aws_secret_access_key)
export CLUSTER_ALIAS=kubernetes-cluster

kubectl config set-context ${CLUSTER_ALIAS} \
    --cluster=${CLUSTER_FULL_NAME} \
    --user=${CLUSTER_FULL_NAME}

kubectl config use-context ${CLUSTER_ALIAS}

kops export cluster --name ${CLUSTER_FULL_NAME} \
    --region=${CLUSTER_REGION} \
    --state=${KOPS_STATE_STORE}
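Once the kubeconfig entries exist, a quick check that the new context actually reaches the cluster (nothing kops-specific here) is:
# confirm the active context and that the apiserver answers
kubectl config get-contexts
kubectl get nodes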
Regardless of your environment (gcloud or not), you need to point your kubectl at a kubeconfig. By default, kubectl expects the path $HOME/.kube/config, or you can point it at a custom path via an environment variable (for scripting etc.): export KUBECONFIG=/your_kubeconfig_path
Please refer to: https://kubernetes.io/docs/concepts/configuration/organize-cluster-access-kubeconfig/
If you don't have a kubeconfig file for your cluster, create one by referring to: https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/
You will need to find the cluster's ca.crt and the apiserver-kubelet-client key and cert.
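For reference, a minimal kubeconfig sketch wired up with those files could look like this (the server address, file paths, and names are placeholders to adapt for your cluster):
apiVersion: v1
kind: Config
clusters:
- name: my-cluster
  cluster:
    server: https://<apiserver-host>:6443    # placeholder apiserver address
    certificate-authority: /path/to/ca.crt
users:
- name: my-user
  user:
    client-certificate: /path/to/apiserver-kubelet-client.crt
    client-key: /path/to/apiserver-kubelet-client.key
contexts:
- name: my-context
  context:
    cluster: my-cluster
    user: my-user
current-context: my-context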
I had the same issue after a reboot. I followed the guide described here.
So try the following:
$ sudo -i
# swapoff -a
# exit
$ strace -eopenat kubectl version
After that it works fine.
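Note that swapoff -a only lasts until the next reboot; if this turns out to be your problem, you would also comment out the swap entry in /etc/fstab to make it permanent (back up the file first):
# comment out the swap line so the setting survives reboots
sudo sed -i '/ swap / s/^/#/' /etc/fstab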
This error means that kubectl is attempting to connect to a Kubernetes apiserver running on your local machine, which is the default if you haven't configured it to talk to a remote apiserver.
The solution is this:
minikube delete
minikube start --vm-driver none
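Assuming minikube starts cleanly, you can then confirm that kubectl reaches the local apiserver instead of being refused:
# should list the single minikube node and the cluster endpoints
kubectl get nodes
kubectl cluster-info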