I'm getting an error when running kubectl on one machine (Windows).
The Kubernetes 1.7 cluster (master and worker) is running on CentOS 7.
Here's my .kube\config
`apiVersion: v1 clusters:
The cluster was built using kubeadm with the default certificates in the pki directory.
The error:
kubectl unable to connect to server: x509: certificate signed by unknown authority
For those of you who came to this thread late, like I did, and for whom none of these answers worked, I may have the solution:
When I copied my .kube/config file over to my Windows 10 machine (with kubectl installed), I didn't change the server address from 127.0.0.1:6443 to the master's IP address, which was 192.168.x.x (the Windows 10 machine connects to a Raspberry Pi cluster on the same network). Make sure you do this, and it may fix your problem like it did mine.
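In other words, the server entry in the copied kubeconfig has to point at the master instead of the loopback address. A minimal sketch, assuming the kubeadm default cluster name kubernetes and a placeholder master IP of 192.168.1.10 (substitute your own values):

# Check the cluster name in the copied kubeconfig
kubectl config get-clusters
# Point it at the master's LAN address instead of 127.0.0.1
kubectl config set-cluster kubernetes --server=https://192.168.1.10:6443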
This is an old question, but in case it helps someone else, here is another possible reason.
Let's assume you deployed Kubernetes as user x. If the .kube dir is under /home/x and you connect to the node as root or as another user y, you will get this error.
You need to switch to that user's profile so kubectl can load the configuration from the .kube dir.
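For example (a hedged sketch, where x is the user that deployed the cluster, as above):

# Either switch to the user that deployed the cluster...
su - x
# ...or, staying as the current user, point kubectl at that user's config explicitly
export KUBECONFIG=/home/x/.kube/config
kubectl get nodes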
Hope this helps.
In case of this error, you should export the kubecfg, which contains the certs:

kops export kubecfg "your cluster-name"

and set the state store:

export KOPS_STATE_STORE=s3://"paste your S3 store"

Now you should be able to access and see the resources of your cluster.
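A sketch of the full sequence, with placeholder names (my-kops-state-bucket and my-cluster.example.com are examples, not your real values):

# Point kops at the S3 bucket that holds the cluster state
export KOPS_STATE_STORE=s3://my-kops-state-bucket
# Write the cluster's kubeconfig (including the CA data) to ~/.kube/config
kops export kubecfg my-cluster.example.com
# Verify
kubectl get nodes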
On GCP:

Check your gcloud version:

gcloud version

Then run:

gcloud container clusters get-credentials 'clusterName' --zone=us-'zoneName'

Get clusterName and zoneName from your console, here: https://console.cloud.google.com/kubernetes/list?

ref: x509 errors with Marketplace deployments on GCP (Kubernetes)
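Filled in with placeholder values (my-cluster and us-central1-a are examples; take the real cluster name and zone from the console page above):

# Confirm the Cloud SDK is installed and current
gcloud version
# Fetch credentials and write them into ~/.kube/config
gcloud container clusters get-credentials my-cluster --zone us-central1-a
# Verify
kubectl get nodes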
In my case this was happening because my company's network does not allow self-signed certificates through. Try switching to a different network and run:

gcloud container clusters get-credentials standard-cluster-1 --zone us-central1-a --project devops1-218400

Here devops1-218400 is my project name; replace it with your own.
One more solution in case it helps anyone:
My scenario: the server in my ~/.kube/config is https://kubernetes.docker.internal:6443

Issue: kubectl commands to this endpoint were going through the proxy. I figured this out after running kubectl --insecure-skip-tls-verify cluster-info dump, which displayed the proxy's HTML error page.

Fix: just make sure that this URL doesn't go through the proxy. In my case, in bash, I used:

export no_proxy=$no_proxy,*.docker.internal
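A hedged sketch of the diagnosis and fix; kubernetes.docker.internal is the endpoint from the scenario above, and listing the exact hostname avoids relying on wildcard support in whatever tool reads no_proxy:

# See which proxy variables are set in the current shell
env | grep -i proxy
# Exempt the local Kubernetes endpoint from the proxy
export no_proxy=$no_proxy,kubernetes.docker.internal
# Should now return cluster info instead of the proxy's HTML error page
kubectl cluster-info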
In my case I resolved this issue by copying the kubelet configuration to my home kube config:
cat /etc/kubernetes/kubelet.conf > ~/.kube/config
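A roughly equivalent, hedged sketch; on kubeadm clusters kubelet.conf is normally readable only by root, so copying it with sudo and handing ownership to your user avoids permission errors:

mkdir -p ~/.kube
sudo cp /etc/kubernetes/kubelet.conf ~/.kube/config
sudo chown $(id -u):$(id -g) ~/.kube/config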
I just want to share this; sorry I wasn't able to provide it earlier, as I only just realized what was causing the error. On the master node we were running a kubectl proxy:
kubectl proxy --address 0.0.0.0 --accept-hosts '.*'
I stopped the proxy and, voilà, the error was gone.
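If you're not sure whether such a proxy is still running on the master, a hedged way to find and stop it:

# List any running "kubectl proxy" processes
pgrep -af "kubectl proxy"
# Stop them (or stop whatever service/session started the proxy)
pkill -f "kubectl proxy"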
I'm now able to run:

kubectl get nodes

NAME                    STATUS    AGE    VERSION
centos-k8s2             Ready     3d     v1.7.5
localhost.localdomain   Ready     3d     v1.7.5
I hope this helps anyone who stumbles upon this scenario.
I got the same error while running $ kubectl get nodes as the root user. I fixed it by exporting kubelet.conf to the KUBECONFIG environment variable:
$ export KUBECONFIG=/etc/kubernetes/kubelet.conf
$ kubectl get nodes
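If you want the setting to survive new shells, a hedged follow-up is to persist it in your shell profile (or copy the file into ~/.kube/config as in the earlier answer):

$ echo 'export KUBECONFIG=/etc/kubernetes/kubelet.conf' >> ~/.bashrc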