I followed this guide https://coreos.com/kubernetes/docs/latest/kubernetes-on-aws-launch.html to create a Kubernetes cluster on AWS with kube-aws. I am using kube-aws v0.9.4-rc2.
After kube-aws up --s3-uri s3://.. completed successfully, I tried to list the nodes with kubectl get nodes, and that's when I got this error:
:; kubectl get nodes
Unable to connect to the server: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kube-ca")
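One way to see which certificate the apiserver is actually presenting is to inspect it with openssl (a sketch; <controller-endpoint> is a placeholder for the server address from the kubeconfig):
openssl s_client -connect <controller-endpoint>:443 </dev/null 2>/dev/null | openssl x509 -noout -subject -issuer -dates
This prints the subject, issuer, and validity dates of the serving certificate.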
In the kubeconfig file, there is a line describing the certificate authority:
apiVersion: v1
kind: Config
clusters:
- cluster:
    certificate-authority: credentials/ca.pem
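For comparison, the subject of that CA file can be printed with:
openssl x509 -in credentials/ca.pem -noout -subject
If the issuer shown by the apiserver does not match this subject, then the kube-ca that kubectl trusts is not the CA that signed the serving certificate.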
Does anyone know what might have gone wrong here? How could I debug it a bit further?
It seems like the problem was that my credentials were not all generated correctly, so perhaps the apiserver cert was signed with the wrong CA cert? Not sure how that might've happened.
Anyway, deleting the credentials directory, destroying the cluster, and bringing it up again solved the problem for me. Luckily it's still an experimental cluster, so I could do that. Not sure if I could've fixed it without destroying the cluster.
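In hindsight, a way to confirm the mismatch without tearing anything down might have been to verify the generated apiserver certificate against the CA directly (a sketch; this assumes kube-aws wrote the certs as credentials/apiserver.pem and credentials/ca.pem):
openssl verify -CAfile credentials/ca.pem credentials/apiserver.pem
This should print "credentials/apiserver.pem: OK"; any other output means the credentials are inconsistent and need to be regenerated.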
1. mkdir -p $HOME/.kube
2. sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
3. sudo chown $(id -u):$(id -g) $HOME/.kube/config
4. kubectl get nodes
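Note that these steps assume a kubeadm-provisioned control plane node, where /etc/kubernetes/admin.conf exists. For a kube-aws cluster like the one in the question, the equivalent is to point kubectl at the kubeconfig file that kube-aws generated in the cluster asset directory:
kubectl --kubeconfig=kubeconfig get nodes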