I'm new to Kubernetes and I'm trying to set up a healthy local cluster (on ESXi).
I have run into several errors that I have been unable to resolve:
Dashboard is running but I can't access it through the kubectl proxy API
I was unable to access any svc exposed with type NodePort (TCP connection reset)
I was unable to retrieve logs from pods
I was unable to run kubeadm upgrade plan
I think most of them are due to the same misconfiguration or error, but I have been unable to locate where the broken brick is.
If I forgot some information, tell me and I'll add it to the post.
I am running the cluster on VMs. All VMs run CentOS 7, and I have already done this on all of them:
swapoff -a
systemctl disable firewalld
systemctl stop firewalld
setenforce 0
systemctl daemon-reload
systemctl restart docker
systemctl restart kubelet
For Flannel:
sysctl -w net.bridge.bridge-nf-call-iptables=1
sysctl -w net.bridge.bridge-nf-call-ip6tables=1
kubectl version
Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.2", GitCommit:"81753b10df112992bf51bbc2c2f85208aad78335", GitTreeState:"clean", BuildDate:"2018-04-27T09:22:21Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.2", GitCommit:"81753b10df112992bf51bbc2c2f85208aad78335", GitTreeState:"clean", BuildDate:"2018-04-27T09:10:24Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}kubectl get ep
NAME ENDPOINTS AGE
dark-room-dep 172.17.0.10:8085,172.17.0.9:8085 19h
kubernetes 10.66.222.223:6443 8d
kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
dark-room-dep NodePort 10.99.12.214 <none> 8085:30991/TCP 19h
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 8d
kubectl cluster-info
Kubernetes master is running at https://10.66.222.223:6443
Heapster is running at https://10.66.222.223:6443/api/v1/namespaces/kube-system/services/heapster/proxy
KubeDNS is running at https://10.66.222.223:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
monitoring-grafana is running at https://10.66.222.223:6443/api/v1/namespaces/kube-system/services/monitoring-grafana/proxy
monitoring-influxdb is running at https://10.66.222.223:6443/api/v1/namespaces/kube-system/services/monitoring-influxdb/proxy
kubectl get deployment
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
dark-room-dep 2 2 2 2 20h
kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
default dark-room-dep-577bf64bb8-9n5p7 1/1 Running 0 20h
default dark-room-dep-577bf64bb8-jmppg 1/1 Running 0 20h
kube-system etcd-localhost.localdomain 1/1 Running 6 8d
kube-system heapster-69b5d4974d-qvtrj 1/1 Running 0 1d
kube-system kube-apiserver-localhost.localdomain 1/1 Running 5 8d
kube-system kube-controller-manager-localhost.localdomain 1/1 Running 4 8d
kube-system kube-dns-86f4d74b45-njzj9 3/3 Running 0 1d
kube-system kube-flannel-ds-h9c2m 1/1 Running 3 6d
kube-system kube-flannel-ds-tcbd7 1/1 Running 5 8d
kube-system kube-proxy-7v6mf 1/1 Running 3 6d
kube-system kube-proxy-hwbwl 1/1 Running 4 8d
kube-system kube-scheduler-localhost.localdomain 1/1 Running 6 8d
kube-system kubernetes-dashboard-7d5dcdb6d9-q42q5 1/1 Running 0 1d
kube-system monitoring-grafana-69df66f668-zf2kc 1/1 Running 0 1d
kube-system monitoring-influxdb-78d4c6f5b6-nhdbx 1/1 Running 0 1d
route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 10.66.222.1 0.0.0.0 UG 100 0 0 ens192
10.66.222.0 0.0.0.0 255.255.254.0 U 100 0 0 ens192
172.17.0.0 0.0.0.0 255.255.0.0 U 0 0 0 docker0
172.25.1.0 172.25.1.0 255.255.255.0 UG 0 0 0 flannel.1
kubectl get nodes --all-namespaces
NAME STATUS ROLES AGE VERSION
k8s-01 Ready <none> 6d v1.10.2
localhost.localdomain Ready master 8d v1.10.2
Thank you for all the help. Have a nice day.
zonko
To access the DASHBOARD UI, this is what I've done, and it works on a Kubernetes cluster with the following specifications:
OS: CentOS 7
Kubernetes component versions (it also worked for me with v1.10.x):
Client Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.2", GitCommit:"bb9ffb1654d4a729bb4cec18ff088eacc153c239", GitTreeState:"clean", BuildDate:"2018-08-07T23:17:28Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.2", GitCommit:"bb9ffb1654d4a729bb4cec18ff088eacc153c239", GitTreeState:"clean", BuildDate:"2018-08-07T23:08:19Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"}Steps
Install the Dashboard UI
kubectl create -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml
Install kubectl on your local machine: the method here depends on whether you're working with Windows, Linux, or OS X, but it is pretty straightforward.
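As a rough sketch of that install step on Linux (the version number is just an example matching my cluster; pick the one matching yours):
# Download the kubectl binary, make it executable and put it on the PATH
curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.11.2/bin/linux/amd64/kubectl
chmod +x kubectl
sudo mv kubectl /usr/local/bin/kubectl
# Check the client version
kubectl version --client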
Copy the directory .kube from your master node to your local machine
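A minimal sketch of that copy step, assuming SSH access as root and that the kubeconfig lives under /root/.kube on the master (adjust the user, IP and path to your setup):
# Copy the kubeconfig directory from the master to the local machine
scp -r root@10.66.222.223:/root/.kube ~/
# Check that the local kubectl can reach the cluster
kubectl cluster-info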
Create a service account with name <name> (you can put whatever you want, but from my experience it is better to use the same account name that you use to log in on your machine, where you import the .kube directory) in the kube-system namespace.
$ vim my_user.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: <your account user_name>
  namespace: kube-system
kubectl create -f my_user.yaml
Create the cluster role association
$ vim cluster-admin-role-association.yml
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: <your account user_name>
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: <your account user_name>
  namespace: kube-system
kubectl create -f cluster-admin-role-association.yml
Get your token to log in
$ kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep <your account user_name> | awk '{print $1}')
Name: <your account user_name>-token-xxxxx
Namespace: kube-system
Labels: <none>
Annotations: kubernetes.io/service-account.name=<your account user_name>
kubernetes.io/service-account.uid=xxxxxxxxxxxxxxxxxxxxxx
Type: kubernetes.io/service-account-token
Data
====
namespace: 11 bytes
token:
xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx (your token)
Now you can execute kubectl proxy on your local machine, access the Dashboard UI at the following URL, and log in with the token:
http://127.0.0.1:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/
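A minimal sketch of that last step, run on the machine where you copied .kube (8001 is just kubectl proxy's default local port):
# Start a local proxy to the API server (listens on 127.0.0.1:8001 by default)
kubectl proxy
# Then open the URL above in a browser and paste the token when the Dashboard asks for it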
You can change namespaces to assign different users to different projects, for example, and be more precise with permissions.
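For instance, if you want an account limited to one namespace instead of cluster-admin, a sketch of a namespaced Role plus RoleBinding could look like this (the my-project namespace and dev-user service account are made-up names for illustration):
# Read-only access to a few resource types, scoped to one namespace
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
  name: project-viewer
  namespace: my-project
rules:
- apiGroups: [""]
  resources: ["pods", "services", "endpoints"]
  verbs: ["get", "list", "watch"]
---
# Bind that Role to the service account created earlier
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: project-viewer-binding
  namespace: my-project
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: project-viewer
subjects:
- kind: ServiceAccount
  name: dev-user
  namespace: kube-system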
To access a SERVICE, usually (at least with my deployment) you need to know on which of your nodes the service is running (you can get it by adding -o wide to your kubectl get query), and you should be able to access it at http(s)://<node_ip>:<service_port>/<any url complement if there is one>
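As a rough sketch with the dark-room-dep service from the question above (its NodePort is 30991; <node_ip> is one of your node addresses):
# See which node each pod landed on
kubectl get pods -o wide
# See which NodePort the service was assigned (8085:30991/TCP here)
kubectl get svc dark-room-dep
# Then hit the node on that port
curl http://<node_ip>:30991/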
There is probably a better way to access the service (DNS names), but I'm still learning too, so for the moment that's how I do it.
Hope that helps
Cheers
Errors that I have resolved:
I was unable to retrieve logs from pods: fixed by disabling the firewall on the node.
I was unable to run kubeadm upgrade plan: the proxy configuration was misbehaving (see the sketch below).
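A minimal sketch of the kind of proxy exclusion involved, assuming kubeadm was picking up an HTTP proxy from the environment (10.66.222.223 is my API server address, 10.96.0.0/12 the default service CIDR; adjust the pod network CIDR to yours):
# Keep API server and cluster traffic out of the HTTP proxy before running kubeadm
export NO_PROXY=127.0.0.1,localhost,10.66.222.223,10.96.0.0/12,<your pod network CIDR>
export no_proxy=$NO_PROXY
kubeadm upgrade plan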
Errors that I was unable to resolve:
Dashboard is running but I can't access it through the kubectl proxy API: I have worked on this and discovered that it needs Heapster, and Heapster needs other components... I was unable to make it work.
I was unable to access any svc exposed with type NodePort (TCP connection reset): I have successfully deployed a svc on port 80, but it doesn't work on any other port.
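For reference, this is roughly how I am checking the NodePort problem on a node (30991 is the NodePort from the svc above, kube-proxy-7v6mf one of my kube-proxy pods; the exact output depends on your kube-proxy mode):
# Check that kube-proxy wrote an iptables rule for the NodePort
sudo iptables-save | grep 30991
# Check whether the port answers on the node itself
curl -v http://localhost:30991/
# Look at kube-proxy logs for errors
kubectl -n kube-system logs kube-proxy-7v6mf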