I have a local cluster (no cloud provider) made up of 3 VMs: one master and two worker nodes. I created an NFS-backed volume so the data can be reused if a pod dies and is rescheduled on another node, but I think some component is not working well. To create the cluster I followed just this guide: kubernetes guide.
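For reference, this is roughly how I created the NFS PersistentVolume and claim (a minimal sketch: the server IP, export path, and resource names here are placeholders, not my real values):

cat <<EOF | sudo kubectl apply -f -
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysql-nfs-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: 192.168.1.100   # placeholder: address of my NFS server VM
    path: /srv/nfs/mysql    # placeholder: the exported directory
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-nfs-claim
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
EOF

The pod then mounts the claim as a volume. After creating the cluster, this is its actual state: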
master@master-VirtualBox:~/Documents/KubeT/nfs$ sudo kubectl get pod --all-namespaces
[sudo] password for master:
NAMESPACE     NAME                                         READY     STATUS    RESTARTS   AGE
default       mysqlnfs3                                    1/1       Running   0          27m
kube-system   etcd-master-virtualbox                       1/1       Running   0          46m
kube-system   kube-apiserver-master-virtualbox             1/1       Running   0          46m
kube-system   kube-controller-manager-master-virtualbox   1/1       Running   0          46m
kube-system   kube-dns-86f4d74b45-f6hpf                    3/3       Running   0          47m
kube-system   kube-flannel-ds-nffv6                        1/1       Running   0          38m
kube-system   kube-flannel-ds-rqw9v                        1/1       Running   0          39m
kube-system   kube-flannel-ds-s5wzn                        1/1       Running   0          44m
kube-system   kube-proxy-6j7p8                             1/1       Running   0          38m
kube-system   kube-proxy-7pj8d                             1/1       Running   0          39m
kube-system   kube-proxy-jqshs                             1/1       Running   0          47m
kube-system   kube-scheduler-master-virtualbox             1/1       Running   0          46m
master@master-VirtualBox:~/Documents/KubeT/nfs$ sudo kubectl get node
NAME                STATUS    ROLES     AGE       VERSION
host1-virtualbox    Ready     <none>    39m       v1.10.2
host2-virtualbox    Ready     <none>    40m       v1.10.2
master-virtualbox   Ready     master    48m       v1.10.2
And this is the pod:
master@master-VirtualBox:~/Documents/KubeT/nfs$ sudo kubectl get pod
NAME        READY     STATUS    RESTARTS   AGE
mysqlnfs3   1/1       Running   0          29m
The pod is scheduled on host2, and if I open a shell on host2 and use docker exec, I can use the container just fine: the data are stored and retrieved (the exact commands I run are sketched after the error below). But when I try to use kubectl exec, it does not work:
master@master-VirtualBox:~/Documents/KubeT/nfs$ sudo kubectl exec -it -n default mysqlnfs3 -- /bin/bash
error: unable to upgrade connection: pod does not exist
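For completeness, this is roughly what I do directly on host2 that does work (a sketch: the container ID is a placeholder that I look up with docker ps):

# On host2: find the container backing the pod, then exec into it.
sudo docker ps | grep mysqlnfs3
# <container-id> is a placeholder for the ID printed above.
sudo docker exec -it <container-id> /bin/bash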