I'm setting up Kubernetes 1.1 with Vagrant 1.7.4 on Windows, using the box-cutter/fedora22 box. Kubernetes itself is up and running fine: all nodes are Ready, and a test pod can be deployed to any of them. But a service can't reach a pod running on a different node; it works fine when the service is hit from the node hosting the pod. I believe this is a networking issue, since the pod IP is reachable only from its hosting node.
I was told to set up flanneld on all nodes, but the problem is still there.
Any help is highly appreciated.
George
I finally made it work. The key is to set up an overlay network so that Docker containers on different nodes can see each other. I'm using flannel; here is an article about how to set up flannel on Vagrant.
One thing worth mentioning: you may have to delete the 'docker0' bridge before starting flanneld. I used bridge-utils to do this:
$ yum -y install bridge-utils
$ ip link set docker0 down
$ brctl delbr docker0
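After deleting the old bridge and starting flanneld, Docker needs to be told to use flannel's per-node subnet when it recreates docker0. A minimal sketch of that step, assuming the default flannel subnet file path (flanneld writes FLANNEL_SUBNET and FLANNEL_MTU there; adjust the path if you changed --subnet-file):

```shell
# flanneld writes the subnet it leased for this node to this file:
source /run/flannel/subnet.env

# Start the Docker daemon so its bridge uses flannel's subnet and MTU.
# With this, each node's containers get addresses routable over the overlay.
dockerd --bip=${FLANNEL_SUBNET} --mtu=${FLANNEL_MTU}
```

On systemd-based distros like Fedora 22 you would typically put these flags in a drop-in unit for docker.service rather than launching dockerd by hand.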
Just in case anyone finds this in the future: you shouldn't set up the overlay network for Docker itself, as the container runtime shouldn't be able to reach containers outside their pod. This error comes from how Vagrant brings up multiple VMs and which eth interface is assigned when you use a private network for the nodes. On most Linux VMs, eth0 is the default external interface, and Kubernetes binds to it automatically; in a Vagrant VM this keeps Kubernetes networking from functioning correctly, because the IP the cluster is actually configured with lives on eth1. You can fix this pretty easily by setting a few environment variables prior to running kubeadm. See this Medium article for an example: https://medium.com/@joatmon08/playing-with-kubeadm-in-vagrant-machines-part-2-bac431095706?source=linkShare-102e54f71648-1533865749
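To make the eth1 fix concrete, here is a hedged sketch of the usual approach: point the kubelet and the API server at the private-network IP instead of eth0's NAT address. The interface name eth1 and the use of /etc/default/kubelet are assumptions that match typical Vagrant/Debian-style setups; the kubeadm and kubelet flags themselves (--apiserver-advertise-address, --node-ip) are standard.

```shell
# Grab the IPv4 address Vagrant assigned to the private network on eth1
# (assumed interface name; check with `ip addr` on your VM):
NODE_IP=$(ip -4 addr show eth1 | grep -oP '(?<=inet )[0-9.]+')

# Make the kubelet register the node with that IP instead of eth0's:
echo "KUBELET_EXTRA_ARGS=--node-ip=${NODE_IP}" | sudo tee /etc/default/kubelet

# On the control-plane node, advertise the API server on eth1 as well:
sudo kubeadm init --apiserver-advertise-address="${NODE_IP}"
```

Worker nodes only need the kubelet change before running `kubeadm join`.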
Kubespray may also be of interest to you; it lets you set up a Kubernetes cluster from scratch using Vagrant.
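For reference, getting started with Kubespray under Vagrant is roughly this (assuming Vagrant and a supported Ansible version are already installed locally):

```shell
# Kubespray ships its own Vagrantfile in the repository root.
git clone https://github.com/kubernetes-sigs/kubespray.git
cd kubespray

# Brings up the VMs and runs the Ansible playbooks that install
# Kubernetes on them, including the CNI networking setup.
vagrant up
```

This sidesteps the eth0/eth1 problem above, since Kubespray's Vagrant integration configures the node IPs for you.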