I'm developing in a virtual machine (CentOS in Vagrant) that runs multiple containers, and I need to set up a container orchestrator within the VM. The list of containers:
[vagrant@localhost ~]$ docker-compose -f /vagrant/docker-compose.yml ps
         Name                      Command                State              Ports
------------------------------------------------------------------------------------------------------------
vagrant_django_1          /run-mod_wsgi-express.sh        Up       8000/tcp
vagrant_ers-build_1       bash /ers/startup.sh            Up       35729/tcp
vagrant_jupyterhub_1      /srv/run_jupyterhub.sh          Up       8081/tcp, 0.0.0.0:8888->8888/tcp
vagrant_mongodb_1         /usr/bin/mongod                 Up       27017/tcp
vagrant_proxy_1           /run-httpd.sh                   Up       0.0.0.0:443->443/tcp, 0.0.0.0:80->80/tcp
vagrant_python-volume_1   /bin/bash                       Exit 0
vagrant_static_1          /run-httpd.sh                   Up       80/tcp
vagrant_websocket_1       bash -c source activate tr ...  Up       8473/tcp
My understanding is that I run a "master" on the host and connect each container to the master.
If nodes = containers, the Kubernetes documentation says to SSH into each node and run the kubeadm join command. The issue is that you can't SSH into containers; executing /bin/bash in each container is the closest thing to SSH, but kubeadm, docker, and systemd aren't installed in the containers.
If nodes != containers, then I'm not sure how to connect them within a single VM.
Do I have to create a second VM as the "master", or can everything be done in a single VM?
Minikube may be what you are looking for:
https://github.com/kubernetes/minikube
The minikube tool spins up a VM and configures it to run a single-node Kubernetes cluster. From the VM host you can use Kubernetes CLI tools such as kubectl to deploy containers into the single-node cluster running inside the VM. Note that your containers become Kubernetes-managed pods, not kubeadm nodes, so the "SSH into each container" step from the documentation doesn't apply.
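A minimal sketch of the workflow, assuming minikube and kubectl are installed on the host; the mongodb deployment is just an illustrative example matching one of the services listed above:

```shell
# Start a single-node Kubernetes cluster (minikube boots and manages its own VM)
minikube start

# Point kubectl at the minikube cluster and verify the node is ready
kubectl config use-context minikube
kubectl get nodes

# Deploy a container image as an example, then expose it inside the cluster
kubectl create deployment mongodb --image=mongo
kubectl expose deployment mongodb --port=27017
```

Because minikube manages its own VM, you would run it on the host rather than inside your existing Vagrant VM (nested virtualization is possible but awkward).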
There is another tool called kompose:
https://github.com/kubernetes/kompose
which can help translate docker-compose.yml files into Kubernetes resource files suitable for use with kubectl.
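A rough sketch of how the two tools combine, assuming the compose file path from your question:

```shell
# Generate Kubernetes manifests from the compose file
# (kompose writes one YAML file per deployment/service into the current directory)
kompose convert -f /vagrant/docker-compose.yml

# Apply the generated manifests to the running cluster
kubectl apply -f .
```

You may still need to adjust the generated manifests by hand, e.g. for volumes and the host-published ports (443, 80, 8888) shown in your docker-compose ps output.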