I would like to learn Kubernetes and would like to set it up on my laptop.
The architecture would be as follows:
For virtualization, I am using VirtualBox.
The question is, how to achieve it?
To set up a Kubernetes cluster on Ubuntu Server VMs with VirtualBox and kubeadm, follow these steps:
All of the virtual machines need to be able to communicate with the Internet, with the main host and with each other. This can be achieved in various ways, for example with bridged networking, host-only adapters, NAT networks etc. The networking scheme below is only an example and can be adjusted.
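As an illustration only (the addresses are assumptions that simply match the inventory used later in this answer), the scheme could look like this:
kubernetes-master 10.0.0.10
kubernetes-node1 10.0.0.11
kubernetes-node2 10.0.0.12
kubernetes-node3 10.0.0.13
Each VM would have one adapter for Internet access (for example NAT) and one host-only adapter in the 10.0.0.0/24 network for communication with the host and the other VMs.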
You can do everything manually, but to speed up the configuration process you can use an automation tool like Ansible. It can be installed on the virtualization host, on another virtual machine etc.
$ sudo apt update
$ sudo apt install python3-pip
$ sudo pip3 install ansible
To be able to connect to the virtual machines without a password you need to configure SSH keys. The command below will create a pair of SSH keys (private and public) that will allow you to log in to the other systems without providing a password:
$ ssh-keygen -t rsa -b 4096
These keys will be created in the default location: /home/USER/.ssh
The next step is to upload the newly created public key to all of the virtual machines.
For each virtual machine you need to invoke:
$ ssh-copy-id USER@IP_ADDRESS
This command will copy your public key to the authorized_keys file and will allow you to log in without a password.
By default the root account cannot be accessed over SSH with a password, but it can be accessed with SSH keys (which you created earlier). Assuming the default configuration, you can copy the .ssh directory from the user's home directory to root's home directory.
This step needs to be invoked on all of the virtual machines:
$ sudo cp -r /home/USER/.ssh /root/
You can check it by running the command below on the main host:
$ ssh root@IP_ADDRESS
If you can connect without password it means that the keys are configured correctly.
You need to check whether Ansible can connect to all of the virtual machines. To do that you need two things: an inventory (hosts) file and a playbook.
Example hosts file:
[kubernetes:children]
master
nodes
[kubernetes:vars]
ansible_user=root
ansible_port=22
[master]
kubernetes-master ansible_host=10.0.0.10
[nodes]
kubernetes-node1 ansible_host=10.0.0.11
kubernetes-node2 ansible_host=10.0.0.12
kubernetes-node3 ansible_host=10.0.0.13
The hosts file consists of two main groups of hosts: [master] and [nodes]. Variables specific to the whole [kubernetes] group are stored in the [kubernetes:vars] section.
Example playbook:
- name: Playbook for checking connection between hosts
  hosts: all
  gather_facts: no
  tasks:
    - name: Task to check the connection
      ping:
The main purpose of the above playbook is to check the connection between the host and the virtual machines.
You can test the connection by invoking:
$ ansible-playbook -i hosts_file ping.yaml
The output of this command should look like this:
PLAY [Playbook for checking connection between hosts] *****************************************************
TASK [Task to check the connection] ***********************************************************************
ok: [kubernetes-node1]
ok: [kubernetes-node2]
ok: [kubernetes-node3]
ok: [kubernetes-master]
PLAY RECAP ************************************************************************************************
kubernetes-master : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
kubernetes-node1 : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
kubernetes-node2 : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
kubernetes-node3 : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
The output above shows that Ansible successfully connected to all of the virtual machines.
Hostnames can be configured with Ansible. Each VM should be able to reach every other VM by hostname. Ansible can set the hostnames as well as modify the /etc/hosts file. Example playbook: hostname.yaml
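A minimal sketch of what such a playbook could look like (it assumes the inventory names from the hosts file above are the desired hostnames):
- name: Playbook for configuring hostnames and /etc/hosts
  hosts: all
  tasks:
    - name: Set the hostname to the inventory name
      hostname:
        name: "{{ inventory_hostname }}"
    - name: Add an /etc/hosts entry for every machine in the kubernetes group
      lineinfile:
        path: /etc/hosts
        line: "{{ hostvars[item].ansible_host }} {{ item }}"
      loop: "{{ groups['kubernetes'] }}"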
Swap needs to be disabled when working with Kubernetes. Example playbook: disable_swap.yaml
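A minimal sketch of what disable_swap.yaml could look like: it turns swap off immediately and comments out swap entries in /etc/fstab so it stays off after a reboot.
- name: Playbook for disabling swap
  hosts: all
  tasks:
    - name: Turn off swap immediately
      command: swapoff -a
    - name: Comment out swap entries in /etc/fstab
      replace:
        path: /etc/fstab
        regexp: '^([^#].*\sswap\s.*)$'
        replace: '# \1'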
Some packages are required before provisioning. All of them can be installed with Ansible:
Example playbook: apt_install.yaml
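The exact package list depends on your setup; a minimal sketch of what apt_install.yaml could look like, assuming the usual prerequisites for adding external apt repositories:
- name: Playbook for installing prerequisite packages
  hosts: all
  tasks:
    - name: Install common prerequisites
      apt:
        name:
          - apt-transport-https
          - ca-certificates
          - curl
          - gnupg
          - software-properties-common
        state: present
        update_cache: yes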
In this example you will install Docker as your container runtime. Playbook docker_install.yaml installs Docker and enables its service; a minimal sketch is shown below.
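This sketch is an assumption about what such a playbook could contain (it uses the docker.io package from the Ubuntu repositories; installing from Docker's own repository is another option):
- name: Playbook for installing Docker
  hosts: all
  tasks:
    - name: Install Docker from the Ubuntu repositories
      apt:
        name: docker.io
        state: present
        update_cache: yes
    - name: Enable and start the Docker service
      systemd:
        name: docker
        enabled: yes
        state: started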
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd"
When deploying the Kubernetes cluster, kubeadm will give the above warning about the Docker cgroup driver. Playbook docker_configure.yaml was created to resolve this issue.
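A minimal sketch of what docker_configure.yaml could look like: it writes /etc/docker/daemon.json so that Docker uses the systemd cgroup driver and then restarts Docker.
- name: Playbook for setting the Docker cgroup driver to systemd
  hosts: all
  tasks:
    - name: Configure Docker to use the systemd cgroup driver
      copy:
        dest: /etc/docker/daemon.json
        content: |
          {
            "exec-opts": ["native.cgroupdriver=systemd"]
          }
    - name: Restart Docker to apply the new configuration
      systemd:
        name: docker
        state: restarted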
There are some core components of Kubernetes (kubeadm, kubelet and kubectl) that need to be installed before cluster deployment. Playbook kubetools_install.yaml installs them; a minimal sketch is shown below.
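This sketch is an assumption about what such a playbook could contain; the repository details match the upstream kubeadm installation instructions at the time of writing and may have changed since:
- name: Playbook for installing kubeadm, kubelet and kubectl
  hosts: all
  tasks:
    - name: Add the Kubernetes apt signing key
      apt_key:
        url: https://packages.cloud.google.com/apt/doc/apt-key.gpg
        state: present
    - name: Add the Kubernetes apt repository
      apt_repository:
        repo: deb https://apt.kubernetes.io/ kubernetes-xenial main
        state: present
    - name: Install kubelet, kubeadm and kubectl
      apt:
        name:
          - kubelet
          - kubeadm
          - kubectl
        state: present
        update_cache: yes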
Playbook reboot.yaml will reboot all the virtual machines.
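A sketch of reboot.yaml using Ansible's built-in reboot module:
- name: Playbook for rebooting all virtual machines
  hosts: all
  tasks:
    - name: Reboot the machine and wait for it to come back
      reboot: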
After successfully completing all of the steps above, the cluster can be created. The command below, run on the master node, will initialize the cluster:
$ kubeadm init --apiserver-advertise-address=IP_ADDRESS_OF_MASTER_NODE --pod-network-cidr=192.168.0.0/16
Kubeadm may warn about the number of CPUs. This can be ignored by passing an additional argument to the kubeadm init command: --ignore-preflight-errors=NumCPU
Successful kubeadm provisioning should output something similar to this:
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 10.0.0.10:6443 --token SECRET-TOKEN \
--discovery-token-ca-cert-hash sha256:SECRET-CA-CERT-HASH
Copy the kubeadm join command; it will be needed later on all of the worker nodes:
kubeadm join 10.0.0.10:6443 --token SECRET-TOKEN \
--discovery-token-ca-cert-hash sha256:SECRET-CA-CERT-HASH
Run the commands below on the master node as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
The CNI plugin is responsible for networking between pods and nodes. There are many options, for example Calico, Flannel and Weave Net.
The command below will install Calico:
$ kubectl apply -f https://docs.projectcalico.org/v3.10/manifests/calico.yaml
Run the previously saved join command from the kubeadm init output on all of the worker nodes:
kubeadm join 10.0.0.10:6443 --token SECRET-TOKEN \
--discovery-token-ca-cert-hash sha256:SECRET-CA-CERT-HASH
All of the worker nodes should output:
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
Run the command below on the master node as a regular user to check whether the nodes joined the cluster correctly:
$ kubectl get nodes
Output of this command:
NAME STATUS ROLES AGE VERSION
kubernetes-master Ready master 115m v1.16.2
kubernetes-node1 Ready <none> 106m v1.16.2
kubernetes-node2 Ready <none> 105m v1.16.2
kubernetes-node3 Ready <none> 105m v1.16.2
The output above confirms that all of the nodes are configured correctly.
Pods can now be deployed on the cluster!
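As a quick smoke test (the nginx image is just an example), you can create a simple deployment and watch its pod start:
$ kubectl create deployment nginx --image=nginx
$ kubectl get pods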
Hope this helps. This is the simplest way I found, after trying almost every other way to do this. Rancher 2.0 is an orchestration tool for creating Kubernetes clusters with ease and getting your first service deployed as quickly as possible. It helps in understanding the minutiae of Kubernetes via a top-down approach.
Rancher provides a simple, user-friendly UI to get started with, along with well-written guides. If visualizing things helps you, this is the best way to do it.
This is an example of an architecture we run and of what can be achieved with Rancher RKE.
Some references + there are videos out there as well.