3 Kubernetes clusters 1 base on local machine

11/7/2019

I would like to learn Kubernetes and would like to set it up on my laptop.

The architecture would be as follows:


  • Create four Ubuntu 18.04 Server VM instances on my laptop
  • 3 of the 4 VMs will be Kubernetes cluster nodes and 1 VM will be the base
  • Access the base VM via SSH

For virtualization, I am using VirtualBox.

The question is: how do I achieve this?

-- zero_coding
kubernetes
virtual-machine

2 Answers

11/12/2019

To set up a Kubernetes cluster on Ubuntu Server with VirtualBox and kubeadm, follow these steps:

Prerequisites:

  • Virtual machines with a minimum specification of:
    • 2 cores and 2 GB of RAM for the master node
    • 1 core and 1 GB of RAM for each worker node
  • Ubuntu Server 18.04 installed on all virtual machines
  • OpenSSH Server installed on all virtual machines

All of the virtual machines need to be able to communicate with the Internet, the main host, and each other. This can be achieved in various ways: bridged networking, host-only adapters, etc. The example networking scheme below can be adjusted.

Network scheme
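
One possible way to realize such a scheme (assumed here; adapt it to your own layout) is to give each VM a NAT adapter for Internet access plus a VirtualBox host-only adapter on the 10.0.0.0/24 network used in the inventory below:

$ VBoxManage hostonlyif create
$ VBoxManage hostonlyif ipconfig vboxnet0 --ip 10.0.0.1 --netmask 255.255.255.0
$ VBoxManage modifyvm "kubernetes-master" --nic1 nat --nic2 hostonly --hostonlyadapter2 vboxnet0

Repeat the modifyvm command for each VM, then assign the static addresses (10.0.0.10-10.0.0.13 below) inside the guests.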

Ansible:

You can do everything manually, but to speed up the configuration process you can use an automation tool like Ansible. It can be installed on the virtualization host, on another virtual machine, etc.

Installation steps to perform on the host:

  • Refresh the package information from the repositories:
    $ sudo apt update
  • Install the package manager for Python 3:
    $ sudo apt install python3-pip
  • Install the Ansible package:
    $ sudo pip3 install ansible

Configuring SSH key based access:

Generating key pairs

To connect to the virtual machines without a password, you need to configure SSH keys. The command below creates an SSH key pair (private and public) that will let you log in to the other systems without providing a password:
$ ssh-keygen -t rsa -b 4096
The keys will be created in the default location: /home/USER/.ssh

Authorization of keys on virtual machines

The next step is to upload the newly created public key to all of the virtual machines.
For each virtual machine, invoke:
$ ssh-copy-id USER@IP_ADDRESS
This command copies your public key into the remote authorized_keys file and allows you to log in without a password.

SSH root access

By default, the root account cannot be accessed over SSH with a password, but it can be accessed with SSH keys (which you created earlier). Assuming the default file layout, you can copy the .ssh directory from the user's home directory to root's home directory.

This step needs to be invoked on all virtual machines:
$ sudo cp -r /home/USER/.ssh /root/

You can verify it by running the command below on the main host:
$ ssh root@IP_ADDRESS

If you can connect without a password, the keys are configured correctly.

Checking connection between virtual machines and Ansible:

Testing the connection

You need to check if Ansible can connect to all of the virtual machines. To do that, you need two things:

  • A hosts (inventory) file with information about the hosts (the virtual machines in this case)
  • A playbook file describing the tasks you want Ansible to perform

Example hosts file:

[kubernetes:children]  
master  
nodes  

[kubernetes:vars]  
ansible_user=root  
ansible_port=22  

[master]  
kubernetes-master ansible_host=10.0.0.10  

[nodes]  
kubernetes-node1 ansible_host=10.0.0.11  
kubernetes-node2 ansible_host=10.0.0.12  
kubernetes-node3 ansible_host=10.0.0.13

The hosts file consists of two main groups of hosts:

  • master - group created for the master node
  • nodes - group created for the worker nodes

The [kubernetes:children] section makes both groups members of the parent kubernetes group, and variables shared by all of them are stored in the [kubernetes:vars] section.

Example playbook:

- name: Playbook for checking connection between hosts  
  hosts: all  
  gather_facts: no  

  tasks:
  - name: Task to check the connection  
    ping:

The main purpose of the above playbook is to check the connection between the host and the virtual machines.
You can test the connection by invoking:
$ ansible-playbook -i hosts_file ping.yaml

Output of this command should be like this:

PLAY [Playbook for checking connection between hosts] *****************************************************  

TASK [Task to check the connection] ***********************************************************************  

ok: [kubernetes-node1]  
ok: [kubernetes-node2]  
ok: [kubernetes-node3]  
ok: [kubernetes-master]  

PLAY RECAP ************************************************************************************************  

kubernetes-master : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0  
kubernetes-node1 : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0  
kubernetes-node2 : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0  
kubernetes-node3 : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0  

The output above shows that Ansible can connect to all of the virtual machines.

Configuration before cluster deployment:

Configure hostnames

Hostnames can be configured with Ansible. Each VM should be able to reach every other VM by its hostname. Ansible can set the hostnames as well as update the /etc/hosts file. Example playbook: hostname.yaml
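
The answer does not include the playbook body; a minimal sketch of what hostname.yaml could look like, assuming the inventory names and addresses from the hosts file above:

# sketch of hostname.yaml (assumed contents)
- name: Playbook for configuring hostnames
  hosts: all

  tasks:
  - name: Set the hostname to the inventory name
    hostname:
      name: "{{ inventory_hostname }}"

  - name: Make every host resolvable by name via /etc/hosts
    lineinfile:
      path: /etc/hosts
      line: "{{ hostvars[item].ansible_host }} {{ item }}"
    loop: "{{ groups['kubernetes'] }}"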

Disable SWAP

Swap needs to be disabled when working with Kubernetes. Example playbook: disable_swap.yaml
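
A minimal sketch of what disable_swap.yaml could look like; it turns swap off immediately and keeps it off across reboots:

# sketch of disable_swap.yaml (assumed contents)
- name: Playbook for disabling swap
  hosts: all
  gather_facts: no

  tasks:
  - name: Turn swap off immediately
    command: swapoff -a

  - name: Comment out swap entries in /etc/fstab so swap stays off after reboot
    replace:
      path: /etc/fstab
      regexp: '^([^#].*\sswap\s.*)$'
      replace: '# \1'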

Additional software installation

Some packages are required before provisioning; all of them can be installed with Ansible. Example playbook: apt_install.yaml
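
The exact package list is not given in the answer; a sketch of apt_install.yaml, assuming the usual prerequisites for adding the Docker and Kubernetes apt repositories:

# sketch of apt_install.yaml (assumed package list)
- name: Playbook for installing prerequisite packages
  hosts: all
  gather_facts: no

  tasks:
  - name: Install packages needed for apt repositories over HTTPS
    apt:
      name:
        - apt-transport-https
        - ca-certificates
        - curl
        - gnupg
        - software-properties-common
      state: present
      update_cache: yes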

Container Runtime Interface

In this example, you will install Docker as your CRI. Playbook docker_install.yaml will (a sketch follows the list):

  • Add apt signing key for Docker
  • Add Docker's repository
  • Install a specific version of Docker (the latest is recommended)
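
A sketch of what docker_install.yaml could look like for Ubuntu 18.04 (bionic), using Docker's official repository and signing key:

# sketch of docker_install.yaml (assumed contents)
- name: Playbook for installing Docker
  hosts: all

  tasks:
  - name: Add the apt signing key for Docker
    apt_key:
      url: https://download.docker.com/linux/ubuntu/gpg
      state: present

  - name: Add Docker's repository
    apt_repository:
      repo: deb https://download.docker.com/linux/ubuntu bionic stable
      state: present

  - name: Install Docker (pin a version here if needed, e.g. docker-ce=<version>)
    apt:
      name: docker-ce
      state: present
      update_cache: yes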

Docker configuration

[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd"

When deploying the Kubernetes cluster, kubeadm will give the warning above about the Docker cgroup driver. Playbook docker_configure.yaml resolves this issue.
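
A sketch of what docker_configure.yaml could do: set Docker's cgroup driver to systemd via /etc/docker/daemon.json and restart Docker:

# sketch of docker_configure.yaml (assumed contents)
- name: Playbook for setting Docker's cgroup driver to systemd
  hosts: all
  gather_facts: no

  tasks:
  - name: Configure the systemd cgroup driver
    copy:
      dest: /etc/docker/daemon.json
      content: |
        {
          "exec-opts": ["native.cgroupdriver=systemd"]
        }

  - name: Restart Docker to apply the new configuration
    service:
      name: docker
      state: restarted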

Kubernetes tools installation

Some core components of Kubernetes need to be installed before cluster deployment. Playbook kubetools_install.yaml (sketched after the list) will:

  • For master and worker nodes:
    • Add apt signing key for Kubernetes
    • Add Kubernetes repository
    • Install kubelet and kubeadm
  • Additionally for master node:
    • Install kubectl
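
A sketch of what kubetools_install.yaml could look like, using the Kubernetes apt repository that was current at the time:

# sketch of kubetools_install.yaml (assumed contents)
- name: Playbook for installing Kubernetes tools
  hosts: all

  tasks:
  - name: Add the apt signing key for Kubernetes
    apt_key:
      url: https://packages.cloud.google.com/apt/doc/apt-key.gpg
      state: present

  - name: Add the Kubernetes repository
    apt_repository:
      repo: deb https://apt.kubernetes.io/ kubernetes-xenial main
      state: present

  - name: Install kubelet and kubeadm on all nodes
    apt:
      name:
        - kubelet
        - kubeadm
      state: present
      update_cache: yes

  - name: Install kubectl on the master node only
    apt:
      name: kubectl
      state: present
    when: inventory_hostname in groups['master']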

Reboot

Playbook reboot.yaml will reboot all the virtual machines.
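
A sketch of reboot.yaml, using Ansible's built-in reboot module:

# sketch of reboot.yaml (assumed contents)
- name: Playbook for rebooting all virtual machines
  hosts: all
  gather_facts: no

  tasks:
  - name: Reboot the machine and wait for it to come back up
    reboot:
      reboot_timeout: 300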

Cluster deployment:

Cluster initialization

After successfully completing all the steps above, the cluster can be created. The command below, run on the master node, will initialize the cluster:

$ kubeadm init --apiserver-advertise-address=IP_ADDRESS_OF_MASTER_NODE --pod-network-cidr=192.168.0.0/16

Kubeadm may give a warning about the number of CPUs. It can be ignored by passing an additional argument to the kubeadm init command: --ignore-preflight-errors=NumCPU
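
For example, with the master node address used in the hosts file above:

$ kubeadm init --apiserver-advertise-address=10.0.0.10 --pod-network-cidr=192.168.0.0/16 --ignore-preflight-errors=NumCPU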

Successful kubeadm provisioning should output something similar to this:

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 10.0.0.10:6443 --token SECRET-TOKEN \
    --discovery-token-ca-cert-hash sha256:SECRET-CA-CERT-HASH

Copy the kubeadm join command; it will be needed later on all of the worker nodes:

kubeadm join 10.0.0.10:6443 --token SECRET-TOKEN \
    --discovery-token-ca-cert-hash sha256:SECRET-CA-CERT-HASH

Then run the commands below on the master node as a regular user to configure kubectl:

 mkdir -p $HOME/.kube
 sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
 sudo chown $(id -u):$(id -g) $HOME/.kube/config

Deploying Container Network Interface (CNI)

The CNI is responsible for networking between pods and nodes. There are many options to choose from, for example:

  • Flannel
  • Calico
  • Weave
  • Multus

The command below will install Calico:

$ kubectl apply -f https://docs.projectcalico.org/v3.10/manifests/calico.yaml

Provisioning worker nodes

Run the previously copied command from the kubeadm init output on all of the worker nodes:

kubeadm join 10.0.0.10:6443 --token SECRET-TOKEN \
    --discovery-token-ca-cert-hash sha256:SECRET-CA-CERT-HASH

All of the worker nodes should output:

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

Testing:

Run the command below on the master node as a regular user to check whether the nodes are properly connected:

$ kubectl get nodes

Output of this command:

NAME                STATUS   ROLES    AGE    VERSION
kubernetes-master   Ready    master   115m   v1.16.2
kubernetes-node1    Ready    <none>   106m   v1.16.2
kubernetes-node2    Ready    <none>   105m   v1.16.2
kubernetes-node3    Ready    <none>   105m   v1.16.2

The output above shows that all of the nodes are configured correctly.

Pods can now be deployed on the cluster!
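
For example, you can deploy nginx and verify that the pod is scheduled on one of the worker nodes:

$ kubectl create deployment nginx --image=nginx
$ kubectl get pods -o wide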

-- Dawid Kruk
Source: StackOverflow

11/7/2019

Hope this helps. This is the simplest way I found, after trying almost every other way to do this. Rancher 2.0 is an orchestration tool for getting started with cluster creation specifically for Kubernetes with ease, and for deploying your first service as quickly as possible. It helps in understanding the minutiae of Kubernetes via a top-down approach.

Rancher provides a very simple, user-friendly UI to get started with, along with well-written guides. If visualizing things helps you, this is the best way to do it.

One use case is an architecture like the one we have, which shows what can be achieved with Rancher RKE.


There are some references and videos out there as well.

-- damitj07
Source: StackOverflow