Goal: have a YAML deployment that I can simply run kubectl create -f deployment.yaml on, and it creates nodes with pods spread across them in the manner I describe in the YAML file.
Goal output: 3 nodes on Google Cloud, where 2 nodes each run 2 pods (front-end/back-end) and a 3rd node runs the database with a persistent volume.
There is no way to test this locally since Docker Desktop is a single-node system. Google Cloud initially put all my pods on one node, and they did not schedule because of a lack of resources.
So I looked up the docs and asked in several Slack channels. The consensus is to use pod/node affinity and anti-affinity to make sure the pods I want end up on the nodes I want.
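To make that concrete, here is roughly the kind of snippet being suggested, sketched for the front-end Deployment. The labels (app: front-end, role: db) and the image are placeholders I made up, not values from my repos:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: front-end
spec:
  replicas: 2
  selector:
    matchLabels:
      app: front-end
  template:
    metadata:
      labels:
        app: front-end
    spec:
      affinity:
        podAntiAffinity:                 # never put two front-end pods on the same node
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchLabels:
                app: front-end
            topologyKey: kubernetes.io/hostname
        nodeAffinity:                    # keep front-end pods off the node labeled for the database
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: role
                operator: NotIn
                values:
                - db
      containers:
      - name: front-end
        image: example/front-end:latest  # placeholder image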
Project overhead, k8s+Kustomize
Here are the GitHub repos I am testing this with and trying to build what I am describing. I am trying to stay as cloud agnostic as possible, but if I need to be more specific with cloud keywords, please leave a comment about it. The whole deployment is spread across the /kubernetes directory in each project. Everything did work locally before I added the pod/node affinity items.
What you're looking for here is Kubeadm. It's a tool used to bootstrap Kubernetes masters and join new nodes to clusters. Here is a reference document and here are the API docs for the kubelet and all of its arguments. Using Kubeadm and these arguments, you can build an InitConfiguration, a ClusterConfiguration, and a JoinConfiguration.
1) After you create these three files, put the InitConfiguration and ClusterConfiguration into a single YAML file, and put the JoinConfiguration into its own file. Put the first file on your master(s) and the second file on your node(s). If you're installing with a Multi-Master configuration, keep this document in mind when building your configurations. You will also need a load balancer, proxy, or clever DNS trickery to enable this.
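Roughly what those two files might look like with the v1beta3 kubeadm config API; the addresses, Kubernetes version, and token below are placeholders, not values to copy.

First file, e.g. /tmp/master.yml:

apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 10.0.0.10            # placeholder control-plane address
  bindPort: 6443
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: v1.28.0               # pin to the version of the packages you install
controlPlaneEndpoint: "10.0.0.10:6443"   # point this at your load balancer for Multi-Master
networking:
  podSubnet: 10.244.0.0/16               # must match the network plugin you install later

Second file, e.g. /tmp/node.yml:

apiVersion: kubeadm.k8s.io/v1beta3
kind: JoinConfiguration
discovery:
  bootstrapToken:
    apiServerEndpoint: "10.0.0.10:6443"
    token: abcdef.0123456789abcdef                # placeholder; use the token kubeadm init prints
    caCertHashes:
    - "sha256:<hash printed by kubeadm init>"     # placeholder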
2) Install the following packages:
docker
containerd
kubelet
kubeadm
3) Disable any swap partitions, disable SELinux, and configure your firewall appropriately.
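For what it's worth, steps 2 and 3 can be expressed as Ansible tasks (more on Ansible below). Package names and firewall ports vary by distro and topology, so treat this as a sketch:

- name: Install the container runtime and Kubernetes packages
  ansible.builtin.package:
    name: [containerd, docker, kubelet, kubeadm]   # exact package names vary by distro/repo
    state: present

- name: Disable swap for the running system
  ansible.builtin.command: swapoff -a              # also remove the swap entry from /etc/fstab

- name: Disable SELinux
  ansible.posix.selinux:
    state: disabled

- name: Open the API server port
  ansible.posix.firewalld:
    port: 6443/tcp
    permanent: true
    state: enabled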
4) Bootstrap Kubernetes on your master(s) with your config file: kubeadm init --config /tmp/master.yml.
5) Add your worker nodes to the cluster with your config file: kubeadm join --config /tmp/node.yml.
6) Install a network plugin.
7) Deploy the rest of your software using a Kustomize project. Hell, you could even manage your Kubeadm configuration files with Kustomize. I have. You could even throw the network plugin manifest in here.
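A kustomization.yaml for that project can be as small as this; the file names are placeholders for whatever manifests you have in your /kubernetes directories:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- frontend-deployment.yaml
- backend-deployment.yaml
- database-statefulset.yaml
- persistent-volume-claim.yaml
- network-plugin.yaml        # optional: the CNI manifest from step 6, if you manage it here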
This Kubernetes solution is fully cloud agnostic, and all of Kubernetes is automated, but the configuration isn't. I would automate steps 1-6 with something like Ansible, where you declare each step as a task or role in a playbook. Ansible supports a ton of modules, and there are open source roles on Galaxy for the cases it doesn't cover.
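For example, step 4 could boil down to a couple of tasks in a role; the template name and paths here are just one way you might lay it out:

- name: Render the kubeadm master configuration
  ansible.builtin.template:
    src: master.yml.j2
    dest: /tmp/master.yml

- name: Bootstrap the control plane
  ansible.builtin.command: kubeadm init --config /tmp/master.yml
  args:
    creates: /etc/kubernetes/admin.conf   # skip the task if the cluster is already initialized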
That leaves your infrastructure. Making this cloud agnostic is impossible. No two vendors have the same API. That's why abstraction layers like Kubernetes exist to begin with. To solve this problem, attempt to be Cloud Aware instead. Use a tool like Terraform to manage your infrastructure for you. Then, add your required Terraform commands to your Ansible playbook.
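One way to wire Terraform into the same playbook, assuming you use the community.general.terraform module and keep a terraform/ directory next to it:

- name: Provision the cloud infrastructure
  community.general.terraform:
    project_path: "{{ playbook_dir }}/terraform"
    state: present
    force_init: true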
Bam, now you can deploy your entire infrastructure, configuration, and application with a single command. From a single YAML file:
ansible-playbook -i inventory.ini site.yml
Well, a single YAML file and a bunch of other roles, templates, playbooks, and Terraform configuration files. ;)
EDIT: Here are some open source projects I've written that leverage these ideas and design principles. You should be able to at least glean a lot of Ansible logic from them. The first is meant to build a single-node cluster on a bare-metal server; the second is meant to build a four-node cluster on Proxmox.
Both work fully with all of the manifests I have written here, which should also provide you with a wealth of Kustomize and Ansible examples.