I'd like to find out the current best practice for setting up a Kubernetes cluster on a Dell Alienware Aurora workstation running Ubuntu 18.04 LTS for GPU-based TensorFlow workloads. This will be a staging ground for my services/containers before I deploy them to a full-blown k8s cluster. I'm not sure what the correct strategy for such a setup looks like. Here are some possibilities:
Update: added kubeadm options. Can someone also comment on the Docker-in-Docker solution? Will services/pods carry over seamlessly from a Docker-in-Docker setup to multi-node setups on remote machines or cloud instances? A sketch of what I mean follows.
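For concreteness, here is the Docker-in-Docker idea sketched with kind (Kubernetes-in-Docker) as one example of the approach; the cluster name `staging` and `deployment.yaml` are placeholders for my own names:

```bash
# Docker-in-Docker idea sketched with kind: each "node" of the
# cluster runs as a Docker container on this one workstation.
kind create cluster --name staging

# Older kind releases do not merge the kubeconfig automatically:
# export KUBECONFIG="$(kind get kubeconfig-path --name staging)"

# deployment.yaml stands in for one of my service manifests. Since
# kind exposes the standard Kubernetes API, the same manifest should
# apply unchanged to a remote multi-node or cloud cluster later.
kubectl apply -f deployment.yaml
```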
Would love to hear from the Kubernetes experts, or from anyone familiar with TensorFlow/GPU workloads on a single physical machine.
I'd go with 2 or 3 VMs and kubeadm; you'll have a real cluster to play with. There are some Vagrant/Ansible playbooks out there for exactly this. GPU/TensorFlow support is still fairly new, so play ;)
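A rough sketch of that kubeadm route, assuming Docker and the kubeadm/kubelet/kubectl packages are already installed on each Ubuntu VM (Flannel is just one CNI choice, and the manifest URL is the one current as of this writing):

```bash
# On the master VM: bring up the control plane. The pod CIDR here
# matches what the Flannel manifest expects.
sudo kubeadm init --pod-network-cidr=10.244.0.0/16

# Make kubectl work for your regular user (as kubeadm's output suggests).
mkdir -p $HOME/.kube
sudo cp /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

# Install a pod network; Flannel shown, but any CNI works.
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

# On each worker VM: paste the join command that `kubeadm init` printed,
# e.g. (placeholders, use your own values):
# sudo kubeadm join <master-ip>:6443 --token <token> \
#     --discovery-token-ca-cert-hash sha256:<hash>
```

For the GPU side, a sketch assuming the NVIDIA driver and nvidia-docker2 are installed on whichever node actually sees the GPU; the device-plugin version, the pod name `tf-gpu-test`, and the TensorFlow image tag are mine for illustration, so pin whatever versions you actually use:

```bash
# Deploy the NVIDIA device plugin so kubelet advertises nvidia.com/gpu.
kubectl apply -f https://raw.githubusercontent.com/NVIDIA/k8s-device-plugin/v1.11/nvidia-device-plugin.yml

# A pod then requests the GPU like any other resource, which is also
# exactly how it will request one on a full-blown cluster later.
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: tf-gpu-test
spec:
  restartPolicy: Never
  containers:
  - name: tf
    image: tensorflow/tensorflow:latest-gpu
    command: ["python", "-c", "import tensorflow as tf; print(tf.test.is_gpu_available())"]
    resources:
      limits:
        nvidia.com/gpu: 1
EOF
```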