I have set up a Kubernetes cluster on Ubuntu virtual machines using Vagrant and Oracle VirtualBox. It is a 3-node cluster: 1 master and 2 worker nodes. I can connect to these nodes successfully and run kubectl commands on them.
I'm running these VMs on my laptop, where the host machine is macOS. Now I want to access the same Kubernetes cluster and run kubectl commands from my Mac terminal.
Following is my Vagrantfile:
# -*- mode: ruby -*-
# vi:set ft=ruby sw=2 ts=2 sts=2:
# Define the number of master and worker nodes
# If this number is changed, remember to update setup-hosts.sh script with the new hosts IP details in /etc/hosts of each VM.
NUM_MASTER_NODE = 1
NUM_WORKER_NODE = 2
IP_NW = "192.168.56."
MASTER_IP_START = 1
NODE_IP_START = 2
LB_IP_START = 30
# All Vagrant configuration is done below. The "2" in Vagrant.configure
# configures the configuration version (we support older styles for
# backwards compatibility). Please don't change it unless you know what
# you're doing.
Vagrant.configure("2") do |config|
# The most common configuration options are documented and commented below.
# For a complete reference, please see the online documentation at
# https://docs.vagrantup.com.
# Every Vagrant development environment requires a box. You can search for
# boxes at https://vagrantcloud.com/search.
# config.vm.box = "base"
config.vm.box = "ubuntu/bionic64"
# Disable automatic box update checking. If you disable this, then
# boxes will only be checked for updates when the user runs
# `vagrant box outdated`. This is not recommended.
config.vm.box_check_update = false
  # Create a public network, which generally matches a bridged network.
  # Bridged networks make the machine appear as another physical device on
  # your network.
  # config.vm.network "public_network"
  # config.vm.network "private_network", ip: "55.55.55.5"
  # config.vm.network "private_network", type: "dhcp"
  # config.vm.network "public_network", :bridge => "en0: Wi-Fi (Wireless)", :ip => "192.168.56.2"
  # config.vm.network "public_network", :bridge => "en0: Wi-Fi (Wireless)", type: "dhcp"
  # Share an additional folder to the guest VM. The first argument is
  # the path on the host to the actual folder. The second argument is
  # the path on the guest to mount the folder. And the optional third
  # argument is a set of non-required options.
  # config.vm.synced_folder "../data", "/vagrant_data"

  # Provider-specific configuration so you can fine-tune various
  # backing providers for Vagrant. These expose provider-specific options.
  # Example for VirtualBox:
  #
  # config.vm.provider "virtualbox" do |vb|
  #   # Customize the amount of memory on the VM:
  #   vb.memory = "1024"
  # end
  #
  # View the documentation for the provider you are using for more
  # information on available options.
  # Provision Master Nodes
  (1..NUM_MASTER_NODE).each do |i|
    config.vm.define "kubemaster" do |node|
      # Name shown in the GUI
      node.vm.provider "virtualbox" do |vb|
        vb.name = "kubemaster"
        vb.memory = 2048
        vb.cpus = 2
      end
      node.vm.hostname = "kubemaster"
      node.vm.network :private_network, ip: IP_NW + "#{MASTER_IP_START + i}"
      node.vm.network "forwarded_port", guest: 22, host: "#{2710 + i}"
      node.vm.provision "setup-hosts", :type => "shell", :path => "ubuntu/vagrant/setup-hosts.sh" do |s|
        s.args = ["enp0s8"]
      end
      node.vm.provision "setup-dns", type: "shell", :path => "ubuntu/update-dns.sh"
    end
  end
  # Provision Worker Nodes
  (1..NUM_WORKER_NODE).each do |i|
    config.vm.define "kubenode0#{i}" do |node|
      node.vm.provider "virtualbox" do |vb|
        vb.name = "kubenode0#{i}"
        vb.memory = 2048
        vb.cpus = 2
      end
      node.vm.hostname = "kubenode0#{i}"
      node.vm.network :private_network, ip: IP_NW + "#{NODE_IP_START + i}"
      node.vm.network "forwarded_port", guest: 22, host: "#{2720 + i}"
      node.vm.provision "setup-hosts", :type => "shell", :path => "ubuntu/vagrant/setup-hosts.sh" do |s|
        s.args = ["enp0s8"]
      end
      node.vm.provision "setup-dns", type: "shell", :path => "ubuntu/update-dns.sh"
    end
  end
end
The IP address of my host machine is 192.168.1.5, while the Kubernetes nodes are at 192.168.56.2 (master), 192.168.56.3 (worker1) and 192.168.56.4 (worker2).
I have tried a lot but have not found a concrete solution. I would really appreciate your suggestions on this. Thanks!
I think you can just use SSH to remote into the master server and run kubectl there.
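For example, from the directory containing the Vagrantfile (a minimal sketch; port 2711 comes from the guest: 22, host: "#{2710 + i}" forwarding in the Vagrantfile above):

# let Vagrant resolve the key and forwarded port
vagrant ssh kubemaster

# or plain SSH through the forwarded port (2710 + 1 for the master)
ssh -p 2711 vagrant@127.0.0.1

# once on the master, kubectl works as before
kubectl get nodes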
Using a kubeconfig file is the standard way to interact with a Kubernetes cluster from outside the cluster, so there is nothing wrong with that.
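For a quick start, you can simply copy the cluster's kubeconfig to the Mac. This is a sketch assuming a kubeadm-provisioned cluster, where the admin kubeconfig sits at /etc/kubernetes/admin.conf on the master:

# on the Mac, from the Vagrantfile directory
mkdir -p ~/.kube
vagrant ssh kubemaster -c "sudo cat /etc/kubernetes/admin.conf" > ~/.kube/config

# the server: entry in that file must point at an address the Mac can
# reach, e.g. https://192.168.56.2:6443 on the host-only network
kubectl get nodes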
From a security standpoint, though, it's not a good idea to use admin user credentials in a kubeconfig file. To avoid that, you can generate a service account token and use that in the kubeconfig file instead. Limit the privileges of the service account using an appropriate Role and RoleBinding.
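A rough sketch of that setup, run on the master (the namespace demo and account name mac-user are placeholders, and the role here only allows reading pods):

# service account plus a read-only role, bound together
kubectl create namespace demo
kubectl -n demo create serviceaccount mac-user
kubectl -n demo create role pod-reader --verb=get,list,watch --resource=pods
kubectl -n demo create rolebinding mac-user-binding --role=pod-reader --serviceaccount=demo:mac-user

# read the token from the auto-created secret (pre-v1.24 behaviour;
# on v1.24+ run `kubectl -n demo create token mac-user` instead)
SECRET=$(kubectl -n demo get sa mac-user -o jsonpath='{.secrets[0].name}')
TOKEN=$(kubectl -n demo get secret "$SECRET" -o jsonpath='{.data.token}' | base64 --decode)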
Check this to learn how to create a kubeconfig file with a service account token:
https://stackoverflow.com/questions/47770676/how-to-create-a-kubectl-config-file-for-serviceaccount
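On the Mac, the kubeconfig entries can then be assembled with kubectl config, pointing at the master's host-only IP (6443 is assumed here as the kubeadm default API server port, and $TOKEN is the token generated above):

kubectl config set-cluster kubemaster --server=https://192.168.56.2:6443 --insecure-skip-tls-verify=true
kubectl config set-credentials mac-user --token="$TOKEN"
kubectl config set-context kubemaster --cluster=kubemaster --user=mac-user --namespace=demo
kubectl config use-context kubemaster

# should now work from the Mac terminal
kubectl get pods

Skipping TLS verification is only for quick testing; for anything longer-lived, copy the cluster CA (/etc/kubernetes/pki/ca.crt on a kubeadm master) and pass --certificate-authority with --embed-certs=true to kubectl config set-cluster instead.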