Access local Kubernetes cluster running in VirtualBox

3/27/2018

I have configured a Kubernetes cluster using kubeadm by creating 3 VirtualBox nodes, each running CentOS (master, node1, node2). Each VirtualBox virtual machine is configured with 'Bridged' networking. As a result, I have the following setup:

  1. Master node 'master.k8s' running at 192.168.19.87 (VirtualBox)
  2. Worker node 1 'node1.k8s' running at 192.168.19.88 (VirtualBox)
  3. Worker node 2 'node2.k8s' running at 192.168.19.89 (VirtualBox)

Now I would like to access services running in the cluster from my local machine (the physical machine where the virtualbox nodes are running).

Running kubectl cluster-info I see the following output:

Kubernetes master is running at https://192.168.19.87:6443
KubeDNS is running at ...

As an example, suppose I deploy the dashboard inside my cluster: how do I open the dashboard UI using a browser running on my physical machine?

-- Salvatore
kubeadm
kubernetes
virtualbox

2 Answers

3/28/2018

The traditional way of getting access to the Kubernetes dashboard is documented in its README and is to use kubectl proxy.
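For the setup in the question, that looks roughly like the sketch below. Note that the dashboard namespace and service name depend on the dashboard version you deployed, so treat the proxy path as an assumption to verify against your installation:

```shell
# Run on the physical machine, with kubectl configured to talk to the
# master at https://192.168.19.87:6443 (e.g. copy /etc/kubernetes/admin.conf
# from the master node to ~/.kube/config, or point KUBECONFIG at it).
kubectl proxy

# kubectl proxy listens on localhost:8001 and forwards authenticated
# requests to the API server. The dashboard is then reachable through the
# services proxy path; for dashboard releases of that era it was typically:
#   http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/
```

The advantage is that no cluster service is exposed outside the API server; the proxy reuses your kubeconfig credentials.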

One should not have to ssh into the cluster to access any kubernetes service, since that would defeat the purpose of having a cluster, and would absolutely shoot a hole in the cluster's security model. Any ssh to Nodes should be reserved for "in case of emergency, break glass" situations.

More generally speaking, a well-configured Ingress controller will surface services en masse and also has the very pleasing side effect that your local cluster will operate exactly the same as your "for real" cluster, without any underhanded ssh-ery required.
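As a minimal sketch of that approach, an Ingress resource might look like the following. This assumes an ingress controller (for example ingress-nginx) is already installed in the cluster; the host name and the backing service `my-service` are placeholders, not part of the original question:

```yaml
# Hypothetical Ingress exposing a service named "my-service" on port 80.
# Requires a running ingress controller; adjust host and service names
# to your environment.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-service-ingress
spec:
  rules:
    - host: my-service.k8s.local
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-service
                port:
                  number: 80
```

With a DNS entry (or /etc/hosts line) pointing the host name at a node IP, the service is then reachable from the physical machine through the ingress controller.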

-- mdaniel
Source: StackOverflow

6/25/2019

The traditional way is to use kubectl proxy or a LoadBalancer service, but since you are on a development machine, a NodePort can be used to publish the applications, as a load balancer is not available in VirtualBox.

The following example deploys 3 replicas of an echo server and publishes its HTTP port using a NodePort:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: my-echo
          image: gcr.io/google_containers/echoserver:1.8          
---

apiVersion: v1
kind: Service
metadata:
  name: nginx-service-np
  labels:
    name: nginx-service-np
spec:
  type: NodePort
  ports:
    - port: 8082        # Cluster IP http://10.109.199.234:8082
      targetPort: 8080  # Application port
      nodePort: 30000   # Example (EXTERNAL-IP VirtualBox IPs) http://192.168.50.11:30000/ http://192.168.50.12:30000/ http://192.168.50.13:30000/
      protocol: TCP
      name: http
  selector:
    app: nginx

You can access the servers using any of the VirtualBox node IPs, for example http://192.168.50.11:30000, http://192.168.50.12:30000, or http://192.168.50.13:30000 (in the bridged setup from the question, the node IPs would be 192.168.19.87-89 instead).
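Putting it together, deploying and testing from the physical machine could look like this sketch (the manifest file name and the node IP are assumptions; substitute the values from your own environment):

```shell
# Apply the Deployment and Service manifests above, assuming they are
# saved together in a file named echo-nodeport.yaml.
kubectl apply -f echo-nodeport.yaml

# Wait until all 3 replicas are ready.
kubectl rollout status deployment/nginx-deployment

# Hit the NodePort on any node's IP; kube-proxy routes the request to one
# of the pods regardless of which node receives it.
curl http://192.168.50.11:30000/
```

Because the NodePort is opened on every node, any node IP works, even if no pod is scheduled on that particular node.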

See a full example at Building a Kubernetes Cluster with Vagrant and Ansible (without Minikube).

-- Javier Ruiz
Source: StackOverflow