Kubernetes HyperV Cluster Expose Service

7/28/2018

TL;DR:

How do I connect to my Kubernetes cluster from my host machine, through Hyper-V, and into the Kubernetes proxy (kube-proxy)?


So I have Hyper-V set up with two Ubuntu 18.04.1 LTS servers with identical setups.

One is a master:

OS Image:                   Ubuntu 18.04.1 LTS
Operating System:           linux
Architecture:               amd64
Container Runtime Version:  docker://18.6.0
Kubelet Version:            v1.11.1
Kube-Proxy Version:         v1.11.1

The other a node:

OS Image:                   Ubuntu 18.04.1 LTS
Operating System:           linux
Architecture:               amd64
Container Runtime Version:  docker://18.6.0
Kubelet Version:            v1.11.1
Kube-Proxy Version:         v1.11.1

It has these pods running by default:

kube-system   coredns-78fcdf6894-6ld8l               1/1       Running   1          4h
kube-system   coredns-78fcdf6894-ncp79               1/1       Running   1          4h
kube-system   etcd-node1                             1/1       Running   1          4h
kube-system   kube-apiserver-node1                   1/1       Running   1          4h
kube-system   kube-controller-manager-node1          1/1       Running   1          4h
kube-system   kube-proxy-942xh                       1/1       Running   1          4h
kube-system   kube-proxy-k6jl4                       1/1       Running   1          4h
kube-system   kube-scheduler-node1                   1/1       Running   1          4h
kube-system   kubernetes-dashboard-6948bdb78-9fbv8   1/1       Running   0          25m
kube-system   weave-net-fzj8h                        2/2       Running   2          3h
kube-system   weave-net-s648g                        2/2       Running   3          3h

These two nodes are exposed to my LAN via two IP addresses:

192.168.1.116
192.168.1.115

I've exposed my deployment:

service.yml:

apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: NodePort # expose the service on a static port on every node
  ports:
  - port: 80 # service port inside the cluster
    nodePort: 30001 # port exposed on each node
    protocol: TCP
    targetPort: http # named container port in the pod spec
  selector:
    app: my-api
    tier: backend
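Since `targetPort: http` refers to a port by name, the Deployment's pod template needs a container port named `http` for the Service to route anywhere. A minimal sketch of the matching fragment (the Deployment itself isn't shown in the question, so the container name and image here are assumptions):

```yaml
# Hypothetical pod template fragment for the my-api Deployment.
# The Service's targetPort: http resolves to the port named here.
spec:
  template:
    metadata:
      labels:
        app: my-api    # must match the Service selector
        tier: backend
    spec:
      containers:
      - name: my-api
        image: my-api:latest # assumed image name
        ports:
        - name: http         # must match the Service's targetPort
          containerPort: 80
```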

Listing the services:

$ kubectl get svc -o wide
my-service   NodePort    10.105.166.48   <none>        80:30001/TCP   50m       app=my-api,tier=backend
kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP        4h        <none>

If I sit on my master node and curl the pod:

$ kubectl get pods -o wide
my-api-86db46fc95-2d6wf   1/1       Running   0          22m       10.32.0.7   node2
$ curl 10.32.0.7:80/api/health
{"success": true}

My API is clearly up in the pod.

When I query the service IP:

$ curl 10.105.166.48:80/api/health

or

$ curl 10.105.166.48:30001/api/health

it just times out.
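A first check when a Service times out is whether its selector actually matched any pods; run this on the master, for example:

```shell
# If the Service's selector matches no pods, it has no endpoints
# and any request to it will hang.
$ kubectl get endpoints my-service
# A healthy service lists pod IP:port pairs, e.g. 10.32.0.7:80;
# "<none>" means the selector didn't match any pod labels.
```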

My network config for the master:

eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
    inet 192.168.1.116  netmask 255.255.255.0  broadcast 192.168.1.255

weave: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1376
    inet 10.40.0.0  netmask 255.240.0.0  broadcast 10.47.255.255

docker0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
    inet 172.17.0.1  netmask 255.255.0.0  broadcast 172.17.255.255

My iptables rules just list everything as source anywhere, destination anywhere, with loads of references to KUBE and DOCKER chains.
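Those KUBE chains are created by kube-proxy. Assuming it runs in the default iptables mode, the NodePort rules can be inspected directly (a diagnostic sketch, not from the question):

```shell
# kube-proxy (iptables mode) programs NodePort rules into the
# KUBE-NODEPORTS chain of the nat table.
$ sudo iptables -t nat -L KUBE-NODEPORTS -n
# Expect a TCP rule matching dpt:30001 for my-service; if it is
# missing, kube-proxy isn't programming this node's rules.
```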

I've even tried to set up the dashboard, to no avail, accessing the URL:

https://192.168.1.116:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/

Doing an nslookup reveals no hostname:

$ nslookup my-service
Server:         127.0.0.53
Address:        127.0.0.53#53

** server can't find eyemenu-api-service: SERVFAIL
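As an aside, that SERVFAIL is expected: Service names are served by CoreDNS and resolve only from inside the cluster, not via the node's own resolver (127.0.0.53 here is systemd-resolved). One way to test resolution from inside is a throwaway pod (a sketch; the pod name is arbitrary):

```shell
# Service names resolve via CoreDNS only from inside the cluster.
# Run a throwaway busybox pod and query the service's FQDN:
$ kubectl run -it --rm dns-test --image=busybox --restart=Never \
    -- nslookup my-service.default.svc.cluster.local
# Should resolve to the ClusterIP, 10.105.166.48.
```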
-- Callum Linington
docker
kubernetes
ubuntu
ubuntu-18.04

1 Answer

7/29/2018

To hit the NodePort 30001, you need to use your node's IP:

curl nodeip:30001/api/health

Pods inside the cluster don't know about the NodePort 30001.

The nodePort will expose the port on all worker nodes of the Kubernetes cluster, so you can use either:

curl node1:30001/api/health

or

curl node2:30001/api/health
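With the LAN IPs from the question, that would look like this (assuming the NodePort rules are in place on both machines):

```shell
# A NodePort service answers on every node's IP, regardless of
# which node actually runs the pod.
$ curl 192.168.1.116:30001/api/health
$ curl 192.168.1.115:30001/api/health
# Both should return {"success": true} once reachable.
```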

-- Bal Chua
Source: StackOverflow