My setup is single-node Kubernetes on an OpenStack VM:
VM's IP: 10.120.20.227 (from outside)
etcd version: 3.0.16
kubectl version: 1.5.7
Flannel version: 0.6.1
When I ssh into the machine I see the IP 192.168.0.5, so the etcd service is running on 192.168.0.5. I can access every application launched in the VM from the VM itself, but from outside the OpenStack cluster I am unable to access the applications using the VM's public IP.
The kube-proxy errors are:
May 22 18:38:16 poc-desktop kube-proxy[1246]: I0522 18:38:16.293261 1246 server.go:215] Using iptables Proxier.
May 22 18:38:16 poc-desktop kube-proxy[1246]: W0522 18:38:16.293629 1246 server.go:468] Failed to retrieve node info: Get http://192.168.0.5:8080/api/v1/nodes/poc-desktop: dial tcp 192.168.0.5:8
May 22 18:38:16 poc-desktop kube-proxy[1246]: W0522 18:38:16.293761 1246 proxier.go:249] invalid nodeIP, initialize kube-proxy with 127.0.0.1 as nodeIP
May 22 18:38:16 poc-desktop kube-proxy[1246]: W0522 18:38:16.293773 1246 proxier.go:254] clusterCIDR not specified, unable to distinguish between internal and external traffic
If I launch any web service on the VM on a random port, I can access the web app. But if I launch an application using kubectl, I am unable to access it from elsewhere. Does this require any special routing, or is something wrong with kube-proxy?
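For reference, the two warnings above map to two kube-proxy flags; a minimal sketch, where the API server URL and the pod CIDR are illustrative and must match the actual setup:

# point kube-proxy at a reachable API server and tell it the pod network
kube-proxy --master=http://192.168.0.5:8080 --cluster-cidr=10.244.0.0/16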
Check your security group; your packets from outside may be dropped.
I tried a similar thing, so I set up a single-node k8s cluster using kubeadm. To set up a k8s cluster using kubeadm, read more here.
And while starting the cluster I took care to expose the public IP. Now look at this machine's primary interface address on eth0, which is 172.17.133.24.
$ ip a sh eth0 | grep inet
inet 172.17.133.24/24 brd 172.17.133.255 scope global dynamic eth0
But this IP address is internal to the machine and you cannot reach it from outside. On the OpenStack console I could see one more address associated with the instance, which is what I can ping this machine on from outside: 10.3.8.95.
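The same two addresses can be cross-checked from the OpenStack CLI; a sketch, assuming the instance is named k8s-node (a hypothetical name):

$ openstack server show k8s-node -c addresses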
Now, if you look at how I started the cluster using kubeadm, I used this IP address from the 10.x series.
# kubeadm init --skip-preflight-checks --apiserver-advertise-address=10.3.8.95
...
[addons] Created essential addon: kube-proxy
[addons] Created essential addon: kube-dns
Your Kubernetes master has initialized successfully!
To start using your cluster, you need to run (as a regular user):
sudo cp /etc/kubernetes/admin.conf $HOME/
sudo chown $(id -u):$(id -g) $HOME/admin.conf
export KUBECONFIG=$HOME/admin.conf
...
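To verify the API server is actually advertised on the public address, a quick check after copying admin.conf as shown above:

$ export KUBECONFIG=$HOME/admin.conf
$ kubectl cluster-info
# should report the master running at the 10.3.8.95 address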
Now when I started an app on k8s, I exposed it via a service and then changed the service's type from ClusterIP to NodePort.
kubectl run web --image centos/httpd
kubectl expose deployment web --port 80
kubectl edit svc web
I changed the service's type to NodePort in the last command.
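If you prefer a non-interactive change over kubectl edit, the same switch can be made with kubectl patch:

$ kubectl patch svc web -p '{"spec":{"type":"NodePort"}}'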
Now find the port on the machine this svc is exposed on:
$ kubectl get svc web
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
web 10.105.242.27 <nodes> 80:31628/TCP 19m
In the above output you can see that the service web is exposed on two ports, 80 and 31628. When you expose a service as NodePort it is exposed on a random port from the node-port range, which by default is 30000-32767.
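If the random port is inconvenient, you can pin a specific nodePort in the service spec instead; a minimal sketch (the port values are illustrative and must fall inside the node-port range):

$ kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  type: NodePort
  selector:
    run: web          # kubectl run labels the pods run=web
  ports:
  - port: 80
    nodePort: 31628   # must be inside the node-port range
EOF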
Now to access this port from outside, I created a security group in OpenStack, allowed TCP ports from 30000 to 60000, and added this security group to the machine.
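A sketch of the same steps with the OpenStack CLI (the group name k8s-nodeports and the instance name are hypothetical; the default node-port range 30000-32767 is enough):

$ openstack security group create k8s-nodeports
$ openstack security group rule create --protocol tcp --dst-port 30000:32767 k8s-nodeports
$ openstack server add security group my-k8s-vm k8s-nodeports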
Now from my laptop I can curl the machine:
$ curl 10.3.8.95:31628
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.1//EN" "http://www.w3.org/TR/xhtml11/DTD/xhtml11.dtd"><html><head>
<meta http-equiv="content-type" content="text/html; charset=UTF-8">
<title>Apache HTTP Server Test Page powered by CentOS</title>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
...