Instead of working with Google Cloud, I decided to set up Kubernetes on my own machine. I made a Docker image of my hello-world web server and set up hello-controller.yaml:
apiVersion: v1
kind: ReplicationController
metadata:
  name: hello
  labels:
    name: hello
spec:
  replicas: 1
  selector:
    name: hello
  template:
    metadata:
      labels:
        name: hello
    spec:
      containers:
      - name: hello
        image: flaggy/hello
        ports:
        - containerPort: 8888
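For completeness, this is roughly how I created the controller and checked the pod (kubectl.sh is the wrapper script from the Vagrant setup; the label selector matches the one above):

$ kubectl.sh create -f hello-controller.yaml
$ kubectl.sh get pods -l name=hello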
Now I want to expose the service to the world. I don't think the Vagrant provider has a load balancer (which seems to be the best way to do it), so I tried the NodePort service type. However, the newly created NodePort does not seem to be listening on any IP I try. Here's hello-service.yaml:
apiVersion: v1
kind: Service
metadata:
  name: hello
  labels:
    name: hello
spec:
  type: NodePort
  selector:
    name: hello
  ports:
  - port: 8888
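As an aside, from the docs I gather you can also pin the node port explicitly instead of letting Kubernetes pick one from its default range (30000-32767); a minimal sketch (30888 is an arbitrary choice of mine):

spec:
  type: NodePort
  selector:
    name: hello
  ports:
  - port: 8888
    nodePort: 30888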
If I log into my minion I can access port 8888:
$ curl 10.246.1.3:8888
Hello!
When I describe my service this is what I get:
$ kubectl.sh describe service/hello
W0628 15:20:45.049822 1245 request.go:302] field selector: v1 - events - involvedObject.name - hello: need to check if this is versioned correctly.
W0628 15:20:45.049874 1245 request.go:302] field selector: v1 - events - involvedObject.namespace - default: need to check if this is versioned correctly.
W0628 15:20:45.049882 1245 request.go:302] field selector: v1 - events - involvedObject.kind - Service: need to check if this is versioned correctly.
W0628 15:20:45.049887 1245 request.go:302] field selector: v1 - events - involvedObject.uid - 2c0005e7-1dc2-11e5-8369-0800279dd272: need to check if this is versioned correctly.
Name: hello
Labels: name=hello
Selector: name=hello
Type: NodePort
IP: 10.247.5.87
Port: <unnamed> 8888/TCP
NodePort: <unnamed> 31423/TCP
Endpoints: 10.246.1.3:8888
Session Affinity: None
No events.
I cannot find anything listening on port 31423, which, as I gather, should be the external port for my service. I am also puzzled about IP 10.247.5.87.
I also note this:
$ kubectl.sh get nodes
NAME LABELS STATUS
10.245.1.3 kubernetes.io/hostname=10.245.1.3 Ready
Why is that IP different from what I see in the describe output for the service? I tried accessing both IPs from my host:
$ curl 10.245.1.3:31423
curl: (7) Failed to connect to 10.245.1.3 port 31423: Connection refused
$ curl 10.247.5.87:31423
curl: (7) Failed to connect to 10.247.5.87 port 31423: No route to host
$
So IP 10.245.1.3 is accessible, although port 31423 is not bound on it. I tried routing 10.247.5.87 to vboxnet1, but it didn't change anything:
$ sudo route add -net 10.247.5.87 netmask 255.255.255.255 vboxnet1
$ curl 10.247.5.87:31423
curl: (7) Failed to connect to 10.247.5.87 port 31423: No route to host
If I run sudo netstat -anp | grep 31423 on the minion, nothing comes up. Strangely, nothing comes up for sudo netstat -anp | grep 8888 either. There must be either some iptables magic or some interface in promiscuous mode being abused.
Would it be this difficult to get things working on bare metal as well? I haven't tried the AWS provider either, but I am getting worried.
A few things.
Your single pod is 10.246.1.3:8888 - that seems to work.
Your service is 10.247.5.87:8888 - that should work as long as you are within your cluster (it's virtual - you will not see it in netstat). This is the first thing to verify.
Your node is 10.245.1.3 and your service should ALSO be on 10.245.1.3:31423 - this is the part that does not seem to be working correctly. Like service IPs, this binding is virtual - it should show up in iptables-save but not netstat. If you log into your node (minion), can you curl localhost:31423?
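Concretely, logged into the node, this is a sketch of the checks I mean (the first three should succeed from inside the cluster; no real output shown):

$ curl 10.246.1.3:8888    # the pod directly
$ curl 10.247.5.87:8888   # the virtual service IP
$ curl localhost:31423    # the node port binding
$ sudo iptables-save | grep 31423    # where the virtual binding should appear if curl fails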
You might find this doc useful: https://github.com/thockin/kubernetes/blob/docs-debug-svcs/docs/debugging-services.md