I'm having a bit of a hard time figuring out whether the Guestbook example is working in Minikube. My main issue is probably that the example description here details all the steps, but gives no indication of how to connect to the web application once it's running from the default YAML files.
I'm using Minikube v0.10.0 on Mac OS X 10.9.5 (Mavericks), and this is what I eventually ended up with (which looks pretty good according to the example document):
PolePro:all-in-one poletti$ kubectl get svc
NAME           CLUSTER-IP   EXTERNAL-IP   PORT(S)    AGE
frontend       10.0.0.140   <none>        80/TCP     8s
kubernetes     10.0.0.1     <none>        443/TCP    2h
redis-master   10.0.0.165   <none>        6379/TCP   53m
redis-slave    10.0.0.220   <none>        6379/TCP   37m
PolePro:all-in-one poletti$ kubectl get deployments
NAME           DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
frontend       3         3         3            3           20s
redis-master   1         1         1            1           42m
redis-slave    2         2         2            2           37m
PolePro:all-in-one poletti$ kubectl get pods
NAME                            READY     STATUS    RESTARTS   AGE
frontend-708336848-0h2zj        1/1       Running   0          29s
frontend-708336848-ds8pn        1/1       Running   0          29s
frontend-708336848-v8wp9        1/1       Running   0          29s
redis-master-2093957696-or5iu   1/1       Running   0          43m
redis-slave-109403812-12k68     1/1       Running   0          37m
redis-slave-109403812-c7zmo     1/1       Running   0          37m
I thought that I might connect to http://10.0.0.140:80/ (i.e. the frontend address and port as returned by kubectl get svc above) and see the application running, but I'm getting a Connection refused:
PolePro:all-in-one poletti$ curl -v http://10.0.0.140:80
* About to connect() to 10.0.0.140 port 80 (#0)
* Trying 10.0.0.140...
* Adding handle: conn: 0x7fb0f9803a00
* Adding handle: send: 0
* Adding handle: recv: 0
* Curl_addHandleToPipeline: length: 1
* - Conn 0 (0x7fb0f9803a00) send_pipe: 1, recv_pipe: 0
* Failed connect to 10.0.0.140:80; Connection refused
* Closing connection 0
curl: (7) Failed connect to 10.0.0.140:80; Connection refused
It seems odd that the example description would miss such an important step, though. What am I missing?
Well, it seems I figured it out myself (I'll probably send a PR too).
The main thing is that, at least in the Minikube setup, the kubectl command runs on Mac OS X, but all the cool stuff happens inside a virtual machine; in my case, a VirtualBox VM (I'm still on Mavericks).
When kubectl shows addresses for services, like in this case:
PolePro:all-in-one poletti$ kubectl get svc
NAME           CLUSTER-IP   EXTERNAL-IP   PORT(S)    AGE
frontend       10.0.0.140   <none>        80/TCP     8s
kubernetes     10.0.0.1     <none>        443/TCP    2h
redis-master   10.0.0.165   <none>        6379/TCP   53m
redis-slave    10.0.0.220   <none>        6379/TCP   37m
these addresses are cluster-internal (the default service type is ClusterIP), so they are reachable from within the node but not necessarily from the outside. In my case, they were not reachable from my Mac.
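A quick way to confirm this (assuming your kubectl supports jsonpath output, which any recent version should) is to ask for the service type directly; for an untouched frontend definition it should print ClusterIP:
# the frontend Service keeps the default type, so its IP is cluster-internal only
$ kubectl get svc frontend -o jsonpath='{.spec.type}'
ClusterIP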
So what can you do about it?
First of all, to just check that it's actually running, you can log into the node and run curl from there:
# get the list of nodes, to get the name of the node we're interested into
PolePro:all-in-one poletti$ kubectl get nodes
NAME       STATUS    AGE
minikube   Ready     3h
# that was easy. Now we can get the address of the node
PolePro:all-in-one poletti$ kubectl describe node/minikube | grep '^Address'
Addresses: 192.168.99.100,192.168.99.100
# now we can log into the node. The username is "docker", the password is "tcuser"
# by default (without quotes):
PolePro:all-in-one poletti$ ssh docker@192.168.99.100
docker@192.168.99.100's password:
                        ##         .
                  ## ## ##        ==
               ## ## ## ## ##    ===
           /"""""""""""""""""\___/ ===
      ~~~ {~~ ~~~~ ~~~ ~~~~ ~~~ ~ /  ===- ~~~
           \______ o           __/
             \    \         __/
              \____\_______/
 _                 _   ____     _            _
| |__   ___   ___ | |_|___ \ __| | ___   ___| | _____ _ __
| '_ \ / _ \ / _ \| __| __) / _` |/ _ \ / __| |/ / _ \ '__|
| |_) | (_) | (_) | |_ / __/ (_| | (_) | (__|   <  __/ |
|_.__/ \___/ \___/ \__|_____\__,_|\___/ \___|_|\_\___|_|
Boot2Docker version 1.11.1, build master : 901340f - Fri Jul 1 22:52:19 UTC 2016
Docker version 1.11.1, build 5604cbe
docker@minikube:~$ curl -v http://10.0.0.140/
* Trying 10.0.0.140...
* Connected to 10.0.0.140 (10.0.0.140) port 80 (#0)
> GET / HTTP/1.1
> Host: 10.0.0.140
> User-Agent: curl/7.49.1
> Accept: */*
>
< HTTP/1.1 200 OK
< Date: Mon, 19 Sep 2016 13:37:56 GMT
< Server: Apache/2.4.10 (Debian) PHP/5.6.20
< Last-Modified: Wed, 09 Sep 2015 18:35:04 GMT
< ETag: "399-51f54bdb4a600"
< Accept-Ranges: bytes
< Content-Length: 921
< Vary: Accept-Encoding
< Content-Type: text/html
<
<html ng-app="redis">
  <head>
    <title>Guestbook</title>
    <link rel="stylesheet" href="//netdna.bootstrapcdn.com/bootstrap/3.1.1/css/bootstrap.min.css">
    <script src="https://ajax.googleapis.com/ajax/libs/angularjs/1.2.12/angular.min.js"></script>
    <script src="controllers.js"></script>
    <script src="https://cdnjs.cloudflare.com/ajax/libs/angular-ui-bootstrap/0.13.0/ui-bootstrap-tpls.js"></script>
  </head>
  <body ng-controller="RedisCtrl">
    <div style="width: 50%; margin-left: 20px">
      <h2>Guestbook</h2>
      <form>
        <fieldset>
          <input ng-model="msg" placeholder="Messages" class="form-control" type="text" name="input"><br>
          <button type="button" class="btn btn-primary" ng-click="controller.onRedis()">Submit</button>
        </fieldset>
      </form>
      <div>
        <div ng-repeat="msg in messages track by $index">
          {{msg}}
        </div>
      </div>
    </div>
  </body>
</html>
* Connection #0 to host 10.0.0.140 left intact
Yay! There's actually something running on port 80.
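As an aside, minikube itself can hand out the node address and a shell: the minikube ip and minikube ssh subcommands do the same job without the password step (I believe both are available in v0.10.0, but double-check your version):
# convenience shortcuts, assuming your minikube version provides them
$ minikube ip
192.168.99.100
$ minikube ssh
docker@minikube:~$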
Anyway, this is still a bit cumbersome, and we would like to see the application in a browser on Mac OS X. One way to do this is to use NodePort to make the node map a Service's port onto one of the node's ports; this is accomplished by adding the following line to the frontend service definition, which becomes:
apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  type: NodePort
  ports:
  # the port that this service should serve on
  - port: 80
  selector:
    app: guestbook
    tier: frontend
This change goes in frontend-service.yaml, all-in-one/frontend.yaml, or all-in-one/guestbook-all-in-one.yaml, depending on which file you are using.
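As a side note, if you'd rather not get a random port every time, the Service spec also accepts an explicit nodePort (it has to be free and inside the cluster's NodePort range, 30000-32767 by default). A minimal sketch of the ports section, with 30080 picked arbitrarily:
  ports:
  # the port that this service should serve on
  - port: 80
    nodePort: 30080   # arbitrary example; must be free and within 30000-32767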
If you re-create the whole guestbook (I don't know whether that's strictly necessary, but I'd rather stay on the safe side), you will get a message about ports and firewalls, like this:
# delete previous instance to start from "scratch"
PolePro:all-in-one poletti$ kubectl delete deployments,svc -l 'app in (redis, guestbook)'
deployment "frontend" deleted
deployment "redis-master" deleted
deployment "redis-slave" deleted
service "frontend" deleted
service "redis-master" deleted
service "redis-slave" deleted
# we'll use the all-in-one here to get quickly to the point
PolePro:all-in-one poletti$ vi guestbook-all-in-one.yaml
# with the new NodePort change in place, we're ready to start again
PolePro:all-in-one poletti$ kubectl create -f guestbook-all-in-one.yaml
service "redis-master" created
deployment "redis-master" created
service "redis-slave" created
deployment "redis-slave" created
You have exposed your service on an external port on all nodes in your
cluster. If you want to expose this service to the external internet, you may
need to set up firewall rules for the service port(s) (tcp:30559) to serve traffic.
See http://releases.k8s.io/release-1.3/docs/user-guide/services-firewalls.md for more details.
service "frontend" created
deployment "frontend" created
Now, port 30559 on the node maps onto the frontend port 80, so we can open the browser at http://192.168.99.100:30559/ (i.e. http://<NODE-IP>:<EXTERNAL-PORT>/) and use the guestbook!
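Minikube can also compose that URL for you: if your version ships the service subcommand (I think it predates v0.10.0, but I haven't verified), it prints the node IP and NodePort in one go:
# let minikube work out http://<NODE-IP>:<NODE-PORT> for the frontend service
$ minikube service frontend --url
http://192.168.99.100:30559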
Quick and dirty: kubectl port-forward frontend-708336848-0h2zj 80:80
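That forwards local port 80 on the Mac straight to port 80 of that pod (the pod name comes from kubectl get pods above and changes on every redeploy). Since binding a local port below 1024 normally requires root, forwarding to an unprivileged local port is often more convenient; a sketch:
# forward local port 8080 to port 80 of one frontend pod, then browse to http://localhost:8080/
$ kubectl port-forward frontend-708336848-0h2zj 8080:80
Forwarding from 127.0.0.1:8080 -> 80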