I have a docker image that serves a simple static web page. I have a working Kubernetes cluster of 4 nodes (physical servers not in the cloud anywhere).
I want to run that docker image on 2 of the 4 Kubernetes nodes, have it be accessible to the world outside the cluster, have it load balanced, and have it move to another node if one dies.
Do I need to make a pod, then a replication controller, then a kube-proxy something? Or do I just need to make a replication controller and expose it somehow? Do I need to make a service?
I don't need help with how to make any of those things, that seems well documented, but I can't tell what I need to make.
What you need is to expose your service (whose selector matches the pods that are run/scaled/restarted by your replication controller). Using a Deployment instead of a replication controller has additional benefits, mainly for updating the app (rolling updates).
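For example, a minimal sketch of such a Deployment; the image name my-static-web:1.0 and the label app: static-web are placeholders for your own:

apiVersion: apps/v1          # extensions/v1beta1 on older clusters
kind: Deployment
metadata:
  name: static-web
spec:
  replicas: 2                # run 2 copies; the scheduler will normally spread them across nodes
  selector:
    matchLabels:
      app: static-web        # placeholder label, reused by the Service examples below
  template:
    metadata:
      labels:
        app: static-web
    spec:
      containers:
      - name: static-web
        image: my-static-web:1.0   # placeholder: your static web page image
        ports:
        - containerPort: 80

If a node dies, the Deployment will reschedule its Pod on another node.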
If you are on bare metal, then you probably want to expose your service via type: NodePort, so that every node in your cluster opens a static port that routes traffic to the pods. You can then either point your load balancer at the nodes on that port, or create a DNS entry that lists all Kubernetes nodes.
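A sketch of such a NodePort Service, assuming your web server Pods carry the placeholder label app: static-web from above:

apiVersion: v1
kind: Service
metadata:
  name: static-web
spec:
  type: NodePort
  selector:
    app: static-web          # must match your Pods' labels
  ports:
  - port: 80                 # cluster-internal Service port
    targetPort: 80           # containerPort on the Pods
    nodePort: 30080          # static port opened on every node (default range 30000-32767)

Traffic sent to any node's IP on port 30080 is then routed to one of the Pods, no matter which nodes they happen to run on.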
If you instead want to run the load balancer inside the cluster yourself, you'll need:
1) A load balancer on one of the nodes in your cluster, i.e. a reverse-proxy Pod such as nginx that proxies the traffic to an upstream. This Pod will need to be exposed to the outside using hostPort, like:
ports:
- containerPort: 80
  hostPort: 80
  name: http
- containerPort: 443
  hostPort: 443
  name: https
2) A Service whose selector targets the web server Pods (sketched below).
3) Set the Service name (which cluster DNS resolves to the Service IP) as the upstream in the nginx config.
4) Deploy your web server Pods, which carry the labels targeted by the Service's selector (see the sketches after this list).
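Sketches for steps 2 and 3, under the same placeholder assumptions (app: static-web label, Service name static-web). First the Service for step 2:

apiVersion: v1
kind: Service
metadata:
  name: static-web
spec:
  selector:
    app: static-web    # matches the web server Pods from step 4
  ports:
  - port: 80
    targetPort: 80

And for step 3, one way to carry the nginx config is a ConfigMap mounted into the reverse-proxy Pod; assuming cluster DNS (kube-dns) is running, the name static-web resolves to the Service IP:

apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-proxy-conf
data:
  default.conf: |
    server {
        listen 80;
        location / {
            # "static-web" is the Service name; cluster DNS resolves it
            # to the Service IP, which load balances across the Pods
            proxy_pass http://static-web;
        }
    }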
You might also want to look at external IPs for the Service (http://kubernetes.io/docs/user-guide/services/#external-ips), but I personally never managed to get that working on my bare metal cluster.
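For reference, that is just a field on the Service spec, where the address must be an IP that actually routes to one of your nodes:

spec:
  externalIPs:
  - 192.0.2.10   # placeholder: an IP routed to one of your nodes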