I am trying to set up a single-node Kubernetes cluster for demo and testing purposes, and I want it to behave like a 'full blown' k8s cluster (like Google Container Engine). My client has their own k8s installation, which for this discussion we can assume acts pretty much like Google Container Engine's k8s installation.
Getting the Ingress IP on Full Blown K8s
I am creating a wordpress pod and exposing it as a service, as described in this tutorial: https://cloud.google.com/container-engine/docs/tutorials/hello-wordpress
If you want to replicate the issue, you can just copy/paste the commands below, which I lifted from the tutorial. (This assumes you have a project called 'stellar-access-117903'; if not, set it to the name of your Google Container Engine project.)
# set up the cluster (this will take a while to provision)
#
gcloud config set project stellar-access-117903
gcloud config set compute/zone us-central1-b
gcloud container clusters create hello-world \
--num-nodes 1 \
--machine-type g1-small
# Create the pod, and expose it as a service
#
kubectl run wordpress --image=tutum/wordpress --port=80
kubectl expose rc wordpress --type=LoadBalancer
# Describe the service
kubectl describe services wordpress
The output of the describe command contains a line 'LoadBalancer Ingress: {some-ip-address}', which is exactly what I'd expect. Now, when I do the same thing with the single-node cluster setup, I don't get that line. I am able to hit the wordpress service at the IP that appears in the output of the 'describe service' command. But in 'single node' mode, the IP that is printed out is the cluster IP of the service, which typically (as I understand it) is not publicly accessible. For some reason it is publicly accessible in single-node mode. We can replicate this with the following steps.
NOT Getting the Ingress IP on Single Node K8s
First, set up single-node k8s, as described in this tutorial: https://github.com/kubernetes/kubernetes/blob/master/docs/getting-started-guides/docker.md
For easy reproducibility, I have included all the commands below, so you can just copy/paste:
K8S_VERSION=1.1.1
sudo docker run --net=host -d gcr.io/google_containers/etcd:2.0.12 /usr/local/bin/etcd --addr=127.0.0.1:4001 --bind-addr=0.0.0.0:4001 --data-dir=/var/etcd/data
sudo docker run \
--volume=/:/rootfs:ro \
--volume=/sys:/sys:ro \
--volume=/dev:/dev \
--volume=/var/lib/docker/:/var/lib/docker:ro \
--volume=/var/lib/kubelet/:/var/lib/kubelet:rw \
--volume=/var/run:/var/run:rw \
--net=host \
--pid=host \
--privileged=true \
-d \
gcr.io/google_containers/hyperkube:v${K8S_VERSION} \
/hyperkube kubelet --containerized --hostname-override="127.0.0.1" --address="0.0.0.0" --api-servers=http://localhost:8080 --config=/etc/kubernetes/manifests
sudo docker run -d --net=host --privileged gcr.io/google_containers/hyperkube:v${K8S_VERSION} /hyperkube proxy --master=http://127.0.0.1:8080 --v=2
# set your context to use the locally running k8s API server
#
kubectl config set-cluster dev --server=http://localhost:8080
kubectl config set-context dev --cluster=dev --namespace=$NS
kubectl config use-context dev
Now execute the very same commands that you ran against Google Container Engine's k8s:
# Create the pod, and expose it as a service
#
kubectl run wordpress --image=tutum/wordpress --port=80
kubectl expose rc wordpress --type=LoadBalancer
# Describe the service
kubectl describe services wordpress
The output of the last command (which, as you will see, has no 'Ingress' information) is:
Name: wordpress
Namespace: default
Labels: run=wordpress
Selector: run=wordpress
Type: LoadBalancer
IP: 10.0.0.61
Port: <unnamed> 80/TCP
NodePort: <unnamed> 31795/TCP
Endpoints: 172.17.0.30:80
Session Affinity: None
No events.
In Google Container Engine's k8s, I see events like 'Creating load balancer' and 'Load balancer created', but nothing like that happens in the single-node instance.
I am wondering: is there some configuration I need to do to get them to work identically? It is very important that they behave the same, differing only in scalability, because we want to run tests against the single-node version, and it will be very confusing if it behaves differently.
Thanks in advance for your help -chris
LoadBalancer is a feature that's implemented by the backing cloud provider, so you don't see one created in your local setup.
(see cloud providers: https://github.com/kubernetes/kubernetes/tree/master/pkg/cloudprovider/providers)
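To make that concrete, here is a small sketch using the fabric8 kubernetes-client (the same library the workaround below relies on; the client construction, the 'default' namespace, and the 'wordpress' service name are just assumptions matching the question). On a cluster with no cloud provider, status.loadBalancer.ingress simply stays empty while the clusterIP and the allocated nodePort are populated, which is why describe never shows an 'Ingress' line:
// Sketch only: inspect a LoadBalancer-type service on a cluster without a cloud provider.
import io.fabric8.kubernetes.client.DefaultKubernetesClient

object InspectWordpressService extends App {
  val kube = new DefaultKubernetesClient() // picks up the API server from kubeconfig/env
  try {
    val svc = kube.services().inNamespace("default").withName("wordpress").get()
    val ingress = svc.getStatus.getLoadBalancer.getIngress
    println(s"loadBalancer ingress entries: ${ingress.size()}") // 0 without a cloud provider
    println(s"clusterIP                   : ${svc.getSpec.getClusterIP}")
    println(s"nodePort                    : ${svc.getSpec.getPorts.get(0).getNodePort}")
  } finally {
    kube.close()
  }
}
Even without a load balancer the service is still reachable from outside the pod network at <node-ip>:<nodePort>; in this single-node Docker setup the clusterIP also happens to be reachable, which is what the question observed.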
Here is the solution we came up with. We realized by trial and error that when running against single-node Kubernetes, exposing a service does not yield an external IP via the ingress IP; rather, the externally reachable address is the clusterIP, which as mentioned above is publicly accessible in that setup. So we modified our code to work with that and use the clusterIP in the single-node case. Here is the code we use to establish a watch on the service to figure out when k8s has allocated our externally visible IP:
First we use the fabric8 API to create the service configuration:
case "Service" =>
val serviceConf = mapper.readValue(f, classOf[Service])
val service = kube.services().inNamespace(namespaceId).create(serviceConf)
watchService(service)
The 'watchService' method is defined below:
private def watchService(service: Service) = {
  val namespace = service.getMetadata.getNamespace
  val name = service.getMetadata.getName
  logger.debug("start -> watching service -> namespace: " + namespace + " name: " + name)
  val kube = createClient()
  try {
    @volatile var complete = false
    // Watch for modifications to the service; the watcher flips 'complete' once the
    // service looks fully configured (see isServiceComplete below).
    val socket = kube.services().inNamespace(namespace).withName(name).watch(new Watcher[Service]() {
      def eventReceived(action: Action, resource: Service) {
        logger.info(action + ":" + resource)
        action match {
          case Action.MODIFIED =>
            if (resource.getMetadata.getName == name) {
              complete = isServiceComplete(resource)
            }
          // case Action.DELETED =>
          //   complete = true
          case _ =>
        }
      }
    })
    // Also poll every 5 seconds until the service reports as complete.
    while (!complete) {
      Thread.sleep(5000)
      complete = isServiceComplete(kube.services().inNamespace(namespace).withName(name).get)
    }
    logger.info("Closing socket connection")
    socket.close()
  } finally {
    logger.info("Closing client connection")
    kube.close()
  }
  logger.debug("complete -> watching service -> namespace: " + namespace + " name: " + name)
}
The key hack we introduced is in the method 'isServiceComplete'. When using single-node k8s the value of 'isUsingMock' is true, which makes us use the clusterIP to determine whether service configuration has completed or not.
private def isServiceComplete(service: Service) = {
  // On a full cluster we wait for a LoadBalancer ingress entry; in single-node (mock)
  // mode we fall back to the clusterIP check below.
  !service.getStatus.getLoadBalancer.getIngress.isEmpty || mockServiceComplete(service)
}

def mockServiceComplete(service: Service): Boolean = {
  val clusterIP = service.getSpec.getClusterIP
  logger.trace(s"mockServiceComplete: $isUsingMock / $clusterIP / $KUBE_SERVER")
  // 'isUsingMock' is true when we target the single-node cluster; a non-empty clusterIP
  // then counts as "service is ready".
  isUsingMock && !clusterIP.isEmpty
}
Sorry if there is not a lot of extra context here. Eventually our project should be open source and we can post a complete solution.
-chris