I am setting up a single-node k8s cluster for testing, and I've run into a confusing problem with services. I've distilled the example down to deploying a WordPress service, which I can do with kubectl create -f wordpress-rc.json followed by an expose. But when I follow the create of the replication controller with a kubectl create -f of the service JSON instead, it fails. I show the full content of both JSON files below.
Replication controller:
{
  "kind": "ReplicationController",
  "apiVersion": "v1",
  "metadata": {
    "name": "wordpress",
    "labels": {
      "app": "wordpress"
    }
  },
  "spec": {
    "replicas": 1,
    "selector": {
      "app": "wordpress"
    },
    "template": {
      "metadata": {
        "labels": {
          "app": "wordpress"
        }
      },
      "spec": {
        "containers": [
          {
            "name": "wordpress",
            "image": "tutum/wordpress",
            "ports": [
              {
                "containerPort": 80,
                "name": "http-server",
                "protocol": "TCP"
              }
            ],
            "imagePullPolicy": "IfNotPresent"
          }
        ],
        "restartPolicy": "Always",
        "dnsPolicy": "ClusterFirst"
      }
    }
  }
}
Service:
{
  "kind": "Service",
  "apiVersion": "v1",
  "metadata": {
    "name": "wordpress",
    "labels": {
      "name": "wordpress"
    }
  },
  "spec": {
    "type": "LoadBalancer",
    "ports": [
      {
        "name": "wordpress1",
        "protocol": "TCP",
        "port": 80,
        "targetPort": 80
      }
    ],
    "selector": {
      "name": "wordpress"
    }
  }
}
Working Command Sequence
alias kk kubectl
kk create -f /tmp/wp-rc.json
kubectl expose rc wordpress --type=LoadBalancer
Failed Command Sequence
alias kk kubectl
kk create -f /tmp/wp-rc.json
kk create -f /tmp/wp-service.json
My question is: why doesn't the service definition work, while the expose command does?
For completeness, here is how I start up my single-node k8s cluster. This is all running on CentOS 7, by the way:
# The magic SELinux context-setting command below is required. For details, see: http://stackoverflow.com/questions/34777111/cannot-create-a-shared-volume-mount-via-emptydir-on-single-node-kubernetes-on
#
sudo chcon -Rt svirt_sandbox_file_t /var/lib/kubelet
docker run --net=host -d gcr.io/google_containers/etcd:2.0.12 /usr/local/bin/etcd --addr=127.0.0.1:4001 --bind-addr=0.0.0.0:4001 --data-dir=/var/etcd/data
docker run \
--volume=/:/rootfs:ro \
--volume=/sys:/sys:ro \
--volume=/dev:/dev \
--volume=/var/lib/docker/:/var/lib/docker:ro \
--volume=/var/lib/kubelet/:/var/lib/kubelet:rw \
--volume=/var/run:/var/run:rw \
--net=host \
--pid=host \
--privileged=true \
-d \
gcr.io/google_containers/hyperkube:v1.0.1 \
/hyperkube kubelet --containerized --hostname-override="127.0.0.1" --address="0.0.0.0" --api-servers=http://localhost:8080 --config=/etc/kubernetes/manifests
docker run -d --net=host --privileged gcr.io/google_containers/hyperkube:v1.0.1 /hyperkube proxy --master=http://127.0.0.1:8080 --v=2
sleep 20 # give everything time to launch
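To sanity-check that everything came up after the sleep, something along these lines can be run (assuming kubectl is installed on the host and pointed at the API server on localhost:8080; this check is illustrative and not part of my startup script):

# confirm the node registered and system pods are running
kubectl -s http://localhost:8080 get nodes
kubectl -s http://localhost:8080 get pods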
The service JSON file uses the label selector name: wordpress, which is different from the replication controller's pod label app: wordpress. This means the service created from that JSON file targets pods carrying a name: wordpress label, but the replication controller creates pods labeled app: wordpress, so the service's selector never matches any of them. The kubectl expose command, by contrast, builds the service's selector from the replication controller's own selector, which is why that service routes traffic correctly. That is why the service created from the JSON file didn't work as expected.
You can use kubectl get svc wordpress -o yaml to compare the two created services.
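The field to look at is spec.selector. The expose-created service inherits the replication controller's selector, so the two outputs should differ roughly like this (trimmed, illustrative output only):

# service created by kubectl expose rc wordpress
spec:
  selector:
    app: wordpress

# service created from wp-service.json
spec:
  selector:
    name: wordpress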
Also, according to the configuration best practices, it's recommended to create the service first and then the replication controller:

Create a service before corresponding replication controllers so that the scheduler can spread the pods comprising the service. You can also create the replication controller without specifying replicas, create the service, then scale up the replication controller, which may work better in an example using progressive disclosure and may have benefits in real scenarios also (such as ensuring one replica works before creating lots of them).
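Applied to this example (and assuming the selector fix above), that ordering just swaps your two create calls:

kk create -f /tmp/wp-service.json
kk create -f /tmp/wp-rc.json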