Run Docker Containers on Different Machines with Kubernetes

2/28/2018

I'm new to Kubernetes but want to quickly run some Docker containers on different machines, e.g., containers 1, 2 and 3 on node 1 (physical machine 1) and containers 4, 5 and 6 on node 2 (physical machine 2). Can someone help me with the config files and commands to get this up and running, so that all containers can communicate with each other?

I found the example in https://gettech1.wordpress.com/2016/10/03/kubernetes-forcefully-run-pod-on-specific-node/ close to what I want, but it only uses one pod. How do I do it with two pods (assuming I can add more containers to each pod) and run the two pods together in one deployment, so that the containers are on the same network and can therefore communicate with each other?

I also want to run a Docker container with a bind mount that uses "shared" bind propagation. How can I specify that?

Personally, I found the Kubernetes documentation a little hard to navigate, with layers of concepts referencing each other. If anyone can point me to a clean tutorial, that would help too. I'd like to learn how to run containers on multiple machines, then how to autoscale by adding more containers to a pod, more pods to a node and more nodes to a cluster, and then the different types of networking and volume management.

-- hanaZ
docker
kubernetes

1 Answer

2/28/2018

The simple way to assign Pods to Nodes is to use label selectors.

Labels and Selectors are a concept you will need to understand throughout Kubernetes.

First add labels to the nodes:

kubectl label nodes node-a podwants=somefeatureon-nodea
kubectl label nodes node-b podwants=somefeatureon-nodeb
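
To confirm the labels were applied, you can list them:

kubectl get nodes --show-labels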

A nodeSelector can then be set in the Pod definition's spec:

apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    app: my-app
spec:
  nodeSelector:
    podwants: somefeatureon-nodea
  containers:
    - name: nginx
      image: nginx:1.8
      ports:
      - containerPort: 80
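
After creating the Pod, you can check which node it was actually scheduled on:

kubectl get pod nginx -o wide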

Containers in a Pod are always co-located and co-scheduled, so they can reach each other directly (e.g. via localhost). Pod-to-Pod communication is done by exposing a Pod as a Service. Note that the Service also uses a label selector to find its Pods:

kind: Service
apiVersion: v1
metadata:
  name: web-svc
spec:
  selector:
    app: my-app
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80

Other Pods can then discover the available Services via environment variables, or via DNS if you have added CoreDNS to your cluster:

 WEB_SVC_SERVICE_HOST=x.x.x.x
 WEB_SVC_SERVICE_PORT=80
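
For example, assuming cluster DNS is running and the Service above lives in the default namespace, another Pod could reach it by name (a minimal sketch):

curl http://web-svc                                # same namespace
curl http://web-svc.default.svc.cluster.local:80   # fully qualified name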

You won't often define and schedule Pods directly. You will usually use a Deployment, which describes your Pods and helps you scale them.
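
As a rough sketch (the name nginx-deployment and the replica count are just placeholders), a Deployment wrapping the same Pod template could look like the following; the Service above will select all of its replicas because they carry the app: my-app label:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app        # same label the Service selects on
    spec:
      nodeSelector:
        podwants: somefeatureon-nodea
      containers:
      - name: nginx
        image: nginx:1.8
        ports:
        - containerPort: 80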

Once you've got the simple case down, the documentation goes on to describe Node affinity, which lets you define more complex rule sets, even down to making scheduling decisions based on which Pods are already scheduled on a Node.
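
For instance, a minimal sketch of a required node-affinity rule that would allow the Pod onto either of the labelled nodes from the example above might look like:

spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: podwants
            operator: In
            values:
            - somefeatureon-nodea
            - somefeatureon-nodeb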

-- Matt
Source: StackOverflow