Kubernetes - Deploying Multiple Images into a single Pod

9/8/2018

I'm having an issue with an application that was originally configured to run on docker-compose. I managed to port and rewrite the .yaml deployment files to Kubernetes; however, the issue lies in the communication between the pods.

The frontend communicates with the backend to access the services, and I assume that, since they should be on the same network, the frontend calls the services from localhost. I don't have access to the code, as it is a proprietary application developed by a company and it does not support Kubernetes, so modifying the code is out of the question.

I believe the main reason is that the frontend and backend are running in different pods, with different IPs.

When the frontend tries to call the APIs, it does not find the service and returns an error. Therefore, I'm trying to deploy both the frontend image and the backend image into the same pod, so they share the same cluster IP.

Unfortunately, I do not know how to write a YAML file that creates both containers within a single pod.

Is it possible to have both the frontend and backend containers running in the same pod, or would there be another way to make the containers communicate (maybe a proxy)?

-- user3653379
cluster-computing
containers
docker
kubernetes
pod

2 Answers

9/8/2018

Yes, you just add entries to the containers section of your YAML file, for example:

apiVersion: v1
kind: Pod
metadata:
  name: two-containers
spec:
  restartPolicy: Never
  containers:
  - name: nginx-container
    image: nginx
  - name: debian-container
    image: debian
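
If you save the manifest above as, say, two-containers.yaml (the file name is just an example), you can apply it and confirm that both containers belong to the same pod:

kubectl apply -f two-containers.yaml
# both containers are listed under the same pod
kubectl get pod two-containers
kubectl describe pod two-containers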
-- Chris Johnson
Source: StackOverflow

9/8/2018

Therefore, I'm trying to deploy both the frontend image and the backend image into the same pod, so they share the same cluster IP.

Although the accepted answer already covers an example of running multiple containers in the same pod, I'd like to point out a few details:

  • Containers should be in the same pod only if they need to scale together (not merely because you want them to communicate over a cluster IP). Your frontend/backend split doesn't really look like a good candidate for cramming them together.

  • If you opt for containers in the same pod, they can communicate over localhost. They see each other much like two processes running on the same host (except that their file systems are different), so they can use localhost for direct communication, but for the same reason they cannot both bind the same port. Using the cluster IP is more like two processes on the same host communicating over an external IP.

  • The more idiomatic Kubernetes approach here would be to (see the sketch after this list):

    • Create a deployment for the backend
    • Create a service for the backend (exposing the necessary ports)
    • Create a deployment for the frontend
    • Communicate from the frontend to the backend using the backend service name (kube-dns resolves it to the cluster IP of the backend service) and the designated backend ports.
    • Optionally (for this example), create a service for the frontend for external access or whatever needs to be exposed. Note that here you can use the same port as the backend service, since they are not living in the same pod (host)...
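
A minimal sketch of what those manifests could look like (the names, images, and ports below are assumptions for illustration, not taken from the actual application):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend
spec:
  replicas: 1
  selector:
    matchLabels:
      app: backend
  template:
    metadata:
      labels:
        app: backend
    spec:
      containers:
      - name: backend
        image: example/backend:1.0   # placeholder image
        ports:
        - containerPort: 8080        # placeholder port
---
apiVersion: v1
kind: Service
metadata:
  name: backend                      # the frontend can reach it simply as "backend"
spec:
  selector:
    app: backend
  ports:
  - port: 8080
    targetPort: 8080
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 1
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
      - name: frontend
        image: example/frontend:1.0  # placeholder image
        ports:
        - containerPort: 80

With something like this in place, the frontend would reach the backend at http://backend:8080; the service name resolves through cluster DNS to the backend service's cluster IP.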

Some of the benefits of this approach: you can isolate the backend better (backend-frontend communication stays within the cluster and is not exposed to the outside world), you can schedule the two independently on nodes, you can scale them independently (say you need more backend power while the frontend is handling traffic fine, or vice versa), you can replace either of them independently, and so on.

-- Const
Source: StackOverflow