How can I run a Docker image in Kubernetes, initiated from another one, and pass arguments?

1/4/2018

I have two dockerized applications which need to run in Kubernetes.

Here is the scenario I need to achieve:

Docker-1, which is a Flask application.

Docker-2, which is a Python script that takes input from Docker-1, executes, and needs to write some files to a volume shared with the Docker-1 container.

Here is the Flask web-app code:

from flask import Flask, request, Response, jsonify
app = Flask(__name__)

@app.route('/')
def root():
    return "The API is working fine"

@app.route('/run-docker')
def run_docker_2():
    args = "input_combo"
    query = "<sql query>"
    # <initiate the docker run and pass params>
    # No return message needed; should run asynchronously
    return ""

if __name__ == "__main__":
    app.run(debug=True, host='0.0.0.0', port=8080, threaded=True)

Dockerfile:

FROM ubuntu:latest
MAINTAINER Abhilash KK "abhilash.kk@searshc.com"
RUN apt-get update -y
RUN apt-get install -y python-pip python-dev build-essential python-tk
COPY . /app
WORKDIR /app
RUN pip install -r requirements.txt
ENTRYPOINT ["/usr/bin/python"]
CMD ["app.py"]

requirements.txt

flask

Python script for the second Docker image, start_docker.py:

import sys

input_combo = sys.argv[1]
query = sys.argv[2]

def function_to_run(input_combination, query):
    # starting the model and finally creating the file
    pass

function_to_run(input_combo, query)

Dockerfile 2:

FROM python

COPY . /script
WORKDIR /script

# ENTRYPOINT (rather than CMD) so arguments supplied at run time
# are appended to the command instead of replacing it
ENTRYPOINT ["python", "start_docker.py"]

Please help me connect the Docker images, or let me know any other way to achieve this. The basic requirement is to add a message to some queue; the queue is polled at a time interval and the process is started in FIFO order.

Any other approach using a GCP service to initiate an async job that takes input from the client and creates a file accessible from the Python web app would also work.
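For illustration, here is a minimal in-process sketch of that queueing behavior (hypothetical; Python standard library only, with process_job standing in for the Docker-2 work):

import queue
import threading

job_queue = queue.Queue()  # Queue is FIFO by default

def process_job(input_combo, query):
    # placeholder for the work done by Docker-2 (start_docker.py)
    pass

def worker():
    while True:
        input_combo, query = job_queue.get()  # blocks until a job is queued
        process_job(input_combo, query)
        job_queue.task_done()

threading.Thread(target=worker, daemon=True).start()

# Inside the /run-docker handler, enqueue instead of blocking:
# job_queue.put((args, query))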

-- Abhilash
docker
kubernetes
python

2 Answers

1/4/2018

From what I can understand, you want the so-called "sidecar pattern": you can run multiple containers in one pod and share a volume, e.g.:

apiVersion: v1
kind: Pod
metadata:
  name: www
spec:
  containers:
  - name: nginx
    image: nginx
    volumeMounts:
    - mountPath: /srv/www
      name: www-data
      readOnly: true
  - name: git-monitor
    image: kubernetes/git-monitor
    env:
    - name: GIT_REPO
      value: http://github.com/some/repo.git
    volumeMounts:
    - mountPath: /data
      name: www-data
  volumes:
  - name: www-data
    emptyDir: {}
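Adapted to your scenario, a rough sketch could run the Flask app and the script side by side with a shared emptyDir (image and volume names here are hypothetical):

apiVersion: v1
kind: Pod
metadata:
  name: flask-with-worker
spec:
  containers:
  - name: flask-app              # Docker-1: the Flask API
    image: docker1               # hypothetical image name
    ports:
    - containerPort: 8080
    volumeMounts:
    - mountPath: /shared
      name: shared-data
  - name: worker                 # Docker-2: writes files for Docker-1 to read
    image: docker2               # hypothetical image name
    volumeMounts:
    - mountPath: /shared
      name: shared-data
  volumes:
  - name: shared-data
    emptyDir: {}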

You could also benefit from getting to know the basics of how Kubernetes works: Kubernetes Basics

-- Paweł Prażak
Source: StackOverflow

1/5/2018

First, create a Pod running the "Docker-1" application. Then use the Kubernetes Python client to spawn a second pod with "Docker-2". You can share a volume between your pods in order to return the data to Docker-1. In the code sample I'm using a hostPath volume, but then you need to ensure that both pods run on the same node; that scheduling code is not included here, for readability.

apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: docker1
  labels:
    app: docker1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: docker1
  template:
    metadata:
      labels:
        app: docker1
    spec:
      containers:
      - name: docker1
        image: abhilash/docker1
        ports:
        - containerPort: 8080
        volumeMounts:
        - mountPath: /shared
          name: shared-volume
      volumes:
      - name: shared-volume
        hostPath:
          path: /shared
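To actually keep both pods on the same node, one option (an assumption on my part, with a hypothetical node name) is a nodeSelector in the pod template spec above, mirrored by spec.node_selector = {"kubernetes.io/hostname": "node-1"} on the V1PodSpec built below:

      nodeSelector:
        kubernetes.io/hostname: node-1   # hypothetical; both pods must land on this node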

The code of the run_docker_2 handler:

from kubernetes import client, config

...

args = "input_combo"
config.load_incluster_config()
v1 = client.CoreV1Api()  # API client, missing from the original snippet
pod = client.V1Pod()
pod.metadata = client.V1ObjectMeta(name="docker2")
container = client.V1Container(name="docker2")
container.image = "abhilash/docker2"
container.args = [args]  # start_docker.py reads sys.argv, so pass every argument it expects
volume_mount = client.V1VolumeMount(name="shared", mount_path="/shared")
container.volume_mounts = [volume_mount]
host_path = client.V1HostPathVolumeSource(path="/shared")
volume = client.V1Volume(name="shared")
volume.host_path = host_path
# restart_policy="Never" so the one-shot script is not restarted after it exits
spec = client.V1PodSpec(containers=[container], restart_policy="Never")
spec.volumes = [volume]
pod.spec = spec
v1.create_namespaced_pod(namespace="default", body=pod)
return "OK"

A handler to read the returned results:

@app.route('/read-results')
def run_read():
    with open("/shared/results.data") as file:
        return file.read()

Note that it could be useful to add a watcher to wait for the pod to finish the job and then do some cleanup (delete the pod, for instance).
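For instance, a minimal sketch of such a watcher (assuming the same CoreV1Api client as above; in some client versions the body argument to delete_namespaced_pod is required, in others optional):

from kubernetes import client, watch

def wait_and_cleanup(v1, name="docker2", namespace="default"):
    # Stream events for this pod until it reaches a terminal phase
    w = watch.Watch()
    for event in w.stream(v1.list_namespaced_pod, namespace=namespace,
                          field_selector="metadata.name=" + name):
        if event['object'].status.phase in ("Succeeded", "Failed"):
            w.stop()
    # Remove the finished pod
    v1.delete_namespaced_pod(name=name, namespace=namespace,
                             body=client.V1DeleteOptions())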

-- Jcs
Source: StackOverflow