What is the equivalent of depends_on in Kubernetes?

3/19/2018

I have a docker compose file with the following entries


version: '2.1'

services:
  mysql:
    container_name: mysql 
    image: mysql:latest 
    volumes:
      - ./mysqldata:/var/lib/mysql 
    environment: 
      MYSQL_ROOT_PASSWORD: 'password' 
    ports: 
      - '3306:3306' 
    healthcheck: 
        test: ["CMD", "curl", "-f", "http://localhost:3306"] 
        interval: 30s 
        timeout: 10s 
        retries: 5 

  test1: 
    container_name: test1 
    image: test1:latest 
    ports: 
      - '4884:4884' 
      - '8443' 
    depends_on: 
      mysql: 
        condition: service_healthy 
    links: 
     - mysql 

The test1 container depends on mysql, which needs to be up and running first.

In Docker this can be controlled using the healthcheck and depends_on attributes. The health check equivalent in Kubernetes is a readinessProbe, which I have already created, but how do we control the order in which the containers start inside the pod?

Any direction on this is greatly appreciated.

My Kubernetes file:

apiVersion: apps/v1beta1 
kind: Deployment 
metadata: 
  name: deployment 
spec: 
  replicas: 1 
  template: 
    metadata: 
      labels: 
        app: deployment 

    spec: 
      containers: 
      - name: mysqldb 
        image: "dockerregistry:mysqldatabase" 
        imagePullPolicy: Always 
        ports: 
        - containerPort: 3306 
        readinessProbe: 
          tcpSocket: 
            port: 3306 
          initialDelaySeconds: 15 
          periodSeconds: 10 
      - name: test1 
        image: "dockerregistry::test1" 
        imagePullPolicy: Always 
        ports: 
        - containerPort: 3000 
-- anish anil
kubeadm
kubectl
kubernetes
kubernetes-helm

6 Answers

3/19/2018

In Kubernetes terminology, your whole docker-compose set of containers maps to a single Pod.

So there is no depends_on equivalent there. Kubernetes checks all containers in a pod; they all have to be alive for the pod to be marked healthy, and they are always run together.

In your case, you need to prepare a Deployment configuration like this:

 apiVersion: extensions/v1beta1
 kind: Deployment
 metadata:
   name: my-app
 spec:
   replicas: 1
   template:
     metadata:
       labels:
         app: app-and-db
     spec:
       containers:
         - name: app
           image: nginx
           ports:
             - containerPort: 80
         - name: db
           image: mysql
           ports:
             - containerPort: 3306

After the pod has started, your database will be available to your application on the localhost interface, because of the pod network model:

Containers within a pod share an IP address and port space, and can find each other via localhost. They can also communicate with each other using standard inter-process communications like SystemV semaphores or POSIX shared memory.
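
For illustration, in the single-pod layout above the app container can reach the database over the loopback interface. This is only a sketch; the DB_HOST/DB_PORT variable names are assumptions about what the app reads:

         - name: app
           image: nginx
           env:
             - name: DB_HOST        # hypothetical variable the app is assumed to read
               value: "127.0.0.1"
             - name: DB_PORT
               value: "3306"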

But, as @leninhasda mentioned, it is not a good idea to run the database and the application in the same pod, and without a Persistent Volume. Here is a good tutorial on how to run a stateful application in Kubernetes.

-- Anton Kostenko
Source: StackOverflow

12/19/2018

That's the beauty of Docker Compose and Docker Swarm... Their simplicity.

We came across this same Kubernetes shortcoming when deploying the ELK stack. We solved it by using an initContainer (a kind of side-car), which is just another container in the same pod that runs first; when it completes, Kubernetes automatically starts the main container(s). We made it a simple shell script that loops until Elasticsearch is up and running, then exits, and Kibana's container starts.

Below is an example of an initContainer that waits until Grafana is ready.

Add this initContainers block just above your other containers in the Pod spec:

spec:
  initContainers:
  - name: wait-for-grafana
    image: darthcabs/tiny-tools:1
    args:
    - /bin/bash
    - -c
    - >
      set -x;
      while [[ "$(curl -s -o /dev/null -w ''%{http_code}'' http://grafana:3000/login)" != "200" ]]; do
        echo '.'
        sleep 15;
      done
  containers:
    .
    .
    (your other containers)
    .
    .
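
For the asker's setup the same trick could wait on MySQL instead of an HTTP endpoint. This is only a sketch: it assumes MySQL has been moved into its own pod behind a Service named mysql (an initContainer cannot wait for a container in the same pod, because init containers finish before the regular containers start), and it reuses the mysql:5.7 image just for its mysqladmin client:

spec:
  initContainers:
  - name: wait-for-mysql
    image: mysql:5.7    # reused here only for the mysqladmin client
    command: ['sh', '-c', 'until mysqladmin ping -h mysql --silent; do echo waiting for mysql; sleep 5; done']
  containers:
  - name: test1
    image: "dockerregistry:test1"
    ports:
    - containerPort: 3000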
-- David Cardoso
Source: StackOverflow

10/30/2018

This was purposefully left out. The reason is that applications should be responsible for their own connect/reconnect logic when talking to service(s) such as a database. This is outside the scope of Kubernetes.
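
In practice this means the application keeps retrying its database connection at start-up. Where the application cannot be changed, the same effect can be approximated with a small entrypoint wrapper; the sketch below assumes a Service named mysql, an image that ships nc, and a hypothetical start command:

spec:
  containers:
  - name: test1
    image: "dockerregistry:test1"
    command: ["/bin/sh", "-c"]
    args:
    - |
      # keep retrying until MySQL accepts TCP connections, then start the app
      until nc -z mysql 3306; do
        echo "mysql not ready yet, retrying..."
        sleep 5
      done
      exec /app/start-test1    # hypothetical application entrypoint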

-- yomateo
Source: StackOverflow

3/19/2018

While I don't know a direct answer to your question except this link (k8s-AppController), I don't think it's wise to use the same deployment for the DB and the app, because you are tightly coupling your DB with your app and losing the great Kubernetes option of scaling either of them as needed. Furthermore, if your DB pod dies, you lose your data as well.

Personally, what I would do is have a separate StatefulSet with a Persistent Volume for the database and a Deployment for the app, and use a Service to handle their communication. Yes, I have to run a few different commands and may need at least two separate deployment files, but this way I am decoupling them and can scale them as needed. And my data is persistent as well!
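
A rough sketch of that layout (API versions, names, image tag and storage size are illustrative, not a drop-in manifest):

apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  clusterIP: None
  selector:
    app: mysql
  ports:
  - port: 3306
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mysql
spec:
  serviceName: mysql
  replicas: 1
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - name: mysql
        image: mysql:5.7
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: "password"
        ports:
        - containerPort: 3306
        volumeMounts:
        - name: data
          mountPath: /var/lib/mysql
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 10Gi

The app then lives in its own Deployment and reaches the database through the mysql Service name.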

-- leninhasda
Source: StackOverflow

10/24/2018

As mentioned, you should run the database and the application containers in separate pods and connect them with a service.

Unfortunately, neither Kubernetes nor Helm provides functionality similar to what you've described. We had many issues with that and tried a few approaches until we decided to develop a small utility that solved this problem for us.

Here's the link to the tool we've developed: https://github.com/Opsfleet/depends-on

You can make pods wait until other pods become ready according to their readinessProbe configuration. It's very close to Docker's depends_on functionality.

-- Leonid Mirsky
Source: StackOverflow

10/25/2018

There is no equivalent of Docker Swarm's depends_on in Kubernetes. The solution for such a scenario is to use Helm charts for Kubernetes deployments. In Helm you can specify a dependency list. Helm is gaining a lot of popularity these days and is a great tool for managing complex Kubernetes deployments.

https://helm.sh/
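
For illustration, a chart dependency is declared in requirements.yaml (Helm 2) or under the dependencies key of Chart.yaml (Helm 3); the chart name, version and repository below are placeholders:

dependencies:
  - name: mysql
    version: "0.10.2"
    repository: "https://example.com/charts"

Declared dependencies are fetched with helm dependency update and installed together with the parent chart's release.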

-- SHAHS
Source: StackOverflow