
What is the equivalent for depends_on in kubernetes

March 19, 2018

I have a docker compose file with the following entries


version: '2.1'

services:
  mysql:
    container_name: mysql
    image: mysql:latest
    volumes:
      - ./mysqldata:/var/lib/mysql
    environment:
      MYSQL_ROOT_PASSWORD: 'password'
    ports:
      - '3306:3306'
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:3306"]
      interval: 30s
      timeout: 10s
      retries: 5

  test1:
    container_name: test1
    image: test1:latest
    ports:
      - '4884:4884'
      - '8443'
    depends_on:
      mysql:
        condition: service_healthy
    links:
      - mysql

The test1 container depends on mysql, which needs to be up and running first.

In Docker Compose this is controlled with the healthcheck and depends_on attributes. The Kubernetes equivalent of a health check is a readinessProbe, which I have already created, but how do I control the container startup order within the pod?

Any direction on this is greatly appreciated.

My Kubernetes file:

apiVersion: apps/v1beta1 
kind: Deployment 
metadata: 
  name: deployment 
spec: 
  replicas: 1 
  template: 
    metadata: 
      labels: 
        app: deployment 

    spec: 
      containers: 
      - name: mysqldb 
        image: "dockerregistry:mysqldatabase" 
        imagePullPolicy: Always 
        ports: 
        - containerPort: 3306 
        readinessProbe: 
          tcpSocket: 
            port: 3306 
          initialDelaySeconds: 15 
          periodSeconds: 10 
      - name: test1 
        image: "dockerregistry::test1" 
        imagePullPolicy: Always 
        ports: 
        - containerPort: 3000 
-- anish anil
kubernetes

8 Answers

December 19, 2018

That's the beauty of Docker Compose and Docker Swarm... their simplicity.

We came across this same Kubernetes shortcoming when deploying the ELK stack. We solved it by using an initContainer, which is just another container in the same pod that runs first; when it completes, Kubernetes automatically starts the main container. We made it a simple shell script that loops until Elasticsearch is up and running, then exits, at which point Kibana's container starts.

Below is an example of an initContainer that waits until Grafana is ready.

Add this 'initContainer' block just above your other containers in the Pod:

spec:
  initContainers:
  - name: wait-for-grafana
    image: darthcabs/tiny-tools:1
    args:
    - /bin/bash
    - -c
    - >
      set -x;
      while [[ "$(curl -s -o /dev/null -w ''%{http_code}'' http://grafana:3000/login)" != "200" ]]; do
        echo '.'
        sleep 15;
      done
  containers:
    .
    .
    (your other containers)
    .
    .
-- David Cardoso
Source: StackOverflow

October 30, 2018

This was purposefully left out. The reasoning is that applications should be responsible for their own connect/reconnect logic when talking to services such as a database. This is outside the scope of Kubernetes.

-- yomateo
Source: StackOverflow

March 19, 2018

While I don't know a direct answer to your question other than this link (k8s-AppController), I don't think it's wise to use the same deployment for the DB and the app. You would be tightly coupling the DB to the app and losing the great Kubernetes option of scaling either one as needed. Furthermore, if your DB pod dies, you lose your data as well.

Personally, what I would do is have a separate StatefulSet with a Persistent Volume for the database and a Deployment for the app, and use a Service to handle their communication.
Yes, I have to run a few different commands and may need at least two separate deployment files, but this way I am decoupling them and can scale each as needed. And my data is persistent as well!
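As a rough illustration of that split (the resource names, password handling, and storage size below are assumptions, not details from the question): a Service exposes the database by DNS name, and a StatefulSet owns the persistent volume.

apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  selector:
    app: mysql
  ports:
  - port: 3306
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mysql
spec:
  serviceName: mysql
  replicas: 1
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - name: mysql
        image: mysql:latest
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: 'password'   # use a Secret instead of a literal value in real deployments
        ports:
        - containerPort: 3306
        volumeMounts:
        - name: data
          mountPath: /var/lib/mysql
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 1Gi

The application then lives in its own Deployment and reaches the database at mysql:3306, so either workload can be scaled or redeployed without touching the other.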

-- leninhasda
Source: StackOverflow

October 24, 2018

As mentioned, you should run the database and the application containers in separate pods and connect them with a service.

Unfortunately, neither Kubernetes nor Helm provides functionality similar to what you've described. We had many issues with that and tried a few approaches until we decided to develop a smallish utility that solved this problem for us.

Here's the link to the tool we've developed: https://github.com/Opsfleet/depends-on

You can make pods wait until other pods become ready according to their readinessProbe configuration. It's very close to Docker's depends_on functionality.

-- Leonid Mirsky
Source: StackOverflow

January 3, 2024

There is not a 1:1 direct replacement for depends_on, but you can get really close using initContainers and nc (netcat).

The problem I have with David Cardoso's answer is that it depends on the image darthcabs/tiny-tools:1, and I would rather use busybox. However, busybox does not ship with cURL, and the wget implementation it ships with has long-standing TLS termination bugs with self-signed certificates, so I don't consider it reliable as of Q1 2024.

What are we left with? We can use netcat to check for open ports in the container we want to "depend on", which gets us as close to depends_on as possible with minimal dependencies. Here is what I am using in my prod stack:

spec:
  initContainers:
  - name: wait-for-services
    image: busybox
    command: ["/bin/sh", "-c"]
    args: ["until echo 'Waiting for postgres...' && nc -vz -w 2 postgres 5432 && echo 'Waiting for minio...' && nc -vz -w 2 minio 9000; do echo 'Looping forever...'; sleep 2; done;"]

What this does:

  1. The initContainer starts before the main container (handled by Kubernetes).
  2. until loops until all of the chained commands run successfully.
  3. nc -vz -w 2 <service> <port> runs netcat (nc) with a 2-second timeout (-w 2), verbose output (-v), and only checks for an open socket before disconnecting (-z).
  4. If the netcat connection to postgres succeeds, we continue down the command chain and attempt to connect to minio. All the echo commands are purely for log transparency and can be removed.
  5. After both connections succeed, the initContainer exits and Kubernetes starts the main container.

Feel free to improve my answer, as I cobbled this together from many separate sources, but it works in my prod stack. Keep in mind this is not a replacement for readiness or liveness probes; those are still needed to determine whether a pod is healthy. This solution only fills the gap of depends_on not existing in Kubernetes.

If you need to check the HTTP response code, for example a 200 OK from a healthcheck endpoint, you will want to use cURL instead: https://varlogdiego.com/tag/init-container
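As a rough sketch of that cURL variant (the curlimages/curl image, service name, and path below are assumptions): curl -f exits non-zero unless the HTTP status is 2xx, so the loop keeps retrying until the endpoint answers successfully.

spec:
  initContainers:
  - name: wait-for-healthcheck
    image: curlimages/curl        # small image that ships with curl and /bin/sh
    command: ["/bin/sh", "-c"]
    args: ["until curl -sf http://backend:8080/healthz; do echo 'Waiting for backend...'; sleep 2; done;"]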

If the service you are depending on doesn't expose a TCP endpoint you can connect to, then it's a bit more difficult. You will need to share a volume between the initContainer and the service it waits for and check that a flag file has been written to disk, or implement a health check in that service itself, such as gRPC health checking.
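On the gRPC point specifically, recent Kubernetes releases (stable since 1.27) can probe the standard gRPC health checking protocol natively, so the dependency's own pod only becomes Ready once it reports SERVING. A rough sketch, with the container name, image, and port as assumptions:

spec:
  containers:
  - name: grpc-backend
    image: my-grpc-backend:latest   # assumed image implementing grpc.health.v1.Health
    ports:
    - containerPort: 50051
    readinessProbe:
      grpc:
        port: 50051
      initialDelaySeconds: 5
      periodSeconds: 10

Because a Service only routes to Ready endpoints, a netcat check like the one above against the dependency's Service name will keep failing until that service actually reports healthy.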

-- degenerate
Source: StackOverflow

March 19, 2018

In Kubernetes terminology, your whole docker-compose set of containers maps to a Pod.

So, there is no depends_on equivalent there. Kubernetes checks all containers in a pod; all of them have to be alive for the pod to be marked Healthy, and it always runs them together.

In your case, you would prepare a Deployment configuration like this:

 apiVersion: extensions/v1beta1
 kind: Deployment
 metadata:
   name: my-app
 spec:
   replicas: 1
   template:
     metadata:
       labels:
         app: app-and-db
     spec:
       containers:
         - name: app
           image: nginx
           ports:
             - containerPort: 80
         - name: db
           image: mysql
           ports:
             - containerPort: 3306

After the pod starts, your database will be available on the localhost interface for your application, because of the pod networking model:

Containers within a pod share an IP address and port space, and can find each other via localhost. They can also communicate with each other using standard inter-process communications like SystemV semaphores or POSIX shared memory.

But, as @leninhasda mentioned, it is not a good idea to run the database and the application in the same pod, and without a Persistent Volume. Here is a good tutorial on how to run a stateful application in Kubernetes.

-- Anton Kostenko
Source: StackOverflow

March 16, 2022

https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/

What about liveness and readiness probes? They support commands, HTTP requests and more to check whether another service responds OK:

apiVersion: v1
kind: Pod
metadata:
  labels:
    test: liveness
  name: liveness-exec
spec:
  containers:
  - name: liveness
    image: k8s.gcr.io/busybox
    args:
    - /bin/sh
    - -c
    - touch /tmp/healthy; sleep 30; rm -rf /tmp/healthy; sleep 600
    livenessProbe:
      exec:
        command:
        - sh
        - -c
        # the probe image must provide curl (plain busybox does not ship with it)
        - curl -f http://service2/api/v2/health-check
      initialDelaySeconds: 5
      periodSeconds: 5
-- david grinstein
Source: StackOverflow

August 16, 2025

Sidecar containers were released as alpha in Kubernetes 1.28 and stabilized in 1.33. They're defined as initContainers, but their behavior changes when restartPolicy: Always is specified. These sidecar containers also support probes, and using a startupProbe will cause the main application to wait for the sidecar to become ready, according to the specified condition, before starting.

The below example is taken from this blog post: https://kubernetes.io/blog/2025/06/03/start-sidecar-first/

apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
  labels:
    app: myapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: alpine:latest
          command: ["sh", "-c", "sleep 3600"]
      initContainers:
        - name: nginx
          image: nginx:latest
          ports:
            - containerPort: 80
              protocol: TCP
          restartPolicy: Always
          startupProbe:
            httpGet:
              path: /
              port: 80
            initialDelaySeconds: 5
            periodSeconds: 30
            failureThreshold: 10
            timeoutSeconds: 20
      volumes:
        - name: data
          emptyDir: {}

This avoids the need to write a custom script like in some of the other answers here, or the postStart lifecycle hook approach explored in the blog post, which looks very similar to David Cardoso's answer using a plain init container.

As an aside, I will also note that it's generally recommended to solve this at the application level and make the application robust to conditions like the absence of the sidecar.

-- aenda
Source: StackOverflow