Is there a way to mount volumes to a kubernetes docker compose deployment?

11/5/2019

I am trying to use kompose convert on my docker-compose.yaml files. However, when I run the command:

kompose convert -f docker-compose.yaml

I get the output:

WARN Volume mount on the host "/home/centos/Sprint0Demo/Servers/elasticSearchConnector/etc/kafka-connect" isn't supported - ignoring path on the host
WARN Volume mount on the host "/home/centos/Sprint0Demo/Servers/elasticSearchConnector/etc/kafka-elasticsearch" isn't supported - ignoring path on the host
WARN Volume mount on the host "/home/centos/Sprint0Demo/Servers/elasticSearchConnector/etc/kafak" isn't supported - ignoring path on the host

It also prints similar warnings for the other host-mounted volumes.

My docker-compose file is:

version: '3'
services:
  es01:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.2.1
    container_name: es01
    environment:
      [env]
    ulimits:
      nproc: 3000
      nofile: 65536
      memlock: -1
    volumes:
      - /home/centos/Sprint0Demo/Servers/elasticsearch:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
    networks:
      - kafka_demo
  zookeeper:
    image: confluentinc/cp-zookeeper
    container_name: zookeeper
    ports:
      - "2181:2181"
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
    volumes:
      - /home/centos/Sprint0Demo/Servers/zookeeper/zk-data:/var/lib/zookeeper/data
      - /home/centos/Sprint0Demo/Servers/zookeeper/zk-txn-logs:/var/lib/zookeeper/log
    networks:
      kafka_demo:
  kafka0:
    image: confluentinc/cp-kafka
    container_name: kafka0
    environment:
      [env]
    volumes:
      - /home/centos/Sprint0Demo/Servers/kafkaData:/var/lib/kafka/data
    ports:
      - "9092:9092"
    depends_on:
      - zookeeper
      - es01
    networks:
      kafka_demo:
  schema_registry:
    image: confluentinc/cp-schema-registry:latest
    container_name: schema_registry
    environment:
      [env]
    ports:
      - 8081:8081
    networks:
      - kafka_demo
    depends_on:
      - kafka0
      - es01
  elasticSearchConnector:
    image: confluentinc/cp-kafka-connect:latest
    container_name: elasticSearchConnector
    environment:
      [env]
    volumes:
      - /home/centos/Sprint0Demo/Servers/elasticSearchConnector/etc/kafka-connect:/etc/kafka-connect
      - /home/centos/Sprint0Demo/Servers/elasticSearchConnector/etc/kafka-elasticsearch:/etc/kafka-elasticsearch
      - /home/centos/Sprint0Demo/Servers/elasticSearchConnector/etc/kafak:/etc/kafka
    ports:
      - "28082:28082"
    networks:
      - kafka_demo
    depends_on:
      - kafka0
      - es01
networks:
  kafka_demo:
    driver: bridge

Does anyone know how I can fix this issue? I am guessing it has to do with the warning saying that these are volume mounts on the host rather than named volumes.

-- James Ukilin
docker
docker-compose
docker-volume
kubernetes

1 Answer

11/6/2019

I have done some research, and there are three things to point out:

  1. kompose does not support volume mounts from the host, which is exactly what those warnings are about. You might consider using emptyDir instead (first sketch after this list).

  2. Kubernetes makes it difficult to pass host/root volumes into Pods directly. You can try hostPath volumes instead; kompose convert --volumes hostPath generates them for Kubernetes (second sketch after this list).

  3. You can also check out Compose on Kubernetes if you'd like to run things on a single machine.
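
Regarding point 1: kompose exposes the conversion strategy through its --volumes flag, so an emptyDir-based conversion would be run as:

kompose convert -f docker-compose.yaml --volumes emptyDir

Below is a minimal sketch of the kind of Deployment that approach produces for your zookeeper service. The image, environment variable, port, and mount paths come from your compose file; the volume names and exact labels are illustrative, not verbatim kompose output:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: zookeeper
spec:
  replicas: 1
  selector:
    matchLabels:
      io.kompose.service: zookeeper
  template:
    metadata:
      labels:
        io.kompose.service: zookeeper
    spec:
      containers:
        - name: zookeeper
          image: confluentinc/cp-zookeeper
          env:
            - name: ZOOKEEPER_CLIENT_PORT
              value: "2181"
          ports:
            - containerPort: 2181
          volumeMounts:
            - name: zk-data
              mountPath: /var/lib/zookeeper/data
            - name: zk-txn-logs
              mountPath: /var/lib/zookeeper/log
      volumes:
        # emptyDir volumes are scratch space that lives and dies with the Pod
        - name: zk-data
          emptyDir: {}
        - name: zk-txn-logs
          emptyDir: {}

The same pattern applies to kafka0, es01 and elasticSearchConnector, but keep in mind that emptyDir data is removed whenever the Pod is deleted.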
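
Regarding point 2: if the containers really do need the data that already lives in those host directories (which is what your compose file expects), the hostPath strategy keeps the paths:

kompose convert -f docker-compose.yaml --volumes hostPath

The relevant part of the resulting Pod spec would then look roughly like the fragment below for es01. The volume name es01-data is a placeholder chosen for this example, not necessarily what kompose emits:

spec:
  containers:
    - name: es01
      image: docker.elastic.co/elasticsearch/elasticsearch:7.2.1
      volumeMounts:
        - name: es01-data
          mountPath: /usr/share/elasticsearch/data
  volumes:
    # hostPath mounts a directory from the node's filesystem into the Pod
    - name: es01-data        # placeholder name, not verbatim kompose output
      hostPath:
        path: /home/centos/Sprint0Demo/Servers/elasticsearch
        type: DirectoryOrCreate

Note that hostPath ties the Pod to whichever node it is scheduled on; that is fine on a single-node cluster, but on a multi-node cluster you would want a nodeSelector or proper PersistentVolumes instead.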

Please let me know if that helped.

-- OhHiMark
Source: StackOverflow