Unable to deploy Minio in a Kubernetes cluster using Helm

9/10/2018

I am trying to deploy Minio in a Kubernetes cluster using the Helm stable chart. When I check the status of the release with

  • helm status minio

the desired pod capacity is 4 but the current capacity is 0. I looked through the journalctl logs for any messages from the kubelet but found none. I have attached the rendered chart manifests below; can someone please point out what I am doing wrong?
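A quick way to see why the replicas never appear is to inspect the StatefulSet and its pods directly; RELEASE-NAME below stands for the actual release name, matching the rendered manifests that follow:

kubectl get pods -l app=minio,release=RELEASE-NAME
kubectl describe statefulset RELEASE-NAME-minio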

---
# Source: minio/templates/secrets.yaml
apiVersion: v1
kind: Secret
metadata:
  name: RELEASE-NAME-minio
  labels:
    app: minio
    chart: minio-1.7.0
    release: RELEASE-NAME
    heritage: Tiller
type: Opaque
data:
  accesskey: RFJMVEFEQU1DRjNUQTVVTVhOMDY=
  secretkey: bHQwWk9zWmp5MFpvMmxXN3gxeHlFWmF5bXNPUkpLM1VTb3VqeEdrdw==
---
# Source: minio/templates/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: RELEASE-NAME-minio
  labels:
    app: minio
    chart: minio-1.7.0
    release: RELEASE-NAME
    heritage: Tiller
data:
  initialize: |-
    #!/bin/sh
    set -e ; # Have script exit in the event of a failed command.

    # connectToMinio
    # Use a check-sleep-check loop to wait for Minio service to be available
    connectToMinio() {
      ATTEMPTS=0 ; LIMIT=29 ; # Allow 30 attempts
      set -e ; # fail if we can't read the keys.
      ACCESS=$(cat /config/accesskey) ; SECRET=$(cat /config/secretkey) ;
      set +e ; # The connections to minio are allowed to fail.
      echo "Connecting to Minio server: http://$MINIO_ENDPOINT:$MINIO_PORT" ;
      MC_COMMAND="mc config host add myminio http://$MINIO_ENDPOINT:$MINIO_PORT $ACCESS $SECRET" ;
      $MC_COMMAND ;
      STATUS=$? ;
      until [ $STATUS = 0 ]
      do
        ATTEMPTS=`expr $ATTEMPTS + 1` ;
        echo \"Failed attempts: $ATTEMPTS\" ;
        if [ $ATTEMPTS -gt $LIMIT ]; then
          exit 1 ;
        fi ;
        sleep 2 ; # 2 second intervals between attempts
        $MC_COMMAND ;
        STATUS=$? ;
      done ;
      set -e ; # reset `e` as active
      return 0
    }

    # checkBucketExists ($bucket)
    # Check if the bucket exists, by using the exit code of `mc ls`
    checkBucketExists() {
      BUCKET=$1
      CMD=$(/usr/bin/mc ls myminio/$BUCKET > /dev/null 2>&1)
      return $?
    }

    # createBucket ($bucket, $policy, $purge)
    # Ensure bucket exists, purging if asked to
    createBucket() {
      BUCKET=$1
      POLICY=$2
      PURGE=$3

      # Purge the bucket, if set & exists
      # Since PURGE is user input, check explicitly for `true`
      if [ $PURGE = true ]; then
        if checkBucketExists $BUCKET ; then
          echo "Purging bucket '$BUCKET'."
          set +e ; # don't exit if this fails
          /usr/bin/mc rm -r --force myminio/$BUCKET
          set -e ; # reset `e` as active
        else
          echo "Bucket '$BUCKET' does not exist, skipping purge."
        fi
      fi

      # Create the bucket if it does not exist
      if ! checkBucketExists $BUCKET ; then
        echo "Creating bucket '$BUCKET'"
        /usr/bin/mc mb myminio/$BUCKET
      else
        echo "Bucket '$BUCKET' already exists."
      fi

      # At this point, the bucket should exist, skip checking for existence
      # Set policy on the bucket
      echo "Setting policy of bucket '$BUCKET' to '$POLICY'."
      /usr/bin/mc policy $POLICY myminio/$BUCKET
    }

    # Try connecting to Minio instance
    connectToMinio
    # Create the bucket
    createBucket bucket none false

  config.json: |-
    {
      "version": "26",
      "credential": {
        "accessKey": "DR06",
        "secretKey": "lt0ZxGkw"
      },
      "region": "us-east-1",
      "browser": "on",
      "worm": "off",
      "domain": "",
      "storageclass": {
        "standard": "",
        "rrs": ""
      },
      "cache": {
        "drives": [],
        "expiry": 90,
        "maxuse": 80,
        "exclude": []
      },
      "notify": {
        "amqp": {
          "1": {
            "enable": false,
            "url": "",
            "exchange": "",
            "routingKey": "",
            "exchangeType": "",
            "deliveryMode": 0,
            "mandatory": false,
            "immediate": false,
            "durable": false,
            "internal": false,
            "noWait": false,
            "autoDeleted": false
          }
        },
        "nats": {
          "1": {
            "enable": false,
            "address": "",
            "subject": "",
            "username": "",
            "password": "",
            "token": "",
            "secure": false,
            "pingInterval": 0,
            "streaming": {
              "enable": false,
              "clusterID": "",
              "clientID": "",
              "async": false,
              "maxPubAcksInflight": 0
            }
          }
        },
        "elasticsearch": {
          "1": {
            "enable": false,
            "format": "namespace",
            "url": "",
            "index": ""
          }
        },
        "redis": {
          "1": {
            "enable": false,
            "format": "namespace",
            "address": "",
            "password": "",
            "key": ""
          }
        },
        "postgresql": {
          "1": {
            "enable": false,
            "format": "namespace",
            "connectionString": "",
            "table": "",
            "host": "",
            "port": "",
            "user": "",
            "password": "",
            "database": ""
          }
        },
        "kafka": {
          "1": {
            "enable": false,
            "brokers": null,
            "topic": ""
          }
        },
        "webhook": {
          "1": {
            "enable": false,
            "endpoint": ""
          }
        },
        "mysql": {
          "1": {
            "enable": false,
            "format": "namespace",
            "dsnString": "",
            "table": "",
            "host": "",
            "port": "",
            "user": "",
            "password": "",
            "database": ""
          }
        },
        "mqtt": {
          "1": {
            "enable": false,
            "broker": "",
            "topic": "",
            "qos": 0,
            "clientId": "",
            "username": "",
            "password": "",
            "reconnectInterval": 0,
            "keepAliveInterval": 0
          }
        }
      }
    }
---
# Source: minio/templates/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: RELEASE-NAME-minio
  labels:
    app: minio
    chart: minio-1.7.0
    release: RELEASE-NAME
    heritage: Tiller
spec:
  type: ClusterIP
  clusterIP: None

  ports:
    - name: service
      port: 9000
      targetPort: 9000
      protocol: TCP
  selector:
    app: minio
    release: RELEASE-NAME

---
# Source: minio/templates/statefulset.yaml


apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: RELEASE-NAME-minio
  labels:
    app: minio
    chart: minio-1.7.0
    release: RELEASE-NAME
    heritage: Tiller
spec:
  serviceName: RELEASE-NAME-minio
  replicas: 4
  selector:
    matchLabels:
      app: minio
      release: RELEASE-NAME
  template:
    metadata:
      name: RELEASE-NAME-minio
      labels:
        app: minio
        release: RELEASE-NAME
    spec:
      containers:
        - name: minio
          image: node1:5000/minio/minio:RELEASE.2018-09-01T00-38-25Z
          imagePullPolicy: IfNotPresent
          command: [ "/bin/sh", 
          "-ce", 
          "cp /tmp/config.json  &&
          /usr/bin/docker-entrypoint.sh minio -C  server
          http://RELEASE-NAME-minio-0.RELEASE-NAME-minio.default.svc.cluster.local/export
          http://RELEASE-NAME-minio-1.RELEASE-NAME-minio.default.svc.cluster.local/export
          http://RELEASE-NAME-minio-2.RELEASE-NAME-minio.default.svc.cluster.local/export
          http://RELEASE-NAME-minio-3.RELEASE-NAME-minio.default.svc.cluster.local/export" ]
          volumeMounts:
            - name: export
              mountPath: /export
            - name: minio-server-config
              mountPath: "/tmp/config.json"
              subPath: config.json
            - name: minio-config-dir
              mountPath: 
          ports:
            - name: service
              containerPort: 9000
          env:
            - name: MINIO_ACCESS_KEY
              valueFrom:
                secretKeyRef:
                  name: RELEASE-NAME-minio
                  key: accesskey
            - name: MINIO_SECRET_KEY
              valueFrom:
                secretKeyRef:
                  name: RELEASE-NAME-minio
                  key: secretkey
          livenessProbe:
            tcpSocket:
              port: service
            initialDelaySeconds: 5
            periodSeconds: 30
            timeoutSeconds: 1
            successThreshold: 1
            failureThreshold: 3
          readinessProbe:
            tcpSocket:
              port: service
            periodSeconds: 15
            timeoutSeconds: 1
            successThreshold: 1
            failureThreshold: 3
          resources:
            requests:
              cpu: 250m
              memory: 256Mi

      volumes:
        - name: minio-user
          secret:
            secretName: RELEASE-NAME-minio
        - name: minio-server-config
          configMap:
            name: RELEASE-NAME-minio
        - name: minio-config-dir
          emptyDir: {}
  volumeClaimTemplates:
    - metadata:
        name: export
      spec:
        accessModes: [ "ReadWriteOnce" ]
        storageClassName: local-fast
        resources:
          requests:
            storage: 49Gi

---
# Source: minio/templates/ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: RELEASE-NAME-minio
  labels:
    app: minio
    chart: minio-1.7.0
    release: RELEASE-NAME
    heritage: Tiller
  annotations:
    nginx.ingress.kubernetes.io/affinity: cookie
    nginx.ingress.kubernetes.io/session-cookie-hash: sha1
    nginx.ingress.kubernetes.io/session-cookie-name: route

spec:
  tls:
    - hosts:
        - minio.sample.com
      secretName: tls-secret
  rules:
    - host: minio.sample.com
      http:
        paths:
          - path: /
            backend:
              serviceName: RELEASE-NAME-minio
              servicePort: 9000
-- jamuna
kubernetes
kubernetes-helm
minio

2 Answers

9/12/2018

A bit more digging gave me the answer: the StatefulSet was deployed, but the pods were not created.

kubectl describe statefulset -n <namespace> minio

The events showed it was looking for a mount path, which was "" (left empty by a previous version of the chart's values); setting it solved my issue.
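For illustration, that empty path is visible in the rendered StatefulSet above: the minio-config-dir volume mount has no mountPath, and both the cp destination and the -C flag in the container command are blank. A minimal sketch of the fix, assuming the value names used by the stable/minio chart and its usual default config directory:

# values.yaml (sketch; verify the key names against your chart version)
configPath: "/root/.minio/"   # directory config.json is copied to and passed to minio -C
mountPath: "/export"          # where each pod's persistent volume is mounted

With a non-empty configPath, the rendered command copies /tmp/config.json into the config directory, passes it to minio -C, and the pods can start.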

-- jamuna
Source: StackOverflow

9/10/2018

I suspect you are not getting the persistent volume. Check your kube-controller-manager logs on your active master; how to do this will vary depending on the cloud you are using: AWS, GCP, Azure, OpenStack, etc. The kube-controller-manager usually runs in a Docker container on the master, so you can do something like:

docker logs <kube-controller-manager-container>
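On clusters where the control plane runs as static pods rather than plain Docker containers, roughly the equivalent is:

kubectl -n kube-system get pods | grep controller-manager
kubectl -n kube-system logs <kube-controller-manager-pod-name>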

Also, check:

kubectl get pvc
kubectl get pv
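Since the StatefulSet's volumeClaimTemplates request the local-fast storage class, it is also worth confirming that the class exists and that each pod's claim gets bound; the claim name below follows the usual <claim-template>-<pod> pattern:

kubectl get storageclass local-fast
kubectl describe pvc export-RELEASE-NAME-minio-0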

Hope it helps.

-- Rico
Source: StackOverflow