Why is persistentvolumeclaim "static-claim0" not found when deploying to k8s using kompose-generated files?

11/13/2017

Installed CentOS 7 hosts in Vagrant/VirtualBox, then launched a Rancher server/k8s cluster.

Used kompose to convert a docker-compose file to Kubernetes config files.
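The conversion was the default invocation recorded in the `kompose.cmd` annotation below, run from the directory containing the compose file (the `docker-compose.yml` filename is an assumption):

```shell
# Run in the directory holding docker-compose.yml;
# writes one *-deployment.yaml / *-service.yaml / *-persistentvolumeclaim.yaml per service
kompose convert
```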

Such as:

static-deployment.yaml

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  annotations:
    kompose.cmd: kompose convert
    kompose.version: 1.4.0 (c7964e7)
  creationTimestamp: null
  labels:
    io.kompose.service: static
  name: static
spec:
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      creationTimestamp: null
      labels:
        io.kompose.service: static
    spec:
      containers:
      - args:
        - ./entrypoint.sh
        image: 192.168.33.13/myapp/static
        name: orange-static
        ports:
        - containerPort: 10301
        resources: {}
        volumeMounts:
        - mountPath: /var/www
          name: static-claim0
        - mountPath: /var/www/dist/assets
          name: static-claim1
        - mountPath: /var/www/dist/api-mock
          name: static-claim2
      restartPolicy: Always
      volumes:
      - name: static-claim0
        persistentVolumeClaim:
          claimName: static-claim0
      - name: static-claim1
        persistentVolumeClaim:
          claimName: static-claim1
      - name: static-claim2
        persistentVolumeClaim:
          claimName: static-claim2
status: {}

static-service.yaml

apiVersion: v1
kind: Service
metadata:
  annotations:
    kompose.cmd: kompose convert
    kompose.version: 1.4.0 (c7964e7)
  creationTimestamp: null
  labels:
    io.kompose.service: static
  name: static
spec:
  ports:
  - name: "10301"
    port: 10301
    targetPort: 10301
  selector:
    io.kompose.service: static
status:
  loadBalancer: {}

static-claim0-persistentvolumeclaim.yaml

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  creationTimestamp: null
  labels:
    io.kompose.service: static-claim0
  name: static-claim0
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 100Mi
status: {}

static-claim1-persistentvolumeclaim.yaml

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  creationTimestamp: null
  labels:
    io.kompose.service: static-claim1
  name: static-claim1
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 100Mi
status: {}

static-claim2-persistentvolumeclaim.yaml

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  creationTimestamp: null
  labels:
    io.kompose.service: static-claim2
  name: static-claim2
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 100Mi
status: {}

After running `kubectl create -f static-deployment.yaml` and checking the k8s dashboard:

(screenshot: dashboard shows the error `persistentvolumeclaim "static-claim0" not found`)

What should I do? Is the reason that the volume doesn't exist in Vagrant yet?
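Note that only the Deployment file was applied so far. Would creating the claim files first fix it? A sketch of what I think the full order would be (just a guess on my part, using the filenames above):

```shell
# Create the claims before the Deployment that references them
kubectl create -f static-claim0-persistentvolumeclaim.yaml
kubectl create -f static-claim1-persistentvolumeclaim.yaml
kubectl create -f static-claim2-persistentvolumeclaim.yaml
kubectl create -f static-service.yaml
kubectl create -f static-deployment.yaml

# Check whether the claims actually bind to a volume
kubectl get pvc
```

(Applying the whole directory at once with `kubectl create -f .` should also work, since the claim objects just need to exist when the Pod is scheduled.)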

-- online
centos7
kubernetes
rancher
vagrant
volume

0 Answers