How to persist latest queues after pod recreation

1/18/2019

I am trying to run ActiveMQ in Kubernetes. I want to keep the queues even after the pod is terminated and recreated. So far I have gotten the queues to survive pod deletion and recreation, but there is a catch: it seems to restore the queue list from one generation behind.

Example: I create 3 queues, a, b, and c. I delete the pod and it is recreated; the queue list is empty. I then go ahead and create queues x and y. When I delete the pod and it gets recreated, it loads queues a, b, and c. If I then add a queue d and the pod is recreated, it shows x and y.

I have created a configMap like below and I'm using the config map in my YAML file as well.

kubectl create configmap amq-config-map --from-file=/opt/apache-activemq-5.15.6/data



apiVersion: apps/v1
kind: Deployment
metadata:
  name: activemq-deployment-local
  labels:
    app: activemq
spec:
  replicas: 1
  selector:
    matchLabels:
      app: activemq
  template:
    metadata:
      labels:
        app: activemq
    spec:
      containers:
      - name: activemq
        image: activemq:1.0
        ports:
        - containerPort: 8161
        volumeMounts:
        - name: activemq-data-local
          mountPath: /opt/apache-activemq-5.15.6/data
          readOnly: false
      volumes:
      - name: activemq-data-local
        persistentVolumeClaim:
          claimName: amq-pv-claim-local
      - name: config-vol
        configMap:
          name: amq-config-map
---
apiVersion: v1
kind: Service
metadata:
  name: my-service-local
spec:
  selector:
    app: activemq
  ports:
  - port: 8161
    targetPort: 8161
  type: NodePort
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: amq-pv-claim-local
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
---
kind: PersistentVolume
apiVersion: v1
metadata:
  name: amq-pv-claim-local
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 3Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /tmp

When the pod is recreated, I want the queues to stay the same. I'm almost there, but I need some help.

-- SamK
kubernetes

2 Answers

2/26/2019

With this deployment plan, I'm able to get ActiveMQ working in a Kubernetes cluster running on AWS. However, I'm still trying to figure out why the same approach does not work for MySQL.

Simply running

    kubectl create -f activemq.yaml 

does the trick. Queues are persistent: even terminating the pod and restarting brings the queues back, and they remain until the persistent volume and claim are removed. With this template, I don't even need to explicitly create a volume.

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: activemq-deployment
      labels:
        app: activemq
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: activemq
      template:
        metadata:
          labels:
            app: activemq
        spec:
          securityContext:
            fsGroup: 2000
          containers:
          - name: activemq
            image: activemq:1.0
            ports:
            - containerPort: 8161
            volumeMounts:
            - name: activemq-data
              mountPath: /opt/apache-activemq-5.15.6/data
              readOnly: false
          volumes:
          - name: activemq-data
            persistentVolumeClaim:
              claimName: amq-pv-claim
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: amq-nodeport-service
    spec:
      selector:
        app: activemq
      ports:
      - port: 8161
        targetPort: 8161
      type: NodePort
    ---
    kind: PersistentVolumeClaim
    apiVersion: v1
    metadata:
      name: amq-pv-claim
    spec:
      #storageClassName: manual
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 2Gi
-- SamK
Source: StackOverflow

1/20/2019

You might be missing a setting in your persistent volume:

kind: PersistentVolume
apiVersion: v1
metadata:
  name: amq-pv-claim-local
  labels:
    type: local
spec:
  storageClassName: manual
  persistentVolumeReclaimPolicy: Retain
  capacity:
    storage: 3Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /tmp

Also, there is still a good chance that this does not work due to the use of hostPath: hostPath means the data is stored on the node where the volume was first created. It does not migrate along with a restarted pod, which can lead to very odd behavior with a PV. Look at using NFS, GlusterFS, or another cluster file system to store your data at a generically accessible path.
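As a sketch of the NFS option, a node-independent PersistentVolume backed by an NFS export might look like the following (the server address and export path are placeholders; substitute your own NFS setup):

    kind: PersistentVolume
    apiVersion: v1
    metadata:
      name: amq-pv-nfs
    spec:
      storageClassName: manual
      persistentVolumeReclaimPolicy: Retain
      capacity:
        storage: 3Gi
      accessModes:
        - ReadWriteOnce
      nfs:
        # Placeholder values - point these at your own NFS server and export
        server: nfs.example.internal
        path: /exports/activemq

Because the data lives on the NFS server rather than on any one node, the pod can be rescheduled anywhere in the cluster and still see the same queue data.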

If you use a cloud provider, you can also have Kubernetes provision and attach disks automatically, so GCP, AWS, Azure, etc. provide the storage for you and Kubernetes mounts it wherever it needs it.
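For illustration, dynamic provisioning on AWS with the in-tree EBS provisioner could be sketched like this (the class name `ebs-gp2` is made up for this example; many managed clusters already ship a default StorageClass you can use instead):

    kind: StorageClass
    apiVersion: storage.k8s.io/v1
    metadata:
      name: ebs-gp2
    provisioner: kubernetes.io/aws-ebs
    parameters:
      type: gp2
    ---
    kind: PersistentVolumeClaim
    apiVersion: v1
    metadata:
      name: amq-pv-claim
    spec:
      storageClassName: ebs-gp2
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 2Gi

With this in place, creating the claim triggers Kubernetes to provision a matching EBS volume on its own; no hand-written PersistentVolume object is needed.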

-- Norbert van Nobelen
Source: StackOverflow