Kubernetes Deployment: preStop does not execute aws commands

2/21/2019

I am trying to transfer logs over to S3 just before the pod is terminated. For this, we need to:

  1. Configure our container to have the AWS CLI. I did this successfully using a script in a postStart hook.

  2. Execute an AWS S3 command to transfer files from the hostPath volume to an S3 bucket. Almost had this one!

Here is my Kube Deployment (running on minikube):

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: logtransfer-poc
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: logs
    spec:
      volumes:
      - name: secret-resources
        secret:
          secretName: local-secrets
      - name: testdata
        hostPath:
          path: /data/testdata
      containers:
        - name: logtransfer-poc
          image: someImage
          ports:
          - name: https-port
            containerPort: 8443
          command: ["/bin/bash","-c","--"]
          args: ["while true; do sleep 30; done;"]
          volumeMounts:
          - name: secret-resources
            mountPath: "/data/apache-tomcat/tomcat/resources"
          - name: testdata
            mountPath: "/data/testdata"
          lifecycle:
            postStart:
              exec:
                command: ["/bin/sh", "-c", "cd /data/testdata/ && chmod u+x installS3Script.sh && ./installS3Script.sh > postInstall.logs"]
            preStop:
              exec:
                command: ["/bin/sh", "-c", "cd /data/testdata/ && chmod u+x transferFilesToS3.sh && ./transferFilesToS3.sh > postTransfer.logs"]
          terminationMessagePath: /data/testdata/termination-log
      terminationGracePeriodSeconds: 30
      imagePullSecrets:
        - name: my-docker-credentials
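
In case it is relevant, this is roughly how I check whether the hooks ran at all (a sketch; the pod name is a placeholder). Hook output does not show up in the container log, which is why the scripts redirect to files under /data/testdata:

# Failed hooks surface as FailedPostStartHook / FailedPreStopHook events
kubectl describe pod <logtransfer-poc-pod>
# Read the redirected hook output from inside the container
kubectl exec <logtransfer-poc-pod> -- cat /data/testdata/postInstall.logs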

installS3Script.sh

#!/bin/bash

# Install pip (user-local), put the AWS CLI on the root user's PATH and
# copy the AWS config/credentials from the mounted hostPath into ~/.aws.
apt-get update
curl -O https://bootstrap.pypa.io/get-pip.py
python3 get-pip.py --user
chmod u+x get-pip.py
echo "PATH=$PATH:/root/.local/bin" >> ~/.bashrc && echo "Path Exported !!"
source ~/.bashrc && echo "Refreshed profile !"
pip3 install awscli --upgrade --user
mkdir -p ~/.aws
cp /data/testdata/config/config ~/.aws
cp /data/testdata/config/credentials ~/.aws

transferFilesToS3.sh

#!/bin/bash

# Copy the collected logs from the hostPath mount to the S3 bucket,
# then list the bucket to confirm the upload.
# export AWS_DEFAULT_PROFILE=admin
echo "Transferring files to S3.."
aws s3 cp /data/testdata/data s3://testpratham --recursive --profile admin
aws s3 ls s3://testpratham --profile admin
echo "Transfer to S3 successful !!"

What failed: transferFilesToS3.sh runs successfully, BUT it does NOT execute the AWS commands.

What works: I created test log files and put the aws commands in the postStart hook (installS3Script.sh), and they work fine!

I think I might be misunderstanding how preStop hooks work. I read a few articles on the container lifecycle and the preStop hook. I also had a related question on the use of the preStop hook with the termination grace period.
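
On the grace period part, my current understanding (a sketch, not verified in detail): the preStop hook has to finish within terminationGracePeriodSeconds, otherwise the container is killed mid-transfer, so a slow S3 upload would need the grace period raised above the default 30 seconds, roughly like this:

spec:
  # give the preStop hook more time than the default 30s to finish the upload
  terminationGracePeriodSeconds: 120
  containers:
    - name: logtransfer-poc
      lifecycle:
        preStop:
          exec:
            command: ["/bin/sh", "-c", "cd /data/testdata/ && ./transferFilesToS3.sh > postTransfer.logs 2>&1"]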

Any suggestions/help on what I might be missing are appreciated.

-- Prathamesh dhanawade
amazon-web-services
bash
kubernetes
kubernetes-deployment
minikube

1 Answer

2/22/2019

Maybe it would be easier to use Skbn.

Skbn is a tool for copying files and directories between Kubernetes and cloud storage providers. It is named after the 1981 video game Sokoban. Skbn uses an in-memory buffer for the copy process, to avoid excessive memory consumption. Skbn currently supports the following providers:

- AWS S3
- Minio S3
- Azure Blob Storage

You could use:

skbn cp \
    --src k8s://<namespace>/<podName>/<containerName>/<path> \
    --dst s3://<bucket>/<path>

You should look at the in-cluster usage, as it will require setting up a ClusterRole, ClusterRoleBinding, and ServiceAccount.
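
Roughly, the in-cluster pieces look like this (a sketch; the names, namespace and exact verbs are assumptions and may need adjusting, see Skbn's docs for the canonical manifests):

apiVersion: v1
kind: ServiceAccount
metadata:
  name: skbn
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: skbn
rules:
- apiGroups: [""]
  # Skbn copies files by exec-ing into the target pod's container
  resources: ["pods", "pods/exec"]
  verbs: ["get", "list", "create"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: skbn
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: skbn
subjects:
- kind: ServiceAccount
  name: skbn
  namespace: default

The Pod or Job that runs skbn cp then references serviceAccountName: skbn.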

-- Crou
Source: StackOverflow