I am using the following job template:
apiVersion: batch/v1
kind: Job
metadata:
  name: rotatedevcreds2
spec:
  template:
    metadata:
      name: rotatedevcreds2
    spec:
      containers:
      - name: shell
        image: akanksha/dsserver:v7
        env:
        - name: DEMO
          value: "Hello from the environment"
        - name: personal_AWS_SECRET_ACCESS_KEY
          valueFrom:
            secretKeyRef:
              name: rotatecreds-env
              key: personal_aws_secret_access_key
        - name: personal_AWS_SECRET_ACCESS_KEY_ID
          valueFrom:
            secretKeyRef:
              name: rotatecreds-env
              key: personal_aws_secret_access_key_id
        - name: personal_GIT_TOKEN
          valueFrom:
            secretKeyRef:
              name: rotatecreds-env
              key: personal_git_token
        command:
        - "bin/bash"
        - "-c"
        - "whoami; pwd; /root/rotateCreds.sh"
      restartPolicy: Never
      imagePullSecrets:
      - name: regcred
The shell script runs some Ansible tasks, which fail with:
TASK [Get the existing access keys for the functional backup ID] ***************
fatal: [localhost]: FAILED! => {"changed": false, "cmd": "aws iam list-access-keys --user-name ''", "failed_when_result": true, "msg": "[Errno 2] No such file or directory", "rc": 2}
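For context, a task that produces this failure would look roughly like the sketch below (the command matches the log above, but the variable name and failed_when condition are assumptions, since the playbook is not shown). The "[Errno 2] No such file or directory" with rc 2 means the aws executable itself could not be found by the task, not that the IAM call was rejected:

- name: Get the existing access keys for the functional backup ID
  command: "aws iam list-access-keys --user-name '{{ backup_user_id }}'"  # backup_user_id is a hypothetical variable
  register: existing_keys
  failed_when: existing_keys.rc != 0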
However, if I spin up a pod using the same image with the following template:
apiVersion: batch/v1
kind: Job
metadata:
  name: rotatedevcreds3
spec:
  template:
    metadata:
      name: rotatedevcreds3
    spec:
      containers:
      - name: shell
        image: akanksha/dsserver:v7
        env:
        - name: DEMO
          value: "Hello from the environment"
        - name: personal_AWS_SECRET_ACCESS_KEY
          valueFrom:
            secretKeyRef:
              name: rotatecreds-env
              key: personal_aws_secret_access_key
        - name: personal_AWS_SECRET_ACCESS_KEY_ID
          valueFrom:
            secretKeyRef:
              name: rotatecreds-env
              key: personal_aws_secret_access_key_id
        - name: personal_GIT_TOKEN
          valueFrom:
            secretKeyRef:
              name: rotatecreds-env
              key: personal_git_token
        command:
        - "bin/bash"
        - "-c"
        - "whoami; pwd; /root/rotateCreds.sh"
      restartPolicy: Never
      imagePullSecrets:
      - name: regcred
This creates a pod, and I am able to log in to it and run /root/rotateCreds.sh manually. When the job runs the script, however, it seems unable to recognise the aws CLI. I tried debugging: whoami and pwd return root and / respectively, so that looks fine. Any pointers on what is missing? I am new to Jobs.
For further debugging, I added a sleep of 10000 seconds to the job template so that I could log in to the container and see what was happening. After logging in, I noticed I was able to run the script manually too, and the aws command was recognised properly.
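For reference, the debugging change amounted to swapping the container command for a sleep and then exec-ing into the pod (a sketch; the pod name is a placeholder):

        command:
        - "bin/bash"
        - "-c"
        - "sleep 10000"

    kubectl exec -it <pod-name> -- /bin/bash

Note that kubectl exec starts an interactive bash, which sources its usual startup files, so the PATH seen there can differ from the PATH the job's non-interactive command ran with.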
OK, so I added an export command to update the PATH, and that fixed the issue. The problem was that I was using Ansible's command module, so the task was not running in a bash environment and did not get the PATH my interactive shell had. So either use the shell module with bash as the executable, as described here: https://docs.ansible.com/ansible/latest/modules/shell_module.html, or export the new PATH. Both fixes are sketched below.
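A minimal sketch of both fixes on the Ansible side (the task name matches the log above; the variable name and the install location of aws are assumptions):

# Option 1: run the task through bash via the shell module
- name: Get the existing access keys for the functional backup ID
  shell: "aws iam list-access-keys --user-name '{{ backup_user_id }}'"
  args:
    executable: /bin/bash

# Option 2: keep the command module but set PATH explicitly for the task
- name: Get the existing access keys for the functional backup ID
  command: "aws iam list-access-keys --user-name '{{ backup_user_id }}'"
  environment:
    PATH: "/usr/local/bin:{{ ansible_env.PATH }}"

Equivalently, an export PATH=... line in /root/rotateCreds.sh before invoking the playbook achieves the same thing.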
It is likely your PATH is not set correctly. A quick fix is to call the aws CLI by its absolute path, e.g. /usr/local/bin/aws, in the /root/rotateCreds.sh script.
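A sketch of that fix (assuming the binary lives at /usr/local/bin/aws; confirm the location with which aws inside the container):

#!/bin/bash
# /root/rotateCreds.sh: call aws by absolute path instead of relying on PATH
/usr/local/bin/aws iam list-access-keys --user-name "$IAM_USER"

Here $IAM_USER is a placeholder for however the script supplies the user name.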