How to execute shell commands with k8s module on a Pod through Ansible

6/28/2020

I am having some trouble trying to execute shell commands with the k8s module. As far as I am aware, k8s only provides k8s_exec for running commands. That module is similar to Ansible's command module, and it lacks a piece of functionality I desperately need: managing and using environment variables on a pod. The workaround I have found so far is using kubectl exec to run shell commands on the remote machines, though I am aware this is not the best approach.

Here is a playbook that illustrates some examples for the problem:

---
- hosts: localhost # group of hosts from the inventory file
  connection: local
  remote_user: root
  vars:
    ansible_python_interpreter: '{{ ansible_playbook_python }}'
  collections:
    - community.kubernetes

  tasks:
    - name: Define Retail Home Path with k8s_exec module (not working!!!)
      k8s_exec:
        kubeconfig: "{{ kubeconfig_path | mandatory }}"
        namespace: redmine
        pod: redminetisl-gitlab-54d7759df8-l52cb #pod name
        command: export RETAIL_HOME=/u01/app/rms
    - name: Define Retail Home Path with kubectl exec via command module (working!!!)
      command: kubectl --namespace=redmine exec redminetisl-gitlab-54d7759df8-l52cb -- /bin/bash -c "export RETAIL_HOME=/u01/app/rms"

Is there any way to execute shell commands and use/manage environment variables on a remote machine with the k8s module?

Ansible version:

ansible 2.9.9
  config file = /etc/ansible/ansible.cfg
  configured module search path = ['/home/ansible/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/lib/python3.6/site-packages/ansible
  executable location = /usr/bin/ansible
  python version = 3.6.8 (default, Apr 16 2020, 01:36:27) [GCC 8.3.1 20191121 (Red Hat 8.3.1-5)]
-- João Pacheco
ansible
kubernetes

1 Answer

6/28/2020

You can't use this mechanism to change a process's environment variables. This is generally true in Unix: one process can't change another's environment variables, except that a parent can specify them at the moment it creates a child (and even then it is technically the child process, running the parent's code, that sets them). You can see the same behavior using, for example, ssh:

ssh somewhere export RETAIL_HOME=/u01/app/rms   # sets the variable in a shell that exits immediately
ssh somewhere echo \$RETAIL_HOME                # a new shell with a fresh environment: prints an empty line
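
If all you need is for the variable to be visible to one command, you can set it inline in the shell invocation that runs that command, so the variable is created by the same process that uses it. A minimal sketch with k8s_exec, reusing the pod name from the question and assuming the image provides /bin/sh, env and grep (the env | grep pipeline is only a stand-in to show the variable is set):

    - name: Run a single command with RETAIL_HOME set
      k8s_exec:
        kubeconfig: "{{ kubeconfig_path | mandatory }}"
        namespace: redmine
        pod: redminetisl-gitlab-54d7759df8-l52cb
        command: /bin/sh -c 'RETAIL_HOME=/u01/app/rms env | grep RETAIL_HOME'

The variable lives only inside that /bin/sh process and is gone once it exits, consistent with the rule above.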

If you want to set an environment variable and you need it to affect the pod's main process, you need to edit the Deployment spec; when you change it, Kubernetes will redeploy the Pod(s) with the new variables. For something like a filesystem path, also consider simply baking it into your image's Dockerfile.
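
Since the question is about Ansible, the k8s module is a natural way to make that edit: given the definition of a resource that already exists, it patches the live object. A sketch, where the Deployment name and container name (redminetisl-gitlab) are assumptions inferred from the pod name in the question:

    - name: Set RETAIL_HOME in the Deployment spec
      k8s:
        kubeconfig: "{{ kubeconfig_path | mandatory }}"
        state: present
        definition:
          apiVersion: apps/v1
          kind: Deployment
          metadata:
            name: redminetisl-gitlab          # assumed Deployment name
            namespace: redmine
          spec:
            template:
              spec:
                containers:
                  - name: redminetisl-gitlab  # assumed container name
                    env:
                      - name: RETAIL_HOME
                        value: /u01/app/rms

After the rollout, every process in the recreated Pod sees RETAIL_HOME without any exec step.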

Since it is very routine for Pods to get deleted (a Deployment update causes all of its Pods to be deleted and recreated, and the cluster can do it on its own if a node needs to shut down), using kubectl exec to make changes directly inside a pod isn't especially reliable. I wouldn't try to "manage" a pod with Ansible at all.

-- David Maze
Source: StackOverflow