My .bash_profile has many aliases that I use regularly. When I exec into a Kubernetes pod, though, those aliases become (understandably) inaccessible. And when I say "exec into" I mean:
kubectl exec -it [pod-name] -c [container-name] -- bash
Is there any way to make it so that I can still use my bash profile after exec'ing in?
You said the file contains only aliases. In that case, and only in that case, you could save the .bash_profile in a ConfigMap using the --from-env-file flag:
kubectl create configmap bash-profile --from-env-file=.bash_profile
Keep in mind that each line in the env file has to be in VAR=VAL format. Lines beginning with # and blank lines are ignored.
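For illustration, a profile that parses cleanly as an env file might look like this (the variable names are hypothetical):
# comments and blank lines are skipped
EDITOR=vim
PAGER=less
Note that a bare alias definition such as alias ll='ls -la' is not in VAR=VAL form (the key would contain a space), so alias lines would need to be adapted before this works.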
You can then load all the key-value pairs as container environment variables:
apiVersion: v1
kind: Pod
metadata:
  name: bash-test-pod
spec:
  containers:
  - name: test-container
    image: k8s.gcr.io/busybox
    command: [ "/bin/sh", "-c", "env" ]
    envFrom:
    - configMapRef:
        name: bash-profile
  restartPolicy: Never
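Since the container's command is env and the pod runs to completion, you can verify that the variables were injected by checking the pod's output:
kubectl logs bash-test-pod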
Or populate a volume with the data stored in the ConfigMap:
apiVersion: v1
kind: Pod
metadata:
  name: bash-test-pod
spec:
  containers:
  - name: test-container
    image: k8s.gcr.io/busybox
    command: [ "/bin/sh", "-c", "ls /root/.bash_profile" ]
    volumeMounts:
    - name: config-volume
      mountPath: /root/.bash_profile
  volumes:
  - name: config-volume
    configMap:
      # Provide the name of the ConfigMap containing the files you want
      # to add to the container
      name: bash-profile
  restartPolicy: Never
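Keep in mind that a ConfigMap volume mounts as a directory containing one file per key, so the spec above would make /root/.bash_profile a directory. If the goal is a single file at that exact path, a sketch of the usual approach is to create the ConfigMap from the whole file and mount just that key with subPath:
kubectl create configmap bash-profile --from-file=.bash_profile
    volumeMounts:
    - name: config-volume
      mountPath: /root/.bash_profile
      subPath: .bash_profile
With --from-file, the key name defaults to the file name (.bash_profile), which is what subPath refers to.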
The idea mentioned by @Mark should also work: copy the file directly with kubectl cp .bash_profile <pod_name>:/root/. If you need to put it into a specific container, add the -c option (-c, --container='': Container name. If omitted, the first container in the pod will be chosen).
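For example (pod and container names are placeholders):
kubectl cp .bash_profile <pod_name>:/root/.bash_profile -c <container_name>
kubectl exec -it <pod_name> -c <container_name> -- bash -l
The -l flag matters here: bash only reads ~/.bash_profile when it starts as a login shell, so a plain bash launched via kubectl exec would not source the copied file automatically.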