I'm using Ubuntu 19.10 with the standard GNOME terminal.
I have built a Docker image with nvm using the following Dockerfile (it's going to be an npm diagnostic/debug command-line container, not an application):
FROM ubuntu:19.10
RUN rm /bin/sh && ln -s /bin/bash /bin/sh
RUN apt-get update && \
apt-get -y dist-upgrade && \
apt-get -y autoremove && \
apt-get clean
RUN apt-get install -y \
curl \
nano \
git
ARG NODE_VERSION='12.0.0'
ARG NVM_DIR=/root
ARG NVM_VERSION='v0.35.3'
RUN curl -o- "https://raw.githubusercontent.com/nvm-sh/nvm/$NVM_VERSION/install.sh" | bash \
&& source $NVM_DIR/nvm.sh \
&& nvm install $NODE_VERSION
I have built the image, pushed it to an image registry, and started it in Kubernetes. I have accessed the running container with:
kubectl exec my-app-xx25 -it bash
But inside the container I cannot start e.g. nano:
root@my-app-xx25:/# nano
Error opening terminal: unknown.
or reset the terminal, for that matter:
root@my-app-xx25:/# reset
reset: unknown terminal type unknown
vi/vim works, though.
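The symptoms point at the TERM environment variable: full-screen tools like nano and reset look the terminal type up in the terminfo database, while vi falls back to a builtin default. A quick check you can run in any shell (a sketch illustrating the mechanism, not tied to the pod):

```shell
# With no TERM in the environment (as in the failing kubectl exec session),
# the fallback expansion fires:
env -i bash -c 'echo "TERM=${TERM:-unset}"'
# With TERM provided, the tools have a terminal type to look up:
env -i TERM=xterm bash -c 'echo "TERM=${TERM:-unset}"'
```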
Based on:
https://github.com/moby/moby/issues/9299
If I do:
kubectl exec my-app-xx25 -it -- bash -c "export TERM=xterm && bash"
I can start nano just fine, but it seems like a messy workaround.
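A slightly tidier variant of the same workaround (a sketch: env(1) injects the variable without the nested bash -c wrapper; the pod name is the one from above):

```shell
# env(1) sets TERM only for the command it launches; verify the mechanism locally:
env TERM=xterm bash -c 'echo "TERM=$TERM"'
# The same pattern applied to the pod (requires a running cluster):
# kubectl exec -it my-app-xx25 -- env TERM=xterm bash
```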
Note that if I run it locally with Docker, it works fine (starting nano, reset, etc.):
docker run -it my-image /bin/bash
Any suggestions as to what is going on, and why I need to pass export TERM=xterm when running kubectl exec but not when running docker run (locally)?
It's kind of odd that you are running nano inside a Kubernetes pod/container. The underlying difference is that docker run with -t sets TERM=xterm inside the container, while kubectl exec leaves it as unknown, as your error messages show. Given that your kubectl exec ... is not a root login shell, a workaround is to put the value in your /root/.bashrc:
export TERM=xterm
I'd recommend building the setting into the image if you are going to run this regularly, because otherwise every time your pod/container restarts you will have to modify /root/.bashrc by hand.
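Baking it in could look like this (a sketch extending the Dockerfile from the question; either line alone is enough):

```Dockerfile
# Make TERM available to every process in the container:
ENV TERM=xterm
# Or persist it only for interactive bash sessions:
RUN echo 'export TERM=xterm' >> /root/.bashrc
```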