Bash script from a BAT file not running after connecting to a kubectl pod in Google Cloud Shell editor

9/9/2021

For my project, I have to connect to a postgres Database in Google Cloud Shell using a series of commands:

gcloud config set project <project-name>
gcloud auth activate-service-account <keyname>@<project-name>.iam.gserviceaccount.com --key-file=<filename>.json
gcloud container clusters get-credentials banting --region <region> --project <project>
kubectl get pods -n <node>
kubectl exec -it <pod-name> -n <node> bash
apt-get update
apt install postgresql postgresql-contrib
psql -h <hostname> -p <port> -d <database> -U <userId>

I am a beginner to this and, until now, have just been copy-pasting the scripts provided to me. To make things easier, I created a .bat file in the Shell editor with all the above commands and tried to run it using bash <filename>

But once the kubectl exec -it <pod-name> -n <node> bash command runs and a new prompt opens inside the Pod, as below, the rest of the commands do not run.

Defaulted container "<container>" out of: <node>, istio-proxy, istio-init (init)
root@<pod-name>:/#

So how can I make the shell run the rest of these commands from the .bat file:

apt-get update
apt install postgresql postgresql-contrib
psql -h <hostname> -p <port> -d <database> -U <userId>

-- Hemendra
google-cloud-shell
google-cloud-shell-editor
kubernetes
postgresql
shell

1 Answer

9/9/2021

Cloud Shell is a Linux instance and defaults to the Bash shell.

BAT commonly refers to Windows|DOS batch files.

On Linux, shell scripts are generally .sh.

Your script needs to be revised so that the commands intended for the Pod are passed to the kubectl exec command rather than executed by the current script.

You can try (!) the following. It creates a Bash (sub)shell on the Pod and runs the commands listed after -c in it:

gcloud config set project <project-name>

gcloud auth activate-service-account <keyname>@<project-name>.iam.gserviceaccount.com \
--key-file=<filename>.json

gcloud container clusters get-credentials banting \
--region <region> \
--project <project>

kubectl get pods -n <node>

kubectl exec -it <pod-name> -n <node> -- bash -c "apt-get update && apt install postgresql postgresql-contrib && psql -h <hostname> -p <port> -d <database> -U <userId>"
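To see why the single quoted string works, note that everything after -c is handed to the child bash as one script, with && stopping the chain at the first failure. A purely local demonstration of the same pattern (echo commands stand in for the apt/psql steps; no cluster needed):

```shell
# Local demonstration of the `bash -c` pattern: the quoted string is run
# as one script by a child bash; `&&` stops the chain on the first failure.
# (The echo commands are stand-ins for apt-get/apt/psql.)
result=$(bash -c "echo update && echo install && echo psql")
echo "$result"
```

The same quoting rules apply when the child shell runs on the Pod via kubectl exec.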

However, I have some feedback|recommendations:

  1. It's unclear whether even this approach will work, because you're running psql but doing nothing with it. In theory, you could pass a script to the psql command too, but then your script is becoming very janky.
  2. It is considered bad practice to install software in containers as you're doing. The recommendation is to build the image that you want to run beforehand and use that. Containers should be immutable.
  3. I encourage you to use long flags when you write scripts, as short flags (-n) can be confusing whereas --namespace= is clearer (IMO). Yes, these take longer to type, but your script is clearer as a result. When you're hacking on the command-line, short flags are fine.
  4. I encourage you to not use gcloud config set e.g. gcloud config set project ${PROJECT}. This sets global values. And its use is confusing because subsequent commands use the values implicitly. Interestingly, you provide a good example of why this can be challenging. Your subsequent command gcloud container clusters get-credentials --project=${PROJECT} explicitly uses the --project flag (this is good) even though you've already implicitly set the value for project using gcloud config set project.
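Expanding on point 1: rather than leaving psql interactive, a script can be fed to it over stdin (psql reads stdin when given --file=-). A sketch, composed but deliberately not executed here since there is no cluster; all bracketed names are placeholders, and it uses the long flags recommended in point 3:

```shell
# Sketch (untested against a real cluster): run psql non-interactively by
# feeding it a SQL script on stdin. Pod, namespace, host, port, database
# and user names are hypothetical placeholders.
sql=$(cat <<'EOF'
SELECT version();
EOF
)

# Compose the command string rather than run it in this sketch. For real
# use, pipe the script into it, e.g.:
#   printf '%s\n' "$sql" | kubectl exec --stdin <pod-name> --namespace=<node> -- psql ... --file=-
cmd='kubectl exec --stdin <pod-name> --namespace=<node> -- psql --host=<hostname> --port=<port> --dbname=<database> --username=<userId> --file=-'
echo "$cmd"
```

Because the SQL arrives on stdin, the -it (interactive TTY) flags are no longer needed; --stdin alone suffices.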
-- DazWilkin
Source: StackOverflow