In IBM Cloud Private EE, I need to go to the Web UI (User > Configure client), copy the kubectl config commands, and then run these five commands on my client machine.
I deployed IBM Cloud Private EE on 5 VMs and have access to the master node. I am wondering if there is a way to capture these kubectl config commands directly from the Docker containers without having to go to the Web UI.
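For reference, the five commands from the Configure client page look like this (a sketch; the cluster name, server address, and token are placeholders for the values the UI fills in):
kubectl config set-cluster mycluster.icp --server=https://<master-node>:8001 --insecure-skip-tls-verify=true
kubectl config set-context mycluster.icp-context --cluster=mycluster.icp
kubectl config set-credentials admin --token=<token-from-ui>
kubectl config set-context mycluster.icp-context --user=admin --namespace=default
kubectl config use-context mycluster.icp-context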
For example, I did not want to download the kubectl client from Google (I want to use the same kubectl version that ships in the ICP containers), so I used the following command to copy it out of the container itself.
# copy the kubectl binary out of the icp-inception image into the current directory
docker run --rm -v $(pwd):/data -e LICENSE=accept \
  ibmcom/icp-inception:2.1.0.1-ee \
  cp -r /usr/local/bin/kubectl /data
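A quick sanity check that the copied binary runs (assuming it landed in the current working directory):
chmod +x kubectl && ./kubectl version --client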
Then I copied this to all VM guests so that I could access kubectl from any guest.
chmod +x kubectl
for host in $(awk '/192.168.142/ {print $3}' /etc/hosts); do
  scp kubectl $host:/bin
done
Here 192.168.142 is the subnet of my VM guests.
However, I could not figure out how to get the Configure Client commands without going to the Web UI. I need this to automate the client-side kubectl configuration so that my environment is ready for kubectl commands through simple scripts.
# get token (the response is JSON; after stripping the braces and quotes,
# the 7th colon-separated field is the id_token value)
icp_auth_token=$(curl -s -k -H "Content-Type: application/x-www-form-urlencoded;charset=UTF-8" \
  -d "grant_type=password&username=${myuser}&password=${mypass}&scope=openid" \
  https://${icp_server}:8443/idprovider/v1/auth/identitytoken | \
  sed 's/{//g;s/}//g;s/\"//g' | \
  awk -F ':' '{print $7}')
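# Alternative (assumes jq is installed): select the id_token field by name
# instead of by position, which is less fragile than the sed/awk pipeline:
#   icp_auth_token=$(curl -s -k -H "Content-Type: application/x-www-form-urlencoded;charset=UTF-8" \
#     -d "grant_type=password&username=${myuser}&password=${mypass}&scope=openid" \
#     https://${icp_server}:8443/idprovider/v1/auth/identitytoken | jq -r .id_token)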
# setup context
kubectl config set-cluster ${icp_server} --server=https://${icp_server}:8001 --insecure-skip-tls-verify=true
kubectl config set-credentials ${icp_server}-user --token=${icp_auth_token}
kubectl config set-context ${icp_server}-context --cluster=${icp_server} --user=${icp_server}-user
kubectl config use-context ${icp_server}-context
@VonC provided useful tips. This is how the service account token can be obtained.
Get the token from a running container (tip from this link):
RUNNINGCONTAINER=$(docker ps | grep k8s_cloudiam-apikeys_auth | awk '{print $1}')
TOKEN=$(docker exec -t $RUNNINGCONTAINER cat /var/run/secrets/kubernetes.io/serviceaccount/token)
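To confirm the token is usable before writing the kubeconfig, one option (a sketch; $MASTERNODE is the master node address used below, and the API server must accept the bearer token) is to call the API server directly:
curl -s -k -H "Authorization: Bearer $TOKEN" https://$MASTERNODE:8001/version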
I already knew the IBM Cloud Private cluster name, the master node, and the default user name; the only missing link was the token. Note that the script used by Tim authenticates with a password; the only difference is that I wanted to use a token instead.
So the script becomes:
kubectl config set-cluster ${CLUSTERNAME}.icp --server=https://$MASTERNODE:8001 --insecure-skip-tls-verify=true
kubectl config set-context ${CLUSTERNAME}.icp-context --cluster=${CLUSTERNAME}.icp
# the credentials entry name must match the --user passed to set-context below
kubectl config set-credentials $DEFAULTUSERNAME --token=$TOKEN
kubectl config set-context ${CLUSTERNAME}.icp-context --user=$DEFAULTUSERNAME --namespace=default
kubectl config use-context ${CLUSTERNAME}.icp-context
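After that, a quick end-to-end check (assuming the token's service account has read access to the cluster):
kubectl get nodes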
You should use Vagrant to automate those steps.
For instance, IBM/deploy-ibm-cloud-private/Vagrantfile has this section:
install_kubectl = <<SCRIPT
echo "Pulling #{image_repo}/kubernetes:v#{k8s_version}..."
sudo docker run -e LICENSE=#{license} --net=host -v /usr/local/bin:/data #{image_repo}/kubernetes:v#{k8s_version} cp /kubectl /data &> /dev/null
kubectl config set-credentials icpadmin --username=admin --password=admin &> /dev/null
kubectl config set-cluster icp --server=http://127.0.0.1:8888 --insecure-skip-tls-verify=true &> /dev/null
kubectl config set-context icp --cluster=icp --user=admin --namespace=default &> /dev/null
kubectl config use-context icp &> /dev/null
SCRIPT
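With a Vagrantfile like that in place, provisioning the whole environment comes down to the standard Vagrant command, run from the directory containing the Vagrantfile:
vagrant up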
See more at "Kubernetes, IBM Cloud Private, and Vagrant, oh my!", from Tim Pouyer.