How to get all running pods on a Kubernetes cluster

6/12/2019

This simple Node.js program works fine locally because it pulls the Kubernetes config from my local /root/.kube/config file:

const Client = require('kubernetes-client').Client;
const Config = require('kubernetes-client/backends/request').config;

const client = new Client({ config: Config.fromKubeconfig(), version: '1.13' });

async function listPods() {
    const pods = await client.api.v1.namespaces('xxxxx').pods.get({ qs: { labelSelector: 'application=test' } });
    console.log('Pods: ', JSON.stringify(pods));
}

listPods();

Now I want to run it as a Docker container on the cluster and have it get all of the cluster's running pods (for the same/current namespace). Of course, it now fails:

Error:  { Error: ENOENT: no such file or directory, open '/root/.kube/config'

So how do I make it work when it is deployed to the cluster as a Docker container? This little service needs to scan all running pods... Assume it doesn't need to pull config data since it's already deployed; it just needs to access the pods on the current cluster.

-- John Glabb
docker
docker-compose
kubernetes
node.js

1 Answer

6/12/2019

A couple of concepts to get your head around first:

To achieve your end goal (which, if I understand correctly, is to containerize the Node.js application and have it list the pods in its own cluster):

Step 1: Put the application in a container

Step 2: Create a Deployment/StatefulSet/DaemonSet, as per your requirement, using the container built in step 1 (a minimal sketch follows)
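For illustration only, a bare-bones Deployment for such a service might look roughly like this; the names and image (pod-scanner, my-registry/pod-scanner:latest) are placeholders, not anything from the question:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: pod-scanner
spec:
  replicas: 1
  selector:
    matchLabels:
      app: pod-scanner
  template:
    metadata:
      labels:
        app: pod-scanner
    spec:
      # no serviceAccountName set, so the "default" serviceaccount is used
      containers:
        - name: pod-scanner
          image: my-registry/pod-scanner:latest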

Explanation:

In step 2 above, if you do not explicitly specify a (custom) serviceaccount, the pod runs under the default serviceaccount, whose credentials are mounted inside the container by default here:

volumeMounts:
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: default-token-xxxx
      readOnly: true

which can be verified with this command after the pod has been created successfully:

kubectl get pod POD_NAME -n {your-namespace} -o yaml   # the namespace defaults to "default"

Now (gotcha!) whether you can access the cluster with those credentials depends on which serviceaccount you are using and the rights granted to that serviceaccount. For example, if you are using an abc serviceaccount that has no rolebinding attached to it, then you will not be able to view the cluster. In that case you first need to create a role (to read pods) and a rolebinding (for that role) to the serviceaccount, along the lines of the sketch below.
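As a rough sketch (names such as pod-reader and read-pods are illustrative, and the namespace/serviceaccount should match your own), the Role and RoleBinding could look like this:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: default            # the namespace your pod runs in
rules:
  - apiGroups: [""]             # "" is the core API group, where pods live
    resources: ["pods"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: default
subjects:
  - kind: ServiceAccount
    name: default               # or your custom serviceaccount, e.g. abc
    namespace: default
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io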


UPDATE:
The problem was resolved by changing Config.fromKubeconfig() to Config.getInCluster() (Ref).
Clarification: the fromKubeconfig() function is fine if you are running your application on a node that is part of the Kubernetes cluster and has a cluster-access token saved at $HOME/.kube/config, but if you want to run the Node.js application in a container inside a pod, then you need Config.getInCluster() to load the token. If you are nosy enough, check the comments of this answer! :P
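Put together with the snippet from the question, the in-cluster version would look roughly like this (a sketch that reuses the question's imports, version string, and 'xxxxx' namespace placeholder as-is):

const Client = require('kubernetes-client').Client;
const Config = require('kubernetes-client/backends/request').config;

// getInCluster() reads the serviceaccount token and CA certificate mounted at
// /var/run/secrets/kubernetes.io/serviceaccount instead of a local kubeconfig file
const client = new Client({ config: Config.getInCluster(), version: '1.13' });

async function listPods() {
    const pods = await client.api.v1.namespaces('xxxxx').pods.get({ qs: { labelSelector: 'application=test' } });
    console.log('Pods: ', JSON.stringify(pods));
}

listPods();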

Note: the Node.js library under discussion here is kubernetes-client.

-- garlicFrancium
Source: StackOverflow