Kubernetes - Pass Public IP of Load Balancer as Environment Variable into Pod

10/16/2019


I have a ConfigMap which provides necessary environment variables to my pods:

apiVersion: v1
kind: ConfigMap
metadata:
  name: global-config
data:
  NODE_ENV: prod
  LEVEL: info

  # I need to set API_URL to the public IP address of the Load Balancer
  API_URL: http://<SOME IP>:3000

  DATABASE_URL: mongodb://database:27017
  SOME_SERVICE_HOST: some-service:3000

I am running my Kubernetes Cluster on Google Cloud, so it will automatically create a public endpoint for my service:

apiVersion: v1
kind: Service
metadata:
  name: gateway
spec:
  selector:
    app: gateway
  ports:
    - name: http
      port: 3000
      targetPort: 3000
      nodePort: 30000
  type: LoadBalancer

Issue

I have a web application that needs to make HTTP requests from the client's browser to the gateway service. But in order to make a request to the external service, the web app needs to know its IP address.

So I've set up the pod that serves the web application so that it picks up an environment variable API_URL and, as a result, makes all HTTP requests to that URL.

So I just need a way to set the API_URL environment variable to the public IP address of the gateway service and pass it into the pod when it starts.
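
For reference, the pod consumes the ConfigMap roughly like this (simplified; the Deployment name, image, and the use of envFrom are just placeholders for how the values are actually wired in):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app
spec:
  selector:
    matchLabels:
      app: web-app
  template:
    metadata:
      labels:
        app: web-app
    spec:
      containers:
      - name: web-app
        image: my-web-app:latest      # placeholder image
        envFrom:
        - configMapRef:
            name: global-config       # injects NODE_ENV, LEVEL, API_URL, ...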

-- Florian Ludewig
environment-variables
google-cloud-platform
google-kubernetes-engine
kubernetes

3 Answers

10/20/2019

I know this isn't the exact approach you were going for, but I've found that creating a static IP address and explicitly passing it in tends to be easier to work with.

First, create a static IP address:

gcloud compute addresses create gke-ip --region <region>

where <region> is the GCP region your GKE cluster is located in.

Then you can get your new IP address with:

gcloud compute addresses describe gke-ip --region <region>
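
If you only want the raw address (for example to paste into a manifest), gcloud can print just that field; the --format flag below should work with recent gcloud versions:

gcloud compute addresses describe gke-ip --region <region> --format='value(address)'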

Now you can add your static IP address to your service by specifying an explicit loadBalancerIP [1]:

apiVersion: v1
kind: Service
metadata:
  name: gateway
spec:
  selector:
    app: gateway
  ports:
    - name: http
      port: 3000
      targetPort: 3000
      nodePort: 30000
  type: LoadBalancer
  loadBalancerIP: "1.2.3.4"

At this point, you can also hard-code it into your ConfigMap and not worry about grabbing the value from the cluster itself.
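
With the reserved address known up front, the ConfigMap from your question can then reference it directly (using the placeholder 1.2.3.4 from above):

apiVersion: v1
kind: ConfigMap
metadata:
  name: global-config
data:
  NODE_ENV: prod
  LEVEL: info
  API_URL: http://1.2.3.4:3000        # the reserved static IP
  DATABASE_URL: mongodb://database:27017
  SOME_SERVICE_HOST: some-service:3000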

[1] If you've already created a LoadBalancer with an auto-assigned IP address, setting an IP address won't change the IP of the underlying GCP load balancer. Instead, you should delete the LoadBalancer service in your cluster, wait ~15 minutes for the underlying GCP resources to get cleaned up, and then recreate the LoadBalancer with the explicit IP address.

-- supersam654
Source: StackOverflow

10/22/2019

You are trying to access the gateway service from the client's browser.

I would like to suggest another solution that is slightly different from what you are currently trying to achieve, but it can solve your problem.

From your question I was able to deduce that your web app and gateway app are on the same cluster.

In my solution you don't need a Service of type LoadBalancer; a basic Ingress is enough to make it work.

You only need to create a Service object (notice that the type: LoadBalancer option is now gone):

apiVersion: v1
kind: Service
metadata:
  name: gateway
spec:
  selector:
    app: gateway
  ports:
    - name: http
      port: 3000
      targetPort: 3000
and you also need an Ingress object (remember that an Ingress controller needs to be deployed to the cluster in order to make it work) like the one below. More on how to deploy the Nginx Ingress controller you can find here; if you are already using one (maybe a different one), you can skip this step.

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: gateway-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
    - host: gateway.foo.bar.com
      http:
        paths:
          - path: /
            backend:
              serviceName: gateway
              servicePort: 3000

Notice the host field.

You need to repeat the same for your web application. Remember to use an appropriate host name (DNS name), e.g. foo.bar.com for the web app and gateway.foo.bar.com for the gateway, and then just use the gateway.foo.bar.com DNS name to connect to the gateway app from the client's web browser (see the sketch below).
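
For example, an analogous Ingress for the web app could look like the one below (the web-app Service name and port are placeholders for your actual web application Service):

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: web-app-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
    - host: foo.bar.com
      http:
        paths:
          - path: /
            backend:
              serviceName: web-app    # placeholder Service name
              servicePort: 80         # placeholder port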

You also need to create a DNS entry that points *.foo.bar.com to the Ingress's public IP address, as the Ingress controller will create its own load balancer.
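
You can check that public IP on the Ingress controller's Service or on the Ingress object once an address has been assigned (the service name and namespace below are the defaults for the Nginx Ingress controller; adjust them to your installation):

kubectl get service ingress-nginx-controller -n ingress-nginx
kubectl get ingress gateway-ingress -o jsonpath='{.status.loadBalancer.ingress[0].ip}'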

The flow of traffic would be like below:

+-------------+   +---------+   +-----------------+   +---------------------+
| Web Browser |-->| Ingress |-->| gateway Service |-->| gateway application |
+-------------+   +---------+   +-----------------+   +---------------------+

This approach is better because it won't cause issues with Cross-Origin Resource Sharing (CORS) in the client's browser.

I took the example Ingress and Service manifests from the official Kubernetes documentation and modified them slightly.

You can find more on Ingress here and on Services here.

-- HelloWorld
Source: StackOverflow

10/20/2019

The following deployment reads the external IP of a given service using kubectl every 10 seconds and patches a given configmap with it:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: configmap-updater
  labels:
    app: configmap-updater
spec:
  selector:
    matchLabels:
      app: configmap-updater
  template:
    metadata:
      labels:
        app: configmap-updater
    spec:
      containers:
      - name: configmap-updater
        image: alpine:3.10
        command: ['sh', '-c']
        args:
        - |
            #!/bin/sh
            set -x

            # install curl and fetch a kubectl binary
            apk --update add curl
            curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/linux/amd64/kubectl
            chmod +x kubectl

            export CONFIGMAP="configmap/global-config"
            export SERVICE="service/gateway"

            while true
            do
                # read the external IP assigned to the gateway service ...
                IP=`./kubectl get $SERVICE -o go-template --template='{{ (index .status.loadBalancer.ingress 0).ip }}'`
                # ... and patch it into the ConfigMap's API_URL key
                PATCH=`printf '{"data":{"API_URL": "https://%s:3000"}}' $IP`
                echo ${PATCH}
                ./kubectl patch --type=merge -p "${PATCH}" $CONFIGMAP

                sleep 10
            done

You probably have RBAC enabled in your GKE cluster and would still need to create the appropriate Role and RoleBinding for this to work correctly.
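
A minimal sketch of those objects could look like the following (the names are placeholders, and binding to the default ServiceAccount is only an assumption; bind to whatever ServiceAccount the Deployment actually runs as):

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: configmap-updater         # placeholder name
rules:
- apiGroups: [""]
  resources: ["services"]
  verbs: ["get"]
- apiGroups: [""]
  resources: ["configmaps"]
  verbs: ["get", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: configmap-updater
subjects:
- kind: ServiceAccount
  name: default                   # assumes the pod runs as the default ServiceAccount
  namespace: default
roleRef:
  kind: Role
  name: configmap-updater
  apiGroup: rbac.authorization.k8s.io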

You've got a few possibilities:

  • If you really need this to be hacked into your setup, you could use a similar approach with a sidecar container in your pod or a global service like the one above. Keep in mind that if the ConfigMap actually changes, you would need to recreate your pods for the change to be picked up by your containers' environment variables.

  • Watch and query the Kubernetes API for the external IP directly in your application, eliminating the need for an environment variable (see the sketch after this list).

  • Adapt your applications to not directly depend on the external IP.
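
For the second option, a pod can query the API server directly with its mounted ServiceAccount credentials, for example (assuming the default namespace, that jq is available in the image, and that the ServiceAccount is allowed to read the service):

TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
curl -sS --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt \
  -H "Authorization: Bearer ${TOKEN}" \
  https://kubernetes.default.svc/api/v1/namespaces/default/services/gateway \
  | jq -r '.status.loadBalancer.ingress[0].ip'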

-- Simon Tesar
Source: StackOverflow