Use other deployment IP in YAML deployment configuration

3/22/2018

I'm doing a prototype where one service depends on the availability of another. The scenario:

  • Service A is assumed to be already available in a local network. It was either deployed by K8S or manually (or even a managed one provided by AWS etc.).
  • Service B depends on environment variable SERVICE_A_IP and won't start without it. It's treated as a black box and can't be modified.

I want to pass Service A's IP to Service B through a K8S YAML configuration file. The perfect syntax for this occasion would be something like:

...
env:
  - name: SERVICE_A_IP
    valueFrom:
      k8sDeployment:
        name: service-a
        key: deploymentIP
...
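For context, here is a hedged sketch of the valueFrom sources Kubernetes actually supports (configMapKeyRef, secretKeyRef, fieldRef and resourceFieldRef); there is no deployment-level source like the wished-for k8sDeployment above, and the resource names below are illustrative:

```yaml
env:
  - name: MY_POD_IP
    valueFrom:
      fieldRef:
        fieldPath: status.podIP        # this pod's own IP, not another service's
  - name: SERVICE_A_IP
    valueFrom:
      configMapKeyRef:
        name: service-a-meta           # a ConfigMap you would have to create yourself
        key: SERVICE_A_IP
```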

During the prototyping stage Service A is another K8S deployment, but it might not be so in a production environment. Thus I need to decouple from the SERVICE_A_SERVICE_HOST variable that will be available to Service B (given it's deployed after Service A). I'm not keen on DNS discovery either, as it would require container modification, which is far from a perfect solution.

If I were to do it manually with kubectl (or with a shell script), it would look like the following:

$ kubectl run service-a --image=service_a:latest --port=8080
$ kubectl expose deployment service-a
$ SERVICE_A_IP="$(kubectl describe service service-a | \
    grep IP: | \
    cut -f2 -d ':' | \
    xargs)"
$ kubectl run service-b --image=service_b:latest --port=8080 \
    --env="SERVICE_A_IP=${SERVICE_A_IP}"
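To make concrete what the grep/cut/xargs pipeline extracts, here is a self-contained sketch run against an illustrative excerpt of kubectl describe output (the sample text is an assumption; real output varies by cluster and kubectl version):

```shell
# Illustrative excerpt of `kubectl describe service` output (not real cluster data)
describe_output='Name:              service-a
Namespace:         default
Type:              ClusterIP
IP:                10.96.0.42
Port:              <unset>  8080/TCP'

# Same pipeline as above: take the IP: line, split on ":", trim whitespace
SERVICE_A_IP="$(printf '%s\n' "$describe_output" \
  | grep IP: \
  | cut -f2 -d ':' \
  | xargs)"
echo "$SERVICE_A_IP"   # → 10.96.0.42
```

Note that newer kubectl versions also print an "IPs:" line, which this grep would match as well; `kubectl get service service-a -o jsonpath='{.spec.clusterIP}'` is a more robust way to read the ClusterIP directly.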

It works. However, I want to achieve the same using a YAML configuration file, without injecting SERVICE_A_IP into the file with shell (i.e. without modifying the file).

Is there any way to do so? Please take the above setting as set in stone.

UPDATE

Not the best way, but still:

$ kubectl create -f service_a.yml
deployment "service-a" created
service "service-a" created
$ SERVICE_A_IP="$(kubectl describe service service-a | \
    grep IP: | \
    cut -f2 -d ':' | \
    xargs)"
$ kubectl create configmap service-a-meta \
    --from-literal="SERVICE_A_IP=${SERVICE_A_IP}"

And then in service_b.yml:

...
env:
  - name: SERVICE_A_IP
    valueFrom:
      configMapKeyRef:
        name: service-a-meta
        key: SERVICE_A_IP
...

That works, but it still involves some shell and generally feels way too hacky.

-- ddnomad
kubernetes

1 Answer

3/22/2018

You can attach handlers to container lifecycle events to update your environment variables on start.

Here is an example:

apiVersion: v1
kind: Pod
metadata:
  name: appb
spec:
  containers:
    - name: appb
      image: nginx
      lifecycle:
        postStart:
          exec:
            command: ["/bin/sh", "-c", "export SERVICE_B_IP=$(host <SERVICE_B>.<SERVICE_B_NAMESPACE>.svc.cluster.local)"]

Kubernetes will run the postStart script each time a pod with your appb container starts, right in the appb container, before execution of the main application.

But, because of that description:

PostStart

This hook executes immediately after a container is created. However, there is no guarantee that the hook will execute before the container ENTRYPOINT. No parameters are passed to the handler.

You need to add some sleep to your main app before the real start, just to be sure that the hook has finished before the application starts.
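A minimal sketch of that sleep workaround, assuming the image's entrypoint can be wrapped in a shell command (the container name, delay, and wrapped command are illustrative):

```yaml
spec:
  containers:
    - name: appb
      image: nginx
      # Delay the real entrypoint so the postStart hook (hopefully) finishes first;
      # the 5-second value is an arbitrary assumption, not a guarantee.
      command: ["/bin/sh", "-c", "sleep 5 && exec nginx -g 'daemon off;'"]
```

One caveat: variables exported inside a postStart exec hook live in that hook's own shell, not in the main process's environment, so in practice the hook would need to write the value to a shared file (or similar) that the wrapped command reads before exec.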

-- Anton Kostenko
Source: StackOverflow