I have a Kubernetes Deployment with the spec below that gets installed via Helm 3.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: gatekeeper
spec:
  replicas: 1
  template:
    spec:
      containers:
        - name: gatekeeper
          image: my-gatekeeper-image:some-sha
          args:
            - --listen=0.0.0.0:80
            - --client-id=gk-client
            - --discovery-url={{ .Values.discoveryUrl }}
I need to pass the discoveryUrl value as a Helm value; it is the public IP address of the nginx-ingress service that I deploy via a different Helm chart. I install the above deployment like this:
helm3 install my-nginx-ingress-chart
INGRESS_IP=$(kubectl get svc -lapp=nginx-ingress -o=jsonpath='{.items[].status.loadBalancer.ingress[].ip}')
helm3 install my-gatekeeper-chart --set discoveryUrl=${INGRESS_IP}
This works fine. However, instead of these two helm3 install commands, I now want a single helm3 install that creates both the nginx-ingress and the gatekeeper deployments.
I understand that in an initContainer of my-gatekeeper-image we can get the nginx-ingress IP address, but I am not able to figure out how to set it as an environment variable or pass it to the container spec.
There are some Stack Overflow questions that mention we can create a persistent volume or a Secret to achieve this, but I am not sure how that would work when they have to be deleted. I do not want to create any extra objects and manage their lifecycle.
Using init containers is indeed a valid solution, but be aware that by doing so you are adding complexity to your deployment.
This is because you would also need to create a ServiceAccount with permissions to read Service objects from inside the init container. Then, once you have the IP, you cannot just set an environment variable on the gatekeeper container without recreating the pod, so you would need to save the IP, e.g. to a shared file, and read it from there when starting gatekeeper.
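For reference, a minimal RBAC sketch for that permission could look like the following (the svc-reader names here are illustrative, not from your chart):
apiVersion: v1
kind: ServiceAccount
metadata:
  name: svc-reader
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: svc-reader
rules:
  - apiGroups: [""]
    resources: ["services"]
    verbs: ["get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: svc-reader
subjects:
  - kind: ServiceAccount
    name: svc-reader
roleRef:
  kind: Role
  name: svc-reader
  apiGroup: rbac.authorization.k8s.io
The pod spec would then reference it with serviceAccountName: svc-reader so kubectl inside the init container is allowed to read the nginx-ingress service.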
Alternatively, you can reserve an IP address if your cloud provider supports this feature, and use that static IP when deploying the nginx service:
apiVersion: v1
kind: Service
[...]
spec:
  type: LoadBalancer
  loadBalancerIP: "YOUR.IP.ADDRESS.HERE"
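How you reserve the address depends on the provider. On GCP, for example, it could look like this (the address name and region are placeholders):
gcloud compute addresses create nginx-ingress-ip --region us-central1
gcloud compute addresses describe nginx-ingress-ip --region us-central1 --format='value(address)'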
Let me know if you have any questions or if something needs clarification.
It is not possible to do this without mounting a volume that is shared between the init container and the main container. But that volume does not have to be backed by a block storage device; an in-memory emptyDir works, so there is no extra lifecycle management to do. The way to achieve that is:
apiVersion: v1
kind: ConfigMap
metadata:
  name: gatekeeper
data:
  gatekeeper.sh: |-
    #!/usr/bin/env bash
    set -e
    INGRESS_IP=$(kubectl get svc -lapp=nginx-ingress -o=jsonpath='{.items[].status.loadBalancer.ingress[].ip}')
    # Do other validations/cleanup
    echo $INGRESS_IP > /opt/gkconf/discovery_url
    exit 0
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: gatekeeper
  labels:
    app: gatekeeper
spec:
  replicas: 1
  selector:
    matchLabels:
      app: gatekeeper
  template:
    metadata:
      name: gatekeeper
      labels:
        app: gatekeeper
    spec:
      initContainers:
        - name: gkinit
          command: [ "/opt/gk-init.sh" ]
          image: 'bitnami/kubectl:1.12'
          volumeMounts:
            - mountPath: /opt/gkconf
              name: gkconf
            - mountPath: /opt/gk-init.sh
              name: gatekeeper
              subPath: gatekeeper.sh
              readOnly: false
      containers:
        - name: gatekeeper
          image: my-gatekeeper-image:some-sha
          # ENTRYPOINT of the above image should read the
          # file /opt/gkconf/discovery_url and then launch
          # the actual gatekeeper binary
          imagePullPolicy: Always
          ports:
            - containerPort: 80
              protocol: TCP
          volumeMounts:
            - mountPath: /opt/gkconf
              name: gkconf
      volumes:
        - name: gkconf
          emptyDir:
            medium: Memory
        - name: gatekeeper
          configMap:
            name: gatekeeper
            defaultMode: 0555
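For completeness, the ENTRYPOINT wrapper mentioned in the comment above could look roughly like this (a sketch; the binary path /usr/local/bin/gatekeeper is an assumption, and the flags are taken from the question's args):
#!/usr/bin/env bash
set -e
# Read the IP that the init container wrote to the shared emptyDir volume
INGRESS_IP=$(cat /opt/gkconf/discovery_url)
# Replace this shell with the actual gatekeeper binary (path is an assumption)
exec /usr/local/bin/gatekeeper \
  --listen=0.0.0.0:80 \
  --client-id=gk-client \
  --discovery-url="http://${INGRESS_IP}"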