Based on this, it is possible to create environment variables that are the same across all the pods of the Deployment that you define.
Is there a way to instruct Kubernetes deployment to create pods that have different environment variables?
Use case:
Let's say that I have a monitoring container and I want to create 4 replicas of it. This container has a service that sends e-mails if an environment variable tells it to. E.g., if the env var IS_MASTER is true, then the service proceeds to send those e-mails.
apiVersion: apps/v1
kind: Deployment
metadata:
  ...
spec:
  ...
  replicas: 4
  ...
  template:
    ...
    spec:
      containers:
      - env:
        - name: IS_MASTER
          value: <------------- True only in one of the replicas
(In my case I'm using Helm, but the same thing can be done without Helm as well.)
One other solution is to use the same Helm chart for both cases and run one Helm deployment (release) for each case. You can override env variables with Helm (using --set foo.deployment.isFirst="0" or "1").
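For illustration, assuming Helm 3 syntax and placeholder release/chart names, the two releases could be installed like this (--set-string keeps the value a string so it matches the eq ... "1" check shown below):

helm install monitoring-master ./monitoring-chart --set-string foo.deployment.isFirst=1
helm install monitoring-workers ./monitoring-chart --set-string foo.deployment.isFirst=0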
Please note that Helm/K8s will not allow you to POST the very same configuration twice.
So you will have to conditionally apply some Kubernetes-specific configuration (Secrets, ConfigMaps, etc.) on the first deployment only.
{{- if eq .Values.foo.deployment.isFirst "1" }}
...
...
{{- end }}
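As a concrete sketch of such a guard, assuming a hypothetical ConfigMap template in the chart, it could look like this:

{{- if eq .Values.foo.deployment.isFirst "1" }}
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Release.Name }}-master-config
data:
  IS_MASTER: "true"
{{- end }}

The ConfigMap name and the IS_MASTER key are only examples; the point is that the object is rendered only for the release installed with isFirst=1.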
What you are looking for is, as far as I know, more of an anti-pattern than an impossibility.
From what I understand, you seem to be looking to deploy a scalable/HA monitoring platform that won't mail X times on alerts. You can either add a sidecar container that talks to its siblings and "elects" the master-mailer (a StatefulSet will make this easier), or separate the mailer from the monitoring and have them talk to each other through a Service. That would allow you to load-balance monitoring and mailing separately.
monitoring-1 \                 / mailer-1
monitoring-2 ---> mailer.svc --- mailer-2
monitoring-3 /                 \ mailer-3
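A minimal sketch of the mailer Service for that layout, assuming the mailer Pods carry a hypothetical app: mailer label and listen on port 2525:

apiVersion: v1
kind: Service
metadata:
  name: mailer
spec:
  selector:
    app: mailer        # hypothetical label set on the mailer Deployment's Pods
  ports:
  - port: 2525         # assumed mailer port, adjust to your container
    targetPort: 2525

The monitoring Pods would then call the mailer through the Service's DNS name, and each request is load-balanced to one Pod from the pool.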
Any mailing request will be handled by one and only one mailer from the pool, but that's assuming your Monitoring Pods aren't all triggered together on alerts... If that's not the case, then regardless of your "master" election for the mailer, you will have to tackle that first.
And by tackling that first I mean adding master-election logic to your monitoring platform to orchestrate master fail-overs on events. There are a few ways to do so, but it really depends on what your monitoring platform is and can do...
Although, if your replicas are just there to extend compute power somehow and your master is expected to be static, then simply use a StatefulSet and add a one-liner at runtime along the lines of if hostname == $statefulset-name-0 then MASTER (see the sketch below), but I feel like it's not the best idea.
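A minimal sketch of that StatefulSet variant, assuming a hypothetical my-monitoring image whose original entrypoint is /app/monitor (both are placeholders):

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: monitoring
spec:
  serviceName: monitoring            # headless Service a StatefulSet expects
  replicas: 4
  selector:
    matchLabels:
      app: monitoring
  template:
    metadata:
      labels:
        app: monitoring
    spec:
      containers:
      - name: monitoring
        image: my-monitoring:latest  # hypothetical image
        command: ["/bin/sh", "-c"]
        args:
        - |
          # In a StatefulSet the hostname equals the Pod name: monitoring-0, monitoring-1, ...
          if [ "$(hostname)" = "monitoring-0" ]; then
            export IS_MASTER=true
          else
            export IS_MASTER=false
          fi
          exec /app/monitor          # hypothetical original entrypoint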
By definition, each Pod in a Deployment is identical to its other replicas, so this is not possible in the YAML definition.
An optional solution would be to override the Pod command and have it calculate the value of the variable, set it (export IS_MASTER=${resolved_value}) and then trigger the default entrypoint of the container.
It means you'll have to figure out the logic to implement this (i.e. how does the Pod know it should have IS_MASTER=true?). This is an implementation detail that can be handled with a DB or another shared resource used as a flag or semaphore, as in the sketch below.
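A minimal sketch of that entrypoint override, assuming a hypothetical resolve-is-master helper (whatever shared-resource check you implement) and a hypothetical default entrypoint /app/service:

spec:
  containers:
  - name: monitoring
    image: my-monitoring:latest      # hypothetical image
    command: ["/bin/sh", "-c"]
    args:
    - |
      # The helper encapsulates the shared flag/semaphore lookup (logic is up to you)
      resolved_value="$(resolve-is-master || echo false)"   # hypothetical helper
      export IS_MASTER="${resolved_value}"
      exec /app/service              # hand off to the container's default entrypoint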
All the Pod replicas in the deployment will have the same environment variables and no unique value to identify a particular Pod. Creating multiple Deployments is a better solution.
Not sure why, but the OP asks for only one Deployment. One solution is to use a StatefulSet. The Pod names would be like web-0, web-1, web-2 and so on. In the code, check the hostname: if it is web-0, send emails, otherwise do something else.
It's a dirty solution, but I can't think of a better one than creating multiple Deployments.