Is there any way to read hostAliases from values with a ConfigMap in Kubernetes?

10/16/2019

I would like to know if there is any way to externalize my hostAliases so they can be read from the values file and varied per environment.

deployment.yaml

...
hostAliases:
  valueFrom:
    configMapKeyRef:
      name: host-aliases-configuration
      key: hostaliases


configmap.yaml

kind: ConfigMap
metadata:
  name: host-aliases-configuration
data:
  hostaliases: |
    {{ .Values.hosts }}


values.yaml

hosts:
  - ip: "13.21.219.253"
    hostnames:
    - "test-test.com"
  - ip: "13.71.225.255"
    hostnames:
    - "test-test.net"

This doesn't work:

helm install --name gateway .

Error: release gateway failed: Deployment in version "v1" cannot be handled as a Deployment: v1.Deployment.Spec: v1.DeploymentSpec.Template: v1.PodTemplateSpec.Spec: v1.PodSpec.HostAliases: []v1.HostAlias: decode slice: expect [ or n, but found {, error found in #10 byte of ...|Aliases":{"valueFrom|..., bigger context ...|config","name":"config-volume"}]}],"hostAliases":{"valueFrom":{"configMapKeyRef":{"key":"hostaliases|...

I would like to know if there is any way to externalize these URLs per environment, perhaps using another approach.

-- Bruno Macedo
configmap
hosts
kubernetes

2 Answers

10/17/2019

For the main question: you got that error because hostAliases expects an array of entries, while the valueFrom: configMapKeyRef block is an object; that mechanism supplies key-value pairs for environment variables and cannot populate pod-spec fields like hostAliases.
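For contrast, valueFrom: configMapKeyRef does work inside a container's env section, e.g. (the variable and image names here are hypothetical):

containers:
- name: gateway
  image: gateway          # illustrative image
  env:
  - name: HOST_ALIASES    # hypothetical variable name
    valueFrom:
      configMapKeyRef:
        name: host-aliases-configuration
        key: hostaliases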

1. You can try:

deployment.yaml
...
 hostAliases:
{{ toYaml .Values.hosts | indent 4 }}  

values.yaml
hosts:
  - ip: "13.21.219.253"
    hostnames:
    - "test-test.com"
  - ip: "13.71.225.255"
    hostnames:
    - "test-test.net"

Note - hostAliases:

Because of the managed-nature of the file, any user-written content will be overwritten whenever the hosts file is remounted by Kubelet in the event of a container restart or a Pod reschedule. Thus, it is not suggested to modify the contents of the file.

Please refer to HostAliases

In addition, those addresses will only be used at the Pod level.

2. It's not clear what you are trying to do.

Take a look at external IPs; this should be done at the Service level.

If there are external IPs that route to one or more cluster nodes, Kubernetes Services can be exposed on those externalIPs. Traffic that ingresses into the cluster with the external IP (as destination IP), on the Service port, will be routed to one of the Service endpoints. externalIPs are not managed by Kubernetes and are the responsibility of the cluster administrator.
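A minimal sketch of a Service using externalIPs (the service name, selector, and ports below are illustrative, reusing one of the IPs from the question):

kind: Service
apiVersion: v1
metadata:
  name: my-service    # illustrative name
spec:
  selector:
    app: my-app       # illustrative selector
  ports:
  - port: 80
    targetPort: 8080
  externalIPs:
  - 13.21.219.253     # traffic arriving at this IP on port 80 is routed to the Service endpoints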

Hope this helps.

-- Hanx
Source: StackOverflow

10/16/2019

I had the same problem.

The solution I finally came up with was to create an external-hosts chart that includes all my external IP references (abstracted as ClusterIP Services), and to include that chart in the requirements.yaml of every chart.

requirements.yaml of every chart:

dependencies:
- name: external-hosts
  version: "0.1.*"
  repository: "file://../external-hosts"

The external-hosts chart itself contains:

values.yaml: a list of hosts + the needed ports:

headless:
- host: test-test.com
  ip: "13.21.219.253"
  ports:
  - 80
  - 443
- host: test-test.net
  ip: "13.71.225.255"
  ports:
  - 3306

templates/headless.yaml: this one creates, for each host, a ClusterIP Service with a single endpoint. A little overwhelming, but it just works.

{{ range .Values.headless }}
---
kind: Service
apiVersion: v1
metadata:
  name: {{ .host }}
  labels:
{{ include "external-hosts.labels" $ | indent 4 }}
spec:
  ports:
  {{ range .ports }}
  - name: {{ . | quote }}
    port: {{ . }}
    targetPort: {{ . }}
  {{ end }}
{{ end }}
---

{{ range .Values.headless }}
---
kind: Endpoints
apiVersion: v1
metadata:
  name: {{ .host }}
  labels:
{{ include "external-hosts.labels" $ | indent 4 }}
subsets:
  - addresses:
      - ip: {{ .ip }}
    ports:
    {{ range .ports }}
    - name: {{ . | quote }}
      port: {{ . }}
    {{ end }}
{{ end }}
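For illustration, the first entry in the values.yaml above should render to something like this (labels omitted):

---
kind: Service
apiVersion: v1
metadata:
  name: test-test.com
spec:
  ports:
  - name: "80"
    port: 80
    targetPort: 80
  - name: "443"
    port: 443
    targetPort: 443
---
kind: Endpoints
apiVersion: v1
metadata:
  name: test-test.com
subsets:
  - addresses:
      - ip: 13.21.219.253
    ports:
    - name: "80"
      port: 80
    - name: "443"
      port: 443

The idea is that cluster DNS resolves the host name to the Service's cluster IP, which then forwards to the external address through the manually-defined Endpoints.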
-- Efrat Levitan
Source: StackOverflow