Faced the following issue: I need to add a search domain to some pods so they can communicate with a headless service. The Kubernetes documentation recommends setting a dnsConfig and configuring everything in it, and that's what I did. There is also a limitation that only 6 search domains can be set. Part of the manifest:
spec:
  hostname: search
  dnsPolicy: ClusterFirst
  dnsConfig:
    searches:
      - indexer.splunk.svc.cluster.local
  containers:
    - name: search
Unfortunately it has no effect, and the resolv.conf file in the targeted pod doesn't include this search domain:
search splunk.svc.cluster.local svc.cluster.local cluster.local us-east4-c.c.'project-id'.internal c.'project-id'.internal google.internal
nameserver 10.39.240.10
options ndots:5
After a quick look at this config I found that 6 search domains are already specified, and that is probably why the new one is not added. I can add it manually and everything works, but that isn't what I'm trying to achieve.
Do you have any ideas how to bypass this limitation?
P.S. Setting dnsPolicy to None is not an option, and neither are prestart hooks that add my search zone.
---
# Search-head deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: search
  namespace: splunk
  labels:
    app: splunk
spec:
  replicas: 1
  selector:
    matchLabels:
      app: splunk
  template:
    metadata:
      labels:
        app: splunk
    spec:
      hostname: search
      dnsPolicy: ClusterFirst
      dnsConfig:
        searches:
          - indexer.splunk.svc.cluster.local
      containers:
        - name: search
          image: splunk/splunk
          env:
            - name: SPLUNK_START_ARGS
              value: "--accept-license"
            - name: SPLUNK_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: splunk-password
                  key: password
            - name: SPLUNK_ROLE
              value: splunk_search_head
            - name: SPLUNK_SEARCH_HEAD_URL
              value: search
            - name: SPLUNK_INDEXER_URL # TODO: make this part dynamic.
              value: indexer-0,indexer-1
          ports:
            - name: web
              containerPort: 8000
            - name: mgmt
              containerPort: 8089
            - name: kv
              containerPort: 8191
          volumeMounts:
            - mountPath: /opt/splunk/var
              name: sh-volume
      volumes:
        - name: sh-volume
          persistentVolumeClaim:
            claimName: sh-volume
According to the Pod dnsConfig documentation:
searches
: a list of DNS search domains for hostname lookup in the Pod. This property is optional. When specified, the provided list will be merged into the base search domain names generated from the chosen DNS policy. Duplicate domain names are removed. Kubernetes allows for at most 6 search domains.
Even though the resolv.conf documentation mentions that recent versions accept more than 6 search domains, it is not yet possible to exceed this number through a Kubernetes deployment.
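For illustration, the merge described in that quote can be sketched in shell. This is only a rough model of the behavior, not the kubelet's actual code, and the domain names are placeholders standing in for the cluster- and GCP-generated entries:

```shell
# Rough model of the search-domain merge: the policy-generated domains
# come first, the pod's dnsConfig entries are appended, duplicates are
# removed, and the resulting list is capped at 6 entries.
base="splunk.svc.cluster.local svc.cluster.local cluster.local zone.internal project.internal google.internal"
extra="indexer.splunk.svc.cluster.local"
merged=$(printf '%s\n' $base $extra | awk '!seen[$0]++' | head -n 6 | tr '\n' ' ')
echo "search $merged"
```

Because the base list already holds 6 entries, the pod-supplied domain is the 7th and gets dropped, which matches the symptom in the question: the extra domain never shows up in the pod's resolv.conf.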
I created a workaround in which an InitContainer writes a new resolv.conf to a shared volume, and when the main container starts it copies that file over the automatically generated one. This way, if the container crashes or gets restarted, the resolv.conf is always reapplied.
nginx-emulating-your-splunk-deploy.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: search
  namespace: default
  labels:
    app: splunk
spec:
  replicas: 1
  selector:
    matchLabels:
      app: splunk
  template:
    metadata:
      labels:
        app: splunk
    spec:
      hostname: search
      initContainers:
        - name: initdns
          image: nginx
          imagePullPolicy: IfNotPresent
          command: ["/bin/bash","-c"]
          args: ["echo -e \"nameserver 10.39.240.10\nsearch indexer.splunk.svc.cluster.local splunk.svc.cluster.local svc.cluster.local cluster.local us-east4-c.c.'project-id'.internal c.'project-id'.internal google.internal\noptions ndots:5\n \" > /mnt/resolv.conf"]
          volumeMounts:
            - mountPath: /mnt
              name: volmnt
      containers:
        - name: search
          image: nginx
          env:
            - name: SPLUNK_START_ARGS
              value: "--accept-license"
            - name: SPLUNK_PASSWORD
              value: password
            - name: SPLUNK_ROLE
              value: splunk_search_head
            - name: SPLUNK_SEARCH_HEAD_URL
              value: search
          ports:
            - name: web
              containerPort: 8000
            - name: mgmt
              containerPort: 8089
            - name: kv
              containerPort: 8191
          volumeMounts:
            - mountPath: /mnt
              name: volmnt
          command: ["/bin/bash","-c"]
          args: ["cp /mnt/resolv.conf /etc/resolv.conf ; nginx -g \"daemon off;\""]
      volumes:
        - name: volmnt
          emptyDir: {}
Remember to adjust these fields to your environment: namespace, nameserver, container.image, container.args.
$ kubectl apply -f search-head-splunk.yaml
deployment.apps/search created
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
search-64b6fb5854-shm2x   1/1     Running   0          5m14s
$ kubectl exec -it search-64b6fb5854-shm2x -- cat /etc/resolv.conf
nameserver 10.39.240.10
search indexer.splunk.svc.cluster.local splunk.svc.cluster.local svc.cluster.local cluster.local us-east4-c.c.'project-id'.internal c.'project-id'.internal google.internal
options ndots:5
You can see that resolv.conf stays as configured. Please reproduce it in your environment and let me know if you run into any problems.
EDIT 1:
We have to hardcode the DNS server, but the kube-dns service keeps the same IP for the cluster's lifespan, and sometimes even after cluster recreation, depending on the network configuration.
If you need 6 or fewer domains, you can simply change dnsPolicy to None and skip the InitContainer:
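Rather than guessing the IP, you can read it from the service itself. This assumes the conventional service name kube-dns in the kube-system namespace (CoreDNS-based clusters usually keep this name, but verify on yours):

```
# Print the cluster DNS service IP to use in dnsConfig.nameservers
kubectl -n kube-system get svc kube-dns -o jsonpath='{.spec.clusterIP}'
```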
apiVersion: apps/v1
kind: Deployment
metadata:
  name: search
  namespace: splunk
  labels:
    app: splunk
spec:
  replicas: 1
  selector:
    matchLabels:
      app: splunk
  template:
    metadata:
      labels:
        app: splunk
    spec:
      hostname: search
      dnsPolicy: "None"
      dnsConfig:
        nameservers:
          - 10.39.240.10
        searches:
          - indexer.splunk.svc.cluster.local
          - splunk.svc.cluster.local
          - us-east4-c.c.'project-id'.internal
          - c.'project-id'.internal
          - svc.cluster.local
          - cluster.local
        options:
          - name: ndots
            value: "5"
      containers:
        - name: search
          image: splunk/splunk
          ...
{{{the rest of your config}}}