How to change the cluster.local default domain on kubernetes 1.9 deployed with kubeadm?

1/18/2018

I would like to resolve kube-dns names from outside the Kubernetes cluster by adding a stub zone to my DNS servers. This requires changing the cluster.local domain to something that fits into my DNS namespace.
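For reference, on a BIND server this is typically done with a forward (or stub) zone pointing at the kube-dns service IP. A minimal sketch, with x.y.z standing in for the kube-dns ClusterIP as elsewhere in this question:

// named.conf on the external DNS server (sketch; zone name and IP are placeholders)
zone "cluster.mydomain.local" {
    type forward;
    forward only;
    forwarders { x.y.z; };  // kube-dns service ClusterIP
};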

The cluster DNS is working fine with cluster.local. To change the domain, I modified the KUBELET_DNS_ARGS line in /etc/systemd/system/kubelet.service.d/10-kubeadm.conf to read:

Environment="KUBELET_DNS_ARGS=--cluster-dns=x.y.z --cluster-domain=cluster.mydomain.local --resolv-conf=/etc/resolv.conf.kubernetes"

After restarting kubelet, external names are resolvable, but Kubernetes name resolution fails.
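The restart itself was the standard systemd sequence after editing a drop-in file:

systemctl daemon-reload
systemctl restart kubelet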

I can see that kube-dns is still running with:

/kube-dns --domain=cluster.local. --dns-port=10053 --config-dir=/kube-dns-config --v=2

The only place I was able to find cluster.local was in the pod's YAML configuration, which reads:

  containers:
  - args:
    - --domain=cluster.local.
    - --dns-port=10053
    - --config-dir=/kube-dns-config
    - --v=2

After modifying the YAML and recreating the pod using

kubectl replace --force -f kube-dns.yaml

I still see kube-dns getting started with --domain=cluster.local.

What am I missing?

-- Marcus
kube-dns
kubeadm
kubernetes

5 Answers

6/13/2018

In addition to changing /etc/systemd/system/kubelet.service.d/10-kubeadm.conf, you should run kubeadm init with --service-dns-domain cluster.mydomain.local, which will create the correct manifest for kube-dns.
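For example (the domain is the one from your question; add whatever other init flags your setup needs):

kubeadm init --service-dns-domain cluster.mydomain.local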

It's hard to tell why your modification didn't work without seeing your current config. Perhaps you can post the output of:

kubectl get pod -n kube-system -l k8s-app=kube-dns -o jsonpath={.items[0].spec.containers[0]}

so we can see what you got running.

-- rezroo
Source: StackOverflow

6/8/2018

I had a similar problem while porting a microservices-based application to Kubernetes. Changing our internal DNS zone to cluster.local was going to be a fairly complex task that we didn't really want to deal with.

In our case, we switched from kube-dns to CoreDNS and simply enabled the CoreDNS rewrite plugin to translate our.internal.domain to ourNamespace.svc.cluster.local.

After doing this, the Corefile part of our CoreDNS ConfigMap looks something like this:

data:
  Corefile: |
    .:53 {
        errors
        health
        kubernetes cluster.local in-addr.arpa ip6.arpa {
          pods insecure
          upstream
          fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        rewrite name substring our.internal.domain ourNamespace.svc.cluster.local
        proxy . /etc/resolv.conf
        cache 30
    }

This enables our Kubernetes services to respond on both the default DNS zone and our own zone.
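A quick sanity check, assuming a hypothetical service myservice in ourNamespace, is that both names resolve to the same ClusterIP from inside a pod:

nslookup myservice.ourNamespace.svc.cluster.local
nslookup myservice.our.internal.domain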

-- simon
Source: StackOverflow

4/25/2020

I deployed an internal instance of the ingress controller and added a CNAME template to the CoreDNS config. To deploy the internal nginx-ingress:

helm install int -f ./values.yml stable/nginx-ingress --namespace ingress-nginx

values.yml:

controller:
  ingressClass: 'nginx-internal'
  reportNodeInternalIp: true
  service:
    enabled: true
    type: ClusterIP

To edit the CoreDNS config:

KUBE_EDITOR=nano kubectl edit configmap coredns -n kube-system

My CoreDNS ConfigMap:

apiVersion: v1
data:
  Corefile: |
    .:53 {
        reload 5s
        log
        errors
        health {
          lameduck 5s
        }
        ready
        template ANY A int {
          match "^([^.]+)\.([^.]+)\.int\.$"
          answer "{{ .Name }} 60 IN CNAME int-nginx-ingress-controller.ingress-nginx.svc.cluster.local"
          upstream 127.0.0.1:53
        }
        template ANY CNAME int {
          match "^([^.]+)\.([^.]+)\.int\.$"
          answer "{{ .Name }} 60 IN CNAME int-nginx-ingress-controller.ingress-nginx.svc.cluster.local"
          upstream 127.0.0.1:53
        }
        kubernetes cluster.local in-addr.arpa ip6.arpa {
          pods insecure
          upstream
          fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        forward . "/etc/resolv.conf"
        cache 30
        loop
        loadbalance
    }
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system

If you now give the Ingress resource a host under the ..int domain and add the proper annotation to use the nginx-internal ingress class, you can have a shorter domain. For example, in the Jenkins Helm chart you can configure it like this:

master:
  ingress:
    annotations:
      kubernetes.io/ingress.class: nginx-internal
    enabled: true
    hostName: jenkins.devtools.int
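Then install the chart as usual (release name is illustrative):

helm install jenkins stable/jenkins -f values.yaml

After that, jenkins.devtools.int resolves via the CoreDNS template above to the internal ingress controller.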
-- Adrian Yutrowski
Source: StackOverflow

3/28/2020

If you have deployed k8s with kubeadm, you can change cluster.local in /var/lib/kubelet/config.yaml on every node. Also change it in the kubeadm-config and kubelet-config-1.17 ConfigMaps (kube-system namespace) if you are planning to add more nodes to the cluster. And don't forget to restart the nodes.
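A minimal sketch of the relevant pieces (clusterDomain is the field the kubelet reads; the new domain is illustrative):

# /var/lib/kubelet/config.yaml on every node
clusterDomain: cluster.mydomain.local

# ConfigMaps to update before joining more nodes
kubectl edit configmap kubeadm-config -n kube-system
kubectl edit configmap kubelet-config-1.17 -n kube-system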

-- Kirill Bugaev
Source: StackOverflow

1/19/2018

When you modify the YAML files in /etc/kubernetes/manifests/, you need to restart kubelet again.

Additionally, if that doesn't work, double-check the kubelet logs to see that the proper YAML files are being loaded.
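For example:

systemctl restart kubelet
journalctl -u kubelet -f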

-- Javier Salmeron
Source: StackOverflow