acs-engine with custom vnet dns: error server misbehaving

2/12/2018

With acs-engine I have created a k8s cluster with a custom vnet. The cluster was deployed and the pods are running. When I run kubectl get nodes or kubectl get pods I get a reply, but when I use kubectl exec to get into a pod or run helm install, I get this error:

Error from server: error dialing backend: dial tcp: lookup k8s-agentpool on 10.40.1.133:53: server misbehaving
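The error means the API server cannot resolve the agent node's hostname through the custom DNS server at 10.40.1.133 when it proxies exec/attach (and helm/tiller) traffic to the node's kubelet, which is why plain kubectl get calls still work. One way to confirm this is to query that DNS server directly for a node name; a minimal check, with a hypothetical node hostname (substitute a real one from kubectl get nodes):

# Run from a master node or any machine that can reach the custom DNS server.
# The node name below is hypothetical; take an actual name from `kubectl get nodes`.
nslookup k8s-agentpool-12345678-0 10.40.1.133

# If the custom DNS server has no record for the node, this lookup fails,
# which is exactly what the API server hits when dialing the kubelet backend.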

I used the following JSON file to create the ARM templates: acs-engine.json
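For context, the usual workflow with such an apimodel is to generate the ARM templates with acs-engine and deploy them with the Azure CLI; a minimal sketch, where RESOURCE_GROUP and DNS_PREFIX are placeholders for your own values:

# Generate ARM templates from the apimodel; output goes to _output/<dnsPrefix>/ by default
acs-engine generate acs-engine.json

# Deploy the generated templates into the existing resource group that holds the custom vnet
az group deployment create \
  --resource-group RESOURCE_GROUP \
  --template-file _output/DNS_PREFIX/azuredeploy.json \
  --parameters @_output/DNS_PREFIX/azuredeploy.parameters.json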

When not using a custom vnet, the default Azure DNS is used; with a custom vnet, our own DNS servers are used. Is the only option to register all masters and agents in the DNS server?

-- bramvdk
azure
kubernetes

1 Answer

2/13/2018

Resolved it by adding all cluster nodes to our DNS servers.
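In other words, the custom DNS servers configured on the vnet need A records for every master and agent hostname. A small sketch for listing the hostnames and internal IPs to register (the jsonpath query is standard kubectl; how you add the records depends on your DNS product):

# Print each node's hostname and internal IP, one pair per line,
# so they can be added as A records on the custom DNS servers.
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name} {.status.addresses[?(@.type=="InternalIP")].address}{"\n"}{end}'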

-- bramvdk
Source: StackOverflow