I am using Google Container Engine and I'm getting tons of DNS errors in the logs, for example:
10:33:11.000 I0720 17:33:11.547023 1 dns.go:439] Received DNS Request:kubernetes.default.svc.cluster.local., exact:false
And:
10:46:11.000 I0720 17:46:11.546237 1 dns.go:539] records:[0xc8203153b0], retval:[{10.71.240.1 0 10 10 false 30 0 /skydns/local/cluster/svc/default/kubernetes/3465623435313164}], path:[local cluster svc default kubernetes]
Here is the payload of one of these log entries:
{
  metadata: {
    severity: "ERROR"
    serviceName: "container.googleapis.com"
    zone: "us-central1-f"
    labels: {
      container.googleapis.com/cluster_name: "some-name"
      compute.googleapis.com/resource_type: "instance"
      compute.googleapis.com/resource_name: "fluentd-cloud-logging-gke-master-cluster-default-pool-f5547509-"
      container.googleapis.com/instance_id: "instanceid"
      container.googleapis.com/pod_name: "fdsa"
      compute.googleapis.com/resource_id: "someid"
      container.googleapis.com/stream: "stderr"
      container.googleapis.com/namespace_name: "kube-system"
      container.googleapis.com/container_name: "kubedns"
    }
    timestamp: "2016-07-20T17:33:11.000Z"
    projectNumber: ""
  }
  textPayload: "I0720 17:33:11.547023 1 dns.go:439] Received DNS Request:kubernetes.default.svc.cluster.local., exact:false"
  log: "kubedns"
}
Everything is working; the logs are just polluted with these errors. Any ideas why this is happening, or whether I should be concerned?
Thanks for the question, Aaron. Those messages are actually just tracing/debugging output from the kubedns container and don't indicate that anything is wrong. The fact that they get recorded at ERROR severity has been fixed in Kubernetes at head, and the behavior will be better in the next Kubernetes release.
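As a minimal sketch of why this happens (assuming kubedns logs through the glog package and is run with -logtostderr=true, as Kubernetes components typically are): glog writes its "I..." Info/trace lines to stderr, and the payload above shows stream: "stderr", which the cluster's logging agent labels with ERROR severity regardless of the log line's own level.

package main

// Sketch only: illustrates glog-style verbose logging, not kubedns's actual code.
// Under -logtostderr=true, even Info-level traces are written to stderr, where
// the logging agent picks them up and tags them as ERROR.

import (
	"flag"

	"github.com/golang/glog"
)

func main() {
	// glog registers its flags (-v, -logtostderr, ...) on the standard flag set.
	flag.Parse()

	// Emitted only when run with -v=2 or higher; the "I" prefix marks it as an
	// Info line, but it still goes to stderr when -logtostderr=true is set.
	glog.V(2).Infof("Received DNS Request:%s, exact:%v",
		"kubernetes.default.svc.cluster.local.", false)

	glog.Flush()
}

Running this with -logtostderr=true -v=2 produces a line in the same "I0720 17:33:11.547023 ... dns.go:439] ..." style as the textPayload shown in the question, which is why harmless verbose traces end up looking like errors in the log viewer.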