I am seeing a lot of errors in my logs relating to watches. Here's a snippet from my apiserver log on one machine:
W0517 07:54:02.106535       1 reflector.go:289] pkg/storage/cacher.go:161: watch of *api.Service ended with: client: etcd cluster is unavailable or misconfigured
W0517 07:54:02.106553       1 reflector.go:289] pkg/storage/cacher.go:161: watch of *api.PersistentVolumeClaim ended with: client: etcd cluster is unavailable or misconfigured
E0517 07:54:02.120217       1 reflector.go:271] pkg/admission/resourcequota/admission.go:86: Failed to watch *api.ResourceQuota: too old resource version: 790115 (790254)
E0517 07:54:02.120390       1 reflector.go:271] pkg/admission/namespace/lifecycle/admission.go:126: Failed to watch *api.Namespace: too old resource version: 790115 (790254)
E0517 07:54:02.134209       1 reflector.go:271] pkg/admission/serviceaccount/admission.go:102: Failed to watch *api.ServiceAccount: too old resource version: 790115 (790254)

As you can see, there are two types of errors:
- etcd cluster is unavailable or misconfigured: I pass --etcd-servers=http://k8s-master-etcd-elb.eu-west-1.i.tst.nonprod-ffs.io:2379 to the apiserver (this is definitely reachable). Another question seems to suggest that this does not work, but --etcd-cluster is not a recognised option in the version I'm running (1.2.3).
- too old resource version

I see that you are accessing etcd through an ELB proxy on AWS.
I have a similar setup, except that etcd is decoupled from the kube master onto its own 3-node cluster, hidden behind an internal ELB.
I can see the same errors from the kube-apiserver when it is configured to use the ELB. Without the ELB, configured as usual with a list of etcd endpoints, I don't see any errors.
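For comparison, the error-free configuration is the usual one where the apiserver is given the etcd member endpoints directly instead of a single load-balanced address. The hostnames below are placeholders, not from either of our setups:

```shell
# kube-apiserver pointed at the etcd members directly, bypassing the ELB.
# etcd-0/1/2.internal are hypothetical hostnames for illustration only.
kube-apiserver \
  --etcd-servers=http://etcd-0.internal:2379,http://etcd-1.internal:2379,http://etcd-2.internal:2379
```

With a comma-separated list the client can fail over between members itself, whereas behind an ELB every request can land on a different member, which long-lived watch connections do not tolerate well.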
Unfortunately, I don't know the root cause or why this is happening; I will investigate further.