On upgrading Kubernetes from 1.0.6 to 1.1.3, I now see many errors like the ones below during a rolling upgrade whenever any of my kube master or etcd hosts is down. We currently have a single master and two etcd hosts.
2015-12-11T19:30:19.061+00:00 kube-master1 [err] [kube-apiserver] E1211 19:30:18.726490 26551 errors.go:62] apiserver received an error that is not an unversioned.Status: too old resource version: 3871210 (3871628)
2015-12-11T19:30:19.075+00:00 kube-master1 [err] [kube-apiserver] E1211 19:30:18.733331 26551 errors.go:62] apiserver received an error that is not an unversioned.Status: too old resource version: 3871156 (3871628)
2015-12-11T19:30:19.081+00:00 kube-master1 [err] [kube-apiserver] E1211 19:30:18.736569 26551 errors.go:62] apiserver received an error that is not an unversioned.Status: too old resource version: 3871623 (3871628)
2015-12-11T19:30:19.095+00:00 kube-master1 [err] [kube-apiserver] E1211 19:30:18.740328 26551 errors.go:62] apiserver received an error that is not an unversioned.Status: too old resource version: 3871622 (3871628)
2015-12-11T19:30:19.110+00:00 kube-master1 [err] [kube-apiserver] E1211 19:30:18.742972 26551 errors.go:62] apiserver received an error that is not an unversioned.Status: too old resource version: 3871210 (3871628)
I believe these errors are caused by a new feature in 1.1: the --watch-cache option, which is now enabled by default. The errors cease at the end of the rolling upgrade.
I would like to understand what these errors mean, whether they can be safely ignored, and how to change the system to avoid them in the future (as a longer-term solution).
Yes - as you suggested, those errors are related to the new feature of serving watch requests from an in-memory cache in the apiserver.
So, if I understand correctly, what happened here is:
- you upgraded (or, in general, restarted) the apiserver
- this caused all existing watch connections to terminate
- once the apiserver started successfully, it regenerated its internal in-memory cache
- since watch can lag slightly, it is possible that clients (which were renewing their watch connections) were slightly behind
- this generated the errors above and forced those clients to relist and start watching from the new point (see the sketch below)
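To illustrate that relist-and-rewatch behavior, here is a minimal sketch of a client-side watch loop using a current client-go client. The function name watchPods, the choice of the pods resource, and the "default" namespace are illustrative assumptions on my part, not the code your 1.1 clients actually ran:

```go
// Minimal sketch of relisting and rewatching when the apiserver reports
// "too old resource version" (HTTP 410 Gone), e.g. after an apiserver restart.
package main

import (
	"context"
	"fmt"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/watch"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

// watchPods lists pods to obtain a fresh resourceVersion, then watches from
// that version. If the version turns out to be too old, it relists and
// resumes watching from the new point.
func watchPods(ctx context.Context, cs *kubernetes.Clientset, namespace string) error {
	for {
		// Relist to get a resourceVersion the (possibly restarted) apiserver knows about.
		list, err := cs.CoreV1().Pods(namespace).List(ctx, metav1.ListOptions{})
		if err != nil {
			return fmt.Errorf("list pods: %w", err)
		}

		w, err := cs.CoreV1().Pods(namespace).Watch(ctx, metav1.ListOptions{
			ResourceVersion: list.ResourceVersion,
		})
		if err != nil {
			if apierrors.IsResourceExpired(err) || apierrors.IsGone(err) {
				continue // our resourceVersion is too old: relist and retry
			}
			return fmt.Errorf("watch pods: %w", err)
		}

		for event := range w.ResultChan() {
			if event.Type == watch.Error {
				// The server sent an error event (for example after a restart
				// rebuilt its watch cache); break out, relist, and rewatch.
				break
			}
			fmt.Printf("event: %s\n", event.Type)
		}
		w.Stop()

		select {
		case <-ctx.Done():
			return ctx.Err()
		default:
			// fall through and relist
		}
	}
}

func main() {
	config, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	_ = watchPods(context.Background(), cs, "default")
}
```

This is roughly what the client-side reflector machinery does for controllers automatically: on a "too old resource version" response it falls back to a fresh list and resumes watching from the returned resourceVersion, which is why the errors you saw were transient.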
IIUC, those errors were present only during the upgrade and disappeared afterwards - so that's good.
In other words, such errors may appear on upgrade (or, in general, immediately after any restart of the apiserver). In such situations they may be safely ignored.
In fact, these should probably not be errors at all - we can probably change them to warnings.