Today our cluster went down, and the gcloud console output looked like this:
]$ gcloud container clusters list
NAME ZONE MASTER_VERSION MASTER_IP MACHINE_TYPE NODE_VERSION NUM_NODES STATUS
xxxxxxxxxxxxxxxxxxx europe-west1-d 1.6.11-gke.0 xx.xx.xx.x n1-standard-1 1.6.4 * 2 ERROR
As one can see, the Kubernetes master version (1.6.11-gke.0) differs from the node pool version (1.6.4). This mismatch caused a conflict within the cluster, which we were able to resolve by upgrading the node pool to match the master version.
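For reference, the upgrade was done roughly along these lines (the cluster and node pool names here are placeholders, not our real ones; flags may vary with your gcloud version):

```shell
# Upgrade the nodes in the given pool to the master's Kubernetes version.
# "my-cluster" and "default-pool" are placeholder names.
gcloud container clusters upgrade my-cluster \
  --node-pool=default-pool \
  --zone=europe-west1-d
```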
What I am trying to understand now is: why was the master on a different version in the first place? Is there an automatic upgrade policy that we should have known about? And if the answer is "yes": why was the node pool not upgraded accordingly?
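In case it is relevant, this is how I am now checking whether node auto-upgrade is enabled on the pool (again, "my-cluster" and "default-pool" are assumed placeholder names):

```shell
# Print the node pool's auto-upgrade setting (True/False).
gcloud container node-pools describe default-pool \
  --cluster=my-cluster \
  --zone=europe-west1-d \
  --format="value(management.autoUpgrade)"
```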