I'm trying to run E2E Kubernetes tests using Sonobuoy, but if one of the nodes has custom taints (NoSchedule), the run fails with:
Nov 26 08:31:14.626: INFO: >>> kubeConfig: /tmp/kubeconfig-744133672
Nov 26 08:31:14.635: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Nov 26 09:01:14.652: INFO: Unexpected error occurred: timed out waiting for the condition
Failure [1800.028 seconds]
[BeforeSuite] BeforeSuite
/workspace/anago-v1.12.1-beta.0.52+4ed3216f3ec431/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:137
Expected error:
<*errors.errorString | 0xc4200836b0>: {
s: "timed out waiting for the condition",
}
timed out waiting for the condition
not to have occurred
/workspace/anago-v1.12.1-beta.0.52+4ed3216f3ec431/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:175
The Kubernetes version is 1.12. If I remove the taints from my test cluster, the E2E tests pass successfully.
I've found that this was fixed in version 1.17 (see: https://github.com/vmware-tanzu/sonobuoy/issues/599, https://github.com/kubernetes/kubernetes/issues/74282, https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.17.md).
Is there a workaround to run E2E tests on a production Kubernetes 1.12 cluster that has node taints, other than waiting for 1.17 and upgrading the cluster?
I think there's a workaround for Kubernetes versions prior to 1.17.
On Kubernetes v1.16 you can run Sonobuoy (v0.16.1 or higher) and pass the test framework flag --allowed-not-ready-nodes=1 through the e2e plugin's extra args:
sonobuoy run --plugin-env=e2e.E2E_EXTRA_ARGS="--allowed-not-ready-nodes=1"
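For completeness, here is a minimal sketch of the full workflow around that command. The subcommands (run, retrieve, results, delete) and --wait are standard Sonobuoy CLI; the plugin name "e2e" and the E2E_EXTRA_ARGS variable assume the default e2e plugin shipped with Sonobuoy v0.16.1+, so verify them against your Sonobuoy version:
# start the run, passing the extra framework flag to the e2e plugin, and block until it finishes
sonobuoy run --plugin-env=e2e.E2E_EXTRA_ARGS="--allowed-not-ready-nodes=1" --wait
# fetch the results tarball from the aggregator
results=$(sonobuoy retrieve)
# print a pass/fail summary from the tarball
sonobuoy results "$results"
# clean up the sonobuoy namespace and cluster resources
sonobuoy delete --wait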
On Kubernetes versions prior to v1.16 it is more complicated. I haven't tested this, but according to the docs the underlying test framework flag is the same:
--allowed-not-ready-nodes=1
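One way to use that flag on an older cluster, bypassing Sonobuoy entirely, is to run the upstream e2e.test binary directly. This is only a sketch I haven't verified on 1.12: the download URL pattern and the path inside the tarball are assumptions you should check against the release notes for your exact cluster version, while --provider, --kubeconfig, --ginkgo.focus and --allowed-not-ready-nodes are real e2e framework flags:
# download the test binaries matching the cluster version (URL pattern is an assumption; verify it)
curl -LO https://dl.k8s.io/v1.12.1/kubernetes-test.tar.gz
tar -xzf kubernetes-test.tar.gz
# run the conformance subset directly, tolerating one node that never becomes schedulable
./kubernetes/platforms/linux/amd64/e2e.test \
  --provider=skeleton \
  --kubeconfig="$HOME/.kube/config" \
  --ginkgo.focus='\[Conformance\]' \
  --allowed-not-ready-nodes=1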