I am running the e2e tests on a local Kubernetes cluster with the following command:
go run hack/e2e.go -- --provider=local --test --check-version-skew=false --test_args="--host=https://192.168.1.5:6443 --ginkgo.focus=\[Feature:Performance\]"
The run fails with the following errors:
[Feature:Performance] should allow starting 30 pods per node using { ReplicationController} with 0 secrets, 0 configmaps and 0 daemons [BeforeEach]
• Failure in Spec Setup (BeforeEach) [6.331 seconds]
[sig-scalability] Density
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scalability/framework.go:22
[Feature:Performance] should allow starting 30 pods per node using { ReplicationController} with 0 secrets, 0 configmaps and 0 daemons [BeforeEach]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scalability/density.go:554
Expected error:
<*errors.errorString | 0xc421733010>: {
s: "Namespace e2e-tests-containers-ssgmn is active",
}
Namespace e2e-tests-containers-ssgmn is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scalability/density.go:466
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSJul 14 00:02:24.065: INFO: Running AfterSuite actions on all node
Jul 14 00:02:24.065: INFO: Running AfterSuite actions on node 1
Summarizing 2 Failures:
[Fail] [sig-scalability] Load capacity [BeforeEach] [Feature:Performance] should be able to handle 30 pods per node { ReplicationController} with 0 secrets, 0 configmaps and 0 daemons
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scalability/load.go:156
[Fail] [sig-scalability] Density [BeforeEach] [Feature:Performance] should allow starting 30 pods per node using { ReplicationController} with 0 secrets, 0 configmaps and 0 daemons
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scalability/density.go:466
Ran 2 of 998 Specs in 12.682 seconds
FAIL! -- 0 Passed | 2 Failed | 0 Pending | 996 Skipped --- FAIL: TestE2E (12.71s)
It seems the local Kubernetes cluster has a limit on pods per node. How can I fix this? The local cluster configuration is:
leeivan@master01:~/gowork/src/k8s.io/kubernetes$ kubectl get nodes
NAME       STATUS    ROLES     AGE    VERSION
master01   Ready     master    10d    v1.11.0
node01     Ready     <none>    10d    v1.11.0
node02     Ready     <none>    10d    v1.11.0
node03     Ready     <none>    10d    v1.11.0
node04     Ready     <none>    10d    v1.11.0
node05     Ready     <none>    10d    v1.11.0
leeivan@master01:~/gowork/src/k8s.io/kubernetes$ kubectl version
Client Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.0", GitCommit:"91e7b4fd31fcd3d5f436da26c980becec37ceefe", GitTreeState:"clean", BuildDate:"2018-06-27T20:17:28Z", GoVersion:"go1.10.2", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.0", GitCommit:"91e7b4fd31fcd3d5f436da26c980becec37ceefe", GitTreeState:"clean", BuildDate:"2018-06-27T20:08:34Z", GoVersion:"go1.10.2", Compiler:"gc", Platform:"linux/amd64"}
According to the kubelet documentation:
--max-pods int32
Number of Pods that can run on this Kubelet. (default 110)
So 110 should be enough to pass the tests. However, it is possible that the test measures the real capacity of your nodes in terms of Allocatable.CPU and Allocatable.Memory.
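To check whether this is actually the bottleneck, you can inspect each node's pod capacity (the allocatable pod count reflects the kubelet's --max-pods setting) together with its allocatable CPU and memory. As a sketch, using the node names from your cluster above:

kubectl get nodes -o custom-columns=NAME:.metadata.name,PODS:.status.allocatable.pods,CPU:.status.allocatable.cpu,MEMORY:.status.allocatable.memory

kubectl describe node node01 | grep -A 6 Allocatable

With the default of 110 pods per node and five worker nodes, starting 30 pods per node should fit comfortably unless CPU or memory is constrained.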
Also, all leftover test namespaces must be fully deleted before the test runs, as this comment in the test code explains:
// Terminating a namespace (deleting the remaining objects from it - which
// generally means events) can affect the current run. Thus we wait for all
// terminating namespace to be finally deleted before starting this test.
It looks like one of your namespaces was still active, so the test failed.
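As an illustration (the namespace name below is taken from your error output; any other e2e-tests-* namespaces left over from previous runs should be handled the same way), you can list and remove lingering test namespaces before re-running:

kubectl get ns | grep e2e-tests
kubectl delete namespace e2e-tests-containers-ssgmn

Wait until kubectl get ns no longer shows any e2e-tests-* namespaces (deletion can take a while if objects are still being finalized), then re-run the test.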