Kubernetes + locust (load testing)

11/30/2016

The Locust worker configuration had to be modified; there are currently 130 worker nodes. I exported the deployment as a YAML file, edited it, and applied the modified configuration to the locust worker deployment.
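A minimal sketch of that export/edit/apply flow, assuming the worker Deployment is named locust-worker (adjust to whatever kubectl get deployments reports):

  # Export the current worker deployment to a file
  kubectl get deployment locust-worker -o yaml > locust-worker.yaml

  # Edit the file (e.g. the env section of the container spec), then re-apply it
  kubectl apply -f locust-worker.yaml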

The workers have been restarted and re-initialized with the new configuration. They are all running with the new environment variable that I modified previously.
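To confirm the restarted pods actually picked up the change, you can dump the environment of one worker pod; TARGET_HOST below is a hypothetical variable name, substitute the one you edited:

  # Print one worker pod's environment and filter for the variable
  kubectl exec <locust-worker-pod-name> -- env | grep TARGET_HOST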

The issue is that the node count in the Locust dashboard has doubled: when the workers were restarted and came back up, the Locust UI added each of them as a new node but did not remove the inactive ones.

This is the current situation:

host-xxx:~/pula/distributed-load-testing-using-kubernetes/kubernetes-config # kubectl get pods -o wide|wc -l
134
host-xxx:~/pula/distributed-load-testing-using-kubernetes/kubernetes-config # kubectl get pods|grep Running|wc -l
133
host-xxx:~/pula/distributed-load-testing-using-kubernetes/kubernetes-config # 
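If the worker pods carry a label (the example assumes name=locust-worker; check with kubectl get pods --show-labels), counting only those avoids the header line and the master pod skewing the numbers:

  # Count only running worker pods, without the header line
  kubectl get pods -l name=locust-worker --no-headers | grep Running | wc -l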

Dashboard:

  STATUS: HATCHING, 85 users
  **SLAVES: 260**
  RPS: 0
  FAILURES: 0%

The statistics table shows no requests yet (all zeros).

What would be a quick way to re-initialize the Locust master so that it reports the real number of nodes?

Thanks

-- Maverik
kubernetes
locust

2 Answers

12/2/2016

The only way to reset the master node's dashboard for now is to reschedule the master and start with a clean pod. You can do this by scaling the deployment down with kubectl scale deployment/locust-master --replicas=0 and scaling it back up with kubectl scale deployment/locust-master --replicas=1. Note that this discards any results you have already gathered on the master.
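As a runnable sketch of that sequence, assuming the master Deployment is named locust-master:

  # Remove the current master pod, discarding its in-memory stats
  kubectl scale deployment/locust-master --replicas=0

  # Bring a fresh master back up and wait until it is ready
  kubectl scale deployment/locust-master --replicas=1
  kubectl rollout status deployment/locust-master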

It's more a Locust problem than something that k8s can solve, imo.

-- jonas kint
Source: StackOverflow

10/11/2017

The issue seems to be that once a worker node tries to register and the master is not up, it won't retry. Nor does the master keep communicating with the workers to check whether they are still alive.

Delete the master pod and wait for it to come back up. It will now report zero slaves.

Then delete the node/worker pods so that they re-register. To delete pods by label, you can use the command below:

  # Delete pods and services with label name=myLabel.
  kubectl delete pods,services -l name=myLabel
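Putting the two steps together as one hedged sequence, assuming the pods are labelled name=locust-master and name=locust-worker (your manifests may use different labels):

  # 1. Delete the master pod and wait for its replacement to be ready
  kubectl delete pods -l name=locust-master
  kubectl rollout status deployment/locust-master

  # 2. Delete the worker pods so their replacements re-register with the new master
  kubectl delete pods -l name=locust-worker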
-- Shambu
Source: StackOverflow