Neo4j performance on Kubernetes is too slow

5/14/2018

I am using Kubernetes for deployment. We have a Node.js and Neo4j based application stack.

For the lower environment, we are using a single Neo4j core instance, running in Kubernetes itself, alongside the Node.js based application. In that case it works fine. For example, a simple login API call takes around 660ms.

But for the higher environment, we are using a causal cluster installed with this Helm chart. It is a three-machine cluster; each machine holds one core member and one read replica. We are using pod affinity to schedule all cores and read replicas onto t2.xlarge instances on AWS.
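For reference, a minimal sketch of the kind of scheduling constraint I mean, assuming the nodes carry the standard instance-type label; how exactly this is wired into the Helm chart's values is an assumption, not our literal configuration:

```yaml
# Sketch: pin Neo4j pods to t2.xlarge nodes via node affinity.
# The label key is the standard Kubernetes instance-type label
# (circa 2018; newer clusters use node.kubernetes.io/instance-type).
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
        - matchExpressions:
            - key: beta.kubernetes.io/instance-type
              operator: In
              values:
                - t2.xlarge
```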

But the performance of this cluster is too slow. The same code and the same login API call takes around 4.93 seconds.

I have assigned 4GB of heap memory to each core and a minimum of 2GB to each read replica. Even with all of this configuration, cluster performance is too slow, and I am not sure what is wrong here.
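For context, heap sizes like these are typically passed to the official Neo4j Docker image as environment variables (dots in the setting name become underscores, and underscores are doubled). A minimal sketch of what that looks like in a container spec; whether the Helm chart exposes these exact keys in its values is an assumption:

```yaml
# Sketch: Neo4j heap settings as environment variables,
# following the official Neo4j Docker image's naming convention.
env:
  - name: NEO4J_dbms_memory_heap_initial__size
    value: "4G"
  - name: NEO4J_dbms_memory_heap_max__size
    value: "4G"
```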

Can someone please point out what I am doing wrong?

I appreciate the help I always get from the Stack Overflow community.

-- Swapnil B.
kubernetes
kubernetes-helm
neo4j

1 Answer

5/26/2018

I was able to solve this issue. The problem wasn't in Neo4j, which was my initial assumption since that was the only change in the new cluster. There were a couple of issues (sketches of both fixes follow below):

1) One was a query that took a lot of time. We had a Cypher query that would first search for all nodes with certain attributes and then, using the result of that, look up the relationships between those nodes in follow-up queries. This is not the ideal way to write a query. We had to rewrite the multi-step query as a single statement.

2) Our backend service was consuming a lot of CPU. I had put a CPU limit on the backend service, which was making the query and the processing of its results very slow. Increasing the limit on the backend service solved our issue.
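To illustrate the first fix, here is the shape of the change as a hedged sketch; the labels, properties, and relationship type below are made up, not our actual schema. The slow pattern collected matching nodes with one query and then issued a follow-up query per node; the fix expresses the whole traversal in one statement, so Neo4j plans it as a single query and the driver makes one round trip.

```cypher
// Slow pattern (sketch): fetch candidate nodes first...
MATCH (u:User) WHERE u.status = 'active' RETURN u.id;
// ...then, for each returned id, issue a second query like this:
MATCH (u:User {id: $id})-[:FRIENDS_WITH]->(f:User) RETURN f;

// Fixed pattern (sketch): one statement, one plan, one round trip.
MATCH (u:User)-[:FRIENDS_WITH]->(f:User)
WHERE u.status = 'active'
RETURN u.id, collect(f) AS friends;
```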
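And a sketch of the second fix: raising the CPU limit on the backend service's container spec. The numbers here are placeholders, not our production values.

```yaml
# Sketch: container resources for the Node.js backend service.
# A too-low CPU limit was throttling query processing;
# raising it removed the slowdown.
resources:
  requests:
    cpu: "500m"
    memory: "512Mi"
  limits:
    cpu: "2"        # was much lower (e.g. "250m"), causing throttling
    memory: "1Gi"
```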

-- Swapnil B.
Source: StackOverflow