I have installed a Neo4j cluster on AWS EKS using the stable Helm chart:
helm install --name neo4j-stg stable/neo4j --set core.numberOfServers=3,readReplica.numberOfServers=3 --set neo4jPassword=**** --set acceptLicenseAgreement=yes
After that, the following pods were launched for the Neo4j cluster:
# kubectl get pod
neo4j-stg-neo4j-core-0 1/1 Running 0 70m
neo4j-stg-neo4j-core-1 1/1 Running 0 70m
neo4j-stg-neo4j-core-2 1/1 Running 0 70m
neo4j-stg-neo4j-replica-554bd99b98-7chx9 1/1 Running 0 70m
neo4j-stg-neo4j-replica-554bd99b98-gr7hp 1/1 Running 0 70m
neo4j-stg-neo4j-replica-554bd99b98-jh4dj 1/1 Running 0 70m
If we check the ROLE assigned to each pod:
# kubectl exec neo4j-stg-neo4j-core-0 -- bin/cypher-shell --format verbose \
"CALL dbms.cluster.overview() YIELD id, role RETURN id, role"
+---------------------------------------------------------+
| id | role |
+---------------------------------------------------------+
| "3e162b58-7025-4cff-9908-a82a1739f7d7" | "LEADER" |
| "6334fb74-3933-4c39-94e8-578545f13bc6" | "FOLLOWER" |
| "1bc2e35b-fdde-48e4-ac1a-0f10bc6e5ff8" | "FOLLOWER" |
| "795b92b2-7ebc-4981-8b1f-34c7b6c10e44" | "READ_REPLICA" |
| "736cb066-aac2-49fc-8a78-bda4b3d65de0" | "READ_REPLICA" |
| "9b5d0560-f620-40f5-9b05-d8109220dc2a" | "READ_REPLICA" |
+---------------------------------------------------------+
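For illustration, here is a minimal sketch of how a client could pick the leader's address out of a `dbms.cluster.overview()`-style result itself (the bolt addresses and short ids below are hypothetical; in practice a routing-aware driver does this for you):

```python
# Minimal sketch: select the LEADER's bolt address from a list of
# dbms.cluster.overview()-shaped rows. Field names follow the procedure's
# YIELD columns; the addresses and ids here are made up for illustration.
def find_leader(overview):
    """Return the first bolt address of the LEADER row, or None if absent."""
    for member in overview:
        if member["role"] == "LEADER":
            return member["addresses"][0]
    return None

cluster = [
    {"id": "3e162b58", "role": "LEADER",
     "addresses": ["bolt://neo4j-stg-neo4j-core-0:7687"]},
    {"id": "6334fb74", "role": "FOLLOWER",
     "addresses": ["bolt://neo4j-stg-neo4j-core-1:7687"]},
    {"id": "795b92b2", "role": "READ_REPLICA",
     "addresses": ["bolt://neo4j-stg-neo4j-replica-0:7687"]},
]
print(find_leader(cluster))  # bolt://neo4j-stg-neo4j-core-0:7687
```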
When we try to write data to the Neo4j database, it throws an error:
Neo4j::Core::CypherSession::CypherError: Cypher error:
Neo.ClientError.Cluster.NotALeader: No write operations are allowed directly on this database. Writes must pass through the leader. The role of this server is: FOLLOWER
How does the Neo4j service discover the "LEADER" pod?
# kubectl get svc neo4j-stg-neo4j -o yaml
apiVersion: v1
kind: Service
....
....
spec:
  clusterIP: None
  ports:
  - name: http
    port: 7474
    protocol: TCP
    targetPort: 7474
  - name: bolt
    port: 7687
    protocol: TCP
    targetPort: 7687
  selector:
    app: neo4j
    component: core
    release: neo4j
How does Neo4j ensure write operations are only executed by the "LEADER" pod?
Since the Helm chart deploys the Neo4j core servers as a StatefulSet with a headless service (`clusterIP: None`), Kubernetes DNS creates an entry for the service that resolves to the internal IPs of the pods, e.g.:
neo4j.default.svc.cluster.local. 30 IN A 10.233.74.147
neo4j.default.svc.cluster.local. 30 IN A 10.233.88.205
neo4j.default.svc.cluster.local. 30 IN A 10.233.88.150
So your client should connect to the "neo4j" domain (which refers to the headless service) using a routing-aware driver: the driver fetches the cluster's routing table, figures out which server is the current leader, and directs writes to it. A plain `bolt://` connection, by contrast, is pinned to whichever pod DNS happened to return, which is how the NotALeader error above can occur.
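As a sketch of what that looks like from the client side, the difference is in the URI scheme you hand to the driver (the service DNS name below assumes the default namespace; `neo4j://` is the routing scheme in Neo4j 4.x drivers, while 3.x drivers use `bolt+routing://`):

```python
# Sketch: build the connection URI for a routing-aware Neo4j driver.
# "neo4j://" (or "bolt+routing://" with Neo4j 3.x drivers) tells the driver
# to fetch the cluster routing table and send writes to the current leader.
# A plain "bolt://" URI skips routing entirely.
def routing_uri(host: str, port: int = 7687, scheme: str = "neo4j") -> str:
    """Return a routing URI for the given host; scheme/port are assumptions."""
    return f"{scheme}://{host}:{port}"

# Headless-service DNS name, assuming the chart was installed in "default".
uri = routing_uri("neo4j-stg-neo4j.default.svc.cluster.local")
print(uri)  # neo4j://neo4j-stg-neo4j.default.svc.cluster.local:7687
```

With the official drivers, a URI like this is passed to the driver constructor (e.g. `GraphDatabase.driver(uri, auth=...)` in the Python driver), and write transactions are then routed to the leader automatically.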