I used this guide to set up an OrientDB cluster in Kubernetes. However, each node seems to create its own single-member cluster instead of joining a shared one, so the logs on every pod show a message like this:
Members [1] {
Member [pod-ip]:5701 - generated id
}
What could cause such a problem?
My orientdb-server-config file looks like this:
<handler class="com.orientechnologies.orient.server.hazelcast.OHazelcastPlugin">
    <parameters>
        <parameter value="true" name="enabled"/>
        <parameter value="orientdb/config/default-distributed-db-config.json" name="configuration.db.default"/>
        <parameter value="orientdb/config/hazelcast.xml" name="configuration.hazelcast"/>
        <parameter name="nodeName" value="$pod_dns"/>
    </parameters>
</handler>
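The $pod_dns placeholder is filled in at startup by one of the mounted templating scripts (see the StatefulSet below). A minimal sketch of what my server_config_template.sh does; the exact sed call is an approximation, and POD_NAME is the pod name injected via the Downward API:

#!/bin/sh
# Sketch of server_config_template.sh: replace the $pod_dns placeholder
# in the server config with this pod's name from the POD_NAME env var.
sed -i "s/\$pod_dns/${POD_NAME}/g" /orientdb/config/orientdb-server-config.xml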
My hazelcast.xml file looks like this (pod_dns is the pod's name, taken from an environment variable):
<properties>
    <property name="hazelcast.discovery.enabled">true</property>
</properties>
<network>
    <join>
        <multicast enabled="false"/>
        <tcp-ip enabled="false"/>
        <discovery-strategies>
            <discovery-strategy enabled="true"
                    class="com.hazelcast.kubernetes.HazelcastKubernetesDiscoveryStrategy">
                <properties>
                    <property name="service-dns">pod_dns.default.svc.cluster.local</property>
                    <property name="service-dns-timeout">10</property>
                </properties>
            </discovery-strategy>
        </discovery-strategies>
    </join>
</network>
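One way to sanity-check the discovery input is to see what the name configured in service-dns actually resolves to from inside a pod. For Hazelcast's DNS lookup mode it must return one record per member pod; this is a generic check I would run, not output from my cluster:

# Run inside any pod: whatever name ends up in service-dns after templating
# must resolve to the IPs of all OrientDB pods. NXDOMAIN or a single
# ClusterIP means each node cannot see the others and stays alone.
nslookup "${POD_NAME}.default.svc.cluster.local"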
Here is the Kubernetes StatefulSet. The templating scripts for the hazelcast and orientdb-server-config files are mounted from ConfigMaps and executed at startup, so each pod's settings are updated from its environment:
kind: StatefulSet
apiVersion: apps/v1
metadata:
  name: orientdbservice
spec:
  serviceName: orientdbservice
  replicas: 3
  podManagementPolicy: Parallel
  selector:
    matchLabels:
      service: orientdb
      type: container-deployment
  template:
    metadata:
      labels:
        service: orientdb
        type: container-deployment
    spec:
      containers:
        - name: orientdbservice
          image: orientdb:2.2.36
          command: ["/bin/sh", "-c", "cp /configs/* /orientdb/config/ ; chmod +x /orientdb/config/hazelcast_template.sh ; chmod +x /orientdb/config/server_config_template.sh ; sh /orientdb/config/hazelcast_template.sh ; sh /orientdb/config/server_config_template.sh ; /orientdb/bin/server.sh -Ddistributed=true"]
          env:
            - name: ORIENTDB_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: orientdb-password
                  key: password.txt
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
          ports:
            - containerPort: 2424
              name: port-binary
            - containerPort: 2480
              name: port-http
            - containerPort: 5701
              name: hazelcast
          volumeMounts:
            - name: config
              mountPath: /orientdb/config
            - name: orientdb-config-template-hazelcast
              mountPath: /configs/hazelcast_template.sh
              subPath: hazelcast_template.sh
            - name: orientdb-config-template-server
              mountPath: /configs/server_config_template.sh
              subPath: server_config_template.sh
            - name: orientdb-config-distributed
              mountPath: /configs/default-distributed-db-config.json
              subPath: default-distributed-db-config.json
            - name: orientdb-databases
              mountPath: /orientdb/databases
            - name: orientdb-backup
              mountPath: /orientdb/backup
      volumes:
        - name: config
          emptyDir: {}
        - name: orientdb-config-template-hazelcast
          configMap:
            name: orientdb-configmap-template-hazelcast
        - name: orientdb-config-template-server
          configMap:
            name: orientdb-configmap-template-server
        - name: orientdb-config-distributed
          configMap:
            name: orientdb-configmap-distributed
  volumeClaimTemplates:
    - metadata:
        name: orientdb-databases
        labels:
          service: orientdb
          type: pv-claim
      spec:
        accessModes: [ "ReadWriteOnce" ]
        resources:
          requests:
            storage: 20Gi
    - metadata:
        name: orientdb-backup
        labels:
          service: orientdb
          type: pv-claim
      spec:
        accessModes: [ "ReadWriteOnce" ]
        resources:
          requests:
            storage: 10Gi
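For context, DNS-based discovery relies on serviceName pointing at a headless service whose DNS name resolves to the individual pod IPs. A minimal sketch of what that Service would look like for this StatefulSet (the manifest below is my assumption, not copied from the cluster):

# Hypothetical sketch of the headless service the StatefulSet references.
# clusterIP: None makes the service DNS name resolve to the pod IPs,
# which Hazelcast's service-dns lookup depends on.
kind: Service
apiVersion: v1
metadata:
  name: orientdbservice
spec:
  clusterIP: None
  selector:
    service: orientdb
    type: container-deployment
  ports:
    - name: hazelcast
      port: 5701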
The problem is in the hazelcast-kubernetes plugin configuration. First, it is necessary to update OrientDB to the latest version, 3.0.10, which embeds a newer Hazelcast. I also mounted the hazelcast-kubernetes.jar dependency directly into the /orientdb/lib folder, and the nodes started joining a shared cluster properly. So the problem was not in the config files but in the dependency setup for OrientDB: without that jar on the classpath, the Kubernetes discovery strategy cannot load, and each node falls back to a cluster of one.
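I can't reproduce my exact manifest here, but one way to get the jar into /orientdb/lib is an initContainer that fetches it into a shared volume. This is only a sketch; the image, version, and URL below are assumptions, so pick the hazelcast-kubernetes release matching the Hazelcast embedded in your OrientDB build:

# Hypothetical sketch: fetch hazelcast-kubernetes.jar in an initContainer
# and surface it inside /orientdb/lib via a shared emptyDir volume.
initContainers:
  - name: fetch-hazelcast-kubernetes
    image: curlimages/curl
    command: ["sh", "-c",
      "curl -fL -o /deps/hazelcast-kubernetes-1.3.1.jar https://repo1.maven.org/maven2/com/hazelcast/hazelcast-kubernetes/1.3.1/hazelcast-kubernetes-1.3.1.jar"]
    volumeMounts:
      - name: extra-libs
        mountPath: /deps
containers:
  - name: orientdbservice
    # ... existing container spec from the StatefulSet above ...
    volumeMounts:
      # mount only the jar via subPath so the rest of /orientdb/lib stays intact
      - name: extra-libs
        mountPath: /orientdb/lib/hazelcast-kubernetes-1.3.1.jar
        subPath: hazelcast-kubernetes-1.3.1.jar
volumes:
  - name: extra-libs
    emptyDir: {}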