Currently, I have one Kubernetes cluster with 2 namespaces: NS1 and NS2. I'm using the jboss/keycloak Docker image.
I am running 2 Keycloak instances, one in each namespace, and I expect them to run independently. However, that is not the case for the Infinispan caching inside Keycloak. The problem is that all sessions of the KC instance in NS1 are repeatedly invalidated whenever the KC pod in NS2 goes into the "CrashLoopBackOff" state.
The logs show the following whenever the crash-looping KC pod in NS2 tries to restart:
15:14:46,784 INFO [org.infinispan.CLUSTER] (remote-thread--p10-t412) [Context=clientSessions] ISPN100002: Starting rebalance with members [keycloak-abcdef, keycloak-qwerty], phase READ_OLD_WRITE_ALL, topology id 498
keycloak-abcdef is the KC pod in NS1 and keycloak-qwerty is the KC pod in NS2. So the KC pod in NS1 can see, and is affected by, the KC pod in NS2.
After researching, I see that Keycloak uses an Infinispan cache to manage session data, and that Infinispan uses JGroups to discover nodes, with PING as the default method. I assume this mechanism is the root cause of the invalidated-session problem, because it contacts the other KC pods it finds in the same cluster (even in different namespaces) to do something like synchronization.
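For context, the HA profile shipped inside the jboss/keycloak image configures JGroups roughly as below. This is an illustrative, trimmed excerpt of standalone-ha.xml; the exact schema version and the full protocol list depend on the Keycloak release:

```xml
<!-- Trimmed excerpt of the JGroups subsystem in standalone-ha.xml;
     schema version and full protocol list vary by release. -->
<subsystem xmlns="urn:jboss:domain:jgroups:8.0">
  <channels default="ee">
    <channel name="ee" stack="udp" cluster="ejb"/>
  </channels>
  <stacks>
    <stack name="udp">
      <transport type="UDP" socket-binding="jgroups-udp"/>
      <!-- PING discovers members over the shared transport and has no
           notion of Kubernetes namespaces. -->
      <protocol type="PING"/>
      <!-- ...merge, failure-detection and reliability protocols omitted... -->
    </stack>
  </stacks>
</subsystem>
```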
Is there any way to isolate Keycloak's Infinispan clustering between namespaces?
Thank you!
Posting the comment as a community wiki answer for better visibility:
I would use JDBC_PING for discovery, so only nodes that use the same DB will be able to discover each other.
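A minimal sketch of how that could be wired up on each namespace's Deployment, assuming the jboss/keycloak image (which reads the JGROUPS_DISCOVERY_* environment variables at startup) and the image's default KeycloakDS datasource JNDI name; adjust the names to your setup:

```yaml
# Sketch: switch JGroups discovery to JDBC_PING for the Keycloak
# Deployment in NS1 (NS2 gets its own copy pointing at its own DB).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: keycloak
  namespace: ns1
spec:
  selector:
    matchLabels:
      app: keycloak
  template:
    metadata:
      labels:
        app: keycloak
    spec:
      containers:
        - name: keycloak
          image: jboss/keycloak
          env:
            - name: JGROUPS_DISCOVERY_PROTOCOL
              value: JDBC_PING
            # Reuse the datasource Keycloak already uses for its own database;
            # this JNDI name is the image default, adjust it if yours differs.
            - name: JGROUPS_DISCOVERY_PROPERTIES
              value: datasource_jndi_name=java:jboss/datasources/KeycloakDS
```

With this, JDBC_PING records each node's address in a table (JGROUPSPING by default) inside that datasource's database, so discovery is confined to nodes sharing the same DB and the instances in NS1 and NS2 can no longer join the same Infinispan cluster.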