I am using Apache Ignite.NET v2.7 in a Kubernetes environment, with TcpDiscoverySharedFsIpFinder as the node discovery mechanism for the cluster.
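For reference, the nodes are started roughly like this (a simplified sketch; the Spring XML path is illustrative, and the XML file itself, which declares TcpDiscoverySpi with TcpDiscoverySharedFsIpFinder, is omitted):

using Apache.Ignite.Core;

class ServerNode
{
    static void Main()
    {
        var cfg = new IgniteConfiguration
        {
            // Hypothetical path: the referenced Spring XML is assumed to define
            // TcpDiscoverySpi with TcpDiscoverySharedFsIpFinder.
            SpringConfigUrl = "config/ignite-discovery.xml"
        };

        using (var ignite = Ignition.Start(cfg))
        {
            // ... application code runs while the node is up ...
        }
    }
}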
I noticed strange behaviour in a running cluster. The cluster starts up successfully and works fine for a couple of hours; then a node goes offline, and every other node writes a log similar to this:
[20:03:44] Topology snapshot [ver=45, locNode=fd32d5d7, servers=3, clients=0, state=ACTIVE, CPUs=3, offheap=4.7GB, heap=1.5GB]
[20:03:44] Topology snapshot [ver=46, locNode=fd32d5d7, servers=2, clients=0, state=ACTIVE, CPUs=2, offheap=3.1GB, heap=1.0GB]
[20:03:44] Coordinator changed [prev=TcpDiscoveryNode [id=c954042e-5756-4fed-b82a-b8b1d79889ce, addrs=[10.0.0.28, 127.0.0.1], sockAddrs=[/10.0.0.28:47500, /127.0.0.1:47500], discPort=47500, order=36, intOrder=21, lastExchangeTime=1562009450041, loc=false, ver=2.7.0#20181130-sha1:256ae401, isClient=false], cur=TcpDiscoveryNode [id=293902ba-b28d-4a44-8d5f-9cad23a9d7c4, addrs=[10.0.0.11, 127.0.0.1], sockAddrs=[/127.0.0.1:47500, /10.0.0.11:47500], discPort=47500, order=37, intOrder=22, lastExchangeTime=1562009450061, loc=false, ver=2.7.0#20181130-sha1:256ae401, isClient=false]]
Jul 01, 2019 8:03:44 PM org.apache.ignite.logger.java.JavaLogger error
SEVERE: Failed to send message to remote node [node=TcpDiscoveryNode [id=c954042e-5756-4fed-b82a-b8b1d79889ce, addrs=[10.0.0.28, 127.0.0.1], sockAddrs=[/10.0.0.28:47500, /127.0.0.1:47500], discPort=47500, order=36, intOrder=21, lastExchangeTime=1562009450041, loc=false, ver=2.7.0#20181130-sha1:256ae401, isClient=false], msg=GridIoMessage [plc=2, topic=TOPIC_CACHE, topicOrd=8, ordered=false, timeout=0, skipOnTimeout=false, msg=GridDhtPartitionsSingleMessage [parts={-2100569601=GridDhtPartitionMap [moving=0, top=AffinityTopologyVersion [topVer=44, minorTopVer=1], updateSeq=107, size=100]}, partCntrs={-2100569601=CachePartitionPartialCountersMap {22=(0,32), 44=(0,31), 59=(0,31), 64=(0,35), 66=(0,31), 72=(0,31), 78=(0,35), 91=(0,35)}}, partsSizes={-2100569601={64=2, 66=2, 22=2, 72=2, 59=2, 91=2, 44=2, 78=2}}, partHistCntrs=null, err=null, client=false, compress=true, finishMsg=null, activeQryTrackers=null, super=GridDhtPartitionsAbstractMessage [exchId=GridDhtPartitionExchangeId [topVer=AffinityTopologyVersion [topVer=45, minorTopVer=0], discoEvt=DiscoveryEvent [evtNode=TcpDiscoveryNode [id=f27d46f4-0700-4f54-b4b2-2c156152c49a, addrs=[10.0.0.42, 127.0.0.1], sockAddrs=[/127.0.0.1:47500, /10.0.0.42:47500], discPort=47500, order=42, intOrder=25, lastExchangeTime=1562009450061, loc=false, ver=2.7.0#20181130-sha1:256ae401, isClient=false], topVer=45, nodeId8=fd32d5d7, msg=Node failed: TcpDiscoveryNode [id=f27d46f4-0700-4f54-b4b2-2c156152c49a, addrs=[10.0.0.42, 127.0.0.1], sockAddrs=[/127.0.0.1:47500, /10.0.0.42:47500], discPort=47500, order=42, intOrder=25, lastExchangeTime=1562009450061, loc=false, ver=2.7.0#20181130-sha1:256ae401, isClient=false], type=NODE_FAILED, tstamp=1562011424092], nodeId=f27d46f4, evt=NODE_FAILED], lastVer=GridCacheVersion [topVer=173444804, order=1562009448132, nodeOrder=44], super=GridCacheMessage [msgId=69, depInfo=null, err=null, skipPrepare=false]]]]]
class org.apache.ignite.internal.cluster.ClusterTopologyCheckedException: Failed to send message (node left topology): TcpDiscoveryNode [id=c954042e-5756-4fed-b82a-b8b1d79889ce, addrs=[10.0.0.28, 127.0.0.1], sockAddrs=[/10.0.0.28:47500, /127.0.0.1:47500], discPort=47500, order=36, intOrder=21, lastExchangeTime=1562009450041, loc=false, ver=2.7.0#20181130-sha1:256ae401, isClient=false]
at org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi.createTcpClient(TcpCommunicationSpi.java:3270)
at org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi.createNioClient(TcpCommunicationSpi.java:2987)
at org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi.reserveClient(TcpCommunicationSpi.java:2870)
at org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi.sendMessage0(TcpCommunicationSpi.java:2713)
at org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi.sendMessage(TcpCommunicationSpi.java:2672)
at org.apache.ignite.internal.managers.communication.GridIoManager.send(GridIoManager.java:1656)
at org.apache.ignite.internal.managers.communication.GridIoManager.sendToGridTopic(GridIoManager.java:1731)
at org.apache.ignite.internal.processors.cache.GridCacheIoManager.send(GridCacheIoManager.java:1170)
at org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.sendLocalPartitions(GridDhtPartitionsExchangeFuture.java:1880)
at org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.sendPartitions(GridDhtPartitionsExchangeFuture.java:2011)
at org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.distributedExchange(GridDhtPartitionsExchangeFuture.java:1501)
at org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.init(GridDhtPartitionsExchangeFuture.java:806)
at org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager$ExchangeWorker.body0(GridCachePartitionExchangeManager.java:2667)
at org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager$ExchangeWorker.body(GridCachePartitionExchangeManager.java:2539)
at org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:120)
at java.lang.Thread.run(Thread.java:748)
[22:25:17] Topology snapshot [ver=47, locNode=fd32d5d7, servers=1, clients=0, state=ACTIVE, CPUs=1, offheap=1.6GB, heap=0.5GB]
[22:25:17] Coordinator changed [prev=TcpDiscoveryNode [id=293902ba-b28d-4a44-8d5f-9cad23a9d7c4, addrs=[10.0.0.11, 127.0.0.1], sockAddrs=[/127.0.0.1:47500, /10.0.0.11:47500], discPort=47500, order=37, intOrder=22, lastExchangeTime=1562009450061, loc=false, ver=2.7.0#20181130-sha1:256ae401, isClient=false], cur=TcpDiscoveryNode [id=fd32d5d7-720f-4c85-925e-01a845992df9, addrs=[10.0.0.60, 127.0.0.1], sockAddrs=[product-service-deployment-76bdb6fffb-bvjx9/10.0.0.60:47500, /127.0.0.1:47500], discPort=47500, order=44, intOrder=26, lastExchangeTime=1562019906752, loc=true, ver=2.7.0#20181130-sha1:256ae401, isClient=false]]
[22:28:29] Joining node doesn't have encryption data [node=adc204a0-3cc7-45da-b512-dd69b9a23674]
[22:28:30] Topology snapshot [ver=48, locNode=fd32d5d7, servers=2, clients=0, state=ACTIVE, CPUs=2, offheap=3.1GB, heap=1.0GB]
[22:31:42] Topology snapshot [ver=49, locNode=fd32d5d7, servers=1, clients=0, state=ACTIVE, CPUs=1, offheap=1.6GB, heap=0.5GB]
As you can see, the number of servers in the cluster steadily decreases until only one server remains (Topology snapshot [.. servers=1..] on every node). If I understand the log correctly, the cluster collapses into a group of independent nodes, each of which forms its own separate single-node cluster. I should emphasize that all of the other nodes (except the crashed one) are up and running.
My guess is that the failing node might be the cluster coordinator, and that when it dies, the remaining nodes fail to elect a new coordinator and the cluster disintegrates into a number of independent nodes.
Can you comment on this? Is my guess correct? Could you tell me what I should check to diagnose and resolve this problem? Thank you!
Node segmentation usually means there are long pauses: either GC pauses, I/O pauses, or network pauses.
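If GC pauses are the suspect, one way to confirm it is to enable GC logging on every node and correlate long stop-the-world pauses with the moments when nodes drop out of the topology. A rough sketch for Ignite.NET (the heap sizes and log path are only examples):

using Apache.Ignite.Core;

class ServerNode
{
    static void Main()
    {
        var cfg = new IgniteConfiguration
        {
            // Illustrative JVM options: pin the heap size and write a GC log so
            // that long stop-the-world pauses can be matched against the times
            // when nodes leave the topology.
            JvmOptions = new[]
            {
                "-Xms1g",
                "-Xmx1g",
                "-verbose:gc",
                "-XX:+PrintGCDetails",
                "-Xloggc:/tmp/ignite-gc.log"
            }
        };

        using (Ignition.Start(cfg))
        {
            // ...
        }
    }
}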
You can try increasing failureDetectionTimeout and see if the problem goes away. Or, you can try getting rid of the pauses themselves.
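In Ignite.NET the timeout can be raised through IgniteConfiguration.FailureDetectionTimeout. A minimal sketch (30 seconds is only an example value; pick something longer than your worst observed pause):

using System;
using Apache.Ignite.Core;

class ServerNode
{
    static void Main()
    {
        var cfg = new IgniteConfiguration
        {
            // Default is 10 seconds; 30 seconds is only an illustrative value.
            FailureDetectionTimeout = TimeSpan.FromSeconds(30)
        };

        using (Ignition.Start(cfg))
        {
            // ...
        }
    }
}

Bear in mind that a larger timeout also delays the detection of nodes that really have failed.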