Installing Percona XtraDB Cluster on Kubernetes: pods keep going into a CrashLoopBackOff state

5/18/2020

I am following the instructions provided by Percona to install Percona XtraDB Cluster on Kubernetes, but the pods keep going into a CrashLoopBackOff state.

Instructions from percona: https://www.percona.com/doc/kubernetes-operator-for-pxc/kubernetes.html
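
For reference, this is roughly what I ran, following the linked documentation (the repository URL and file names are taken from the percona-xtradb-cluster-operator repo; the namespace pxc is just the one I used, and my branch/version may differ):

    # clone the operator repository, then deploy the operator and the cluster custom resource
    git clone https://github.com/percona/percona-xtradb-cluster-operator
    cd percona-xtradb-cluster-operator
    kubectl create namespace pxc
    kubectl apply -f deploy/bundle.yaml -n pxc
    kubectl apply -f deploy/cr.yaml -n pxc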

When I check the logs of a crashing pod, this is what I get:

2020-05-18T14:04:43.389154Z 0 [ERROR] [MY-000000] [Galera] failed to open gcomm backend connection: 110: failed to reach primary view (pc.wait_prim_timeout): 110 (Connection timed out)
         at gcomm/src/pc.cpp:connect():159
2020-05-18T14:04:43.389197Z 0 [ERROR] [MY-000000] [Galera] gcs/src/gcs_core.cpp:gcs_core_open():220: Failed to open backend connection: -110 (Connection timed out)
2020-05-18T14:04:43.389584Z 0 [ERROR] [MY-000000] [Galera] gcs/src/gcs.cpp:gcs_open():1694: Failed to open channel 'cluster1-pxc' at 'gcomm://<ip>': -110 (Connection timed out)
2020-05-18T14:04:43.389610Z 0 [ERROR] [MY-000000] [Galera] gcs connect failed: Connection timed out
2020-05-18T14:04:43.389631Z 0 [ERROR] [MY-000000] [WSREP] Provider/Node (gcomm://10.244.1.232,10.244.2.121) failed to establish connection with cluster (reason: 7)
2020-05-18T14:04:43.389652Z 0 [ERROR] [MY-010119] [Server] Aborting
2020-05-18T14:04:43.390312Z 0 [System] [MY-010910] [Server] /usr/sbin/mysqld: Shutdown complete (mysqld 8.0.18-9)  Percona XtraDB Cluster (GPL), Release rel9, Revision 6490e00, WSREP version 26.4.3.
2020-05-18T14:04:43.391133Z 0 [Note] [MY-000000] [Galera] dtor state: CLOSED
2020-05-18T14:04:43.391334Z 0 [Note] [MY-000000] [Galera] MemPool(TrxHandleSlave): hit ratio: 0, misses: 0, in use: 0, in pool: 0
2020-05-18T14:04:43.395163Z 0 [Note] [MY-000000] [Galera] apply mon: entered 0
2020-05-18T14:04:43.398964Z 0 [Note] [MY-000000] [Galera] apply mon: entered 0
2020-05-18T14:04:43.402880Z 0 [Note] [MY-000000] [Galera] apply mon: entered 0
2020-05-18T14:04:43.403227Z 0 [Note] [MY-000000] [Galera] cert index usage at exit 0
2020-05-18T14:04:43.403440Z 0 [Note] [MY-000000] [Galera] cert trx map usage at exit 0
2020-05-18T14:04:43.403652Z 0 [Note] [MY-000000] [Galera] deps set usage at exit 0
2020-05-18T14:04:43.403872Z 0 [Note] [MY-000000] [Galera] avg deps dist 0
2020-05-18T14:04:43.404079Z 0 [Note] [MY-000000] [Galera] avg cert interval 0
2020-05-18T14:04:43.404366Z 0 [Note] [MY-000000] [Galera] cert index size 0
2020-05-18T14:04:43.404659Z 0 [Note] [MY-000000] [Galera] Service thread queue flushed.
2020-05-18T14:04:43.404957Z 0 [Note] [MY-000000] [Galera] wsdb trx map usage 0 conn query map usage 0
2020-05-18T14:04:43.405179Z 0 [Note] [MY-000000] [Galera] MemPool(LocalTrxHandle): hit ratio: 0, misses: 0, in use: 0, in pool: 0
2020-05-18T14:04:43.655562Z 0 [Note] [MY-000000] [Galera] Flushing memory map to disk...
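
For completeness, this is how I am inspecting the pods and collecting the log above (the pod name cluster1-pxc-1, the container name pxc, and the namespace pxc are just what they are in my cluster and may differ elsewhere):

    # list the PXC pods, inspect events on a crashing one, and pull its previous container's log
    kubectl get pods -n pxc
    kubectl describe pod cluster1-pxc-1 -n pxc
    kubectl logs cluster1-pxc-1 -c pxc -n pxc --previous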

Is this a known bug in this system, and if so, what is the workaround?

-- Margach Chris
kubernetes
mysql
percona

0 Answers