SolrCloud's cluster status API returns internal Kubernetes hostnames that can't be reached from outside the Kubernetes cluster

3/5/2020

I set up a SolrCloud cluster on Kubernetes using a Helm chart and exposed the solr-svc service as a NodePort, i.e.,

service/solr-svc  NodePort  10.252.234.133   <none>   8983:32470/TCP

I get the following error message from the Apache SolrJ client:

Caused by: java.lang.RuntimeException: Tried fetching cluster state using the node names we knew of, i.e. [solr-2.solr-headless.default:8983_solr, solr-0.solr-headless.default:8983_solr, solr-1.solr-headless.default:8983_solr]. However, succeeded in obtaining the cluster state from none of them.
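For context, this error comes up when the client is built from a Solr HTTP URL rather than a ZooKeeper address, so SolrJ discovers the other nodes from live_nodes. A minimal sketch of that setup (the node IP and collection name are placeholders, not values from my cluster):

```java
import java.util.Collections;

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.CloudSolrClient;

public class NodePortClientSketch {
    public static void main(String[] args) throws Exception {
        // Placeholder node IP; 32470 is the NodePort from the service above
        String solrUrl = "http://<k8sNodeIP>:32470/solr";

        // Building from a Solr URL (not a ZooKeeper host) makes SolrJ fetch
        // the cluster state over HTTP and follow the node names it finds
        // in live_nodes -- which are the internal hostnames shown below
        try (CloudSolrClient client =
                new CloudSolrClient.Builder(Collections.singletonList(solrUrl))
                        .build()) {
            // Any request triggers the cluster-state fetch that fails with
            // "Tried fetching cluster state using the node names we knew of"
            client.query("myCollection", new SolrQuery("*:*"));
        }
    }
}
```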

This is because SolrCloud's cluster status API returns the following response when queried for the cluster status:

live_nodes: [
  "solr-2.solr-headless.default:8983_solr",
  "solr-0.solr-headless.default:8983_solr",
  "solr-1.solr-headless.default:8983_solr"
]

The API I used to fetch the cluster status: http://<k8sNodeIP>:<NodePort>/solr/admin/collections?action=CLUSTERSTATUS

Basically, it returns the internal k8s cluster hostnames, and those can't be reached from outside the cluster.
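To illustrate the problem with a plain-stdlib sketch: SolrJ derives a base URL from each live_nodes entry (roughly "host:port_context" becomes "http://host:port/context" -- simplified here), and those hosts are headless-service DNS names that only resolve inside the cluster:

```java
import java.net.InetAddress;
import java.net.UnknownHostException;

public class LiveNodesDemo {
    // Simplified version of how SolrJ turns a node name like
    // "host:port_context" into a base URL it then tries to reach
    static String baseUrlForNodeName(String nodeName) {
        int idx = nodeName.lastIndexOf('_');
        String hostAndPort = nodeName.substring(0, idx);
        String context = nodeName.substring(idx + 1);
        return "http://" + hostAndPort + "/" + context;
    }

    public static void main(String[] args) {
        String[] liveNodes = {
            "solr-2.solr-headless.default:8983_solr",
            "solr-0.solr-headless.default:8983_solr",
            "solr-1.solr-headless.default:8983_solr",
        };
        for (String node : liveNodes) {
            String url = baseUrlForNodeName(node);
            String host = node.substring(0, node.indexOf(':'));
            try {
                // Only resolvable via the cluster's internal DNS
                InetAddress.getByName(host);
                System.out.println(url + " -> resolvable");
            } catch (UnknownHostException e) {
                System.out.println(url + " -> unresolvable outside the cluster");
            }
        }
    }
}
```

From outside the cluster, every one of these lookups fails, so the client has no reachable node to fetch the cluster state from.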

May I know how to fix this? Thank you very much.

-- chakra
kubernetes
kubernetes-helm
solr
solrcloud
solrj

0 Answers