"CoreContainer is either not initialized or shutting down" error while trying to create new Solr collection - solr

I've just downloaded the latest Solr version from the official website (8.9.0) and tried to create a collection with
solr create -c portal
But the command fails with the error
Caused by: javax.servlet.ServletException: javax.servlet.UnavailableException: Error processing the request. CoreContainer is either not initialized or shutting down.
at org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:162)
at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:127)
at org.eclipse.jetty.rewrite.handler.RewriteHandler.handle(RewriteHandler.java:322)
at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:127)
at org.eclipse.jetty.server.Server.handle(Server.java:516)
at org.eclipse.jetty.server.HttpChannel.lambda$handle$1(HttpChannel.java:388)
at org.eclipse.jetty.server.HttpChannel.dispatch(HttpChannel.java:633)
There is no other (more specific) error message. Just that CoreContainer is either not initialized or shutting down.
For info, I can start Solr with solr start, which returns
Found 1 Solr nodes:
Solr process 81485 running on port 8983
But then a few seconds later I also get
javax.servlet.ServletException: javax.servlet.UnavailableException: Error processing the request. CoreContainer is either not initialized or shutting down.
at org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:162)
at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:127)
at org.eclipse.jetty.rewrite.handler.RewriteHandler.handle(RewriteHandler.java:322)
at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:127)
with no other specific error message. I have not changed anything from the default values in any config file. This is how I start my solr:
solr start -c -s /path/to/server/solr -m 1g -z localhost:2181,server2:2181,server3:2181
where server2 and server3 are the IP addresses of the other two machines where I also have Solr and ZooKeeper installed.
What can I do?
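For reference, here is a hedged sanity check one could run before retrying the create (the subcommands below are standard bin/solr commands in 8.x; the ZooKeeper addresses are simply the ones from the start command above):
# confirm the local node is up and in cloud mode
bin/solr status
# list the ZooKeeper root Solr was started against, to confirm the ensemble is reachable
bin/solr zk ls / -z localhost:2181,server2:2181,server3:2181
# retry the collection once both checks succeed
bin/solr create -c portal -shards 1 -replicationFactor 1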

Related

Solr / Zookeeper : "An exception was thrown while closing send thread"

I am trying Solr for the first time on RHEL 8 with Openjdk version "17.0.2".
I am following the tutorial https://solr.apache.org/guide/8_11/solr-tutorial.html. I get the warning:
WARN - 2022-04-20 12:07:20.762; org.apache.zookeeper.ClientCnxn; An exception was thrown while closing send thread for session 0x10003e1057e0003. => EndOfStreamException: Unable to read additional data from server sessionid 0x10003e1057e0003, likely server has closed socket
at org.apache.zookeeper.ClientCnxnSocketNIO.doIO(ClientCnxnSocketNIO.java:77)
org.apache.zookeeper.ClientCnxn$EndOfStreamException: Unable to read additional data from server sessionid 0x10003e1057e0003, likely server has closed socket
at org.apache.zookeeper.ClientCnxnSocketNIO.doIO(ClientCnxnSocketNIO.java:77) ~[zookeeper-3.6.2.jar:3.6.2]
at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:350) ~[zookeeper-3.6.2.jar:3.6.2]
at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1275) ~[zookeeper-3.6.2.jar:3.6.2]
This should be a straightforward tutorial. Do you know what I am missing?
Here is the tutorial session from the start:
[solr#abc294837 ~]$ ./bin/solr start -e cloud
Welcome to the SolrCloud example!
This interactive session will help you launch a SolrCloud cluster on your local workstation.
To begin, how many Solr nodes would you like to run in your local cluster? (specify 1-4 nodes) [2]:
Ok, let's start up 2 Solr nodes for your example SolrCloud cluster.
Please enter the port for node1 [8983]:
Please enter the port for node2 [7574]:
Solr home directory /opt/solr/example/cloud/node1/solr already exists.
/opt/solr/example/cloud/node2 already exists.
Starting up Solr on port 8983 using command:
"/opt/solr/bin/solr" start -cloud -p 8983 -s "/opt/solr/example/cloud/node1/solr"
Waiting up to 180 seconds to see Solr running on port 8983 [\]
Started Solr server on port 8983 (pid=50226). Happy searching!
Starting up Solr on port 7574 using command:
"/opt/solr/bin/solr" start -cloud -p 7574 -s "/opt/solr/example/cloud/node2/solr" -z localhost:2181
Waiting up to 180 seconds to see Solr running on port 7574 [-]
Started Solr server on port 7574 (pid=50417). Happy searching!
INFO - 2022-04-20 12:07:20.502; org.apache.solr.common.cloud.ConnectionManager; Waiting for client to connect to ZooKeeper
INFO - 2022-04-20 12:07:20.553; org.apache.solr.common.cloud.ConnectionManager; zkClient has connected
INFO - 2022-04-20 12:07:20.556; org.apache.solr.common.cloud.ConnectionManager; Client is connected to ZooKeeper
INFO - 2022-04-20 12:07:20.631; org.apache.solr.common.cloud.ZkStateReader; Updated live nodes from ZooKeeper... (0) -> (2)
INFO - 2022-04-20 12:07:20.737; org.apache.solr.client.solrj.impl.ZkClientClusterStateProvider; Cluster at localhost:2181 ready
WARN - 2022-04-20 12:07:20.762; org.apache.zookeeper.ClientCnxn; An exception was thrown while closing send thread for session 0x10003e1057e0003. => EndOfStreamException: Unable to read additional data from server sessionid 0x10003e1057e0003, likely server has closed socket
at org.apache.zookeeper.ClientCnxnSocketNIO.doIO(ClientCnxnSocketNIO.java:77)
org.apache.zookeeper.ClientCnxn$EndOfStreamException: Unable to read additional data from server sessionid 0x10003e1057e0003, likely server has closed socket
at org.apache.zookeeper.ClientCnxnSocketNIO.doIO(ClientCnxnSocketNIO.java:77) ~[zookeeper-3.6.2.jar:3.6.2]
at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:350) ~[zookeeper-3.6.2.jar:3.6.2]
at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1275) ~[zookeeper-3.6.2.jar:3.6.2]
Now let's create a new collection for indexing documents in your 2-node cluster.
Please provide a name for your new collection: [gettingstarted]
You are not missing anything; this is ZooKeeper emitting a misleading warning about a socket connection being closed.
[EDIT] : This has been fixed in Solr versions 8.11.2, and 9.0.0 (Zookeeper versions 3.6.4, 3.7.1, 3.8.1, 3.9.0).
We can see in this commit that the exception is caught and expected (a comment says "closing so this is expected"), yet it is reported as a warning and a stack trace is logged, even though it is not an error per se. So you can treat this message as a debug message (which is what it was before that commit).
See for reference this issue, caused by this issue, and this pull request for the fix.
You can still silence ZooKeeper from Solr's log4j configuration by changing the level of its logger from "warn" to "error":
solr/solr/server/resources/log4j2-console.xml
<AsyncLogger name="org.apache.zookeeper" level="ERROR"/>
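For example (a sketch only: the /opt/solr install path is taken from the session above, and depending on how you run Solr the file to edit may be log4j2.xml rather than log4j2-console.xml):
# find the ZooKeeper logger entry
grep -n "org.apache.zookeeper" /opt/solr/server/resources/log4j2-console.xml
# change that line to:  <AsyncLogger name="org.apache.zookeeper" level="ERROR"/>
# then restart the example cluster so the new logging configuration is picked up
/opt/solr/bin/solr stop -all
/opt/solr/bin/solr start -e cloud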

Indexing problem with SOLR (MultiMaxScoreQParserPlugin)

I'm trying to integrate Solr with Hybris, but they are running on Kubernetes as different pods.
When I trigger Solr indexing from Hybris, it throws the error below:
ERROR [BackofficeLO-47] (000001JT) [SolrStandaloneSearchProvider] Error from server at http://10.10.100.181:34324/solr: Error CREATEing SolrCore 'master_backoffice_backoffice_product_flip': Unable to create core [master_backoffice_backoffice_product_flip] Caused by: de.hybris.platform.solr.search.MultiMaxScoreQParserPlugin
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error from server at http://10.10.100.181:34324/solr: Error CREATEing SolrCore 'master_backoffice_backoffice_product_flip': Unable to create core [master_backoffice_backoffice_product_flip] Caused by: de.hybris.platform.solr.search.MultiMaxScoreQParserPlugin
I guess something is wrong with Solr's default indexing directory.
Solr is running as a process inside the pod like this:
solr#solr-fsd33wdf-qteg:/opt/solr-8.5.2$ ps -ef | grep solr
solr 10 1 0 Jun23 ? 00:15:55 /usr/local/openjdk-11/bin/java -server -Xms512m -Xmx512m -XX:+UseG1GC -XX:+PerfDisableSharedMem -XX:+ParallelRefProcEnabled -XX:MaxGCPauseMillis=250 -XX:+UseLargePages -XX:+AlwaysPreTouch -Xlog:gc*:file=/var/solr/logs/solr_gc.log:time,uptime:filecount=9,filesize=20M -Dsolr.jetty.inetaccess.includes= -Dsolr.jetty.inetaccess.excludes= -Dsolr.log.dir=/var/solr/logs -Djetty.port=8983 -DSTOP.PORT=7983 -DSTOP.KEY=solrrocks -Duser.timezone=UTC -Djetty.home=/opt/solr/server -Dsolr.solr.home=/var/solr/data -Dsolr.data.home= -Dsolr.install.dir=/opt/solr -Dsolr.default.confdir=/opt/solr/server/solr/configsets/_default/conf -Dlog4j.configurationFile=/var/solr/log4j2.xml -Xss256k -Dsolr.jetty.https.port=8983 -jar start.jar --module=http
So the default confdir is /opt/solr/server/solr/configsets/_default/conf
If I check the SOLR_HOME variable, it's a different directory:
solr#solr-f575dcfdf-qtnpg:/opt/solr-8.5.2$ echo $SOLR_HOME
/var/solr/data
So, how can I change the confdir to /var/solr/data? I guess this is the problem here?
Thanks!
This page provides information on how to use the standalone setup. The ant configureSolrServer task takes as its argument the path to the original Solr binary, which you should download from here. It overwrites the files in that directory with the SAP Commerce specific setup. The MultiMaxScoreQParserPlugin is part of the solr-hybris-components-<version_of_solr>.jar file, where <version_of_solr> corresponds to the Solr version your SAP Commerce is running on. Note that SAP Commerce also supports multiple Solr versions; which one applies depends on your configuration.
You may then extend the default Solr docker image as provided here to have your setup running.
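As a first check inside the pod (a hedged sketch only; the pod name comes from the ps output above and the jar name pattern from the paragraph above, so adjust both to your deployment), verify that the Hybris components jar matching your Solr version is actually present and in a directory Solr loads libraries from:
# look for the SAP Commerce plugin jar under the Solr install and SOLR_HOME
kubectl exec solr-fsd33wdf-qteg -- find /opt/solr-8.5.2 /var/solr/data -name 'solr-hybris-components-*.jar'
# and check which lib directories the configset's solrconfig.xml points at
kubectl exec solr-fsd33wdf-qteg -- grep -R --include=solrconfig.xml "<lib" /var/solr/data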

Getting error on starting Solr from command prompt

I am a newbie on the learning path of Apache Solr. I am unable to start the Solr instance after a hard delete I did because of space issues. I get the error below if I try to start:
javax.servlet.UnavailableException: Error processing the request. CoreContainer is either not initialized or shutting down.
I am trying this on Windows 10.

Solr server is running but no admin panel

I am trying to get a solr server running to use with Sitecore, but I can't seem to get it to work.
When I start solr (6.6.1) I get the message:
> bin\solr.cmd -p 8983
Waiting up to 30 to see Solr running on port 8983
Started Solr server on port 8983. Happy searching!
But when I go to localhost:8983/solr/ I get an empty page or some messages about not being able to connect (it differs per browser).
When I run a status check it says the server is running, along with some usage information, so this seems fine.
But when I do a healthcheck on the server I get a lot of warnings saying:
WARN - 2018-02-27 09:48:27.768; org.apache.zookeeper.ClientCnxn$SendThread; Session 0x0 for server BBLP-JSCHOOT.colo.betabit.nl/0:0:0:0:0:0:0:1:8983, unexpected error, closing socket connection and attempting reconnect
java.io.IOException: Packet len352518912 is out of range!
at org.apache.zookeeper.ClientCnxnSocket.readLength(ClientCnxnSocket.java:112)
at org.apache.zookeeper.ClientCnxnSocketNIO.doIO(ClientCnxnSocketNIO.java:79)
at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:366)
at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1141)
WARN - 2018-02-27 09:48:28.240; org.apache.zookeeper.ClientCnxn$SendThread; Session 0x0 for server pso/127.0.0.1:8983, unexpected error, closing socket connection and attempting reconnect
and after some of these I get:
ERROR: java.util.concurrent.TimeoutException: Could not connect to ZooKeeper localhost:8983 within 10000 ms
Does anyone have an idea what can cause this? It seems that something is wrong with ZooKeeper, but I can't quite figure out what.
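(A hedged note on the healthcheck itself, not a definitive diagnosis: bin/solr healthcheck talks to ZooKeeper rather than to Solr's HTTP port, and the final timeout message shows it trying ZooKeeper at localhost:8983, which is Solr's own port; a ZooKeeper client reading HTTP responses produces exactly this kind of "Packet len ... is out of range" warning. A typical invocation, with the collection name and ZooKeeper address as placeholders, looks like this:)
bin\solr.cmd healthcheck -c <collection> -z <zookeeper-host>:2181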

Solr 6.x cannot create collection on one Debian Jessie 8.6

Trying to install Solr 6.2.1 on a Debian Jessie I hit a roadblock whereby I am not able to create a collection. The sequence of commands
unzip solr-6.2.1.zip
export JAVA_HOME=/opt/java64/jdk1.8.0_101
./solr-6.2.1/bin/solr -c
./solr-6.2.1/bin/solr create_collection -c hktesting
could not be more innocent and works on another Debian Jessie as well as on an Ubuntu 16.10. On this machine, however, the create_collection runs into a timeout. I even used a freshly added user without any dotfile customizations.
The full log is quite long, so I try to pick the lines I think are relevant. It all starts nicely with:
2016-11-03 08:08:36.360 INFO (qtp110456297-20) [ ] o.a.s.h.a.CollectionsHandler Invoked Collection Action :create with params replicationFactor=1&maxShardsPerNode=1&collection.configName=hktesting&name=hktesting&action=CREATE&numShards=1&wt=json and sendToOCPQueue=true
Messages follow which all look OK, showing a lot going on in creating the collection. It all comes to an intermediate end with:
2016-11-03 08:08:38.399 WARN (qtp110456297-18) [c:hktesting s:shard1 r:core_node1 x:hktesting_shard1_replica1] o.a.s.c.SolrCore [hktesting_shard1_replica1] Solr index directory '/home/badsolr/tmp/solr-6.2.1/server/solr/hktesting_shard1_replica1/data/index' doesn't exist. Creating new index...
2016-11-03 08:08:38.408 INFO (qtp110456297-18) [c:hktesting s:shard1 r:core_node1 x:hktesting_shard1_replica1] o.a.s.c.CachingDirectoryFactory return new directory for /home/badsolr/tmp/solr-6.2.1/server/solr/hktesting_shard1_replica1/data/index
which still looks OK to me. Then comes a three-minute hole in the log with nothing happening, after which a failure is reported:
2016-11-03 08:11:36.373 ERROR (qtp110456297-20) [ ] o.a.s.h.RequestHandlerBase org.apache.solr.common.SolrException: create the collection time out:180s
at org.apache.solr.handler.admin.CollectionsHandler.handleResponse(CollectionsHandler.java:289)
at org.apache.solr.handler.admin.CollectionsHandler.handleRequestBody(CollectionsHandler.java:215)
at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:154)
at org.apache.solr.servlet.HttpSolrCall.handleAdminRequest(HttpSolrCall.java:658)
at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:440)
at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:257)
at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:208)
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1668)
at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:581)
at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1160)
at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:511)
at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1092)
at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
at org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:213)
at org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)
at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
at org.eclipse.jetty.server.Server.handle(Server.java:518)
at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:308)
at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:244)
at org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:273)
at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:95)
at org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
at org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceAndRun(ExecuteProduceConsume.java:246)
at org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:156)
at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:654)
at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:572)
at java.lang.Thread.run(Thread.java:745)
This machine has kerberos authentication enabled, which is about the only possibly relevant difference to the other machines I would know of (but don't jump to conclusions).
It turned out that a mount on /media was unresponsive. Even an ls /media hung forever. With strace I could see that Solr got stuck when it tried to access /media. In the strace log I could see that Solr first read mtab and then stepped into /media. I don't know why Solr should care about mtab and this mount point, but after getting the mount fixed, Solr started to work normally.
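(The kind of check that exposes this, as a sketch; the commands and the PID placeholder are illustrative and not taken from the original debugging session:)
# a hung mount shows up immediately: this either returns or gives up after 5 seconds
timeout 5 ls /media || echo "/media is not responding - check the mount"
# attach to the running Solr process; stat/openat calls into /media that never return confirm the hang
strace -f -e trace=stat,openat -p <solr_pid>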
