Cannot create a core in Solr 5.2.1

I have SolrCloud 5.2.1. I deployed Solr and ZooKeeper. When I try to create a core, these errors are thrown:
org.apache.solr.common.SolrException: Could not load conf for core contracts_shard1_replica1: Error loading solr config from solrconfig.xml
at org.apache.solr.core.ConfigSetService.getConfig(ConfigSetService.java:78)
at org.apache.solr.core.CoreContainer.create(CoreContainer.java:635)
at org.apache.solr.core.CoreContainer.create(CoreContainer.java:611)
at org.apache.solr.handler.admin.CoreAdminHandler.handleCreateAction(CoreAdminHandler.java:628)
at org.apache.solr.handler.admin.CoreAdminHandler.handleRequestInternal(CoreAdminHandler.java:213)
at org.apache.solr.handler.admin.CoreAdminHandler.handleRequestBody(CoreAdminHandler.java:193)
at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:143)
at org.apache.solr.servlet.HttpSolrCall.handleAdminRequest(HttpSolrCall.java:660)
at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:431)
at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:227)
at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:196)
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652)
at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:585)
at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:577)
org.apache.solr.common.SolrException: Error CREATEing SolrCore 'contracts_shard1_replica1': Unable to create core [contracts_shard1_replica1] Caused by: Can't find resource 'solrconfig.xml' in classpath or '/configs/contracts', cwd=C:\CM_10.1.0\INDEXSERVER\searchserver-distribution\target\searchserver\solr\server
at org.apache.solr.handler.admin.CoreAdminHandler.handleCreateAction(CoreAdminHandler.java:661)
at org.apache.solr.handler.admin.CoreAdminHandler.handleRequestInternal(CoreAdminHandler.java:213)
at org.apache.solr.handler.admin.CoreAdminHandler.handleRequestBody(CoreAdminHandler.java:193)
at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:143)
at org.apache.solr.servlet.HttpSolrCall.handleAdminRequest(HttpSolrCall.java:660)
at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:431)
at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:227)
at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:196)
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652)
at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:585)
at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
]
I created a contracts directory inside C:\CM_10.1.0\INDEXSERVER\searchserver-distribution\target\searchserver\solr\server and copied the "conf" folder from solr\configsets\basic_configs into contracts, but the problem wasn't solved.
I need help to solve this problem. Can anyone help me?
Thanks

Since you are using ZooKeeper, you must first send the config files to ZooKeeper. I'm not sure how it is in Windows :P, but in Linux it would be:
cd /searchserver/solr/server/scripts/cloud-scripts
./zkcli.sh -cmd upconfig -confdir /searchserver/solr/server/solr/corename/conf -confname myconfname -z zoo1:2181,zoo2:2181,zoo3:2181
In Windows, use zkcli.bat in the same directory.
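For example, a sketch of the Windows equivalent, assuming the same directory layout as above (adjust the paths and ZooKeeper hosts to your installation):
```
cd \searchserver\solr\server\scripts\cloud-scripts
REM upload the conf directory to ZooKeeper under the name myconfname
zkcli.bat -cmd upconfig -confdir \searchserver\solr\server\solr\corename\conf -confname myconfname -z zoo1:2181,zoo2:2181,zoo3:2181
```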
Another way to do this is by adding
SOLR_OPTS="$SOLR_OPTS -Dbootstrap_confdir=./solr/corename/conf/"
SOLR_OPTS="$SOLR_OPTS -Dcollection.configName=myconfname"
to the solr.in.sh file, then (re)starting Solr. In Windows, the file is solr.in.cmd, and you add the following lines:
set SOLR_OPTS=%SOLR_OPTS% -Dbootstrap_confdir=./solr/corename/conf/
set SOLR_OPTS=%SOLR_OPTS% -Dcollection.configName=myconfname
The solr.in.sh/solr.in.cmd file is included in the solr (solr.cmd) command that you use to start the Solr server. myconfname above (in both methods) is an arbitrary name you give to identify the set of config files you've uploaded to ZooKeeper. You can then create the core using the Collections API:
http://localhost:8983/solr/admin/collections?action=CREATE&name=coreName&numShards=2&shards=shard1,shard2&collection.configName=myconfname&createNodeSet=localhost:8983_solr

Related

Cannot apply patch LUCENE-2899.patch to Solr on Windows

I am trying to apply patch LUCENE-2899.patch to Solr.
I have done this:
Cloned Solr from the official repo (I am on the master branch)
Downloaded and installed Ant and GNU patch, which I got here: http://gnuwin32.sourceforge.net/packages/patch.htm
Added Ant and GNU patch to the PATH environment variable.
And I got this:
```
D:\utils\solr_master\lucene-solr>patch -p1 -i LUCENE-2899.patch --dry-run
patching file dev-tools/idea/.idea/ant.xml
Assertion failed: hunk, file ../patch-2.5.9-src/patch.c, line 354
This application has requested the Runtime to terminate it in an unusual way.
Please contact the application's support team for more information.
```
UPDATE 1
I tried to compile, but the build failed:
D:\utils\solr_master\lucene-solr>ant compile
Buildfile: D:\utils\solr_master\lucene-solr\build.xml
BUILD FAILED
D:\utils\solr_master\lucene-solr\build.xml:21: The following error occurred while executing this line:
D:\utils\solr_master\lucene-solr\lucene\common-build.xml:623: java.lang.NullPointerException
at java.util.Arrays.stream(Arrays.java:5004)
at java.util.stream.Stream.of(Stream.java:1000)
at java.util.stream.ReferencePipeline$7$1.accept(ReferencePipeline.java:267)
at java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193)
at java.util.Spliterators$ArraySpliterator.forEachRemaining(Spliterators.java:948)
at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481)
at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471)
at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:545)
at java.util.stream.AbstractPipeline.evaluateToArrayNode(AbstractPipeline.java:260)
at java.util.stream.ReferencePipeline.toArray(ReferencePipeline.java:438)
at org.apache.tools.ant.util.ChainedMapper.lambda$mapFileName$1(ChainedMapper.java:36)
at java.util.stream.ReduceOps$1ReducingSink.accept(ReduceOps.java:80)
at java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1382)
at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481)
at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471)
at java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:708)
at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
at java.util.stream.ReferencePipeline.reduce(ReferencePipeline.java:484)
at org.apache.tools.ant.util.ChainedMapper.mapFileName(ChainedMapper.java:35)
at org.apache.tools.ant.util.CompositeMapper.lambda$mapFileName$0(CompositeMapper.java:32)
at java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193)
at java.util.stream.ReferencePipeline$2$1.accept(ReferencePipeline.java:175)
at java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1382)
at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481)
at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471)
at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:545)
at java.util.stream.AbstractPipeline.evaluateToArrayNode(AbstractPipeline.java:260)
at java.util.stream.ReferencePipeline.toArray(ReferencePipeline.java:438)
at org.apache.tools.ant.util.CompositeMapper.mapFileName(CompositeMapper.java:33)
at org.apache.tools.ant.taskdefs.PathConvert.execute(PathConvert.java:363)
at org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:292)
at sun.reflect.GeneratedMethodAccessor4.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.tools.ant.dispatch.DispatchUtils.execute(DispatchUtils.java:106)
at org.apache.tools.ant.Task.perform(Task.java:346)
at org.apache.tools.ant.Target.execute(Target.java:448)
at org.apache.tools.ant.helper.ProjectHelper2.parse(ProjectHelper2.java:172)
at org.apache.tools.ant.taskdefs.ImportTask.importResource(ImportTask.java:221)
at org.apache.tools.ant.taskdefs.ImportTask.execute(ImportTask.java:165)
at org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:292)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.tools.ant.dispatch.DispatchUtils.execute(DispatchUtils.java:106)
at org.apache.tools.ant.Task.perform(Task.java:346)
at org.apache.tools.ant.Target.execute(Target.java:448)
at org.apache.tools.ant.helper.ProjectHelper2.parse(ProjectHelper2.java:183)
at org.apache.tools.ant.ProjectHelper.configureProject(ProjectHelper.java:93)
at org.apache.tools.ant.Main.runBuild(Main.java:824)
at org.apache.tools.ant.Main.startAnt(Main.java:228)
at org.apache.tools.ant.launch.Launcher.run(Launcher.java:283)
at org.apache.tools.ant.launch.Launcher.main(Launcher.java:101)
Total time: 0 seconds
UPDATE 2
I have downloaded Solr from
https://builds.apache.org/job/Solr-Artifacts-7.3/lastSuccessfulBuild/artifact/solr/package/ and https://builds.apache.org/job/Solr-Artifacts-master/lastSuccessfulBuild/artifact/solr/package/
but I don't see an opennlp dir in the contrib dir for either the 7.3 version or the 8.0 (master) version. Where can I find it?
UPDATE 3
I have run the version from the master branch, which I downloaded here: https://builds.apache.org/job/Solr-Artifacts-master/lastSuccessfulBuild/artifact/solr/package/ and I have been trying to run OpenNLP like the gentleman in this post:
Exception while integrating openNLP with Solr
But I get the same error as he does.
numberplate_shard1_replica_n1:
org.apache.solr.common.SolrException:org.apache.solr.common.SolrException: Could not load conf for core numberplate_shard1_replica_n1: Can't load schema managed-schema: Plugin init failure for [schema.xml] fieldType "text_opennlp_nvf": Plugin init failure for [schema.xml] analyzer/tokenizer: Error instantiating class: 'org.apache.lucene.analysis.opennlp.OpenNLPTokenizerFactory'
If patch LUCENE-2899 is merged into master, why do I get this error?
UPDATE 5
I restarted Solr and the errors were gone. But...
I then tried to add fields (to managed-schema) from the example (https://wiki.apache.org/solr/OpenNLP):
<fieldType name="text_opennlp" class="solr.TextField">
  <analyzer>
    <tokenizer class="solr.OpenNLPTokenizerFactory"
               sentenceModel="opennlp/en-sent.bin"
               tokenizerModel="opennlp/en-token.bin"
    />
  </analyzer>
</fieldType>
<field name="content" type="text_opennlp" indexed="true" termOffsets="true" stored="true" termPayloads="true" termPositions="true" docValues="false" termVectors="true" multiValued="true" required="true"/>
But when I try to run Solr in Cloud mode I got this:
D:\utils\solr-7.3.0-7\solr-7.3.0-7\bin>solr -e cloud
Welcome to the SolrCloud example!
This interactive session will help you launch a SolrCloud cluster on your local workstation.
To begin, how many Solr nodes would you like to run in your local cluster? (specify 1-4 nodes) [2]:
1
Ok, let's start up 1 Solr nodes for your example SolrCloud cluster.
Please enter the port for node1 [8983]:
Solr home directory D:\utils\solr-7.3.0-7\solr-7.3.0-7\example\cloud\node1\solr already exists.
Starting up Solr on port 8983 using command:
"D:\utils\solr-7.3.0-7\solr-7.3.0-7\bin\solr.cmd" start -cloud -p 8983 -s "D:\utils\solr-7.3.0-7\solr-7.3.0-7\example\cloud\node1\solr"
Waiting up to 30 to see Solr running on port 8983
Started Solr server on port 8983. Happy searching!
INFO - 2018-03-26 14:42:26.961; org.apache.solr.client.solrj.impl.ZkClientClusterStateProvider; Cluster at localhost:9983 ready
Now let's create a new collection for indexing documents in your 1-node cluster.
Please provide a name for your new collection: [gettingstarted]
numberplate
Collection 'numberplate' already exists!
Do you want to re-use the existing collection or create a new one? Enter 1 to reuse, 2 to create new [1]:
1
Enabling auto soft-commits with maxTime 3 secs using the Config API
POSTing request to Config API: http://localhost:8983/solr/numberplate/config
{"set-property":{"updateHandler.autoSoftCommit.maxTime":"3000"}}
ERROR: Error from server at http://localhost:8983/solr: Expected mime type application/octet-stream but got text/html. <html>
<head>
<meta http-equiv="Content-Type" content="text/html;charset=utf-8"/>
<title>Error 404 Not Found</title>
</head>
<body><h2>HTTP ERROR 404</h2>
<p>Problem accessing /solr/numberplate/config. Reason:
<pre> Not Found</pre></p>
</body>
</html>
SolrCloud example running, please visit: http://localhost:8983/solr
D:\utils\solr-7.3.0-7\solr-7.3.0-7\bin>
UPDATE 6
I have created a new collection and I get a more precise error:
test_collection_shard1_replica_n1: org.apache.solr.common.SolrException:org.apache.solr.common.SolrException: Could not load conf for core test_collection_shard1_replica_n1: Can't load schema managed-schema: org.apache.solr.core.SolrResourceNotFoundException: Can't find resource 'opennlp/en-sent.bin' in classpath or '/configs/_default', cwd=D:\utils\solr-7.3.0-7\solr-7.3.0-7\server
Please check your logs for more information
Maybe I need to copy the OpenNLP models (http://opennlp.sourceforge.net/models-1.5/) somewhere. But where can I put these models?
Can you help me? What am I doing wrong?
As you can see on LUCENE-2899, the patch is already applied to 8.0 (master), as well as 7.3.
You can find pre-built nightlies at Solr-Artifacts-master for (currently) 8.0 and at Solr-Artifacts-7.3 for 7.3.
The opennlp libraries are bundled inside the artifacts:
solr-8.0.0-3304 find . -name '*nlp*'
[...]
./contrib/langid/lib/opennlp-tools-1.8.3.jar
./contrib/analysis-extras/lib/opennlp-maxent-3.0.3.jar
./contrib/analysis-extras/lib/opennlp-tools-1.8.3.jar
./contrib/analysis-extras/lucene-libs/lucene-analyzers-opennlp-8.0.0-3304.jar
You then have to tell Solr to load these jars, which you can do through solrconfig.xml.
<lib dir="../../../contrib/analysis-extras/lib/" regex="opennlp-.*\.jar" />
<lib dir="../../../contrib/analysis-extras/lucene-libs/" regex="lucene-analyzers-opennlp-.*\.jar" />
Confirm that the jars are loaded as you expect in Solr's log file.
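For a quick check, something like the following works (assuming the default log location under the server directory of a standard install; adjust the path to your setup):
```
grep -i opennlp server/logs/solr.log
```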

Nutch 1.11 crawl Issue

I have followed the tutorial and configured Nutch to run on Windows 7 using Cygwin, and I'm using Solr 5.4.0 to index the data.
But Nutch 1.11 is having a problem executing a crawl.
Crawl Command
$ bin/crawl -i -D solr.server.url=http://127.0.0.1:8983/solr /urls /TestCrawl 2
Error/Exception
Injecting seed URLs /apache-nutch-1.11/bin/nutch inject /TestCrawl/crawldb /urls
Injector: starting at 2016-01-19 17:11:06
Injector: crawlDb: /TestCrawl/crawldb
Injector: urlDir: /urls
Injector: Converting injected urls to crawl db entries.
Injector: java.lang.NullPointerException
at java.lang.ProcessBuilder.start(ProcessBuilder.java:1012)
at org.apache.hadoop.util.Shell.runCommand(Shell.java:445)
at org.apache.hadoop.util.Shell.run(Shell.java:418)
at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:650)
at org.apache.hadoop.util.Shell.execCommand(Shell.java:739)
at org.apache.hadoop.util.Shell.execCommand(Shell.java:722)
at org.apache.hadoop.fs.RawLocalFileSystem.setPermission(RawLocalFileSystem.java:633)
at org.apache.hadoop.fs.RawLocalFileSystem.mkdirs(RawLocalFileSystem.java:421)
at org.apache.hadoop.fs.FilterFileSystem.mkdirs(FilterFileSystem.java:281)
at org.apache.hadoop.mapreduce.JobSubmissionFiles.getStagingDir(JobSubmissionFiles.java:125)
at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:348)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1285)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1282)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
at org.apache.hadoop.mapreduce.Job.submit(Job.java:1282)
at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:562)
at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:557)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:557)
at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:548)
at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:833)
at org.apache.nutch.crawl.Injector.inject(Injector.java:323)
at org.apache.nutch.crawl.Injector.run(Injector.java:379)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.nutch.crawl.Injector.main(Injector.java:369)
Error running:
/home/apache-nutch-1.11/bin/nutch inject /TestCrawl/crawldb /urls
Failed with exit value 127.
I can see there are multiple problems with your command; try this:
bin/crawl -i -Dsolr.server.url=http://127.0.0.1:8983/solr/core_name path_to_seed crawl 2
The first problem is the space after -D when you pass the solr.server.url parameter. The second problem is that the Solr URL should include the core name as well.
The hadoop-core jar file is needed when you are working with Nutch. The hadoop-core jar compatible with Nutch 1.11 is 0.20.0. Please download the jar from this link:
http://www.java2s.com/Code/Jar/h/Downloadhadoop0200corejar.htm
Paste that jar into the "C:\cygwin64\home\apache-nutch-1.11\lib" folder and it will run successfully.

Solr 5.3 Zookeeper Ensemble create_collection timeout 180s

I have 3 servers, each running Solr 5.3 and ZooKeeper (solr-cloud-01/zookeeper-01, solr-cloud-02/zookeeper-02 & solr-cloud-03/zookeeper-03).
ZooKeeper is up and running; one of the servers is the leader, the others are followers:
# zkServer.sh status
If I try to create a Solr collection, the config is uploaded correctly to ZooKeeper, but the core itself is not created; the request times out after 180s:
# solr create_collection -c [collection_name] -d [config_name]
Connecting to ZooKeeper at zookeeper-01:2181,zookeeper-02:2181,zookeeper-03:2181 ...
Uploading /opt/solr/server/solr/configsets/[config_name]/conf for config
[collection_name] to ZooKeeper at zookeeper-01:2181,zookeeper-02:2181,zookeeper-03:2181
(or)
Re-using existing configuration directory [collection_name]
next:
Creating new collection '[collection_name]' using command:
http://localhost:8983/solr/admin/collections?action=CREATE&name=
[collection_name]&numShards=1&replicationFactor=1&maxShardsPerNode=1&
collection.configName=[collection_name]
ERROR: Failed to create collection '[collection_name]' due to:
create the collection time out:180s
The Solr admin console log shows 2 identical error messages, one from SolrCore, the other from SolrDispatchFilter:
null:org.apache.solr.common.SolrException: create the collection time out:180s
at org.apache.solr.handler.admin.CollectionsHandler.handleResponse(CollectionsHandler.java:239)
at org.apache.solr.handler.admin.CollectionsHandler.handleRequestBody(CollectionsHandler.java:170)
at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:143)
at org.apache.solr.servlet.HttpSolrCall.handleAdminRequest(HttpSolrCall.java:675)
at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:443)
at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:214)
at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:179)
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652)
at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:585)
at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:577)
at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:223)
at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1127)
at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:515)
at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1061)
at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
at org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:215)
at org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:110)
at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)
at org.eclipse.jetty.server.Server.handle(Server.java:499)
at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:310)
at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:257)
at org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:540)
at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:635)
at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:555)
at java.lang.Thread.run(Thread.java:745)
If I then edit /opt/zookeeper/conf/zoo.cfg and comment out the other ZooKeeper servers (reducing the ensemble to 1 server):
server.1=zookeeper-01:2888:3888
#server.2=zookeeper-02:2888:3888
#server.3=zookeeper-03:2888:3888
And change the ZK_HOST option in /var/solr/solr.in.sh:
#ZK_HOST="zookeeper-01:2181,zookeeper-02:2181,zookeeper-03:2181"
ZK_HOST="zookeeper-01:2181"
And restart both ZooKeeper and Solr => the core is created (it was queued somehow?), but it is offline because the quorum was down (1 of 3 ZooKeeper nodes).
So then I experimented with a standalone Solr/ZooKeeper setup (solr-cloud-01/zookeeper-01):
# zkServer.sh status
JMX enabled by default
Using config: /opt/zookeeper/bin/../conf/zoo.cfg
Mode: standalone
I executed the same command:
# solr create_collection -c [collection_name] -d [config_name]
Connecting to ZooKeeper at zookeeper-01:2181 ...
Uploading /opt/solr/server/solr/configsets/[config_name]/conf for config [collection_name]
to ZooKeeper at zookeeper-01:2181
Creating new collection '[collection_name]' using command:
http://localhost:8983/solr/admin/collections?action=CREATE
&name=[collection_name]&numShards=1&replicationFactor=1&
maxShardsPerNode=1&collection.configName=[collection_name]
{
  "responseHeader":{
    "status":0,
    "QTime":9417},
  "success":{"":{
      "responseHeader":{
        "status":0,
        "QTime":8869},
      "core":"[collection_name]_shard1_replica1"}}}
So that works!
In conclusion, I have the feeling that some routes are not correctly configured, but I can't seem to find out which, because ZooKeeper seems to work, and all the individual Solr instances do as well.
Here is my hosts file:
127.0.0.1 localhost
10.0.0.1 solr-cloud-01
10.0.0.2 solr-cloud-02
10.0.0.3 solr-cloud-03
10.0.0.1 zookeeper-01
10.0.0.2 zookeeper-02
10.0.0.3 zookeeper-03
So, I finally found the answer!
After inspecting /clusterstate.json via zkCli.sh, I saw that while disconnected, 3 'rogue' replicas had been made on the standalone cluster, all pointing to 127.0.1.1 (which is a Debian-specific loopback to localhost; see https://www.debian.org/doc/manuals/debian-reference/ch05.en.html#_the_hostname_resolution).
The clue was in my hosts file.
So when I changed all hostname references from 127.0.1.1 to the outside IP (in my case 10.0.0.x), it started working!
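For illustration, the offending Debian default typically looks like the first entry below (hostname hypothetical); the fix is to resolve the hostname to the externally reachable address instead:
```
# Debian default: the machine's own hostname points at the 127.0.1.1 loopback,
# so Solr registers replicas under an address other nodes cannot reach
127.0.1.1   solr-cloud-01

# Fixed: resolve the hostname to the externally reachable IP
10.0.0.1    solr-cloud-01
```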
My new hosts file:
127.0.0.1 localhost
10.0.0.1 solr-cloud-01
10.0.0.2 solr-cloud-02
10.0.0.3 solr-cloud-03
10.0.0.1 zookeeper-01
10.0.0.2 zookeeper-02
10.0.0.3 zookeeper-03

Error deploying configuration descriptor Solr

I have done the steps below for Solr integration with Tomcat on a Windows machine. Can you please clarify what I am doing wrong here?
1) Downloaded Solr and unzipped Solr 5.2.1 to the directory C:\downloads\solr-5.2.1\solr-5.2.1.
2) Downloaded the Tomcat 7 zipped version and unzipped it to C:\downloads\apache-tomcat-7.0.62\apache-tomcat-7.0.62.
3) Copied the jar files from the C:\downloads\solr-5.2.1\solr-5.2.1\dist\solrj-lib directory to the C:\downloads\apache-tomcat-7.0.62\apache-tomcat-7.0.62\lib directory.
4) Created a solr.xml in the C:\downloads\apache-tomcat-7.0.62\apache-tomcat-7.0.62\conf\Catalina\localhost folder:
<?xml version='1.0' encoding='UTF-8'?>
<context docBase="C:/downloads/apache-tomcat-7.0.62/apache-tomcat-7.0.62/webapps/solr.war" debug="0" crossContext="true" >
<environment name="solr" type="java.lang.String" value="/apache-tomcat-7.0.62/webapps/" override="true"></environment>
</context>
5) Copied the solr.war file from C:\downloads\solr-5.2.1\solr-5.2.1\server\webapps to the C:\downloads\apache-tomcat-7.0.62\apache-tomcat-7.0.62\webapps folder.
6) Started Tomcat using the startup.bat command in the bin folder.
7) Edited web.xml to:
<env-entry>
<env-entry-name>solr/home</env-entry-name>
<env-entry-value>C:/downloads/solr-5.2.1/solr-5.2.1</env-entry-value>
<env-entry-type>java.lang.String</env-entry-type>
</env-entry>
8) Restarted Tomcat and hit the URL http://localhost:8080/solr; I get a 404 Not Found error. The error in the console is:
SEVERE: Error deploying configuration descriptor C:\downloads\apache-tomcat-7.0.62\apache-tomcat-7.0.62\conf\Catalina\localhost\solr.xml
java.lang.NullPointerException
at org.apache.catalina.startup.HostConfig.deployDescriptor(HostConfig.java:645)
The Solr wiki states that running 5.x versions on Tomcat is no longer supported:
Internally, Solr is still implemented via Servlet APIs and is powered by Jetty -- but this is simply an implementation detail. Deployment as a "webapp" to other Servlet Containers (or other instances of Jetty) is not supported, and may not work in future 5.x versions of Solr when additional changes are likely to be made to Solr internally to leverage custom networking stack features.

SOLR HTTP 500 Can't find resource 'solrconfig.xml'

I have Apache Solr working with ColdFusion on my local machine; however, when I tried to make the move to production (the environments are different), I keep getting the HTTP 500 message below. The production environment is Ubuntu Lucid, Apache, and ColdFusion 9.0.1, using the version of Solr installed with ColdFusion.
The path for solrconfig.xml in the error message, "/opt/jrun4/servers/prod-autofeed1/cfusion.ear/cfusion.war/WEB-INF/cfusion/collections/autofeed/conf/" is correct.
Any suggestions? Thank you.
HTTP ERROR: 500
Severe errors in solr configuration.
Check your log files for more detailed information on what may be wrong.
If you want solr to continue after configuration errors, change:
<abortOnConfigurationError>false</abortOnConfigurationError>
in solr.xml
-------------------------------------------------------------
java.lang.RuntimeException: Can't find resource 'solrconfig.xml' in classpath or '/opt/jrun4/servers/prod-autofeed1/cfusion.ear/cfusion.war/WEB-INF/cfusion/collections/autofeed/conf/', cwd=/opt/jrun4/servers/cfusion/cfusion-ear/cfusion-war/WEB-INF/cfusion/solr
at org.apache.solr.core.SolrResourceLoader.openResource(SolrResourceLoader.java:260)
at org.apache.solr.core.SolrResourceLoader.openConfig(SolrResourceLoader.java:228)
at org.apache.solr.core.Config.<init>(Config.java:101)
at org.apache.solr.core.SolrConfig.<init>(SolrConfig.java:130)
at org.apache.solr.core.CoreContainer.create(CoreContainer.java:405)
at org.apache.solr.core.CoreContainer.load(CoreContainer.java:278)
at org.apache.solr.core.CoreContainer$Initializer.initialize(CoreContainer.java:117)
at org.apache.solr.servlet.SolrDispatchFilter.init(SolrDispatchFilter.java:83)
at org.mortbay.jetty.servlet.FilterHolder.doStart(FilterHolder.java:99)
at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:40)
at org.mortbay.jetty.servlet.ServletHandler.initialize(ServletHandler.java:594)
at org.mortbay.jetty.servlet.Context.startContext(Context.java:139)
at org.mortbay.jetty.webapp.WebAppContext.startContext(WebAppContext.java:1218)
at org.mortbay.jetty.handler.ContextHandler.doStart(ContextHandler.java:500)
at org.mortbay.jetty.webapp.WebAppContext.doStart(WebAppContext.java:448)
at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:40)
at org.mortbay.jetty.handler.HandlerCollection.doStart(HandlerCollection.java:147)
at org.mortbay.jetty.handler.ContextHandlerCollection.doStart(ContextHandlerCollection.java:161)
at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:40)
at org.mortbay.jetty.handler.HandlerCollection.doStart(HandlerCollection.java:147)
at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:40)
at org.mortbay.jetty.handler.HandlerWrapper.doStart(HandlerWrapper.java:117)
at org.mortbay.jetty.Server.doStart(Server.java:210)
at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:40)
at org.mortbay.xml.XmlConfiguration.main(XmlConfiguration.java:929)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
at java.lang.reflect.Method.invoke(Unknown Source)
at org.mortbay.start.Main.invokeMain(Main.java:183)
at org.mortbay.start.Main.start(Main.java:497)
at org.mortbay.start.Main.main(Main.java:115)
RequestURI=/solr/
Powered by Jetty://
Double-check permissions on the directory /opt/jrun4/servers/prod-autofeed1/cfusion.ear/cfusion.war/WEB-INF/cfusion/collections/autofeed/conf and the file /opt/jrun4/servers/prod-autofeed1/cfusion.ear/cfusion.war/WEB-INF/cfusion/collections/autofeed/conf/solrconfig.xml. If the user Solr runs as can't read the dir/file, that'd do it. To test, you might even su to the user in question and simply try to cat the config file.
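A minimal sketch of that test, assuming the ColdFusion/JRun service runs as a hypothetical user jrun (substitute the actual service user on your system):
```
# inspect ownership and permissions on the conf directory and the file itself
ls -ld /opt/jrun4/servers/prod-autofeed1/cfusion.ear/cfusion.war/WEB-INF/cfusion/collections/autofeed/conf
ls -l /opt/jrun4/servers/prod-autofeed1/cfusion.ear/cfusion.war/WEB-INF/cfusion/collections/autofeed/conf/solrconfig.xml

# try to read the file as the service user; "Permission denied" confirms the problem
su -s /bin/sh jrun -c 'cat /opt/jrun4/servers/prod-autofeed1/cfusion.ear/cfusion.war/WEB-INF/cfusion/collections/autofeed/conf/solrconfig.xml > /dev/null'
```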
