Solr 4.1 snapPuller replication failure in master/slave setup

We have a master/slave setup for Solr, running Solr 4.1.
We are experiencing errors during replication on the slave. It appears to happen when replication runs under load, but we cannot reproduce the error consistently. When a replication failure occurs, it leaves the slave in a bad state: the slave continuously throws exceptions when it tries to replicate on the next attempt.
In addition, temporary index directories under the data folder are sometimes left behind and not cleaned up.
Currently, solrconfig.xml is configured to use SimpleFSDirectoryFactory.
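For context, the slave-side replication configuration in solrconfig.xml follows the standard Solr 4.x pattern, roughly as in the sketch below (the master URL and poll interval shown here are placeholders, not our actual values):
<directoryFactory name="DirectoryFactory" class="solr.SimpleFSDirectoryFactory"/>
<requestHandler name="/replication" class="solr.ReplicationHandler">
  <lst name="slave">
    <str name="masterUrl">http://master-host:8983/solr/product</str>
    <str name="pollInterval">00:00:60</str>
  </lst>
</requestHandler>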
Any help would be appreciated. Stack traces from the slave are below:
09 Aug 2013 17:29:02,292 ERROR snapPuller-27-thread-1 org.apache.solr.handler.ReplicationHandler - SnapPull failed :org.apache.solr.common.SolrException: Error opening new searcher
at org.apache.solr.core.SolrCore.openNewSearcher(SolrCore.java:1423)
at org.apache.solr.core.SolrCore.getSearcher(SolrCore.java:1535)
at org.apache.solr.handler.SnapPuller.doCommit(SnapPuller.java:649)
at org.apache.solr.handler.SnapPuller.fetchLatestIndex(SnapPuller.java:463)
at org.apache.solr.handler.ReplicationHandler.doFetch(ReplicationHandler.java:274)
at org.apache.solr.handler.SnapPuller$1.run(SnapPuller.java:224)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask$Sync.innerRunAndReset(FutureTask.java:351)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:178)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:178)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:722)
Caused by: java.io.EOFException: read past EOF: SimpleFSIndexInput(path="/opt/app/solr/home-search/multicore/product/data/index.20130809172900006/_p_2.del")
at org.apache.lucene.store.BufferedIndexInput.refill(BufferedIndexInput.java:266)
at org.apache.lucene.store.BufferedIndexInput.readByte(BufferedIndexInput.java:51)
at org.apache.lucene.store.DataInput.readInt(DataInput.java:84)
at org.apache.lucene.store.BufferedIndexInput.readInt(BufferedIndexInput.java:181)
at org.apache.lucene.codecs.lucene40.BitVector.<init>(BitVector.java:346)
at org.apache.lucene.codecs.lucene40.Lucene40LiveDocsFormat.readLiveDocs(Lucene40LiveDocsFormat.java:90)
at org.apache.lucene.index.SegmentReader.<init>(SegmentReader.java:62)
at org.apache.lucene.index.ReadersAndLiveDocs.getReader(ReadersAndLiveDocs.java:121)
at org.apache.lucene.index.ReadersAndLiveDocs.getReadOnlyClone(ReadersAndLiveDocs.java:218)
at org.apache.lucene.index.StandardDirectoryReader.open(StandardDirectoryReader.java:96)
at org.apache.lucene.index.IndexWriter.getReader(IndexWriter.java:369)
at org.apache.lucene.index.StandardDirectoryReader.doOpenIfChanged(StandardDirectoryReader.java:257)
at org.apache.lucene.index.DirectoryReader.openIfChanged(DirectoryReader.java:249)
at org.apache.solr.core.SolrCore.openNewSearcher(SolrCore.java:1361)
... 13 more
...
09 Aug 2013 17:29:02,408 ERROR snapPuller-263-thread-1 org.apache.solr.handler.ReplicationHandler - SnapPull failed :org.apache.solr.common.SolrException: Index fetch failed :
at org.apache.solr.handler.SnapPuller.fetchLatestIndex(SnapPuller.java:476)
at org.apache.solr.handler.ReplicationHandler.doFetch(ReplicationHandler.java:274)
at org.apache.solr.handler.SnapPuller$1.run(SnapPuller.java:224)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask$Sync.innerRunAndReset(FutureTask.java:351)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:178)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:178)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:722)
Caused by: java.lang.IllegalArgumentException: Unknown directory: org.apache.lucene.store.SimpleFSDirectory@/opt/app/solr/home-search/multicore/product/data/index.20130809172900006
at org.apache.solr.core.CachingDirectoryFactory.addCloseListener(CachingDirectoryFactory.java:87)
at org.apache.solr.handler.SnapPuller.fetchLatestIndex(SnapPuller.java:434)
... 10 more
...
09 Aug 2013 17:27:00,006 WARN snapPuller-191-thread-1 org.apache.solr.handler.SnapPuller - Exception while updating statistics
java.lang.RuntimeException: Already closed
at org.apache.solr.core.CachingDirectoryFactory.get(CachingDirectoryFactory.java:246)
at org.apache.solr.core.CachingDirectoryFactory.get(CachingDirectoryFactory.java:231)
at org.apache.solr.handler.SnapPuller.logReplicationTimeAndConfFiles(SnapPuller.java:548)
at org.apache.solr.handler.SnapPuller.fetchLatestIndex(SnapPuller.java:488)
at org.apache.solr.handler.ReplicationHandler.doFetch(ReplicationHandler.java:274)
at org.apache.solr.handler.SnapPuller$1.run(SnapPuller.java:224)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask$Sync.innerRunAndReset(FutureTask.java:351)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:178)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:178)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:722)
09 Aug 2013 17:27:00,007 ERROR snapPuller-191-thread-1 org.apache.solr.handler.ReplicationHandler - SnapPull failed :java.lang.RuntimeException: Already closed
at org.apache.solr.core.CachingDirectoryFactory.get(CachingDirectoryFactory.java:246)
at org.apache.solr.core.CachingDirectoryFactory.get(CachingDirectoryFactory.java:231)
at org.apache.solr.handler.SnapPuller.fetchLatestIndex(SnapPuller.java:380)
at org.apache.solr.handler.ReplicationHandler.doFetch(ReplicationHandler.java:274)
at org.apache.solr.handler.SnapPuller$1.run(SnapPuller.java:224)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask$Sync.innerRunAndReset(FutureTask.java:351)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:178)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:178)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:722)

Related

org.apache.flink.client.program.ProgramInvocationException: Could not retrieve the execution result

I am trying to start a Flink batch job on an AWS EMR cluster and am getting:
The program finished with the following exception:
org.apache.flink.client.program.ProgramInvocationException: Could not retrieve the execution result. (JobID: c7362754c49bdae8e9a46748d47bc180)
at org.apache.flink.client.program.rest.RestClusterClient.submitJob(RestClusterClient.java:260)
at org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:486)
at org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:474)
at org.apache.flink.client.program.ContextEnvironment.execute(ContextEnvironment.java:62)
at com.stolencamerafinder.flink.BatchJob.main(BatchJob.java:69)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:529)
at org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:421)
at org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:426)
at org.apache.flink.client.cli.CliFrontend.executeProgram(CliFrontend.java:804)
at org.apache.flink.client.cli.CliFrontend.runProgram(CliFrontend.java:280)
at org.apache.flink.client.cli.CliFrontend.run(CliFrontend.java:215)
at org.apache.flink.client.cli.CliFrontend.parseParameters(CliFrontend.java:1044)
at org.apache.flink.client.cli.CliFrontend.lambda$main$11(CliFrontend.java:1120)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1844)
at org.apache.flink.runtime.security.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41)
at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:1120)
Caused by: org.apache.flink.runtime.client.JobSubmissionException: Failed to submit JobGraph.
at org.apache.flink.client.program.rest.RestClusterClient.lambda$submitJob$8(RestClusterClient.java:379)
at java.util.concurrent.CompletableFuture.uniExceptionally(CompletableFuture.java:870)
at java.util.concurrent.CompletableFuture$UniExceptionally.tryFire(CompletableFuture.java:852)
at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:474)
at java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:1977)
at org.apache.flink.runtime.concurrent.FutureUtils.lambda$retryOperationWithDelay$5(FutureUtils.java:213)
at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:760)
at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:736)
at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:474)
at java.util.concurrent.CompletableFuture.postFire(CompletableFuture.java:561)
at java.util.concurrent.CompletableFuture$UniCompose.tryFire(CompletableFuture.java:929)
at java.util.concurrent.CompletableFuture$Completion.run(CompletableFuture.java:442)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.util.concurrent.CompletionException: org.apache.flink.runtime.concurrent.FutureUtils$RetryException: Could not complete the operation. Exception is not retryable.
at java.util.concurrent.CompletableFuture.encodeRelay(CompletableFuture.java:326)
at java.util.concurrent.CompletableFuture.completeRelay(CompletableFuture.java:338)
at java.util.concurrent.CompletableFuture.uniRelay(CompletableFuture.java:911)
at java.util.concurrent.CompletableFuture$UniRelay.tryFire(CompletableFuture.java:899)
... 12 more
Caused by: org.apache.flink.runtime.concurrent.FutureUtils$RetryException: Could not complete the operation. Exception is not retryable.
... 10 more
Caused by: java.util.concurrent.CompletionException: org.apache.flink.runtime.rest.util.RestClientException: [Job submission failed.]
at java.util.concurrent.CompletableFuture.encodeRelay(CompletableFuture.java:326)
at java.util.concurrent.CompletableFuture.completeRelay(CompletableFuture.java:338)
at java.util.concurrent.CompletableFuture.uniRelay(CompletableFuture.java:911)
at java.util.concurrent.CompletableFuture.uniCompose(CompletableFuture.java:953)
at java.util.concurrent.CompletableFuture$UniCompose.tryFire(CompletableFuture.java:926)
... 4 more
Caused by: org.apache.flink.runtime.rest.util.RestClientException: [Job submission failed.]
at org.apache.flink.runtime.rest.RestClient.parseResponse(RestClient.java:310)
at org.apache.flink.runtime.rest.RestClient.lambda$submitRequest$3(RestClient.java:294)
at java.util.concurrent.CompletableFuture.uniCompose(CompletableFuture.java:952)
... 5 more
I have no idea what the underlying cause is. Where can I find more details on why it went wrong?
On a similar note, I can't find a way to locate the logs for a successful job with Flink / EMR. Any ideas?
To find more details, you need the logs of the JobManager or the ResourceManager. If you are using YARN as the cluster resource management framework, as many developers do, you can access the ApplicationMaster's logs, which are the JobManager's logs in Flink.
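For example, once the YARN application has finished, the aggregated JobManager logs can usually be fetched with the YARN CLI (the application id below is a placeholder):
yarn logs -applicationId application_1234567890123_0001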
I ran into the same problem; the reason was that I had not started the Flink cluster:
./bin/start-cluster.sh
I hit this problem on my local Mac, in local mode.
The job runs fine as a Maven project, but throws the exception above when submitted to a local Flink cluster.
It was fixed after restarting my MacBook.

Apache Zeppelin error: local jar does not exist

java.lang.RuntimeException: Warning: Local jar C:\Zeppelin\zeppelin-0.8.0-bin-all\bin\54480 does not exist, skipping.
Warning: Local jar C:\Zeppelin\zeppelin-0.8.0-bin-all\bin\10.10.10.122 does not exist, skipping.
java.lang.ClassNotFoundException: org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer
at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:348)
at org.apache.spark.util.Utils$.classForName(Utils.scala:235)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:836)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:197)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:227)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:136)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
2018-09-18 10:33:29 INFO ShutdownHookManager:54 - Shutdown hook called
2018-09-18 10:33:29 INFO ShutdownHookManager:54 - Deleting directory C:\Users\Polichetti\AppData\Local\Temp\spark-e1cca18a-e05a-4539-b1b6-2f56a8ab27aa
at org.apache.zeppelin.interpreter.remote.RemoteInterpreterManagedProcess.start(RemoteInterpreterManagedProcess.java:205)
at org.apache.zeppelin.interpreter.ManagedInterpreterGroup.getOrCreateInterpreterProcess(ManagedInterpreterGroup.java:64)
at org.apache.zeppelin.interpreter.remote.RemoteInterpreter.getOrCreateInterpreterProcess(RemoteInterpreter.java:111)
at org.apache.zeppelin.interpreter.remote.RemoteInterpreter.internal_create(RemoteInterpreter.java:164)
at org.apache.zeppelin.interpreter.remote.RemoteInterpreter.open(RemoteInterpreter.java:132)
at org.apache.zeppelin.interpreter.remote.RemoteInterpreter.getFormType(RemoteInterpreter.java:299)
at org.apache.zeppelin.notebook.Paragraph.jobRun(Paragraph.java:407)
at org.apache.zeppelin.scheduler.Job.run(Job.java:188)
at org.apache.zeppelin.scheduler.RemoteScheduler$JobRunner.run(RemoteScheduler.java:307)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Please James, this is not the way Stack Overflow works... Do some research if you have an error log, and don't just dump it here.

java.util.concurrent.ExecutionException: org.apache.solr.common.SolrException, on Solr 5.5.3

2017-09-18 11:26:05.661 ERROR (coreContainerWorkExecutor-2-thread-1-processing-n:172.16.4.32:8983_solr) [ ] o.a.s.c.CoreContainer Error waiting for SolrCore to be created
java.util.concurrent.ExecutionException: org.apache.solr.common.SolrException: Unable to create core [wyCluster_shard5_replica2]
at java.util.concurrent.FutureTask.report(FutureTask.java:122)
at java.util.concurrent.FutureTask.get(FutureTask.java:192)
at org.apache.solr.core.CoreContainer$2.run(CoreContainer.java:472)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor$1.run(ExecutorUtil.java:210)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.solr.common.SolrException: Unable to create core
There are some errors in it.
When I start the Solr slave, it goes down soon afterwards.
It uses ZooKeeper.

org.apache.camel.component.file.GenericFileOperationFailedException - Cannot list directory:

There is a similar posting that claims to have the answer, but I'm still getting the error after putting stepwise=false in my Camel route. The exception:
14:35:33,649 WARN [org.apache.camel.component.file.remote.RemoteFilePollingConsumerPollStrategy] (Camel (camel-1) thread #13 - sftp://myftp:22) Trying to recover by disconnecting from remote server forcing a re-connect at next poll: sftp://myUser@myftp:22
14:35:33,654 WARN [org.apache.camel.component.file.remote.SftpConsumer] (Camel (camel-1) thread #13 - sftp://myftp:22) Consumer Consumer[sftp://myftp:22?delay=1h&delete=true&doneFileName=done&password=xxxxxx&sortBy=ignoreCase%3Afile%3Aname&stepwise=false&username=myUser] failed polling endpoint: Endpoint[sftp://myftp:22?delay=1h&delete=true&doneFileName=done&password=xxxxxx&sortBy=ignoreCase%3Afile%3Aname&stepwise=false&username=myUser]. Will try again at next poll. Caused by: [org.apache.camel.component.file.GenericFileOperationFailedException - Cannot list directory: .]: org.apache.camel.component.file.GenericFileOperationFailedException: Cannot list directory: .
at org.apache.camel.component.file.remote.SftpOperations.listFiles(SftpOperations.java:583) [camel-ftp-2.13.2.jar:2.13.2]
at org.apache.camel.component.file.remote.SftpConsumer.doPollDirectory(SftpConsumer.java:90) [camel-ftp-2.13.2.jar:2.13.2]
at org.apache.camel.component.file.remote.SftpConsumer.pollDirectory(SftpConsumer.java:52) [camel-ftp-2.13.2.jar:2.13.2]
at org.apache.camel.component.file.GenericFileConsumer.poll(GenericFileConsumer.java:119) [camel-core-2.13.2.jar:2.13.2]
at org.apache.camel.impl.ScheduledPollConsumer.doRun(ScheduledPollConsumer.java:187) [camel-core-2.13.2.jar:2.13.2]
at org.apache.camel.impl.ScheduledPollConsumer.run(ScheduledPollConsumer.java:114) [camel-core-2.13.2.jar:2.13.2]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) [rt.jar:1.7.0_65]
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:304) [rt.jar:1.7.0_65]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:178) [rt.jar:1.7.0_65]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293) [rt.jar:1.7.0_65]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) [rt.jar:1.7.0_65]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) [rt.jar:1.7.0_65]
at java.lang.Thread.run(Thread.java:745) [rt.jar:1.7.0_65]
Caused by: 4:
at com.jcraft.jsch.ChannelSftp.ls(ChannelSftp.java:1660) [jsch-0.1.49.jar:]
at com.jcraft.jsch.ChannelSftp.ls(ChannelSftp.java:1466) [jsch-0.1.49.jar:]
at org.apache.camel.component.file.remote.SftpOperations.listFiles(SftpOperations.java:574) [camel-ftp-2.13.2.jar:2.13.2]
... 12 more
Caused by: java.io.IOException: Pipe closed
at java.io.PipedInputStream.read(PipedInputStream.java:308) [rt.jar:1.7.0_65]
at com.jcraft.jsch.Channel$MyPipedInputStream.updateReadSide(Channel.java:344) [jsch-0.1.49.jar:]
at com.jcraft.jsch.ChannelSftp.ls(ChannelSftp.java:1483) [jsch-0.1.49.jar:]
... 14 more
Here is my route:
sftp://myftp:22?delay=1h&delete=true&doneFileName=done&password=xxxxxx&sortBy=ignoreCase%3Afile%3Aname&stepwise=false&username=myUser
Yeah, there is a java.io.IOException: Pipe closed.
I just checked the code of camel-ftp: it does check the connection, but it is hard to know whether the connection is still open without sending some bytes to the server socket.
One solution could be to have the FTP client send a ping message to check whether the connection is still open.
If you want a very long delay between FTP polls, the workaround is to set the disconnect option to true, which forces the FTP endpoint to open a new connection the next time it polls the directory on the FTP server, as shown below.
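For example, on the endpoint from the question the workaround would amount to adding disconnect=true (everything else unchanged; credentials are placeholders as in the question):
sftp://myftp:22?delay=1h&delete=true&doneFileName=done&password=xxxxxx&sortBy=ignoreCase%3Afile%3Aname&stepwise=false&username=myUser&disconnect=true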
I just created a JIRA for it.

Unable to load solrconfig.xml

I am using Apache Solr on my Drupal website.
Tomcat 6 is installed, and I have replaced the schema.xml, solrconfig.xml and protwords.txt files with the new files present in the module installation directory.
When I open localhost:8983, I get this error.
Log4j (org.slf4j.impl.Log4jLoggerFactory)
2528 [coreLoadExecutor-3-thread-1] ERROR org.apache.solr.core.CoreContainer – Failed to load file /opt/solr-4.5.1/example/solr/collection1/conf/solrconfig.xml
2529 [coreLoadExecutor-3-thread-1] ERROR org.apache.solr.core.CoreContainer – Unable to create core: egitraining-dev.esc.rl.ac.uk
org.apache.solr.common.SolrException: Could not load config file /opt/solr-4.5.1/example/solr/collection1/conf/solrconfig.xml
at org.apache.solr.core.CoreContainer.createFromLocal(CoreContainer.java:490)
at org.apache.solr.core.CoreContainer.create(CoreContainer.java:557)
at org.apache.solr.core.CoreContainer$1.call(CoreContainer.java:247)
at org.apache.solr.core.CoreContainer$1.call(CoreContainer.java:239)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
at java.util.concurrent.FutureTask.run(FutureTask.java:166)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
at java.util.concurrent.FutureTask.run(FutureTask.java:166)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1146)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:679)
Caused by: java.io.IOException: Can't find resource 'solrconfig.xml' in classpath or 'solr/collection1/conf/conf/', cwd=/opt/solr-4.5.1/example
at org.apache.solr.core.SolrResourceLoader.openResource(SolrResourceLoader.java:322)
at org.apache.solr.core.SolrResourceLoader.openConfig(SolrResourceLoader.java:287)
at org.apache.solr.core.Config.<init>(Config.java:116)
at org.apache.solr.core.Config.<init>(Config.java:86)
at org.apache.solr.core.SolrConfig.<init>(SolrConfig.java:129)
at org.apache.solr.core.CoreContainer.createFromLocal(CoreContainer.java:487)
... 11 more
2531 [coreLoadExecutor-3-thread-1] ERROR org.apache.solr.core.CoreContainer – null:org.apache.solr.common.SolrException: Unable to create core: egitraining-dev.esc.rl.ac.uk
at org.apache.solr.core.CoreContainer.recordAndThrow(CoreContainer.java:934)
at org.apache.solr.core.CoreContainer.create(CoreContainer.java:566)
at org.apache.solr.core.CoreContainer$1.call(CoreContainer.java:247)
at org.apache.solr.core.CoreContainer$1.call(CoreContainer.java:239)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
at java.util.concurrent.FutureTask.run(FutureTask.java:166)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
at java.util.concurrent.FutureTask.run(FutureTask.java:166)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1146)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:679)
Caused by: org.apache.solr.common.SolrException: Could not load config file /opt/solr-4.5.1/example/solr/collection1/conf/solrconfig.xml
at org.apache.solr.core.CoreContainer.createFromLocal(CoreContainer.java:490)
at org.apache.solr.core.CoreContainer.create(CoreContainer.java:557)
... 10 more
Caused by: java.io.IOException: Can't find resource 'solrconfig.xml' in classpath or 'solr/collection1/conf/conf/', cwd=/opt/solr-4.5.1/example
at org.apache.solr.core.SolrResourceLoader.openResource(SolrResourceLoader.java:322)
at org.apache.solr.core.SolrResourceLoader.openConfig(SolrResourceLoader.java:287)
at org.apache.solr.core.Config.<init>(Config.java:116)
at org.apache.solr.core.Config.<init>(Config.java:86)
at org.apache.solr.core.SolrConfig.<init>(SolrConfig.java:129)
at org.apache.solr.core.CoreContainer.createFromLocal(CoreContainer.java:487)
... 11 more
2533 [main] INFO org.apache.solr.servlet.SolrDispatchFilter – user.dir=/opt/solr-4.5.1/example
2533 [main] INFO org.apache.solr.servlet.SolrDispatchFilter – SolrDispatchFilter.init() done
2576 [main] INFO org.eclipse.jetty.server.AbstractConnector – Started SocketConnector@0.0.0.0:8983
Can anyone help, please?
Thanks.
This might have something to do with the default Solr config files provided by the Search API Solr module. Try removing the following lines from solrconfig.xml:
<useCompoundFile>false</useCompoundFile>
<ramBufferSizeMB>32</ramBufferSizeMB>
<mergeFactor>10</mergeFactor>
Patch found at https://drupal.org/comment/7945999#comment-7945999.
