SolrCloud 6 continuously reloading core every 10 seconds

I'm experiencing a core-reload loop. My current configuration is:
One VM, CentOS 7.1 (16 GB RAM, 4 cores)
3 ZooKeeper 3.4.9 instances (ports 2181, 2182, 2183)
3 Solr 6.5.1 instances (ports 8983, 8501, 8502)
1 collection named MERITO split into 2 shards (one on 8983, one on 8501), each shard replicated (both replicas on 8502).
The cluster is currently active and everything seems to work fine, but looking at the solr.log file I notice a loop of core reloads, repeating roughly every 10 seconds.
Here is the block that keeps repeating:
10:24:50.911 INFO (qtp575335780-18) [ ] o.a.s.c.CoreContainer Reloading SolrCore 'merito_shard1_replica2' using configuration from collection merito
10:24:50.934 INFO (qtp575335780-18) [c:merito s:shard1 r:core_node2 x:merito_shard1_replica2] o.a.s.c.SolrCore [[merito_shard1_replica2] ] Opening new SolrCore at [/home/user/Desktop/Solr/instance2/server/solr/merito_shard1_replica2], dataDir=[/home/user/Desktop/Solr/instance2/server/solr/merito_shard1_replica2/data/]
10:24:50.934 INFO (qtp575335780-18) [c:merito s:shard1 r:core_node2 x:merito_shard1_replica2] o.a.s.c.JmxMonitoredMap JMX monitoring is enabled. Adding Solr mbeans to JMX Server: com.sun.jmx.mbeanserver.JmxMBeanServer#66ea810
10:24:50.935 INFO (qtp575335780-18) [c:merito s:shard1 r:core_node2 x:merito_shard1_replica2] o.a.s.r.XSLTResponseWriter xsltCacheLifetimeSeconds=5
10:24:50.950 INFO (qtp575335780-18) [c:merito s:shard1 r:core_node2 x:merito_shard1_replica2] o.a.s.u.CommitTracker Hard AutoCommit: if uncommited for 15000ms;
10:24:50.950 INFO (qtp575335780-18) [c:merito s:shard1 r:core_node2 x:merito_shard1_replica2] o.a.s.u.CommitTracker Soft AutoCommit: disabled
10:24:50.953 INFO (qtp575335780-18) [c:merito s:shard1 r:core_node2 x:merito_shard1_replica2] o.a.s.s.SolrIndexSearcher Opening [Searcher#5747d7f5[merito_shard1_replica2] main]
10:24:50.955 INFO (qtp575335780-18) [c:merito s:shard1 r:core_node2 x:merito_shard1_replica2] o.a.s.r.ManagedResourceStorage Configured ZooKeeperStorageIO with znodeBase: /configs/merito
10:24:50.956 INFO (qtp575335780-18) [c:merito s:shard1 r:core_node2 x:merito_shard1_replica2] o.a.s.r.ManagedResourceStorage Loaded null at path _rest_managed.json using ZooKeeperStorageIO:path=/configs/merito
10:24:50.956 INFO (qtp575335780-18) [c:merito s:shard1 r:core_node2 x:merito_shard1_replica2] o.a.s.h.c.SpellCheckComponent Initializing spell checkers
10:24:50.956 INFO (qtp575335780-18) [c:merito s:shard1 r:core_node2 x:merito_shard1_replica2] o.a.s.s.DirectSolrSpellChecker init: {name=default,field=_text_,classname=solr.DirectSolrSpellChecker,distanceMeasure=internal,accuracy=0.5,maxEdits=2,minPrefix=1,maxInspections=5,minQueryLength=4,maxQueryFrequency=0.01}
10:24:50.957 INFO (qtp575335780-18) [c:merito s:shard1 r:core_node2 x:merito_shard1_replica2] o.a.s.h.ReplicationHandler Commits will be reserved for 10000
10:24:50.961 INFO (searcherExecutor-584-thread-1-processing-n:192.168.94.133:8501_solr x:merito_shard1_replica2 s:shard1 c:merito r:core_node2) [c:merito s:shard1 r:core_node2 x:merito_shard1_replica2] o.a.s.c.QuerySenderListener QuerySenderListener sending requests to Searcher#5747d7f5[merito_shard1_replica2] main{ExitableDirectoryReader(UninvertingDirectoryReader(Uninverting(_2(6.5.1):C1)))}
10:24:50.967 INFO (searcherExecutor-584-thread-1-processing-n:192.168.94.133:8501_solr x:merito_shard1_replica2 s:shard1 c:merito r:core_node2) [c:merito s:shard1 r:core_node2 x:merito_shard1_replica2] o.a.s.c.QuerySenderListener QuerySenderListener done.
10:24:50.967 INFO (searcherExecutor-584-thread-1-processing-n:192.168.94.133:8501_solr x:merito_shard1_replica2 s:shard1 c:merito r:core_node2) [c:merito s:shard1 r:core_node2 x:merito_shard1_replica2] o.a.s.h.c.SpellCheckComponent Loading spell index for spellchecker: default
10:24:50.968 INFO (searcherExecutor-584-thread-1-processing-n:192.168.94.133:8501_solr x:merito_shard1_replica2 s:shard1 c:merito r:core_node2) [c:merito s:shard1 r:core_node2 x:merito_shard1_replica2] o.a.s.c.SolrCore [merito_shard1_replica2] Registered new searcher Searcher#5747d7f5[merito_shard1_replica2] main{ExitableDirectoryReader(UninvertingDirectoryReader(Uninverting(_2(6.5.1):C1)))}
10:24:50.967 INFO (qtp575335780-18) [c:merito s:shard1 r:core_node2 x:merito_shard1_replica2] o.a.s.u.DefaultSolrCoreState New IndexWriter is ready to be used.
10:24:50.970 INFO (qtp575335780-18) [c:merito s:shard1 r:core_node2 x:merito_shard1_replica2] o.a.s.s.SolrIndexSearcher Opening [Searcher#56f7a479[merito_shard1_replica2] main]
10:24:50.971 INFO (qtp575335780-18) [c:merito s:shard1 r:core_node2 x:merito_shard1_replica2] o.a.s.c.SolrCore [merito_shard1_replica2] CLOSING SolrCore org.apache.solr.core.SolrCore#7d455130
10:24:50.972 INFO (searcherExecutor-584-thread-1-processing-n:192.168.94.133:8501_solr x:merito_shard1_replica2 s:shard1 c:merito r:core_node2) [c:merito s:shard1 r:core_node2 x:merito_shard1_replica2] o.a.s.c.QuerySenderListener QuerySenderListener sending requests to Searcher#56f7a479[merito_shard1_replica2] main{ExitableDirectoryReader(UninvertingDirectoryReader(Uninverting(_2(6.5.1):C1)))}
10:24:50.972 INFO (searcherExecutor-584-thread-1-processing-n:192.168.94.133:8501_solr x:merito_shard1_replica2 s:shard1 c:merito r:core_node2) [c:merito s:shard1 r:core_node2 x:merito_shard1_replica2] o.a.s.c.QuerySenderListener QuerySenderListener done.
10:24:50.973 INFO (searcherExecutor-584-thread-1-processing-n:192.168.94.133:8501_solr x:merito_shard1_replica2 s:shard1 c:merito r:core_node2) [c:merito s:shard1 r:core_node2 x:merito_shard1_replica2] o.a.s.c.SolrCore [merito_shard1_replica2] Registered new searcher Searcher#56f7a479[merito_shard1_replica2] main{ExitableDirectoryReader(UninvertingDirectoryReader(Uninverting(_2(6.5.1):C1)))}
10:24:50.980 INFO (qtp575335780-18) [c:merito s:shard1 r:core_node2 x:merito_shard1_replica2] o.a.s.m.SolrMetricManager Closing metric reporters for: solr.core.merito.shard1.replica2
10:24:50.981 INFO (qtp575335780-18) [c:merito s:shard1 r:core_node2 x:merito_shard1_replica2] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/cores params={core=merito_shard1_replica2&qt=/admin/cores&action=RELOAD&wt=javabin&version=2} status=0 QTime=10388
10:24:51.175 INFO (qtp575335780-19) [ ] o.a.s.c.RequestParams conf resource params.json loaded . version : 20
10:24:51.176 INFO (qtp575335780-19) [ ] o.a.s.c.RequestParams request params refreshed to version 20
10:24:51.178 INFO (qtp575335780-19) [ ] o.a.s.c.SolrResourceLoader [merito_shard1_replica2] Added 51 libs to classloader, from paths: [/home/user/Desktop/Solr/instance2/contrib/clustering/lib, /home/user/Desktop/Solr/instance2/contrib/extraction/lib, /home/user/Desktop/Solr/instance2/contrib/langid/lib, /home/user/Desktop/Solr/instance2/contrib/velocity/lib, /home/user/Desktop/Solr/instance2/dist]
10:24:51.198 INFO (qtp575335780-19) [ ] o.a.s.c.SolrConfig Using Lucene MatchVersion: 6.5.1
10:24:51.220 INFO (qtp575335780-19) [ ] o.a.s.s.IndexSchema [merito_shard1_replica2] Schema name=example-basic-bdm
10:24:51.496 INFO (qtp575335780-19) [ ] o.a.s.s.IndexSchema Loaded schema example-basic-bdm/1.6 with uniqueid field id
After each block it starts again with o.a.s.c.CoreContainer Reloading SolrCore.
Do you have any suggestion on how to prevent this behaviour? The logs are filling up with these repeated messages.
Thanks in advance

Something external is calling the core admin RELOAD API, most likely a cron job or a Windows service. Check Solr's interaction with other processes (for example with a resource/IO monitor) and identify the caller.
I was in the same situation: my Solr instance was reloading every 2 minutes. It turned out a scheduled task (a Windows service) on the server was continuously reloading the core every 2 minutes in order to pick up new synonyms from a dictionary.
Once I brought that task under control, the reloads stopped.
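The reload requests show up in solr.log as /admin/cores calls with action=RELOAD (visible in the HttpSolrCall line of the block above). A quick way to confirm an external caller is to count those requests and look at their cadence. This is a sketch: the sample log lines below are illustrative, and the real log path depends on your installation (typically server/logs/solr.log):

```shell
# Demo on an inline sample; point grep at your real server/logs/solr.log instead.
# The two lines below mimic the HttpSolrCall entries from the question.
cat > /tmp/solr_sample.log <<'EOF'
10:24:40.523 INFO ... path=/admin/cores params={core=merito_shard1_replica2&action=RELOAD&wt=javabin} status=0 QTime=10101
10:24:50.981 INFO ... path=/admin/cores params={core=merito_shard1_replica2&action=RELOAD&wt=javabin} status=0 QTime=10388
EOF

# Count the RELOAD requests hitting the core admin API.
grep -c 'action=RELOAD' /tmp/solr_sample.log
```

If the interval is regular (every ~10 seconds here), the caller is almost certainly a scheduler: check crontab entries and systemd timers (or Windows scheduled tasks) on every host that can reach Solr.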


trying to create a simple Solr master/slave configuration getting indexFetcher error on SLAVE trying to connect to master

2021-07-26 17:13:30.420 INFO (qtp210506412-18) [ ] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/info/logging params={wt=json&=1627318946019&since=0} status=0 QTime=0
2021-07-26 17:13:40.423 INFO (qtp210506412-22) [ ] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/info/logging params={wt=json&=1627318946019&since=0} status=0 QTime=0
2021-07-26 17:13:46.305 WARN (indexFetcher-90-thread-1) [ ] o.a.s.h.IndexFetcher Master at: https://localhost:8986/solr/#/master is not available. Index fetch failed by exception: org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error from server at https://localhost:8986/solr/#/master: Expected mime type application/octet-stream but got text/html.
The given URL (https://localhost:8986/solr/#/master) is not valid for replication: everything after the # is a fragment (anchor), which is only interpreted client-side in HTML/JavaScript and is never sent to the server.
You probably want the actual replication path of the master core. This is usually something like:
http://<host>:<port>/solr/<core>/replication
Change the configuration accordingly, then reload the core or restart Solr, and replication should start properly.
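For reference, a slave-side replication section in solrconfig.xml might look like the minimal sketch below. The core name master and the poll interval are assumptions based on the URL in the log, so adjust them to your setup:

```xml
<!-- Slave-side replication handler in solrconfig.xml (legacy master/slave setup).
     masterUrl must point at the master core's /replication endpoint, not at the
     admin UI URL (no "#" fragment). Core name "master" is assumed here. -->
<requestHandler name="/replication" class="solr.ReplicationHandler">
  <lst name="slave">
    <str name="masterUrl">https://localhost:8986/solr/master/replication</str>
    <str name="pollInterval">00:00:60</str>
  </lst>
</requestHandler>
```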

Failed to install jts library on Solr 6.4.2

I just installed Solr 6.4.2 and tried to install the JTS library as explained here, by copying all JTS library files to the /solr-6.4.2/server/solr-webapp/WEB-INF/lib directory.
Then I configured the managed-schema by adding
<fieldType name="location_rpt" class="solr.SpatialRecursivePrefixTreeFieldType"
spatialContextFactory="com.spatial4j.core.context.jts.JtsSpatialContextFactory"
distErrPct="0.025"
maxDistErr="0.000009"
units="degrees"
/>
<field name="geo" type="location_rpt" indexed="true" stored="true" multiValued="true" />
and started it from /bin with ./solr start (Jetty).
But when I visit the Solr interface it says:
> org.apache.solr.common.SolrException:org.apache.solr.common.SolrException:
> Could not load conf for core polygon: Can't load schema
> /home/spatial/solr-6.4.2/server/solr/polygon/conf/managed-schema:
> Plugin Initializing failure for [schema.xml] fieldType
It looks to me like the library is not found or not loaded automatically (as it should be, according to the tutorials).
Can you help me?
Here is the log file:
2017-03-11 15:44:57.061 INFO (main) [ ] o.a.s.c.CorePropertiesLocator Cores are: [polygon]
2017-03-11 15:44:57.067 INFO (coreLoadExecutor-6-thread-1) [ x:polygon] o.a.s.c.SolrResourceLoader [null] Added 8 libs to classloader, from paths: [/home/spatial/solr-6.4.2/server/solr/polygon/lib]
2017-03-11 15:44:57.117 INFO (main) [ ] o.e.j.s.Server Started #777ms
2017-03-11 15:44:57.174 INFO (coreLoadExecutor-6-thread-1) [ x:polygon] o.a.s.c.SolrResourceLoader [polygon] Added 59 libs to classloader, from paths: [/home/spatial/solr-6.4.2/contrib/clustering/lib, /home/spatial/solr-6.4.2/contrib/extraction/lib, /home/spatial/solr-6.4.2/contrib/langid/lib, /home/spatial/solr-6.4.2/contrib/velocity/lib, /home/spatial/solr-6.4.2/dist]
2017-03-11 15:44:57.209 INFO (coreLoadExecutor-6-thread-1) [ x:polygon] o.a.s.c.SolrConfig Using Lucene MatchVersion: 6.4.2
2017-03-11 15:44:57.298 INFO (coreLoadExecutor-6-thread-1) [ x:polygon] o.a.s.s.IndexSchema [polygon] Schema name=example-data-driven-schema
2017-03-11 15:44:57.385 WARN (coreLoadExecutor-6-thread-1) [ x:polygon] o.a.s.c.SolrResourceLoader Solr loaded a deprecated plugin/analysis class [solr.SynonymFilterFactory]. Please consult documentation how to replace it accordingly.
2017-03-11 15:44:57.535 WARN (coreLoadExecutor-6-thread-1) [ x:polygon] o.a.s.s.AbstractSpatialFieldType Replace 'com.spatial4j.core' with 'org.locationtech.spatial4j' in your schema.
2017-03-11 15:44:57.556 ERROR (coreLoadExecutor-6-thread-1) [ x:polygon] o.a.s.c.CoreContainer Error creating core [polygon]: Could not load conf for core polygon: Can't load schema /home/spatial/solr-6.4.2/server/solr/polygon/conf/managed-schema: Plugin Initializing failure for [schema.xml] fieldType
org.apache.solr.common.SolrException: Could not load conf for core polygon: Can't load schema /home/spatial/solr-6.4.2/server/solr/polygon/conf/managed-schema: Plugin Initializing failure for [schema.xml] fieldType
at org.apache.solr.core.ConfigSetService.getConfig(ConfigSetService.java:84)
at org.apache.solr.core.CoreContainer.create(CoreContainer.java:888)
at org.apache.solr.core.CoreContainer.lambda$load$3(CoreContainer.java:542)
at com.codahale.metrics.InstrumentedExecutorService$InstrumentedCallable.call(InstrumentedExecutorService.java:197)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:229)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.solr.common.SolrException: Can't load schema /home/spatial/solr-6.4.2/server/solr/polygon/conf/managed-schema: Plugin Initializing failure for [schema.xml] fieldType
at org.apache.solr.schema.IndexSchema.readSchema(IndexSchema.java:598)
at org.apache.solr.schema.IndexSchema.<init>(IndexSchema.java:183)
at org.apache.solr.schema.ManagedIndexSchema.<init>(ManagedIndexSchema.java:104)
at org.apache.solr.schema.ManagedIndexSchemaFactory.create(ManagedIndexSchemaFactory.java:173)
at org.apache.solr.schema.ManagedIndexSchemaFactory.create(ManagedIndexSchemaFactory.java:45)
at org.apache.solr.schema.IndexSchemaFactory.buildIndexSchema(IndexSchemaFactory.java:75)
at org.apache.solr.core.ConfigSetService.createIndexSchema(ConfigSetService.java:106)
at org.apache.solr.core.ConfigSetService.getConfig(ConfigSetService.java:78)
... 8 more
Caused by: org.apache.solr.common.SolrException: Plugin Initializing failure for [schema.xml] fieldType
at org.apache.solr.util.plugin.AbstractPluginLoader.load(AbstractPluginLoader.java:194)
at org.apache.solr.schema.IndexSchema.readSchema(IndexSchema.java:491)
... 15 more
Caused by: java.lang.RuntimeException: schema fieldtype location_rpt(org.apache.solr.schema.SpatialRecursivePrefixTreeFieldType) invalid arguments:{units=degrees}
at org.apache.solr.schema.FieldType.setArgs(FieldType.java:202)
at org.apache.solr.schema.FieldTypePluginLoader.init(FieldTypePluginLoader.java:165)
at org.apache.solr.schema.FieldTypePluginLoader.init(FieldTypePluginLoader.java:53)
at org.apache.solr.util.plugin.AbstractPluginLoader.load(AbstractPluginLoader.java:191)
... 16 more
2017-03-11 15:44:57.558 ERROR (coreContainerWorkExecutor-2-thread-1) [ ] o.a.s.c.CoreContainer Error waiting for SolrCore to be created
java.util.concurrent.ExecutionException: org.apache.solr.common.SolrException: Unable to create core [polygon]
at java.util.concurrent.FutureTask.report(FutureTask.java:122)
at java.util.concurrent.FutureTask.get(FutureTask.java:192)
at org.apache.solr.core.CoreContainer.lambda$load$4(CoreContainer.java:570)
at com.codahale.metrics.InstrumentedExecutorService$InstrumentedRunnable.run(InstrumentedExecutorService.java:176)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:229)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.solr.common.SolrException: Unable to create core [polygon]
at org.apache.solr.core.CoreContainer.create(CoreContainer.java:903)
at org.apache.solr.core.CoreContainer.lambda$load$3(CoreContainer.java:542)
at com.codahale.metrics.InstrumentedExecutorService$InstrumentedCallable.call(InstrumentedExecutorService.java:197)
... 5 more
Caused by: org.apache.solr.common.SolrException: Could not load conf for core polygon: Can't load schema /home/spatial/solr-6.4.2/server/solr/polygon/conf/managed-schema: Plugin Initializing failure for [schema.xml] fieldType
at org.apache.solr.core.ConfigSetService.getConfig(ConfigSetService.java:84)
at org.apache.solr.core.CoreContainer.create(CoreContainer.java:888)
... 7 more
Caused by: org.apache.solr.common.SolrException: Can't load schema /home/spatial/solr-6.4.2/server/solr/polygon/conf/managed-schema: Plugin Initializing failure for [schema.xml] fieldType
at org.apache.solr.schema.IndexSchema.readSchema(IndexSchema.java:598)
at org.apache.solr.schema.IndexSchema.<init>(IndexSchema.java:183)
at org.apache.solr.schema.ManagedIndexSchema.<init>(ManagedIndexSchema.java:104)
at org.apache.solr.schema.ManagedIndexSchemaFactory.create(ManagedIndexSchemaFactory.java:173)
at org.apache.solr.schema.ManagedIndexSchemaFactory.create(ManagedIndexSchemaFactory.java:45)
at org.apache.solr.schema.IndexSchemaFactory.buildIndexSchema(IndexSchemaFactory.java:75)
at org.apache.solr.core.ConfigSetService.createIndexSchema(ConfigSetService.java:106)
at org.apache.solr.core.ConfigSetService.getConfig(ConfigSetService.java:78)
... 8 more
Caused by: org.apache.solr.common.SolrException: Plugin Initializing failure for [schema.xml] fieldType
at org.apache.solr.util.plugin.AbstractPluginLoader.load(AbstractPluginLoader.java:194)
at org.apache.solr.schema.IndexSchema.readSchema(IndexSchema.java:491)
... 15 more
Caused by: java.lang.RuntimeException: schema fieldtype location_rpt(org.apache.solr.schema.SpatialRecursivePrefixTreeFieldType) invalid arguments:{units=degrees}
at org.apache.solr.schema.FieldType.setArgs(FieldType.java:202)
at org.apache.solr.schema.FieldTypePluginLoader.init(FieldTypePluginLoader.java:165)
at org.apache.solr.schema.FieldTypePluginLoader.init(FieldTypePluginLoader.java:53)
at org.apache.solr.util.plugin.AbstractPluginLoader.load(AbstractPluginLoader.java:191)
It turned out that the interface was changed recently, and older examples found on Stack Overflow (like here) no longer work with the current Solr version (6.4.2). The up-to-date documentation is here.
A configuration example which will work:
<fieldType name="location_rpt" class="solr.SpatialRecursivePrefixTreeFieldType"
spatialContextFactory="com.spatial4j.core.context.jts.JtsSpatialContextFactory"
autoIndex="true"
distErrPct="0.025"
maxDistErr="0.001"
distanceUnits="kilometers" />
I.e. distanceUnits is now used instead of units, and a value of degrees now raises an error.
The configuration I used initially produced the following error in the current Solr version:
Caused by: java.lang.RuntimeException: schema fieldtype location_rpt(org.apache.solr.schema.SpatialRecursivePrefixTreeFieldType) invalid arguments:{units=degrees}

Failing unit test - Expected file contents but got GenericFile[<file-name>]

I have a unit test in Camel for testing a simple route from a file to a JMS queue.
The route looks like this:
public class File2QueueRouteBuilder extends RouteBuilder {
    @Override
    public void configure() throws Exception {
        from("file:src/data/").to("jms:demo_queue");
    }
}
I test the route by generating a file with a producer template and reading the message from the queue through an extra route from the JMS queue to a mock endpoint. I inject seda as the JMS component in the Camel context to avoid depending on ActiveMQ in the unit test.
The test code looks like this:
public class CamelRiderTest extends CamelTestSupport {
    @Override
    protected CamelContext createCamelContext() throws Exception {
        CamelContext context = super.createCamelContext();
        // Use a simple in-memory JMS implementation instead of ActiveMQ as in the production code.
        context.addComponent("jms", context.getComponent("seda"));
        return context;
    }

    @Override
    protected RouteBuilder createRouteBuilder() throws Exception {
        // Get the production route to test.
        RouteBuilder toTest = new File2QueueRouteBuilder();
        // Add a test route that reads the message from the queue into a mock endpoint.
        toTest.includeRoutes(new RouteBuilder() {
            @Override
            public void configure() throws Exception {
                from("jms:demo_queue").to("mock:demo_queue");
            }
        });
        return toTest;
    }

    @Test
    public void testFile2QueueRouteBuilder() throws InterruptedException {
        // Get the mock endpoint for verification.
        MockEndpoint demoQueueOutput = getMockEndpoint("mock:demo_queue");
        // Set the expected result.
        demoQueueOutput.expectedMessageCount(2); // Assert that two messages are received.
        demoQueueOutput.setAssertPeriod(1000); // Assert after 1000 ms that no more than 2 messages were received.
        demoQueueOutput.allMessages().body().contains("Hello"); // Assert the contents of the messages.
        // Use a producer template to create input files containing "Hello ...".
        template.sendBodyAndHeader("file://src/data", "Hello camel", Exchange.FILE_NAME, "hello1.txt");
        template.sendBodyAndHeader("file://src/data", "Hello again", Exchange.FILE_NAME, "hello2.txt");
        // Wait for the route to process the input.
        Thread.sleep(1000);
        // Assert the mock endpoint's expected result.
        demoQueueOutput.assertIsSatisfied();
    }
}
The messages seem to come through as expected, but I get the following error when trying to validate the contents:
15/07/29 09:38:18 INFO demo.CamelRiderTest: ********************************************************************************
15/07/29 09:38:18 INFO demo.CamelRiderTest: Testing: testFile2QueueRouteBuilder(dk.systematic.cura.demo.CamelRiderTest)
15/07/29 09:38:18 INFO demo.CamelRiderTest: ********************************************************************************
15/07/29 09:38:18 INFO impl.DefaultCamelContext: Apache Camel 2.15.2 (CamelContext: camel-1) is starting
15/07/29 09:38:18 INFO management.DefaultManagementStrategy: JMX is disabled
15/07/29 09:38:18 INFO converter.DefaultTypeConverter: Loaded 186 type converters
15/07/29 09:38:18 INFO seda.SedaEndpoint: Endpoint Endpoint[jms://demo_queue] is using shared queue: jms://demo_queue with size: 2147483647
15/07/29 09:38:18 INFO impl.DefaultCamelContext: AllowUseOriginalMessage is enabled. If access to the original message is not needed, then its recommended to turn this option off as it may improve performance.
15/07/29 09:38:18 INFO impl.DefaultCamelContext: StreamCaching is not in use. If using streams then its recommended to enable stream caching. See more details at http://camel.apache.org/stream-caching.html
15/07/29 09:38:18 INFO impl.DefaultCamelContext: Route: route1 started and consuming from: Endpoint[jms://demo_queue]
15/07/29 09:38:18 INFO impl.DefaultCamelContext: Route: route2 started and consuming from: Endpoint[file://src/data/]
15/07/29 09:38:18 INFO impl.DefaultCamelContext: Total 2 routes, of which 2 is started.
15/07/29 09:38:18 INFO impl.DefaultCamelContext: Apache Camel 2.15.2 (CamelContext: camel-1) started in 0.137 seconds
15/07/29 09:38:19 INFO mock.MockEndpoint: Asserting: Endpoint[mock://demo_queue] is satisfied
15/07/29 09:38:19 INFO demo.CamelRiderTest: ********************************************************************************
15/07/29 09:38:19 INFO demo.CamelRiderTest: Testing done: testFile2QueueRouteBuilder(dk.systematic.cura.demo.CamelRiderTest)
15/07/29 09:38:19 INFO demo.CamelRiderTest: Took: 1.056 seconds (1056 millis)
15/07/29 09:38:19 INFO demo.CamelRiderTest: ********************************************************************************
15/07/29 09:38:19 INFO impl.DefaultCamelContext: Apache Camel 2.15.2 (CamelContext: camel-1) is shutting down
15/07/29 09:38:19 INFO impl.DefaultShutdownStrategy: Starting to graceful shutdown 2 routes (timeout 10 seconds)
15/07/29 09:38:20 INFO impl.DefaultShutdownStrategy: Route: route2 shutdown complete, was consuming from: Endpoint[file://src/data/]
15/07/29 09:38:20 INFO impl.DefaultShutdownStrategy: Route: route1 shutdown complete, was consuming from: Endpoint[jms://demo_queue]
15/07/29 09:38:20 INFO impl.DefaultShutdownStrategy: Graceful shutdown of 2 routes completed in 0 seconds
15/07/29 09:38:20 INFO impl.DefaultCamelContext: Apache Camel 2.15.2 (CamelContext: camel-1) uptime 2.202 seconds
15/07/29 09:38:20 INFO impl.DefaultCamelContext: Apache Camel 2.15.2 (CamelContext: camel-1) is shutdown in 1.006 seconds
Assertion error at index 0 on mock mock://demo_queue with predicate: Simple: body contains Hello evaluated as: GenericFile[hello1.txt] contains Hello on Exchange[hello1.txt]
java.lang.AssertionError: Assertion error at index 0 on mock mock://demo_queue with predicate: Simple: body contains Hello evaluated as: GenericFile[hello1.txt] contains Hello on Exchange[hello1.txt]
at org.apache.camel.util.PredicateAssertHelper.assertMatches(PredicateAssertHelper.java:43)
at org.apache.camel.component.mock.AssertionClause.applyAssertionOn(AssertionClause.java:106)
at org.apache.camel.component.mock.MockEndpoint$18.run(MockEndpoint.java:976)
at org.apache.camel.component.mock.MockEndpoint.doAssertIsSatisfied(MockEndpoint.java:410)
at org.apache.camel.component.mock.MockEndpoint.assertIsSatisfied(MockEndpoint.java:378)
at org.apache.camel.component.mock.MockEndpoint.assertIsSatisfied(MockEndpoint.java:366)
at dk.systematic.cura.demo.CamelRiderTest.testFile2QueueRouteBuilder(CamelRiderTest.java:48)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
at org.junit.rules.RunRules.evaluate(RunRules.java:20)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
at org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.runTestClass(JUnitTestClassExecuter.java:86)
at org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.execute(JUnitTestClassExecuter.java:49)
at org.gradle.api.internal.tasks.testing.junit.JUnitTestClassProcessor.processTestClass(JUnitTestClassProcessor.java:69)
at org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:48)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
at org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
at org.gradle.messaging.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:32)
at org.gradle.messaging.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:93)
at com.sun.proxy.$Proxy2.processTestClass(Unknown Source)
at org.gradle.api.internal.tasks.testing.worker.TestWorker.processTestClass(TestWorker.java:105)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
at org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
at org.gradle.messaging.remote.internal.hub.MessageHub$Handler.run(MessageHub.java:360)
at org.gradle.internal.concurrent.DefaultExecutorFactory$StoppableExecutorImpl$1.run(DefaultExecutorFactory.java:64)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
It looks like I don't get the contents of the file from the queue, but something else. What am I doing wrong?
I am running on Windows 7 with Java 1.7 (64-bit) and Camel 2.15.2.
The Camel file component produces a GenericFile object as the body of the exchange, while your unit test expects a String. Change your route to convert the body before it reaches the queue:
from("file:src/data/").convertBodyTo(String.class).to("jms:demo_queue");

Loading solr configs in Cloudera SolrCloud

We are trying to import our data into SolrCloud using MapReduce batch indexing. We hit a problem in the reduce phase: solr.xml cannot be found. We created a 'twitter' collection, but looking at the logs, after failing to load solr.xml it falls back to the default configuration and tries to create a 'collection1' SolrCore (which fails) and a 'core1' SolrCore (which succeeds). I'm not sure whether we need to create our own solr.xml, or where to put it (we tried several places, but it does not seem to be picked up). Below is the log:
2022 [main] INFO org.apache.solr.hadoop.HeartBeater - Heart beat reporting class is org.apache.hadoop.mapreduce.task.TaskAttemptContextImpl
2025 [main] INFO org.apache.solr.hadoop.SolrRecordWriter - Using this unpacked directory as solr home: /data/06/mapred/local/taskTracker/distcache/3866561797898787678_-1754062477_512745567/master.hadoop/tmp/9501daf9-5011-4665-bae3-d5af1c8bcd62.solr.zip
2025 [main] INFO org.apache.solr.hadoop.SolrRecordWriter - Creating embedded Solr server with solrHomeDir: /data/06/mapred/local/taskTracker/distcache/3866561797898787678_-1754062477_512745567/master.hadoop/tmp/9501daf9-5011-4665-bae3-d5af1c8bcd62.solr.zip, fs: DFS[DFSClient[clientName=DFSClient_NONMAPREDUCE_-1828461666_1, ugi=nguyen (auth:SIMPLE)]], outputShardDir: hdfs://master.hadoop:8020/user/nguyen/twitter/outdir/reducers/_temporary/_attempt_201311191613_0320_r_000014_0/part-r-00014
2029 [Thread-64] INFO org.apache.solr.hadoop.HeartBeater - HeartBeat thread running
2030 [Thread-64] INFO org.apache.solr.hadoop.HeartBeater - Issuing heart beat for 1 threads
2083 [main] INFO org.apache.solr.core.SolrResourceLoader - new SolrResourceLoader for directory: '/data/06/mapred/local/taskTracker/distcache/3866561797898787678_-1754062477_512745567/master.hadoop/tmp/9501daf9-5011-4665-bae3-d5af1c8bcd62.solr.zip/'
2259 [main] INFO org.apache.solr.hadoop.SolrRecordWriter - Constructed instance information solr.home /data/06/mapred/local/taskTracker/distcache/3866561797898787678_-1754062477_512745567/master.hadoop/tmp/9501daf9-5011-4665-bae3-d5af1c8bcd62.solr.zip (/data/06/mapred/local/taskTracker/distcache/3866561797898787678_-1754062477_512745567/master.hadoop/tmp/9501daf9-5011-4665-bae3-d5af1c8bcd62.solr.zip), instance dir /data/06/mapred/local/taskTracker/distcache/3866561797898787678_-1754062477_512745567/master.hadoop/tmp/9501daf9-5011-4665-bae3-d5af1c8bcd62.solr.zip/, conf dir /data/06/mapred/local/taskTracker/distcache/3866561797898787678_-1754062477_512745567/master.hadoop/tmp/9501daf9-5011-4665-bae3-d5af1c8bcd62.solr.zip/conf/, writing index to solr.data.dir hdfs://master.hadoop:8020/user/nguyen/twitter/outdir/reducers/_temporary/_attempt_201311191613_0320_r_000014_0/part-r-00014/data, with permdir hdfs://master.hadoop:8020/user/nguyen/twitter/outdir/reducers/_temporary/_attempt_201311191613_0320_r_000014_0/part-r-00014
2266 [main] INFO org.apache.solr.core.ConfigSolr - Loading container configuration from /data/06/mapred/local/taskTracker/distcache/3866561797898787678_-1754062477_512745567/master.hadoop/tmp/9501daf9-5011-4665-bae3-d5af1c8bcd62.solr.zip/solr.xml
2267 [main] INFO org.apache.solr.core.ConfigSolr - /data/06/mapred/local/taskTracker/distcache/3866561797898787678_-1754062477_512745567/master.hadoop/tmp/9501daf9-5011-4665-bae3-d5af1c8bcd62.solr.zip/solr.xml does not exist, using default configuration
2505 [main] INFO org.apache.solr.core.CoreContainer - New CoreContainer 696103669
2505 [main] INFO org.apache.solr.core.CoreContainer - Loading cores into CoreContainer [instanceDir=/data/06/mapred/local/taskTracker/distcache/3866561797898787678_-1754062477_512745567/master.hadoop/tmp/9501daf9-5011-4665-bae3-d5af1c8bcd62.solr.zip/]
2515 [main] INFO org.apache.solr.handler.component.HttpShardHandlerFactory - Setting socketTimeout to: 0
2515 [main] INFO org.apache.solr.handler.component.HttpShardHandlerFactory - Setting urlScheme to: http://
2515 [main] INFO org.apache.solr.handler.component.HttpShardHandlerFactory - Setting connTimeout to: 0
2515 [main] INFO org.apache.solr.handler.component.HttpShardHandlerFactory - Setting maxConnectionsPerHost to: 20
2516 [main] INFO org.apache.solr.handler.component.HttpShardHandlerFactory - Setting corePoolSize to: 0
2516 [main] INFO org.apache.solr.handler.component.HttpShardHandlerFactory - Setting maximumPoolSize to: 2147483647
2516 [main] INFO org.apache.solr.handler.component.HttpShardHandlerFactory - Setting maxThreadIdleTime to: 5
2516 [main] INFO org.apache.solr.handler.component.HttpShardHandlerFactory - Setting sizeOfQueue to: -1
2516 [main] INFO org.apache.solr.handler.component.HttpShardHandlerFactory - Setting fairnessPolicy to: false
2527 [main] INFO org.apache.solr.client.solrj.impl.HttpClientUtil - Creating new http client, config:maxConnectionsPerHost=20&maxConnections=10000&socketTimeout=0&connTimeout=0&retry=false
2648 [main] INFO org.apache.solr.logging.LogWatcher - Registering Log Listener [Log4j (org.slf4j.impl.Log4jLoggerFactory)]
2676 [coreLoadExecutor-3-thread-1] INFO org.apache.solr.core.CoreContainer - Creating SolrCore 'collection1' using instanceDir: /data/06/mapred/local/taskTracker/distcache/3866561797898787678_-1754062477_512745567/master.hadoop/tmp/9501daf9-5011-4665-bae3-d5af1c8bcd62.solr.zip/collection1
2677 [coreLoadExecutor-3-thread-1] INFO org.apache.solr.core.SolrResourceLoader - new SolrResourceLoader for directory: '/data/06/mapred/local/taskTracker/distcache/3866561797898787678_-1754062477_512745567/master.hadoop/tmp/9501daf9-5011-4665-bae3-d5af1c8bcd62.solr.zip/collection1/'
2691 [coreLoadExecutor-3-thread-1] ERROR org.apache.solr.core.CoreContainer - Failed to load file /data/06/mapred/local/taskTracker/distcache/3866561797898787678_-1754062477_512745567/master.hadoop/tmp/9501daf9-5011-4665-bae3-d5af1c8bcd62.solr.zip/collection1/solrconfig.xml
2693 [coreLoadExecutor-3-thread-1] ERROR org.apache.solr.core.CoreContainer - Unable to create core: collection1
org.apache.solr.common.SolrException: Could not load config for solrconfig.xml
at org.apache.solr.core.CoreContainer.createFromLocal(CoreContainer.java:596)
at org.apache.solr.core.CoreContainer.create(CoreContainer.java:661)
at org.apache.solr.core.CoreContainer$1.call(CoreContainer.java:368)
at org.apache.solr.core.CoreContainer$1.call(CoreContainer.java:360)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
at java.util.concurrent.FutureTask.run(FutureTask.java:166)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
at java.util.concurrent.FutureTask.run(FutureTask.java:166)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:724)
Caused by: java.io.IOException: Can't find resource 'solrconfig.xml' in classpath or '/data/06/mapred/local/taskTracker/distcache/3866561797898787678_-1754062477_512745567/master.hadoop/tmp/9501daf9-5011-4665-bae3-d5af1c8bcd62.solr.zip/collection1/conf/', cwd=/data/05/mapred/local/taskTracker/nguyen/jobcache/job_201311191613_0320/attempt_201311191613_0320_r_000014_0/work
at org.apache.solr.core.SolrResourceLoader.openResource(SolrResourceLoader.java:322)
at org.apache.solr.core.SolrResourceLoader.openConfig(SolrResourceLoader.java:287)
at org.apache.solr.core.Config.<init>(Config.java:116)
at org.apache.solr.core.Config.<init>(Config.java:86)
at org.apache.solr.core.SolrConfig.<init>(SolrConfig.java:120)
at org.apache.solr.core.CoreContainer.createFromLocal(CoreContainer.java:593)
... 11 more
2695 [coreLoadExecutor-3-thread-1] ERROR org.apache.solr.core.CoreContainer - null:org.apache.solr.common.SolrException: Unable to create core: collection1
at org.apache.solr.core.CoreContainer.recordAndThrow(CoreContainer.java:1158)
at org.apache.solr.core.CoreContainer.create(CoreContainer.java:670)
at org.apache.solr.core.CoreContainer$1.call(CoreContainer.java:368)
at org.apache.solr.core.CoreContainer$1.call(CoreContainer.java:360)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
at java.util.concurrent.FutureTask.run(FutureTask.java:166)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
at java.util.concurrent.FutureTask.run(FutureTask.java:166)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:724)
Caused by: org.apache.solr.common.SolrException: Could not load config for solrconfig.xml
at org.apache.solr.core.CoreContainer.createFromLocal(CoreContainer.java:596)
at org.apache.solr.core.CoreContainer.create(CoreContainer.java:661)
... 10 more
Caused by: java.io.IOException: Can't find resource 'solrconfig.xml' in classpath or '/data/06/mapred/local/taskTracker/distcache/3866561797898787678_-1754062477_512745567/master.hadoop/tmp/9501daf9-5011-4665-bae3-d5af1c8bcd62.solr.zip/collection1/conf/', cwd=/data/05/mapred/local/taskTracker/nguyen/jobcache/job_201311191613_0320/attempt_201311191613_0320_r_000014_0/work
at org.apache.solr.core.SolrResourceLoader.openResource(SolrResourceLoader.java:322)
at org.apache.solr.core.SolrResourceLoader.openConfig(SolrResourceLoader.java:287)
at org.apache.solr.core.Config.<init>(Config.java:116)
at org.apache.solr.core.Config.<init>(Config.java:86)
at org.apache.solr.core.SolrConfig.<init>(SolrConfig.java:120)
at org.apache.solr.core.CoreContainer.createFromLocal(CoreContainer.java:593)
... 11 more
2697 [main] INFO org.apache.solr.core.CoreContainer - Creating SolrCore 'core1' using instanceDir: /data/06/mapred/local/taskTracker/distcache/3866561797898787678_-1754062477_512745567/master.hadoop/tmp/9501daf9-5011-4665-bae3-d5af1c8bcd62.solr.zip
2697 [main] INFO org.apache.solr.core.SolrResourceLoader - new SolrResourceLoader for directory: '/data/06/mapred/local/taskTracker/distcache/3866561797898787678_-1754062477_512745567/master.hadoop/tmp/9501daf9-5011-4665-bae3-d5af1c8bcd62.solr.zip/'
2751 [main] INFO org.apache.solr.core.SolrConfig - Adding specified lib dirs to ClassLoader
2752 [main] WARN org.apache.solr.core.SolrResourceLoader - Can't find (or read) directory to add to classloader: ../../../contrib/extraction/lib (resolved as: /data/06/mapred/local/taskTracker/distcache/3866561797898787678_-1754062477_512745567/master.hadoop/tmp/9501daf9-5011-4665-bae3-d5af1c8bcd62.solr.zip/../../../contrib/extraction/lib).
2752 [main] WARN org.apache.solr.core.SolrResourceLoader - Can't find (or read) directory to add to classloader: ../../../dist/ (resolved as: /data/06/mapred/local/taskTracker/distcache/3866561797898787678_-1754062477_512745567/master.hadoop/tmp/9501daf9-5011-4665-bae3-d5af1c8bcd62.solr.zip/../../../dist).
2752 [main] WARN org.apache.solr.core.SolrResourceLoader - Can't find (or read) directory to add to classloader: ../../../contrib/clustering/lib/ (resolved as: /data/06/mapred/local/taskTracker/distcache/3866561797898787678_-1754062477_512745567/master.hadoop/tmp/9501daf9-5011-4665-bae3-d5af1c8bcd62.solr.zip/../../../contrib/clustering/lib).
2753 [main] WARN org.apache.solr.core.SolrResourceLoader - Can't find (or read) directory to add to classloader: ../../../dist/ (resolved as: /data/06/mapred/local/taskTracker/distcache/3866561797898787678_-1754062477_512745567/master.hadoop/tmp/9501daf9-5011-4665-bae3-d5af1c8bcd62.solr.zip/../../../dist).
2753 [main] WARN org.apache.solr.core.SolrResourceLoader - Can't find (or read) directory to add to classloader: ../../../contrib/langid/lib/ (resolved as: /data/06/mapred/local/taskTracker/distcache/3866561797898787678_-1754062477_512745567/master.hadoop/tmp/9501daf9-5011-4665-bae3-d5af1c8bcd62.solr.zip/../../../contrib/langid/lib).
2753 [main] WARN org.apache.solr.core.SolrResourceLoader - Can't find (or read) directory to add to classloader: ../../../dist/ (resolved as: /data/06/mapred/local/taskTracker/distcache/3866561797898787678_-1754062477_512745567/master.hadoop/tmp/9501daf9-5011-4665-bae3-d5af1c8bcd62.solr.zip/../../../dist).
2753 [main] WARN org.apache.solr.core.SolrResourceLoader - Can't find (or read) directory to add to classloader: ../../../contrib/velocity/lib (resolved as: /data/06/mapred/local/taskTracker/distcache/3866561797898787678_-1754062477_512745567/master.hadoop/tmp/9501daf9-5011-4665-bae3-d5af1c8bcd62.solr.zip/../../../contrib/velocity/lib).
2753 [main] WARN org.apache.solr.core.SolrResourceLoader - Can't find (or read) directory to add to classloader: ../../../dist/ (resolved as: /data/06/mapred/local/taskTracker/distcache/3866561797898787678_-1754062477_512745567/master.hadoop/tmp/9501daf9-5011-4665-bae3-d5af1c8bcd62.solr.zip/../../../dist).
2785 [main] INFO org.apache.solr.update.SolrIndexConfig - IndexWriter infoStream solr logging is enabled
2790 [main] INFO org.apache.solr.core.SolrConfig - Using Lucene MatchVersion: LUCENE_44
2869 [main] INFO org.apache.solr.core.Config - Loaded SolrConfig: solrconfig.xml
2879 [main] INFO org.apache.solr.schema.IndexSchema - Reading Solr Schema from schema.xml
2937 [main] INFO org.apache.solr.schema.IndexSchema - [core1] Schema name=twitter
3352 [main] INFO org.apache.solr.schema.IndexSchema - unique key field: id
3471 [main] INFO org.apache.solr.schema.FileExchangeRateProvider - Reloading exchange rates from file currency.xml
3478 [main] INFO org.apache.solr.schema.FileExchangeRateProvider - Reloading exchange rates from file currency.xml
3635 [main] INFO org.apache.solr.core.HdfsDirectoryFactory - Solr Kerberos Authentication disabled
3636 [main] INFO org.apache.solr.core.JmxMonitoredMap - No JMX servers found, not exposing Solr information with JMX.
3652 [main] INFO org.apache.solr.core.HdfsDirectoryFactory - creating directory factory for path hdfs://master.hadoop:8020/user/nguyen/twitter/outdir/reducers/_temporary/_attempt_201311191613_0320_r_000014_0/part-r-00014/data
3686 [main] INFO org.apache.solr.core.CachingDirectoryFactory - return new directory for hdfs://master.hadoop:8020/user/nguyen/twitter/outdir/reducers/_temporary/_attempt_201311191613_0320_r_000014_0/part-r-00014/data
3711 [main] WARN org.apache.solr.core.SolrCore - [core1] Solr index directory 'hdfs:/master.hadoop:8020/user/nguyen/twitter/outdir/reducers/_temporary/_attempt_201311191613_0320_r_000014_0/part-r-00014/data/index' doesn't exist. Creating new index...
3719 [main] INFO org.apache.solr.core.HdfsDirectoryFactory - creating directory factory for path hdfs://master.hadoop:8020/user/nguyen/twitter/outdir/reducers/_temporary/_attempt_201311191613_0320_r_000014_0/part-r-00014/data/index
3719 [main] INFO org.apache.solr.core.HdfsDirectoryFactory - Number of slabs of block cache [1] with direct memory allocation set to [true]
3720 [main] INFO org.apache.solr.core.HdfsDirectoryFactory - Block cache target memory usage, slab size of [134217728] will allocate [1] slabs and use ~[134217728] bytes
3721 [main] INFO org.apache.solr.store.blockcache.BufferStore - Initializing the 1024 buffers with [8192] buffers.
3740 [main] INFO org.apache.solr.store.blockcache.BufferStore - Initializing the 8192 buffers with [8192] buffers.
3891 [main] INFO org.apache.solr.core.CachingDirectoryFactory - return new directory for hdfs://master.hadoop:8020/user/nguyen/twitter/outdir/reducers/_temporary/_attempt_201311191613_0320_r_000014_0/part-r-00014/data/index
3988 [main] INFO org.apache.solr.update.LoggingInfoStream - [IFD][main]: init: current segments file is "null"; deletionPolicy=org.apache.solr.core.IndexDeletionPolicyWrapper#65b01d5d
3992 [main] INFO org.apache.solr.update.LoggingInfoStream - [IFD][main]: now checkpoint "" [0 segments ; isCommit = false]
3992 [main] INFO org.apache.solr.update.LoggingInfoStream - [IFD][main]: 0 msec to checkpoint
3992 [main] INFO org.apache.solr.update.LoggingInfoStream - [IW][main]: init: create=true
3992 [main] INFO org.apache.solr.update.LoggingInfoStream - [IW][main]:
dir=NRTCachingDirectory(org.apache.solr.store.hdfs.HdfsDirectory#17e5a6d8 lockFactory=org.apache.solr.store.hdfs.HdfsLockFactory#7f117668; maxCacheMB=192.0 maxMergeSizeMB=16.0)
Solr looks at the solr.home parameter and searches for a solrconfig.xml file there; if there is none, it tries to load the default configuration.
It looks like your Solr home is
/data/06/mapred/local/taskTracker/distcache/3866561797898787678_-1754062477_512745567/master.hadoop/tmp/9501daf9-5011-4665-bae3-d5af1c8bcd62.solr.zip/collection1/
Check that folder for a solrconfig.xml file.
If there is none, copy one from Solr's example directory.
If there is one, make sure its file/folder permissions match the user the server instance runs as.
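The two checks above can be scripted. This is a sketch with a hypothetical helper name; pass it the core's instance directory (for your case, the long distcache path ending in collection1 from the log):

```shell
#!/bin/sh
# Check that a core's conf/solrconfig.xml exists and is readable.
# Usage: check_core_conf /path/to/core/instanceDir
check_core_conf() {
  conf="$1/conf/solrconfig.xml"
  if [ ! -e "$conf" ]; then
    echo "MISSING: $conf (copy one from Solr's example directory)"
    return 1
  elif [ ! -r "$conf" ]; then
    echo "UNREADABLE: $conf (fix owner/permissions for the Solr user)"
    return 2
  fi
  echo "OK: $conf"
  return 0
}

# Example invocation (substitute your real instance dir):
# check_core_conf /path/to/solr.zip/collection1
```

Run it as the same user the Solr (or here, the Hadoop task) process runs as, otherwise the permission check is meaningless.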

ZooKeeper does not run?

I wanted to run SolrCloud with Solr 4.3.0.
(I am using AWS Ubuntu 12.04 LTS micro instances.)
So I followed this tutorial,
which basically says: start ZooKeeper and connect the Solr instances to it.
Here is how I started ZooKeeper.
First I copied the config as described in the tutorial:
sudo cp zookeeper-3.4.5/conf/zoo_sample.cfg zookeeper-3.4.5/conf/zoo.cfg
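For reference, zoo_sample.cfg configures a single standalone server; a minimal zoo.cfg of that kind looks roughly like this (the dataDir value here is an example, not necessarily what the sample file uses):

```
# zoo.cfg - minimal standalone ZooKeeper configuration
tickTime=2000
dataDir=/var/lib/zookeeper
clientPort=2181
```

For an ensemble you would additionally list server.N=host:2888:3888 entries and create a matching myid file inside dataDir on each node.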
Then I started the zookeeper
ubuntu@ip-10-48-159-36:/opt$ sudo zookeeper-3.4.5/bin/zkServer.sh start
JMX enabled by default
Using config: /opt/zookeeper-3.4.5/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
Looks fine so far.
I checked the status:
ubuntu@ip-10-48-159-36:/opt$ sudo zookeeper-3.4.5/bin/zkServer.sh status
JMX enabled by default
Using config: /opt/zookeeper-3.4.5/bin/../conf/zoo.cfg
Error contacting service. It is probably not running.
Which seems a bit weird already.
If I try to connect with the client (remote as well as local), it seems to work:
ubuntu@ip-10-234-223-69:/opt$ zookeeper-3.4.5/bin/zkCli.sh -server ec2-54-247-144-120.eu-west-1.compute.amazonaws.com:2181
Connecting to ec2-54-247-144-120.eu-west-1.compute.amazonaws.com:2181
2013-06-07 11:07:01,996 [myid:] - INFO [main:Environment#100] - Client environment:zookeeper.version=3.4.5-1392090, built on 09/30/2012 17:52 GMT
2013-06-07 11:07:02,000 [myid:] - INFO [main:Environment#100] - Client environment:host.name=ip-10-234-223-69.eu-west-1.compute.internal
2013-06-07 11:07:02,000 [myid:] - INFO [main:Environment#100] - Client environment:java.version=1.6.0_27
2013-06-07 11:07:02,002 [myid:] - INFO [main:Environment#100] - Client environment:java.vendor=Sun Microsystems Inc.
2013-06-07 11:07:02,003 [myid:] - INFO [main:Environment#100] - Client environment:java.home=/usr/lib/jvm/java-6-openjdk-amd64/jre
2013-06-07 11:07:02,003 [myid:] - INFO [main:Environment#100] - Client environment:java.class.path=/opt/zookeeper-3.4.5/bin/../build/classes:/opt/zookeeper-3.4.5/bin/../build/lib/*.jar:/opt/zookeeper-3.4.5/bin/../lib/slf4j-log4j12-1.6.1.jar:/opt/zookeeper-3.4.5/bin/../lib/slf4j-api-1.6.1.jar:/opt/zookeeper-3.4.5/bin/../lib/netty-3.2.2.Final.jar:/opt/zookeeper-3.4.5/bin/../lib/log4j-1.2.15.jar:/opt/zookeeper-3.4.5/bin/../lib/jline-0.9.94.jar:/opt/zookeeper-3.4.5/bin/../zookeeper-3.4.5.jar:/opt/zookeeper-3.4.5/bin/../src/java/lib/*.jar:/opt/zookeeper-3.4.5/bin/../conf:
2013-06-07 11:07:02,004 [myid:] - INFO [main:Environment#100] - Client environment:java.library.path=/usr/lib/jvm/java-6-openjdk-amd64/jre/lib/amd64/server:/usr/lib/jvm/java-6-openjdk-amd64/jre/lib/amd64:/usr/lib/jvm/java-6-openjdk-amd64/jre/../lib/amd64:/usr/java/packages/lib/amd64:/usr/lib/x86_64-linux-gnu/jni:/lib/x86_64-linux-gnu:/usr/lib/x86_64-linux-gnu:/usr/lib/jni:/lib:/usr/lib
2013-06-07 11:07:02,008 [myid:] - INFO [main:Environment#100] - Client environment:java.io.tmpdir=/tmp
2013-06-07 11:07:02,009 [myid:] - INFO [main:Environment#100] - Client environment:java.compiler=<NA>
2013-06-07 11:07:02,018 [myid:] - INFO [main:Environment#100] - Client environment:os.name=Linux
2013-06-07 11:07:02,019 [myid:] - INFO [main:Environment#100] - Client environment:os.arch=amd64
2013-06-07 11:07:02,019 [myid:] - INFO [main:Environment#100] - Client environment:os.version=3.2.0-40-virtual
2013-06-07 11:07:02,020 [myid:] - INFO [main:Environment#100] - Client environment:user.name=ubuntu
2013-06-07 11:07:02,020 [myid:] - INFO [main:Environment#100] - Client environment:user.home=/home/ubuntu
2013-06-07 11:07:02,021 [myid:] - INFO [main:Environment#100] - Client environment:user.dir=/opt
2013-06-07 11:07:02,029 [myid:] - INFO [main:ZooKeeper#438] - Initiating client connection, connectString=ec2-54-247-144-120.eu-west-1.compute.amazonaws.com:2181 sessionTimeout=30000 watcher=org.apache.zookeeper.ZooKeeperMain$MyWatcher#182d9c06
Welcome to ZooKeeper!
2013-06-07 11:07:02,074 [myid:] - INFO [main-SendThread(ip-10-48-159-36.eu-west-1.compute.internal:2181):ClientCnxn$SendThread#966] - Opening socket connection to server ip-10-48-159-36.eu-west-1.compute.internal/10.48.159.36:2181. Will not attempt to authenticate using SASL (unknown error)
JLine support is enabled
[zk: ec2-54-247-144-120.eu-west-1.compute.amazonaws.com:2181(CONNECTING) 0] 2013-06-07 11:07:32,100 [myid:] - INFO [main-SendThread(ip-10-48-159-36.eu-west-1.compute.internal:2181):ClientCnxn$SendThread#1083] - Client session timed out, have not heard from server in 30038ms for sessionid 0x0, closing socket connection and attempting reconnect
2013-06-07 11:07:33,204 [myid:] - INFO [main-SendThread(ip-10-48-159-36.eu-west-1.compute.internal:2181):ClientCnxn$SendThread#966] - Opening socket connection to server ip-10-48-159-36.eu-west-1.compute.internal/10.48.159.36:2181. Will not attempt to authenticate using SASL (unknown error)
Now I tried to connect a Solr instance to it. The Tomcat 7 web interface only tells me "503 - Server is shutting down", so I checked the Solr logs:
2013-06-07 11:16:36,065 [pool-2-thread-1] INFO org.apache.solr.servlet.SolrDispatchFilter . SolrDispatchFilter.init()
2013-06-07 11:16:36,100 [pool-2-thread-1] INFO org.apache.solr.core.SolrResourceLoader . Using JNDI solr.home: /opt/solr-4.3.0/example/solr
2013-06-07 11:16:36,132 [pool-2-thread-1] INFO org.apache.solr.core.CoreContainer . looking for solr config file: /opt/solr-4.3.0/example/solr/solr.xml
2013-06-07 11:16:36,138 [pool-2-thread-1] INFO org.apache.solr.core.CoreContainer . New CoreContainer 1285984216
2013-06-07 11:16:36,146 [pool-2-thread-1] INFO org.apache.solr.core.CoreContainer . Loading CoreContainer using Solr Home: '/opt/solr-4.3.0/example/solr/'
2013-06-07 11:16:36,152 [pool-2-thread-1] INFO org.apache.solr.core.SolrResourceLoader . new SolrResourceLoader for directory: '/opt/solr-4.3.0/example/solr/'
2013-06-07 11:16:36,567 [pool-2-thread-1] INFO org.apache.solr.handler.component.HttpShardHandlerFactory . Setting socketTimeout to: 0
2013-06-07 11:16:36,568 [pool-2-thread-1] INFO org.apache.solr.handler.component.HttpShardHandlerFactory . Setting urlScheme to: http://
2013-06-07 11:16:36,568 [pool-2-thread-1] INFO org.apache.solr.handler.component.HttpShardHandlerFactory . Setting connTimeout to: 0
2013-06-07 11:16:36,568 [pool-2-thread-1] INFO org.apache.solr.handler.component.HttpShardHandlerFactory . Setting maxConnectionsPerHost to: 20
2013-06-07 11:16:36,568 [pool-2-thread-1] INFO org.apache.solr.handler.component.HttpShardHandlerFactory . Setting corePoolSize to: 0
2013-06-07 11:16:36,568 [pool-2-thread-1] INFO org.apache.solr.handler.component.HttpShardHandlerFactory . Setting maximumPoolSize to: 2147483647
2013-06-07 11:16:36,568 [pool-2-thread-1] INFO org.apache.solr.handler.component.HttpShardHandlerFactory . Setting maxThreadIdleTime to: 5
2013-06-07 11:16:36,569 [pool-2-thread-1] INFO org.apache.solr.handler.component.HttpShardHandlerFactory . Setting sizeOfQueue to: -1
2013-06-07 11:16:36,569 [pool-2-thread-1] INFO org.apache.solr.handler.component.HttpShardHandlerFactory . Setting fairnessPolicy to: false
2013-06-07 11:16:36,578 [pool-2-thread-1] INFO org.apache.solr.client.solrj.impl.HttpClientUtil . Creating new http client, config:maxConnectionsPerHost=20&maxConnections=10000&socketTimeout=0&connTimeout=0&retry=false
2013-06-07 11:16:36,879 [pool-2-thread-1] INFO org.apache.solr.core.CoreContainer . Registering Log Listener
2013-06-07 11:16:36,881 [pool-2-thread-1] INFO org.apache.solr.core.CoreContainer . Zookeeper client=ec2-54-247-144-120.eu-west-1.compute.amazonaws.com:2181
2013-06-07 11:16:36,888 [pool-2-thread-1] INFO org.apache.solr.client.solrj.impl.HttpClientUtil . Creating new http client, config:maxConnections=500&maxConnectionsPerHost=16&socketTimeout=0&connTimeout=0
2013-06-07 11:16:37,040 [pool-2-thread-1] INFO org.apache.solr.common.cloud.ConnectionManager . Waiting for client to connect to ZooKeeper
2013-06-07 11:16:52,046 [pool-2-thread-1] ERROR org.apache.solr.servlet.SolrDispatchFilter . Could not start Solr. Check solr/home property and the logs
2013-06-07 11:16:52,103 [pool-2-thread-1] ERROR org.apache.solr.core.SolrCore . null:java.lang.RuntimeException: java.util.concurrent.TimeoutException: Could not connect to ZooKeeper ec2-54-247-144-120.eu-west-1.compute.amazonaws.com:2181 within 15000 ms
at org.apache.solr.common.cloud.SolrZkClient.<init>(SolrZkClient.java:130)
at org.apache.solr.common.cloud.SolrZkClient.<init>(SolrZkClient.java:88)
at org.apache.solr.cloud.ZkController.<init>(ZkController.java:170)
at org.apache.solr.core.CoreContainer.initZooKeeper(CoreContainer.java:242)
at org.apache.solr.core.CoreContainer.load(CoreContainer.java:495)
at org.apache.solr.core.CoreContainer.load(CoreContainer.java:358)
at org.apache.solr.core.CoreContainer$Initializer.initialize(CoreContainer.java:326)
at org.apache.solr.servlet.SolrDispatchFilter.init(SolrDispatchFilter.java:124)
at org.apache.catalina.core.ApplicationFilterConfig.initFilter(ApplicationFilterConfig.java:277)
at org.apache.catalina.core.ApplicationFilterConfig.getFilter(ApplicationFilterConfig.java:258)
at org.apache.catalina.core.ApplicationFilterConfig.setFilterDef(ApplicationFilterConfig.java:382)
at org.apache.catalina.core.ApplicationFilterConfig.<init>(ApplicationFilterConfig.java:103)
at org.apache.catalina.core.StandardContext.filterStart(StandardContext.java:4638)
at org.apache.catalina.core.StandardContext.startInternal(StandardContext.java:5294)
at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:150)
at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:895)
at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:871)
at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:615)
at org.apache.catalina.startup.HostConfig.deployDescriptor(HostConfig.java:649)
at org.apache.catalina.startup.HostConfig$DeployDescriptor.run(HostConfig.java:1581)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
at java.util.concurrent.FutureTask.run(FutureTask.java:166)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1146)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:679)
Caused by: java.util.concurrent.TimeoutException: Could not connect to ZooKeeper ec2-54-247-144-120.eu-west-1.compute.amazonaws.com:2181 within 15000 ms
at org.apache.solr.common.cloud.ConnectionManager.waitForConnected(ConnectionManager.java:173)
at org.apache.solr.common.cloud.SolrZkClient.<init>(SolrZkClient.java:127)
... 25 more
2013-06-07 11:16:52,104 [pool-2-thread-1] INFO org.apache.solr.servlet.SolrDispatchFilter . SolrDispatchFilter.init() done
What does this tell me?
On the same instance I just connected with the client successfully... :(
So where is the problem?
[Edit:]
Instead of using Amazon's ec**.amazon.* address, I used the internal network addresses (10.X.X.X) to tell Solr where ZooKeeper is.
It seems to work.
You have your answer: your ZooKeeper is inaccessible!
Check your firewall configuration.
You can also check it with
zkCli.sh -server localhost:2181
There must have been some sort of connectivity problem.
I see you have it resolved now.
Next time you run into a situation like this, you should log onto the box that is having problems connecting and use telnet to see if you can connect.
eg: from your solr box:
telnet ec2-54-247-144-120.eu-west-1.compute.amazonaws.com 2181
and then try from the zk box too. It should start to illuminate where your issues are.
That eliminates any application-layer issues and will tell you quite reliably whether or not you can connect. If you can't connect, then it's almost always some sort of security issue: either a firewall running somewhere (try "service iptables stop") or an issue with the security group configuration in Amazon.
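If telnet isn't installed on the box, the same raw-TCP check can be scripted with bash's /dev/tcp device. A sketch (the hostname below is the one from the question; the helper name is made up):

```shell
#!/bin/bash
# Return 0 if a TCP connection to host:port succeeds within 3 seconds.
# Usage: port_open <host> <port>
port_open() {
  timeout 3 bash -c "exec 3<>/dev/tcp/$1/$2" 2>/dev/null
}

if port_open ec2-54-247-144-120.eu-west-1.compute.amazonaws.com 2181; then
  echo "ZooKeeper port reachable"
else
  echo "cannot connect - check firewalls / security groups"
fi
```

Run it once from the Solr box and once from the ZooKeeper box, as suggested above, to narrow down which hop is blocked.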
The last potential problem is network availability. Despite what people think, the network is NOT reliable and should never be considered so. Anyone working in SOA/distributed systems will know this well :)
http://aphyr.com/posts/288-the-network-is-reliable
"A team from the University of Toronto and Microsoft Research studied the behavior of network failures in several of Microsoft’s datacenters. They found an average failure rate of 5.2 devices per day and 40.8 links per day with a median time to repair of approximately five minutes (and up to one week). "
While setting up SolrCloud and ZooKeeper I also ran into the "Error contacting service. It is probably not running." issue. The cause was a typo in a file name that ZooKeeper needs: the correct file name is "myid", but I had written "myip" by mistake. After renaming the file and restarting ZooKeeper (./zkServer.sh restart), the issue was resolved.
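To make that concrete: each server.N entry in zoo.cfg must have a matching dataDir/myid file containing just that server's number. A sketch (the helper name and dataDir path are examples):

```shell
#!/bin/sh
# Write the myid file for this ZooKeeper node.
# Usage: write_myid <dataDir> <server-number>
write_myid() {
  mkdir -p "$1"
  printf '%s\n' "$2" > "$1/myid"   # the file must be named exactly "myid"
}

# e.g. on the node listed as server.2 in zoo.cfg:
# write_myid /var/lib/zookeeper 2
```

If the file is missing, misnamed, or contains the wrong number, zkServer.sh status reports exactly the "Error contacting service" message above.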
Try stopping your Solr instance with solr.shutdown() so that you can create a new CloudSolrServer instance for each thread.