SolrCloud - How to resolve "Could not find collection configName" - solr

I am trying to start Solr in SolrCloud mode. I created a new collection from collection1 and changed its name in the core.properties file by setting name=logmail.
But when I start Solr, I get the following error:
$ java -Dcollection.configName=logmail -DzkRun -Dnumshards=2 -DBootstrap_confdir=./solr/logmail/conf -jar start.jar
2165 [main] INFO org.apache.solr.common.cloud.ZkStateReader – Updating cluster state from ZooKeeper...
2179 [OverseerStateUpdate-94955713964081152-127.0.1.1:8983_solr-n_0000000001] INFO org.apache.solr.cloud.Overseer – Starting to work on the main queue
2197 [main] INFO org.apache.solr.core.CoresLocator – Looking for core definitions underneath /home/rahul/Desktop/dev/solrcloud/solr
2203 [main] INFO org.apache.solr.core.CoresLocator – Found core logmail in /home/rahul/Desktop/dev/solrcloud/solr/logmail/
2204 [main] INFO org.apache.solr.core.CoresLocator – Found core collection1 in /home/rahul/Desktop/dev/solrcloud/solr/collection1/
2204 [main] INFO org.apache.solr.core.CoresLocator – Found 2 core definitions
2207 [coreLoadExecutor-6-thread-1] INFO org.apache.solr.cloud.ZkController – publishing core=logmail state=down collection=logmail
2207 [coreLoadExecutor-6-thread-2] INFO org.apache.solr.cloud.ZkController – publishing core=collection1 state=down collection=collection1
2208 [coreLoadExecutor-6-thread-1] INFO org.apache.solr.cloud.ZkController – numShards not found on descriptor - reading it from system property
2208 [coreLoadExecutor-6-thread-2] INFO org.apache.solr.cloud.ZkController – numShards not found on descriptor - reading it from system property
2214 [coreLoadExecutor-6-thread-1] INFO org.apache.solr.cloud.ZkController – look for our core node name
2214 [coreLoadExecutor-6-thread-1] INFO org.apache.solr.cloud.ZkController – waiting to find shard id in clusterstate for logmail
2214 [zkCallback-2-thread-1] INFO org.apache.solr.cloud.DistributedQueue – NodeChildrenChanged fired on path /overseer/queue state SyncConnected
2215 [coreLoadExecutor-6-thread-1] INFO org.apache.solr.cloud.ZkController – Check for collection zkNode:logmail
2222 [coreLoadExecutor-6-thread-2] INFO org.apache.solr.cloud.ZkController – look for our core node name
2222 [coreLoadExecutor-6-thread-1] INFO org.apache.solr.cloud.ZkController – Creating collection in ZooKeeper:logmail
2222 [coreLoadExecutor-6-thread-2] INFO org.apache.solr.cloud.ZkController – waiting to find shard id in clusterstate for collection1
2223 [coreLoadExecutor-6-thread-1] INFO org.apache.solr.cloud.ZkController – Looking for collection configName
2223 [coreLoadExecutor-6-thread-2] INFO org.apache.solr.cloud.ZkController – Check for collection zkNode:collection1
2224 [coreLoadExecutor-6-thread-2] INFO org.apache.solr.cloud.ZkController – Creating collection in ZooKeeper:collection1
2224 [coreLoadExecutor-6-thread-2] INFO org.apache.solr.cloud.ZkController – Looking for collection configName
2225 [coreLoadExecutor-6-thread-1] INFO org.apache.solr.cloud.ZkController – Could not find collection configName - pausing for 3 seconds and trying again - try: 1
2226 [coreLoadExecutor-6-thread-2] INFO org.apache.solr.cloud.ZkController – Could not find collection configName - pausing for 3 seconds and trying again - try: 1
2226 [OverseerStateUpdate-94955713964081152-127.0.1.1:8983_solr-n_0000000001] INFO org.apache.solr.cloud.Overseer – Update state numShards=null message={ "core":"logmail", "roles":null, "base_url":"http://127.0.1.1:8983/solr", "node_name":"127.0.1.1:8983_solr", "state":"down", "shard":null, "collection":"logmail", "operation":"state"}
2226 [OverseerStateUpdate-94955713964081152-127.0.1.1:8983_solr-n_0000000001] INFO org.apache.solr.cloud.Overseer – node=core_node1 is already registered
2227 [OverseerStateUpdate-94955713964081152-127.0.1.1:8983_solr-n_0000000001] INFO org.apache.solr.cloud.Overseer – shard=shard1 is already registered
2255 [zkCallback-2-thread-1] INFO org.apache.solr.common.cloud.ZkStateReader – A cluster state change: WatchedEvent state:SyncConnected type:NodeDataChanged path:/clusterstate.json, has occurred - updating... (live nodes size: 1)
2268 [OverseerStateUpdate-94955713964081152-127.0.1.1:8983_solr-n_0000000001] INFO org.apache.solr.cloud.Overseer – Update state numShards=null message={ "core":"collection1", "roles":null, "base_url":"http://127.0.1.1:8983/solr", "node_name":"127.0.1.1:8983_solr", "state":"down", "shard":null, "collection":"collection1", "operation":"state"}
2268 [OverseerStateUpdate-94955713964081152-127.0.1.1:8983_solr-n_0000000001] INFO org.apache.solr.cloud.Overseer – node=core_node1 is already registered
2269 [OverseerStateUpdate-94955713964081152-127.0.1.1:8983_solr-n_0000000001] INFO org.apache.solr.cloud.Overseer – shard=shard1 is already registered
2288 [zkCallback-2-thread-1] INFO org.apache.solr.cloud.DistributedQueue – NodeChildrenChanged fired on path /overseer/queue state SyncConnected
2318 [zkCallback-2-thread-1] INFO org.apache.solr.common.cloud.ZkStateReader – A cluster state change: WatchedEvent state:SyncConnected type:NodeDataChanged path:/clusterstate.json, has occurred - updating... (live nodes size: 1)
5227 [coreLoadExecutor-6-thread-1] INFO org.apache.solr.cloud.ZkController – Could not find collection configName - pausing for 3 seconds and trying again - try: 2
5228 [coreLoadExecutor-6-thread-2] INFO org.apache.solr.cloud.ZkController – Could not find collection configName - pausing for 3 seconds and trying again - try: 2
8229 [coreLoadExecutor-6-thread-1] INFO org.apache.solr.cloud.ZkController – Could not find collection configName - pausing for 3 seconds and trying again - try: 3
8229 [coreLoadExecutor-6-thread-2] INFO org.apache.solr.cloud.ZkController – Could not find collection configName - pausing for 3 seconds and trying again - try: 3
11232 [coreLoadExecutor-6-thread-1] INFO org.apache.solr.cloud.ZkController – Could not find collection configName - pausing for 3 seconds and trying again - try: 4
11232 [coreLoadExecutor-6-thread-2] INFO org.apache.solr.cloud.ZkController – Could not find collection configName - pausing for 3 seconds and trying again - try: 4
14237 [coreLoadExecutor-6-thread-1] INFO org.apache.solr.cloud.ZkController – Could not find collection configName - pausing for 3 seconds and trying again - try: 5
14237 [coreLoadExecutor-6-thread-2] INFO org.apache.solr.cloud.ZkController – Could not find collection configName - pausing for 3 seconds and trying again - try: 5
17237 [coreLoadExecutor-6-thread-1] ERROR org.apache.solr.cloud.ZkController – Could not find configName for collection logmail
17238 [coreLoadExecutor-6-thread-2] ERROR org.apache.solr.cloud.ZkController – Could not find configName for collection collection1
17240 [coreLoadExecutor-6-thread-1] ERROR org.apache.solr.core.CoreContainer – Error creating core [logmail]: Could not find configName for collection logmail found:null
org.apache.solr.common.cloud.ZooKeeperException: Could not find configName for collection logmail found:null

It looks like there is a discrepancy between what Solr has on the filesystem for your collections and what is registered in ZooKeeper.
These are hard to fix; if possible, I would recommend deleting your configuration files from ZooKeeper and reloading them.
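Reloading the configuration is usually done with the zkcli utility that ships with Solr. A minimal sketch, assuming Solr 4.x with the embedded ZooKeeper from -DzkRun on localhost:9983 and the config directory from the question (adjust paths and the zkhost for your install):

```shell
# Upload the local config directory to ZooKeeper under the name "logmail"
./zkcli.sh -zkhost localhost:9983 -cmd upconfig \
    -confdir ./solr/logmail/conf -confname logmail

# Link the logmail collection to that uploaded config set
./zkcli.sh -zkhost localhost:9983 -cmd linkconfig \
    -collection logmail -confname logmail
```

After that, restarting Solr should let the core find its configName in ZooKeeper.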

You have a typo in your command: Java system property names are case-sensitive, so -DBootstrap_confdir (capital B) is never seen by Solr, which looks for bootstrap_confdir. The same applies to the shard count, which Solr reads as numShards. This should do the trick:
$ java -Dcollection.configName=logmail -DzkRun -DnumShards=2 -Dbootstrap_confdir=./solr/logmail/conf -jar start.jar
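The case-sensitivity point is easy to verify outside Solr. This minimal, Solr-independent sketch shows that a property set with one capitalization is invisible under another:

```java
public class PropCase {
    public static void main(String[] args) {
        // Simulates passing -DBootstrap_confdir=./solr/logmail/conf on the command line
        System.setProperty("Bootstrap_confdir", "./solr/logmail/conf");

        // A lookup with the all-lowercase key (what Solr uses) sees nothing
        System.out.println(System.getProperty("bootstrap_confdir")); // prints "null"
    }
}
```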

Related

Apache Camel Route is working under Windows but not Linux

I am executing Camel code on Windows using Eclipse and it is working fine.
However, when I execute the same code standalone on Linux, the route prints the first log line, but then stops without any error while fetching the file.
Here is my code:
from("timer://alertstrigtimer?period=90s&repeatCount=1")
    .log(LoggingLevel.INFO, "*******************************Job-Alert-System: Started: alertstrigtimer******************************" + getFileURI(getWorkFilePath(), getWorkFileName()))
    .pollEnrich(getFileURI(getWorkFilePath(), getWorkFileName()))
    .log(LoggingLevel.INFO, "*******************************Job-Alert-System: Started: alertstrigtimer******************************" + getFileURI(getWorkFilePath(), getWorkFileName()))
    .choice()
        .when(header("CamelFileName").isNull())
            .log(LoggingLevel.INFO, "No File")
            .process(new Processor() {
                public void process(Exchange exchange) throws Exception {
                    log.info("Job-Alert-System: No Date File Exist!!!! Calculate 15 Minutes Back and fetching data from Masterdata");
                    // Do something
                }
            })
        .otherwise()
            .log(LoggingLevel.INFO, "Job Alert System: Date File Loaded: ${header.CamelFileName} at ${header.CamelFileLastModified}")
            .process(new Processor() {
                public void process(Exchange exchange) throws Exception {
                    // Do something by a processor
                }
            })
    .end();

public static String getFileURI(String filePath, String fileName) {
    return "file://" + filePath + "?fileName=" + fileName
            + "&preMove=$simple{file:onlyname.noext}.$simple{date:now:yyyy-MM-dd'T'hh-mm-ss}";
}
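Two things in the code above are worth noting. First, pollEnrich with no timeout argument waits indefinitely for a message, which matches the Linux behaviour of hanging after the first log line; the Java DSL also has a pollEnrich(uri, timeoutMs) overload that gives up after the given time. Second, the local-machine log shows a URI of file://null?fileName=null, i.e. getWorkFilePath()/getWorkFileName() returned null there. A defensive variant of the helper (a sketch; FileUriBuilder and the hard-coded values are illustrative, not from the original code) fails fast instead of silently building a broken URI:

```java
public class FileUriBuilder {
    // Same URI shape as the question's getFileURI, plus null checks
    public static String getFileURI(String filePath, String fileName) {
        if (filePath == null || fileName == null) {
            throw new IllegalStateException(
                "work file path/name not configured: path=" + filePath + ", name=" + fileName);
        }
        return "file://" + filePath + "?fileName=" + fileName
                + "&preMove=$simple{file:onlyname.noext}.$simple{date:now:yyyy-MM-dd'T'hh-mm-ss}";
    }

    public static void main(String[] args) {
        // Valid inputs produce the expected file-component URI
        System.out.println(getFileURI("/shared/wildfly/work-files/alerts",
                                      "LastExecutionTime_JobAlerts.txt"));
    }
}
```

Failing fast makes the misconfiguration obvious at startup instead of surfacing as a hung or misbehaving route.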
Here are my logs from the Linux environment:
[main] INFO org.apache.camel.impl.DefaultCamelContext - Apache Camel 2.21.1 (CamelContext: camel-1) is starting
[main] INFO org.apache.camel.management.ManagedManagementStrategy - JMX is enabled
[main] INFO org.apache.camel.impl.converter.DefaultTypeConverter - Type converters loaded (core: 194, classpath: 0)
[main] INFO org.apache.camel.impl.DefaultCamelContext - StreamCaching is not in use. If using streams then its recommended to enable stream caching. See more details at http://camel.apache.org/stream-caching.html
[main] INFO org.apache.camel.impl.DefaultCamelContext - Route: route1 started and consuming from: timer://alertstrigtimer?period=90s&repeatCount=1
[main] INFO org.apache.camel.impl.DefaultCamelContext - Route: loadDataAndAlerts started and consuming from: direct://loadDataAndAlerts
[main] INFO org.apache.camel.impl.DefaultCamelContext - Total 2 routes, of which 2 are started
[main] INFO org.apache.camel.impl.DefaultCamelContext - Apache Camel 2.21.1 (CamelContext: camel-1) started in 0.664 seconds
[Camel (camel-1) thread #1 - timer://alertstrigtimer] INFO route1 - *******************************Job-Alert-System: Started: alertstrigtimer******************************file:///shared/wildfly/work-files/alerts?fileName=LastExecutionTime_JobAlerts.txt&preMove=.2020-10-12T06-48-16
It stops here. It creates a directory structure, but does not move forward.
Logs from My Local Machine:
[main] INFO org.apache.camel.impl.DefaultCamelContext - Apache Camel 2.21.1 (CamelContext: camel-1) is starting
[main] INFO org.apache.camel.management.ManagedManagementStrategy - JMX is enabled
[main] INFO org.apache.camel.impl.converter.DefaultTypeConverter - Type converters loaded (core: 194, classpath: 5)
[main] INFO org.apache.camel.impl.DefaultCamelContext - StreamCaching is not in use. If using streams then its recommended to enable stream caching. See more details at http://camel.apache.org/stream-caching.html
[main] INFO org.apache.camel.impl.DefaultCamelContext - Route: route1 started and consuming from: timer://alertstrigtimer?period=90s&repeatCount=1
[main] INFO org.apache.camel.impl.DefaultCamelContext - Route: loadDataAndAlerts started and consuming from: direct://loadDataAndAlerts
[main] INFO org.apache.camel.impl.DefaultCamelContext - Total 2 routes, of which 2 are started
[main] INFO org.apache.camel.impl.DefaultCamelContext - Apache Camel 2.21.1 (CamelContext: camel-1) started in 0.845 seconds
[Camel (camel-1) thread #1 - timer://alertstrigtimer] INFO route1 - *******************************Job-Alert-System: Started: alertstrigtimer******************************file://null?fileName=null&preMove=null.2020-10-12T10-28-51
[Camel (camel-1) thread #1 - timer://alertstrigtimer] INFO route1 - Job Alert System: Date File Loaded: null.2020-10-12T10-28-51 at 0
On the local machine it also creates the directory structure, but the file is not present, and the route moves forward.

Corrupted files with Camel Ftp component

I'm using Apache Camel to build an FTP client that downloads some files to a local directory. The program reads an XML file to get the name of the file to fetch from the FTP server. The program seems to work, except that the downloaded files are corrupted. Right now I'm trying to download some image files, but the ones I get are 14.9 KB and corrupted; no error message is shown.
This is my code:
main
public void main() throws FileNotFoundException {
    BasicConfigurator.configure();
    RutaFtp routeBuilder = new RutaFtp();
    CamelContext ctx = new DefaultCamelContext();
    try {
        ctx.addRoutes(routeBuilder);
        ctx.start();
        Thread.sleep(10000);
        ctx.stop();
    } catch (Exception e) {
        e.printStackTrace();
    }
}
camel route:
from("file:./?fileName=Datos.xml&noop=true")
    .split(xpath("//Datos/imagen/text()"))
    .setProperty("rutaArchivo", this.body())
    .log(LoggingLevel.INFO, "imagen: ${body}")
    .process(ExtraerNombre)
    .to("direct:ftp")
    .end();

from("direct:ftp")
    .pollEnrich("ftp://" + user + "@" + ftp + "/?password=" + password + "&recursive=true&passiveMode=true&fileName=${body}&delete=" + borrado)
    .to("file:C:/outputFolder?flatten=true")
    .end();
}
I've tried using the streamDownload parameter, but that prevents the files from being downloaded (I don't know why):
.pollEnrich("ftp://" + user + "@" + ftp + "/?password=" + password + "&recursive=true&passiveMode=true&streamDownload=true&fileName=${body}&delete=" + borrado)
console log:
0 [main] INFO org.apache.camel.impl.DefaultCamelContext - Apache Camel 2.15.1.redhat-621084 (CamelContext: camel-1) is starting
10 [main] INFO org.apache.camel.management.ManagedManagementStrategy - JMX is enabled
208 [main] INFO org.apache.camel.impl.converter.DefaultTypeConverter - Loaded 185 type converters
395 [main] INFO org.apache.camel.impl.DefaultCamelContext - AllowUseOriginalMessage is enabled. If access to the original message is not needed, then its recommended to turn this option off as it may improve performance.
395 [main] INFO org.apache.camel.impl.DefaultCamelContext - StreamCaching is not in use. If using streams then its recommended to enable stream caching. See more details at http://camel.apache.org/stream-caching.html
395 [main] INFO org.apache.camel.component.file.FileEndpoint - Endpoint is configured with noop=true so forcing endpoint to be idempotent as well
395 [main] INFO org.apache.camel.component.file.FileEndpoint - Using default memory based idempotent repository with cache max size: 1000
502 [main] INFO org.apache.camel.impl.DefaultCamelContext - Route: route1 started and consuming from: Endpoint[file://./?fileName=Datos.xml&noop=true]
504 [main] INFO org.apache.camel.impl.DefaultCamelContext - Route: route2 started and consuming from: Endpoint[direct://ftp]
504 [main] INFO org.apache.camel.impl.DefaultCamelContext - Total 2 routes, of which 2 is started.
507 [main] INFO org.apache.camel.impl.DefaultCamelContext - Apache Camel 2.15.1.redhat-621084 (CamelContext: camel-1) started in 0.504 seconds
1533 [Camel (camel-1) thread #0 - file://./] INFO org.apache.camel.builder.xml.XPathBuilder - Created default XPathFactory com.sun.org.apache.xpath.internal.jaxp.XPathFactoryImpl@5434283f
1635 [Camel (camel-1) thread #0 - file://./] INFO route1 - imagen: ftp://190.0.56.190:8021/pruebasumman/conductor/71708375.jpg
10521 [main] INFO org.apache.camel.impl.DefaultCamelContext - Apache Camel 2.15.1.redhat-621084 (CamelContext: camel-1) is shutting down
10524 [main] INFO org.apache.camel.impl.DefaultShutdownStrategy - Starting to graceful shutdown 2 routes (timeout 300 seconds)
10524 [Camel (camel-1) thread #2 - ShutdownTask] INFO org.apache.camel.impl.DefaultShutdownStrategy - Waiting as there are still 3 inflight and pending exchanges to complete, timeout in 300 seconds.
11525 [Camel (camel-1) thread #2 - ShutdownTask] INFO org.apache.camel.impl.DefaultShutdownStrategy - Waiting as there are still 3 inflight and pending exchanges to complete, timeout in 299 seconds.
12528 [Camel (camel-1) thread #2 - ShutdownTask] INFO org.apache.camel.impl.DefaultShutdownStrategy - Waiting as there are still 3 inflight and pending exchanges to complete, timeout in 298 seconds.
13529 [Camel (camel-1) thread #2 - ShutdownTask] INFO org.apache.camel.impl.DefaultShutdownStrategy - Waiting as there are still 3 inflight and pending exchanges to complete, timeout in 297 seconds.
14540 [Camel (camel-1) thread #2 - ShutdownTask] INFO org.apache.camel.impl.DefaultShutdownStrategy - Waiting as there are still 3 inflight and pending exchanges to complete, timeout in 296 seconds.
15555 [Camel (camel-1) thread #2 - ShutdownTask] INFO org.apache.camel.impl.DefaultShutdownStrategy - Waiting as there are still 3 inflight and pending exchanges to complete, timeout in 295 seconds.
16568 [Camel (camel-1) thread #2 - ShutdownTask] INFO org.apache.camel.impl.DefaultShutdownStrategy - Waiting as there are still 3 inflight and pending exchanges to complete, timeout in 294 seconds.
17569 [Camel (camel-1) thread #2 - ShutdownTask] INFO org.apache.camel.impl.DefaultShutdownStrategy - Waiting as there are still 3 inflight and pending exchanges to complete, timeout in 293 seconds.
18574 [Camel (camel-1) thread #2 - ShutdownTask] INFO org.apache.camel.impl.DefaultShutdownStrategy - Waiting as there are still 3 inflight and pending exchanges to complete, timeout in 292 seconds.
Thanks in advance.
Download the image file in binary mode
By default, the Camel FTP component transfers files in ASCII mode, which corrupts binary content such as images. Adding binary=true to the FTP URI in your route switches the transfer to binary mode, e.g.:
.pollEnrich("ftp://" + user + "@" + ftp + "/?password=" + password + "&binary=true&recursive=true&passiveMode=true&fileName=${body}&delete=" + borrado)

Solr suddenly shuts down gracefully

I searched a lot for this problem but I can't find many resources concerning this issue.
We are running a SolrCloud set-up on 2 FreeBSD servers. Every night at exactly 12:00 our Solr servers are somehow shut down gracefully, internally by Solr. Strangely, both Solr servers are then restarted.
I don't have any clue what the cause is. It seems that Jetty may be involved, via the settings "stopAtShutdown" and "gracefulShutdown", but I am not sure what the reason is for restarting Solr.
Some log lines:
INFO - 2015-11-24 00:00:01.056; org.eclipse.jetty.server.Server; Graceful shutdown SocketConnector@0.0.0.0:8983
INFO - 2015-11-24 00:00:01.057; org.eclipse.jetty.server.Server; Graceful shutdown o.e.j.w.WebAppContext{/solr,file:/usr/local/share/examples/apache-solr/solr-webapp/webapp/},/usr/local/share/examples/apache-solr/webapps/solr.war
INFO - 2015-11-24 00:00:02.064; org.apache.solr.core.CoreContainer; Shutting down CoreContainer instance=568196821
WARN - 2015-11-24 00:00:02.065; org.apache.solr.cloud.RecoveryStrategy; Stopping recovery for core=X_shard1_replica2 coreNodeName=core_node1
WARN - 2015-11-24 00:00:02.065; org.apache.solr.cloud.RecoveryStrategy; Stopping recovery for core=Y_shard1_replica1 coreNodeName=core_node1
INFO - 2015-11-24 00:00:02.065; org.apache.solr.cloud.ZkController; publishing core=Z_shard1_replica1 state=down collection=Z
INFO - 2015-11-24 00:00:02.073; org.apache.solr.cloud.ZkController; publishing core=Zoetermeer_Openbaar_shard1_replica2 state=down collection=Zoetermeer_Openbaar
INFO - 2015-11-24 00:00:02.102; org.apache.solr.cloud.ZkController; publishing core=X_shard1_replica2 state=down collection=X
INFO - 2015-11-24 00:00:02.110; org.apache.solr.cloud.ZkController; publishing core=Y_shard1_replica1 state=down collection=Y
INFO - 2015-11-24 00:00:02.125; org.apache.solr.core.SolrCore; [Z_shard1_replica1] CLOSING SolrCore org.apache.solr.core.SolrCore@3e6f43bd
INFO - 2015-11-24 00:00:02.125; org.apache.solr.update.DirectUpdateHandler2; closing DirectUpdateHandler2{commits=0,autocommits=0,soft autocommits=0,optimizes=0,rollbacks=0,expungeDeletes=0,docsPending=0,adds=0,deletesById=0,deletesByQuery=0,errors=0,cumulative_adds=0,cumulative_deletesById=0,cumulative_deletesByQuery=0,cumulative_errors=0,transaction_logs_total_size=0,transaction_logs_total_number=0}
INFO - 2015-11-24 00:00:02.126; org.apache.solr.update.SolrCoreState; Closing SolrCoreState
INFO - 2015-11-24 00:00:02.126; org.apache.solr.update.DefaultSolrCoreState; SolrCoreState ref count has reached 0 - closing IndexWriter
INFO - 2015-11-24 00:00:02.126; org.apache.solr.update.DefaultSolrCoreState; closing IndexWriter with IndexWriterCloser
INFO - 2015-11-24 00:00:02.129; org.apache.solr.core.SolrCore; [Z_shard1_replica1] Closing main searcher on request.
INFO - 2015-11-24 00:00:02.130; org.apache.solr.core.CachingDirectoryFactory; Closing StandardDirectoryFactory - 2 directories currently being tracked
INFO - 2015-11-24 00:00:02.132; org.apache.solr.common.cloud.ZkStateReader$2; A cluster state change: WatchedEvent state:SyncConnected type:NodeDataChanged path:/clusterstate.json, has occurred - updating... (live nodes size: 2)
INFO - 2015-11-24 00:00:02.133; org.apache.solr.core.CachingDirectoryFactory; looking to close /data/solr/Z_shard1_replica1/data/index [CachedDir<<refCount=0;path=/data/solr/Z_shard1_replica1/data/index;done=false>>]
INFO - 2015-11-24 00:00:02.133; org.apache.solr.core.CachingDirectoryFactory; Closing directory: /data/solr/Z_shard1_replica1/data/index
INFO - 2015-11-24 00:00:02.134; org.apache.solr.core.CachingDirectoryFactory; looking to close /data/solr/Z_shard1_replica1/data [CachedDir<<refCount=0;path=/data/solr/Z_shard1_replica1/data;done=false>>]
INFO - 2015-11-24 00:00:02.134; org.apache.solr.core.CachingDirectoryFactory; Closing directory: /data/solr/Z_shard1_replica1/data
INFO - 2015-11-24 00:00:02.134; org.apache.solr.core.SolrCore; [X_Openbaar_shard1_replica2] CLOSING SolrCore org.apache.solr.core.SolrCore@78
Does somebody have an idea what is going on?

Unable to start Solr in DSE (Permission denied)

I am new to Cassandra. After installing DSE on CentOS, I started the DSE services successfully, but I cannot start the Solr service. I get an error while starting Solr; kindly check the error log below.
[dba@support dse]$ bin/dse cassandra -s
Tomcat: Logging to /home/dba/tomcat
[dba@support dse]$ 18:08:21,873 |-INFO in ch.qos.logback.classic.LoggerContext[default] - Found resource [logback.xml] at [file:/home/Datastax/dse/resources/cassandra/conf/logback.xml]
18:08:22,484 |-INFO in ch.qos.logback.classic.joran.action.ConfigurationAction - debug attribute not set
18:08:22,493 |-INFO in ReconfigureOnChangeFilter{invocationCounter=0} - Will scan for changes in [[/home/Datastax/dse/resources/cassandra/conf/logback.xml]] every 60 seconds.
18:08:22,493 |-INFO in ch.qos.logback.classic.joran.action.ConfigurationAction - Adding ReconfigureOnChangeFilter as a turbo filter
18:08:22,537 |-INFO in ch.qos.logback.classic.joran.action.JMXConfiguratorAction - begin
18:08:22,822 |-INFO in ch.qos.logback.core.joran.action.AppenderAction - About to instantiate appender of type [ch.qos.logback.core.rolling.RollingFileAppender]
18:08:22,828 |-INFO in ch.qos.logback.core.joran.action.AppenderAction - Naming appender as [FILE]
18:08:22,941 |-INFO in ch.qos.logback.core.rolling.FixedWindowRollingPolicy@77878e70 - Will use zip compression
18:08:22,986 |-INFO in ch.qos.logback.core.joran.action.NestedComplexPropertyIA - Assuming default type [ch.qos.logback.classic.encoder.PatternLayoutEncoder] for [encoder] property
18:08:23,037 |-INFO in ch.qos.logback.core.rolling.RollingFileAppender[FILE] - Active log file name: /home/Datastax/log/cassandra/system.log
18:08:23,037 |-INFO in ch.qos.logback.core.rolling.RollingFileAppender[FILE] - File property is set to [/home/Datastax/log/cassandra/system.log]
18:08:23,039 |-ERROR in ch.qos.logback.core.rolling.RollingFileAppender[FILE] - openFile(/home/Datastax/log/cassandra/system.log,true) call failed. java.io.FileNotFoundException: /home/Datastax/log/cassandra/system.log (Permission denied)
at java.io.FileNotFoundException: /home/Datastax/log/cassandra/system.log (Permission denied)
at at java.io.FileOutputStream.open(Native Method)
at at java.io.FileOutputStream.<init>(FileOutputStream.java:221)
at at ch.qos.logback.core.recovery.ResilientFileOutputStream.<init>(ResilientFileOutputStream.java:28)
at at ch.qos.logback.core.FileAppender.openFile(FileAppender.java:150)
at at ch.qos.logback.core.FileAppender.start(FileAppender.java:108)
at at ch.qos.logback.core.rolling.RollingFileAppender.start(RollingFileAppender.java:86)
at at ch.qos.logback.core.joran.action.AppenderAction.end(AppenderAction.java:96)
at at ch.qos.logback.core.joran.spi.Interpreter.callEndAction(Interpreter.java:317)
at at ch.qos.logback.core.joran.spi.Interpreter.endElement(Interpreter.java:196)
at at ch.qos.logback.core.joran.spi.Interpreter.endElement(Interpreter.java:182)
at at ch.qos.logback.core.joran.spi.EventPlayer.play(EventPlayer.java:62)
at at ch.qos.logback.core.joran.GenericConfigurator.doConfigure(GenericConfigurator.java:149)
at at ch.qos.logback.core.joran.GenericConfigurator.doConfigure(GenericConfigurator.java:135)
at at ch.qos.logback.core.joran.GenericConfigurator.doConfigure(GenericConfigurator.java:99)
at at ch.qos.logback.core.joran.GenericConfigurator.doConfigure(GenericConfigurator.java:49)
at at ch.qos.logback.classic.util.ContextInitializer.configureByResource(ContextInitializer.java:75)
at at ch.qos.logback.classic.util.ContextInitializer.autoConfig(ContextInitializer.java:150)
at at org.slf4j.impl.StaticLoggerBinder.init(StaticLoggerBinder.java:85)
at at org.slf4j.impl.StaticLoggerBinder.<clinit>(StaticLoggerBinder.java:55)
at at org.slf4j.LoggerFactory.bind(LoggerFactory.java:142)
at at org.slf4j.LoggerFactory.performInitialization(LoggerFactory.java:121)
at at org.slf4j.LoggerFactory.getILoggerFactory(LoggerFactory.java:332)
at at org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:284)
at at org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:305)
at at com.datastax.bdp.server.AbstractDseModule.<clinit>(AbstractDseModule.java:20)
18:08:23,933 |-INFO in ch.qos.logback.core.joran.action.AppenderAction - About t
INFO 12:38:25 Load of settings is done.
INFO 12:38:25 CQL slow log is enabled
INFO 12:38:25 CQL system info tables are not enabled
INFO 12:38:25 Resource level latency tracking is not enabled
INFO 12:38:25 Database summary stats are not enabled
INFO 12:38:25 Cluster summary stats are not enabled
INFO 12:38:25 Histogram data tables are not enabled
INFO 12:38:25 User level latency tracking is not enabled
INFO 12:38:25 Spark cluster info tables are not enabled
INFO 12:38:25 Loading settings from file:/home/Datastax/dse/resources/cassandra/conf/cassandra.yaml
INFO 12:38:25 Node configuration:[authenticator=AllowAllAuthenticator; authorizer=AllowAllAuthorizer; auto_snapshot=true; batch_size_warn_threshold_in_kb=64; batchlog_replay_throttle_in_kb=1024; cas_contention_timeout_in_ms=1000; client_encryption_options=<REDACTED>; cluster_name=Cassandra Cluster; column_index_size_in_kb=64; commit_failure_policy=stop; commitlog_directory=/home/Datastax/commitlog; commitlog_segment_size_in_mb=32; commitlog_sync=periodic; commitlog_sync_period_in_ms=10000; compaction_throughput_mb_per_sec=16; concurrent_counter_writes=32; concurrent_reads=32; concurrent_writes=32; counter_cache_save_period=7200; counter_cache_size_in_mb=null; counter_write_request_timeout_in_ms=5000; cross_node_timeout=false; data_file_directories=[/home/Datastax/data]; disk_failure_policy=stop; dynamic_snitch_badness_threshold=0.1; dynamic_snitch_reset_interval_in_ms=600000; dynamic_snitch_update_interval_in_ms=100; endpoint_snitch=com.datastax.bdp.snitch.DseSimpleSnitch; hinted_handoff_enabled=true; hinted_handoff_throttle_in_kb=1024; incremental_backups=false; index_summary_capacity_in_mb=null; index_summary_resize_interval_in_minutes=60; inter_dc_tcp_nodelay=false; internode_compression=dc; key_cache_save_period=14400; key_cache_size_in_mb=null; listen_address=172.16.16.250; max_hint_window_in_ms=10800000; max_hints_delivery_threads=2; memtable_allocation_type=heap_buffers; native_transport_port=9042; num_tokens=256; partitioner=org.apache.cassandra.dht.Murmur3Partitioner; permissions_validity_in_ms=2000; range_request_timeout_in_ms=10000; read_request_timeout_in_ms=5000; request_scheduler=org.apache.cassandra.scheduler.NoScheduler; request_timeout_in_ms=10000; row_cache_save_period=0; row_cache_size_in_mb=0; rpc_address=172.16.16.250; rpc_keepalive=true; rpc_port=9160; rpc_server_type=sync; saved_caches_directory=/home/Datastax/saved_caches; seed_provider=[{class_name=org.apache.cassandra.locator.SimpleSeedProvider, parameters=[{seeds=172.16.16.250,202.129.198.236}]}]; server_encryption_options=<REDACTED>; snapshot_before_compaction=false; ssl_storage_port=7001; sstable_preemptive_open_interval_in_mb=50; start_native_transport=true; start_rpc=true; storage_port=7000; thrift_framed_transport_size_in_mb=15; tombstone_failure_threshold=100000; tombstone_warn_threshold=1000; trickle_fsync=false; trickle_fsync_interval_in_kb=10240; truncate_request_timeout_in_ms=60000; write_request_timeout_in_ms=2000]
INFO 12:38:25 DiskAccessMode 'auto' determined to be mmap, indexAccessMode is mmap
INFO 12:38:25 Global memtable on-heap threshold is enabled at 479MB
INFO 12:38:25 Global memtable off-heap threshold is enabled at 479MB
INFO 12:38:25 Detected search service is enabled, setting my workload to Search
INFO 12:38:25 Detected search service is enabled, setting my DC to Solr
INFO 12:38:25 Initialized DseDelegateSnitch with workload Search, delegating to com.datastax.bdp.snitch.DseSimpleSnitch
INFO 12:38:26 Loading settings from file:/home/Datastax/dse/resources/cassandra/conf/cassandra.yaml
INFO 12:38:26 Node configuration:[authenticator=AllowAllAuthenticator; authorizer=AllowAllAuthorizer; auto_snapshot=true; batch_size_warn_threshold_in_kb=64; batchlog_replay_throttle_in_kb=1024; cas_contention_timeout_in_ms=1000; client_encryption_options=<REDACTED>; cluster_name=Cassandra Cluster; column_index_size_in_kb=64; commit_failure_policy=stop; commitlog_directory=/home/Datastax/commitlog; commitlog_segment_size_in_mb=32; commitlog_sync=periodic; commitlog_sync_period_in_ms=10000; compaction_throughput_mb_per_sec=16; concurrent_counter_writes=32; concurrent_reads=32; concurrent_writes=32; counter_cache_save_period=7200; counter_cache_size_in_mb=null; counter_write_request_timeout_in_ms=5000; cross_node_timeout=false; data_file_directories=[/home/Datastax/data]; disk_failure_policy=stop; dynamic_snitch_badness_threshold=0.1; dynamic_snitch_reset_interval_in_ms=600000; dynamic_snitch_update_interval_in_ms=100; endpoint_snitch=com.datastax.bdp.snitch.DseSimpleSnitch; hinted_handoff_enabled=true; hinted_handoff_throttle_in_kb=1024; incremental_backups=false; index_summary_capacity_in_mb=null; index_summary_resize_interval_in_minutes=60; inter_dc_tcp_nodelay=false; internode_compression=dc; key_cache_save_period=14400; key_cache_size_in_mb=null; listen_address=172.16.16.250; max_hint_window_in_ms=10800000; max_hints_delivery_threads=2; memtable_allocation_type=heap_buffers; native_transport_port=9042; num_tokens=256; partitioner=org.apache.cassandra.dht.Murmur3Partitioner; permissions_validity_in_ms=2000; range_request_timeout_in_ms=10000; read_request_timeout_in_ms=5000; request_scheduler=org.apache.cassandra.scheduler.NoScheduler; request_timeout_in_ms=10000; row_cache_save_period=0; row_cache_size_in_mb=0; rpc_address=172.16.16.250; rpc_keepalive=true; rpc_port=9160; rpc_server_type=sync; saved_caches_directory=/home/Datastax/saved_caches; seed_provider=[{class_name=org.apache.cassandra.locator.SimpleSeedProvider, parameters=[{seeds=172.16.16.250,202.129.198.236}]}]; server_encryption_options=<REDACTED>; snapshot_before_compaction=false; ssl_storage_port=7001; sstable_preemptive_open_interval_in_mb=50; start_native_transport=true; start_rpc=true; storage_port=7000; thrift_framed_transport_size_in_mb=15; tombstone_failure_threshold=100000; tombstone_warn_threshold=1000; trickle_fsync=false; trickle_fsync_interval_in_kb=10240; truncate_request_timeout_in_ms=60000; write_request_timeout_in_ms=2000]
INFO 12:38:26 Using Solr-enabled cql queries
INFO 12:38:26 CFS operations enabled
INFO 12:38:27 UserLatencyTracking plugin using 1 async writers
INFO 12:38:27 Initializing user/object io tracker plugin
INFO 12:38:27 Initializing CQL slow query log plugin
INFO 12:38:27 Solr node health tracking is not enabled
INFO 12:38:27 Solr latency snapshots are not enabled
INFO 12:38:27 Solr slow sub-query log is not enabled
INFO 12:38:27 Solr indexing error log is not enabled
INFO 12:38:27 Solr update handler metrics are not enabled
INFO 12:38:27 Solr request handler metrics are not enabled
INFO 12:38:27 Solr index statistics reporting is not enabled
INFO 12:38:27 Solr cache statistics reporting is not enabled
INFO 12:38:27 Initializing Solr slow query log plugin...
INFO 12:38:27 Initializing Solr document validation error log plugin...
INFO 12:38:27 CqlSystemInfo plugin using 1 async writers
INFO 12:38:27 ClusterSummaryStats plugin using 8 async writers
INFO 12:38:27 DbSummaryStats plugin using 8 async writers
INFO 12:38:27 HistogramDataTables plugin using 8 async writers
INFO 12:38:27 ResourceLatencyTracking plugin using 8 async writers
INFO 12:38:27 Setting TTL to 604800
INFO 12:38:27 Setting TTL to 604800
INFO 12:38:27 Setting TTL to 604800
INFO 12:38:27 Setting TTL to 604800
INFO 12:38:27 Setting TTL to 604800
INFO 12:38:27 DSE version: 4.7.0
INFO 12:38:27 Hadoop version: 1.0.4.15
INFO 12:38:27 Hive version: 0.12.0.7
INFO 12:38:27 Pig version: 0.10.1
INFO 12:38:27 Solr version: 4.10.3.0.6
INFO 12:38:27 Sqoop version: 1.4.5.15.1
INFO 12:38:27 Mahout version: 0.8
INFO 12:38:27 Appender version: 3.1.0
INFO 12:38:27 Spark version: 1.2.1.2
INFO 12:38:27 Shark version: 1.1.1
INFO 12:38:27 Hive metastore version: 1
INFO 12:38:27 CQL slow log is enabled
INFO 12:38:27 CQL system info tables are not enabled
INFO 12:38:27 Resource level latency tracking is not enabled
INFO 12:38:27 Database summary stats are not enabled
INFO 12:38:27 Cluster summary stats are not enabled
INFO 12:38:27 Histogram data tables are not enabled
INFO 12:38:27 User level latency tracking is not enabled
INFO 12:38:27 Spark cluster info tables are not enabled
INFO 12:38:27 Using com.datastax.bdp.cassandra.cql3.DseQueryHandler as query handler for native protocol queries (as requested with -Dcassandra.custom_query_handler_class)
INFO 12:38:28 Initializing system.schema_triggers
ERROR 12:38:31 Failed managing commit log segments. Commit disk failure policy is stop; terminating thread
org.apache.cassandra.io.FSWriteError: java.io.FileNotFoundException: /home/Datastax/commitlog/CommitLog-4-1432643911014.log (Permission denied)
Can anyone point me the way to rectify this error?
This is likely a permissions issue with the parent Datastax directory. On startup DSE will attempt to create the log file (system.log), and will fail if permissions are not set up correctly on the parent directories. Can you provide more information about the following?
install method (stand-alone installer or tarball)
DSE version
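While gathering that info, a quick way to confirm the diagnosis is to reproduce the check DSE fails on startup: can the user running the service create files under the commitlog directory? A minimal sketch, with the paths taken from the log above (adjust to your install):

```shell
# Report whether a directory exists and is writable by the current user.
check_writable() {
  dir="$1"
  if [ -d "$dir" ] && [ -w "$dir" ]; then
    echo "$dir is writable"
  else
    echo "$dir is missing or not writable"
  fi
}

# Paths from the FSWriteError above; run this as the user that starts DSE.
check_writable /home/Datastax/commitlog

# If it reports "not writable" and DSE runs as the 'cassandra' user
# (typical for package installs), fix ownership of the whole tree:
#   sudo chown -R cassandra:cassandra /home/Datastax
```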

Loading solr configs in Cloudera SolrCloud

We are trying to import our data into SolrCloud using MapReduce batch indexing, and it fails at the reduce phase because solr.xml cannot be found. We created a 'twitter' collection, but looking at the logs, after Solr fails to load solr.xml it falls back to the default configuration and tries to create a 'collection1' SolrCore (which fails) and then a 'core1' SolrCore (which succeeds). I'm not sure whether we need to create our own solr.xml, or where to put it (we tried placing it in several locations, but it never seems to be picked up). Below is the log:
2022 [main] INFO org.apache.solr.hadoop.HeartBeater - Heart beat reporting class is org.apache.hadoop.mapreduce.task.TaskAttemptContextImpl
2025 [main] INFO org.apache.solr.hadoop.SolrRecordWriter - Using this unpacked directory as solr home: /data/06/mapred/local/taskTracker/distcache/3866561797898787678_-1754062477_512745567/master.hadoop/tmp/9501daf9-5011-4665-bae3-d5af1c8bcd62.solr.zip
2025 [main] INFO org.apache.solr.hadoop.SolrRecordWriter - Creating embedded Solr server with solrHomeDir: /data/06/mapred/local/taskTracker/distcache/3866561797898787678_-1754062477_512745567/master.hadoop/tmp/9501daf9-5011-4665-bae3-d5af1c8bcd62.solr.zip, fs: DFS[DFSClient[clientName=DFSClient_NONMAPREDUCE_-1828461666_1, ugi=nguyen (auth:SIMPLE)]], outputShardDir: hdfs://master.hadoop:8020/user/nguyen/twitter/outdir/reducers/_temporary/_attempt_201311191613_0320_r_000014_0/part-r-00014
2029 [Thread-64] INFO org.apache.solr.hadoop.HeartBeater - HeartBeat thread running
2030 [Thread-64] INFO org.apache.solr.hadoop.HeartBeater - Issuing heart beat for 1 threads
2083 [main] INFO org.apache.solr.core.SolrResourceLoader - new SolrResourceLoader for directory: '/data/06/mapred/local/taskTracker/distcache/3866561797898787678_-1754062477_512745567/master.hadoop/tmp/9501daf9-5011-4665-bae3-d5af1c8bcd62.solr.zip/'
2259 [main] INFO org.apache.solr.hadoop.SolrRecordWriter - Constructed instance information solr.home /data/06/mapred/local/taskTracker/distcache/3866561797898787678_-1754062477_512745567/master.hadoop/tmp/9501daf9-5011-4665-bae3-d5af1c8bcd62.solr.zip (/data/06/mapred/local/taskTracker/distcache/3866561797898787678_-1754062477_512745567/master.hadoop/tmp/9501daf9-5011-4665-bae3-d5af1c8bcd62.solr.zip), instance dir /data/06/mapred/local/taskTracker/distcache/3866561797898787678_-1754062477_512745567/master.hadoop/tmp/9501daf9-5011-4665-bae3-d5af1c8bcd62.solr.zip/, conf dir /data/06/mapred/local/taskTracker/distcache/3866561797898787678_-1754062477_512745567/master.hadoop/tmp/9501daf9-5011-4665-bae3-d5af1c8bcd62.solr.zip/conf/, writing index to solr.data.dir hdfs://master.hadoop:8020/user/nguyen/twitter/outdir/reducers/_temporary/_attempt_201311191613_0320_r_000014_0/part-r-00014/data, with permdir hdfs://master.hadoop:8020/user/nguyen/twitter/outdir/reducers/_temporary/_attempt_201311191613_0320_r_000014_0/part-r-00014
2266 [main] INFO org.apache.solr.core.ConfigSolr - Loading container configuration from /data/06/mapred/local/taskTracker/distcache/3866561797898787678_-1754062477_512745567/master.hadoop/tmp/9501daf9-5011-4665-bae3-d5af1c8bcd62.solr.zip/solr.xml
2267 [main] INFO org.apache.solr.core.ConfigSolr - /data/06/mapred/local/taskTracker/distcache/3866561797898787678_-1754062477_512745567/master.hadoop/tmp/9501daf9-5011-4665-bae3-d5af1c8bcd62.solr.zip/solr.xml does not exist, using default configuration
2505 [main] INFO org.apache.solr.core.CoreContainer - New CoreContainer 696103669
2505 [main] INFO org.apache.solr.core.CoreContainer - Loading cores into CoreContainer [instanceDir=/data/06/mapred/local/taskTracker/distcache/3866561797898787678_-1754062477_512745567/master.hadoop/tmp/9501daf9-5011-4665-bae3-d5af1c8bcd62.solr.zip/]
2515 [main] INFO org.apache.solr.handler.component.HttpShardHandlerFactory - Setting socketTimeout to: 0
2515 [main] INFO org.apache.solr.handler.component.HttpShardHandlerFactory - Setting urlScheme to: http://
2515 [main] INFO org.apache.solr.handler.component.HttpShardHandlerFactory - Setting connTimeout to: 0
2515 [main] INFO org.apache.solr.handler.component.HttpShardHandlerFactory - Setting maxConnectionsPerHost to: 20
2516 [main] INFO org.apache.solr.handler.component.HttpShardHandlerFactory - Setting corePoolSize to: 0
2516 [main] INFO org.apache.solr.handler.component.HttpShardHandlerFactory - Setting maximumPoolSize to: 2147483647
2516 [main] INFO org.apache.solr.handler.component.HttpShardHandlerFactory - Setting maxThreadIdleTime to: 5
2516 [main] INFO org.apache.solr.handler.component.HttpShardHandlerFactory - Setting sizeOfQueue to: -1
2516 [main] INFO org.apache.solr.handler.component.HttpShardHandlerFactory - Setting fairnessPolicy to: false
2527 [main] INFO org.apache.solr.client.solrj.impl.HttpClientUtil - Creating new http client, config:maxConnectionsPerHost=20&maxConnections=10000&socketTimeout=0&connTimeout=0&retry=false
2648 [main] INFO org.apache.solr.logging.LogWatcher - Registering Log Listener [Log4j (org.slf4j.impl.Log4jLoggerFactory)]
2676 [coreLoadExecutor-3-thread-1] INFO org.apache.solr.core.CoreContainer - Creating SolrCore 'collection1' using instanceDir: /data/06/mapred/local/taskTracker/distcache/3866561797898787678_-1754062477_512745567/master.hadoop/tmp/9501daf9-5011-4665-bae3-d5af1c8bcd62.solr.zip/collection1
2677 [coreLoadExecutor-3-thread-1] INFO org.apache.solr.core.SolrResourceLoader - new SolrResourceLoader for directory: '/data/06/mapred/local/taskTracker/distcache/3866561797898787678_-1754062477_512745567/master.hadoop/tmp/9501daf9-5011-4665-bae3-d5af1c8bcd62.solr.zip/collection1/'
2691 [coreLoadExecutor-3-thread-1] ERROR org.apache.solr.core.CoreContainer - Failed to load file /data/06/mapred/local/taskTracker/distcache/3866561797898787678_-1754062477_512745567/master.hadoop/tmp/9501daf9-5011-4665-bae3-d5af1c8bcd62.solr.zip/collection1/solrconfig.xml
2693 [coreLoadExecutor-3-thread-1] ERROR org.apache.solr.core.CoreContainer - Unable to create core: collection1
org.apache.solr.common.SolrException: Could not load config for solrconfig.xml
at org.apache.solr.core.CoreContainer.createFromLocal(CoreContainer.java:596)
at org.apache.solr.core.CoreContainer.create(CoreContainer.java:661)
at org.apache.solr.core.CoreContainer$1.call(CoreContainer.java:368)
at org.apache.solr.core.CoreContainer$1.call(CoreContainer.java:360)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
at java.util.concurrent.FutureTask.run(FutureTask.java:166)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
at java.util.concurrent.FutureTask.run(FutureTask.java:166)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:724)
Caused by: java.io.IOException: Can't find resource 'solrconfig.xml' in classpath or '/data/06/mapred/local/taskTracker/distcache/3866561797898787678_-1754062477_512745567/master.hadoop/tmp/9501daf9-5011-4665-bae3-d5af1c8bcd62.solr.zip/collection1/conf/', cwd=/data/05/mapred/local/taskTracker/nguyen/jobcache/job_201311191613_0320/attempt_201311191613_0320_r_000014_0/work
at org.apache.solr.core.SolrResourceLoader.openResource(SolrResourceLoader.java:322)
at org.apache.solr.core.SolrResourceLoader.openConfig(SolrResourceLoader.java:287)
at org.apache.solr.core.Config.<init>(Config.java:116)
at org.apache.solr.core.Config.<init>(Config.java:86)
at org.apache.solr.core.SolrConfig.<init>(SolrConfig.java:120)
at org.apache.solr.core.CoreContainer.createFromLocal(CoreContainer.java:593)
... 11 more
2695 [coreLoadExecutor-3-thread-1] ERROR org.apache.solr.core.CoreContainer - null:org.apache.solr.common.SolrException: Unable to create core: collection1
at org.apache.solr.core.CoreContainer.recordAndThrow(CoreContainer.java:1158)
at org.apache.solr.core.CoreContainer.create(CoreContainer.java:670)
at org.apache.solr.core.CoreContainer$1.call(CoreContainer.java:368)
at org.apache.solr.core.CoreContainer$1.call(CoreContainer.java:360)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
at java.util.concurrent.FutureTask.run(FutureTask.java:166)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
at java.util.concurrent.FutureTask.run(FutureTask.java:166)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:724)
Caused by: org.apache.solr.common.SolrException: Could not load config for solrconfig.xml
at org.apache.solr.core.CoreContainer.createFromLocal(CoreContainer.java:596)
at org.apache.solr.core.CoreContainer.create(CoreContainer.java:661)
... 10 more
Caused by: java.io.IOException: Can't find resource 'solrconfig.xml' in classpath or '/data/06/mapred/local/taskTracker/distcache/3866561797898787678_-1754062477_512745567/master.hadoop/tmp/9501daf9-5011-4665-bae3-d5af1c8bcd62.solr.zip/collection1/conf/', cwd=/data/05/mapred/local/taskTracker/nguyen/jobcache/job_201311191613_0320/attempt_201311191613_0320_r_000014_0/work
at org.apache.solr.core.SolrResourceLoader.openResource(SolrResourceLoader.java:322)
at org.apache.solr.core.SolrResourceLoader.openConfig(SolrResourceLoader.java:287)
at org.apache.solr.core.Config.<init>(Config.java:116)
at org.apache.solr.core.Config.<init>(Config.java:86)
at org.apache.solr.core.SolrConfig.<init>(SolrConfig.java:120)
at org.apache.solr.core.CoreContainer.createFromLocal(CoreContainer.java:593)
... 11 more
2697 [main] INFO org.apache.solr.core.CoreContainer - Creating SolrCore 'core1' using instanceDir: /data/06/mapred/local/taskTracker/distcache/3866561797898787678_-1754062477_512745567/master.hadoop/tmp/9501daf9-5011-4665-bae3-d5af1c8bcd62.solr.zip
2697 [main] INFO org.apache.solr.core.SolrResourceLoader - new SolrResourceLoader for directory: '/data/06/mapred/local/taskTracker/distcache/3866561797898787678_-1754062477_512745567/master.hadoop/tmp/9501daf9-5011-4665-bae3-d5af1c8bcd62.solr.zip/'
2751 [main] INFO org.apache.solr.core.SolrConfig - Adding specified lib dirs to ClassLoader
2752 [main] WARN org.apache.solr.core.SolrResourceLoader - Can't find (or read) directory to add to classloader: ../../../contrib/extraction/lib (resolved as: /data/06/mapred/local/taskTracker/distcache/3866561797898787678_-1754062477_512745567/master.hadoop/tmp/9501daf9-5011-4665-bae3-d5af1c8bcd62.solr.zip/../../../contrib/extraction/lib).
2752 [main] WARN org.apache.solr.core.SolrResourceLoader - Can't find (or read) directory to add to classloader: ../../../dist/ (resolved as: /data/06/mapred/local/taskTracker/distcache/3866561797898787678_-1754062477_512745567/master.hadoop/tmp/9501daf9-5011-4665-bae3-d5af1c8bcd62.solr.zip/../../../dist).
2752 [main] WARN org.apache.solr.core.SolrResourceLoader - Can't find (or read) directory to add to classloader: ../../../contrib/clustering/lib/ (resolved as: /data/06/mapred/local/taskTracker/distcache/3866561797898787678_-1754062477_512745567/master.hadoop/tmp/9501daf9-5011-4665-bae3-d5af1c8bcd62.solr.zip/../../../contrib/clustering/lib).
2753 [main] WARN org.apache.solr.core.SolrResourceLoader - Can't find (or read) directory to add to classloader: ../../../dist/ (resolved as: /data/06/mapred/local/taskTracker/distcache/3866561797898787678_-1754062477_512745567/master.hadoop/tmp/9501daf9-5011-4665-bae3-d5af1c8bcd62.solr.zip/../../../dist).
2753 [main] WARN org.apache.solr.core.SolrResourceLoader - Can't find (or read) directory to add to classloader: ../../../contrib/langid/lib/ (resolved as: /data/06/mapred/local/taskTracker/distcache/3866561797898787678_-1754062477_512745567/master.hadoop/tmp/9501daf9-5011-4665-bae3-d5af1c8bcd62.solr.zip/../../../contrib/langid/lib).
2753 [main] WARN org.apache.solr.core.SolrResourceLoader - Can't find (or read) directory to add to classloader: ../../../dist/ (resolved as: /data/06/mapred/local/taskTracker/distcache/3866561797898787678_-1754062477_512745567/master.hadoop/tmp/9501daf9-5011-4665-bae3-d5af1c8bcd62.solr.zip/../../../dist).
2753 [main] WARN org.apache.solr.core.SolrResourceLoader - Can't find (or read) directory to add to classloader: ../../../contrib/velocity/lib (resolved as: /data/06/mapred/local/taskTracker/distcache/3866561797898787678_-1754062477_512745567/master.hadoop/tmp/9501daf9-5011-4665-bae3-d5af1c8bcd62.solr.zip/../../../contrib/velocity/lib).
2753 [main] WARN org.apache.solr.core.SolrResourceLoader - Can't find (or read) directory to add to classloader: ../../../dist/ (resolved as: /data/06/mapred/local/taskTracker/distcache/3866561797898787678_-1754062477_512745567/master.hadoop/tmp/9501daf9-5011-4665-bae3-d5af1c8bcd62.solr.zip/../../../dist).
2785 [main] INFO org.apache.solr.update.SolrIndexConfig - IndexWriter infoStream solr logging is enabled
2790 [main] INFO org.apache.solr.core.SolrConfig - Using Lucene MatchVersion: LUCENE_44
2869 [main] INFO org.apache.solr.core.Config - Loaded SolrConfig: solrconfig.xml
2879 [main] INFO org.apache.solr.schema.IndexSchema - Reading Solr Schema from schema.xml
2937 [main] INFO org.apache.solr.schema.IndexSchema - [core1] Schema name=twitter
3352 [main] INFO org.apache.solr.schema.IndexSchema - unique key field: id
3471 [main] INFO org.apache.solr.schema.FileExchangeRateProvider - Reloading exchange rates from file currency.xml
3478 [main] INFO org.apache.solr.schema.FileExchangeRateProvider - Reloading exchange rates from file currency.xml
3635 [main] INFO org.apache.solr.core.HdfsDirectoryFactory - Solr Kerberos Authentication disabled
3636 [main] INFO org.apache.solr.core.JmxMonitoredMap - No JMX servers found, not exposing Solr information with JMX.
3652 [main] INFO org.apache.solr.core.HdfsDirectoryFactory - creating directory factory for path hdfs://master.hadoop:8020/user/nguyen/twitter/outdir/reducers/_temporary/_attempt_201311191613_0320_r_000014_0/part-r-00014/data
3686 [main] INFO org.apache.solr.core.CachingDirectoryFactory - return new directory for hdfs://master.hadoop:8020/user/nguyen/twitter/outdir/reducers/_temporary/_attempt_201311191613_0320_r_000014_0/part-r-00014/data
3711 [main] WARN org.apache.solr.core.SolrCore - [core1] Solr index directory 'hdfs:/master.hadoop:8020/user/nguyen/twitter/outdir/reducers/_temporary/_attempt_201311191613_0320_r_000014_0/part-r-00014/data/index' doesn't exist. Creating new index...
3719 [main] INFO org.apache.solr.core.HdfsDirectoryFactory - creating directory factory for path hdfs://master.hadoop:8020/user/nguyen/twitter/outdir/reducers/_temporary/_attempt_201311191613_0320_r_000014_0/part-r-00014/data/index
3719 [main] INFO org.apache.solr.core.HdfsDirectoryFactory - Number of slabs of block cache [1] with direct memory allocation set to [true]
3720 [main] INFO org.apache.solr.core.HdfsDirectoryFactory - Block cache target memory usage, slab size of [134217728] will allocate [1] slabs and use ~[134217728] bytes
3721 [main] INFO org.apache.solr.store.blockcache.BufferStore - Initializing the 1024 buffers with [8192] buffers.
3740 [main] INFO org.apache.solr.store.blockcache.BufferStore - Initializing the 8192 buffers with [8192] buffers.
3891 [main] INFO org.apache.solr.core.CachingDirectoryFactory - return new directory for hdfs://master.hadoop:8020/user/nguyen/twitter/outdir/reducers/_temporary/_attempt_201311191613_0320_r_000014_0/part-r-00014/data/index
3988 [main] INFO org.apache.solr.update.LoggingInfoStream - [IFD][main]: init: current segments file is "null"; deletionPolicy=org.apache.solr.core.IndexDeletionPolicyWrapper#65b01d5d
3992 [main] INFO org.apache.solr.update.LoggingInfoStream - [IFD][main]: now checkpoint "" [0 segments ; isCommit = false]
3992 [main] INFO org.apache.solr.update.LoggingInfoStream - [IFD][main]: 0 msec to checkpoint
3992 [main] INFO org.apache.solr.update.LoggingInfoStream - [IW][main]: init: create=true
3992 [main] INFO org.apache.solr.update.LoggingInfoStream - [IW][main]:
dir=NRTCachingDirectory(org.apache.solr.store.hdfs.HdfsDirectory#17e5a6d8 lockFactory=org.apache.solr.store.hdfs.HdfsLockFactory#7f117668; maxCacheMB=192.0 maxMergeSizeMB=16.0)
Solr looks at the solr.home parameter and searches for a solrconfig.xml file there; if there is none, it tries to load the default configuration.
It looks like your Solr home is
/data/06/mapred/local/taskTracker/distcache/3866561797898787678_-1754062477_512745567/master.hadoop/tmp/9501daf9-5011-4665-bae3-d5af1c8bcd62.solr.zip/collection1/
Check that folder for a solrconfig.xml file:
if there is none, copy one from Solr's example directory;
if there is one, make sure the file/folder permissions match the user running the server instance.
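The steps above can be sketched as a small shell helper. This is a minimal sketch with assumed directory names (`conf_dir` and `template` are placeholders for your core's conf directory and a template config such as the one in Solr's example directory):

```shell
# Ensure a core's conf directory contains solrconfig.xml, copying one
# from a template when it is missing.
ensure_solrconfig() {
  conf_dir="$1"    # e.g. "$SOLR_HOME/collection1/conf" (hypothetical path)
  template="$2"    # e.g. path to the example solrconfig.xml
  if [ -f "$conf_dir/solrconfig.xml" ]; then
    echo "solrconfig.xml present"
  else
    mkdir -p "$conf_dir"
    cp "$template" "$conf_dir/solrconfig.xml" && echo "solrconfig.xml copied from template"
  fi
}
```

If the file is present but Solr still cannot read it, compare its ownership and mode (`ls -l`) against the user the server runs as.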
