Solr MapReduceIndexerTool not able to fetch aliases through zk - solr

Hi, while working with the MapReduceIndexerTool against Solr 4.10 Cloud, the code successfully connects to ZooKeeper but fails to fetch aliases.json. Below are the command and stack trace:
command:
hadoop --config /etc/hadoop/conf jar target/search-mr-*-job.jar org.apache.solr.hadoop.MapReduceIndexerTool -D 'mapred.child.java.opts=-Xmx500m' --log4j src/test/resources/log4j.properties --morphline-file /home/impadmin/app_quotes_morphline.conf --output-dir hdfs://impetus-i0056.impetus.co.in:8020/user/impadmin/MapReduceIndexerTool/output2 --zk-host 172.26.45.69:9983/solr --collection app.quotes hdfs://impetus-i0056.impetus.co.in:8020/apps/hive/warehouse/kst
stack trace:
WARNING: Use "yarn jar" to launch YARN applications.
1 [main] INFO org.apache.solr.common.cloud.SolrZkClient - Using default ZkCredentialsProvider
87 [main] INFO org.apache.solr.common.cloud.ConnectionManager - Waiting for client to connect to ZooKeeper
114 [main-EventThread] INFO org.apache.solr.common.cloud.ConnectionManager - Watcher org.apache.solr.common.cloud.ConnectionManager#1568159 name:ZooKeeperConnection Watcher:172.26.45.69:9983/solr got event WatchedEvent state:SyncConnected type:None path:null path:null type:None
115 [main] INFO org.apache.solr.common.cloud.ConnectionManager - Client is connected to ZooKeeper
115 [main] INFO org.apache.solr.common.cloud.SolrZkClient - Using default ZkACLProvider
Exception in thread "main" net.sourceforge.argparse4j.inf.ArgumentParserException: java.lang.IllegalArgumentException: Cannot find expected information for SolrCloud in ZooKeeper: 172.26.45.69:9983/solr
at org.apache.solr.hadoop.MapReduceIndexerTool.verifyZKStructure(MapReduceIndexerTool.java:1418)
at org.apache.solr.hadoop.MapReduceIndexerTool.run(MapReduceIndexerTool.java:716)
at org.apache.solr.hadoop.MapReduceIndexerTool.run(MapReduceIndexerTool.java:681)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
at org.apache.solr.hadoop.MapReduceIndexerTool.main(MapReduceIndexerTool.java:668)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
Caused by: java.lang.IllegalArgumentException: Cannot find expected information for SolrCloud in ZooKeeper: 172.26.45.69:9983/solr
at org.apache.solr.hadoop.ZooKeeperInspector.extractDocCollection(ZooKeeperInspector.java:88)
at org.apache.solr.hadoop.ZooKeeperInspector.extractShardUrls(ZooKeeperInspector.java:56)
at org.apache.solr.hadoop.MapReduceIndexerTool.verifyZKStructure(MapReduceIndexerTool.java:1415)
... 10 more
Caused by: org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode = NoNode for /aliases.json
at org.apache.zookeeper.KeeperException.create(KeeperException.java:111)
at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
at org.apache.zookeeper.ZooKeeper.getData(ZooKeeper.java:1155)
at org.apache.solr.common.cloud.SolrZkClient$7.execute(SolrZkClient.java:351)
at org.apache.solr.common.cloud.SolrZkClient$7.execute(SolrZkClient.java:348)
at org.apache.solr.common.cloud.ZkCmdExecutor.retryOperation(ZkCmdExecutor.java:61)
at org.apache.solr.common.cloud.SolrZkClient.getData(SolrZkClient.java:348)
at org.apache.solr.hadoop.ZooKeeperInspector.checkForAlias(ZooKeeperInspector.java:164)
at org.apache.solr.hadoop.ZooKeeperInspector.extractDocCollection(ZooKeeperInspector.java:85)
... 12 more
Please help me to identify the root cause.

The issue was with the URL being used to access the Solr configs in ZooKeeper; correcting the URL solved the problem. With an embedded Solr instance, the configuration is not stored under a /solr chroot, but directly under the ZooKeeper root.
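For example, under the assumption that the embedded ZooKeeper stores the Solr configuration directly under its root (so the /solr chroot is simply dropped from --zk-host), the corrected command would be:
hadoop --config /etc/hadoop/conf jar target/search-mr-*-job.jar org.apache.solr.hadoop.MapReduceIndexerTool -D 'mapred.child.java.opts=-Xmx500m' --log4j src/test/resources/log4j.properties --morphline-file /home/impadmin/app_quotes_morphline.conf --output-dir hdfs://impetus-i0056.impetus.co.in:8020/user/impadmin/MapReduceIndexerTool/output2 --zk-host 172.26.45.69:9983 --collection app.quotes hdfs://impetus-i0056.impetus.co.in:8020/apps/hive/warehouse/kst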

Related

flink 1.12.1 example application failing on a single node yarn cluster

I am trying out the Flink example as explained in the Flink docs on a single-node YARN cluster.
As mentioned in this discussion, HADOOP_CONF_DIR is also set as below before executing the yarn command.
export HADOOP_CONF_DIR=/etc/hadoop/conf
On executing the below command
ubuntu@vrni-platform:~/build-target/flink$ ./bin/flink run-application -t yarn-application ./examples/streaming/TopSpeedWindowing.jar
It is failing with the below errors
The program finished with the following exception:
org.apache.flink.client.deployment.ClusterDeploymentException: Couldn't deploy Yarn Application Cluster
at org.apache.flink.yarn.YarnClusterDescriptor.deployApplicationCluster(YarnClusterDescriptor.java:465)
at org.apache.flink.client.deployment.application.cli.ApplicationClusterDeployer.run(ApplicationClusterDeployer.java:67)
at org.apache.flink.client.cli.CliFrontend.runApplication(CliFrontend.java:213)
at org.apache.flink.client.cli.CliFrontend.parseAndRun(CliFrontend.java:1061)
at org.apache.flink.client.cli.CliFrontend.lambda$main$10(CliFrontend.java:1136)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1875)
at org.apache.flink.runtime.security.contexts.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41)
at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:1136)
Caused by: org.apache.flink.yarn.YarnClusterDescriptor$YarnDeploymentException: The YARN application unexpectedly switched to state FAILED during deployment.
Diagnostics from YARN: Application application_1614159836384_0045 failed 1 times (global limit =2; local limit is =1) due to AM Container for appattempt_1614159836384_0045_000001 exited with exitCode: -1000
Failing this attempt.Diagnostics: [2021-02-24 16:19:39.409]File file:/home/ubuntu/.flink/application_1614159836384_0045/flink-dist_2.12-1.12.1.jar does not exist
java.io.FileNotFoundException: File file:/home/ubuntu/.flink/application_1614159836384_0045/flink-dist_2.12-1.12.1.jar does not exist
at org.apache.hadoop.fs.RawLocalFileSystem.deprecatedGetFileStatus(RawLocalFileSystem.java:641)
at org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:867)
at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:631)
at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:442)
at org.apache.hadoop.yarn.util.FSDownload.verifyAndCopy(FSDownload.java:269)
at org.apache.hadoop.yarn.util.FSDownload.access$000(FSDownload.java:67)
at org.apache.hadoop.yarn.util.FSDownload$2.run(FSDownload.java:414)
at org.apache.hadoop.yarn.util.FSDownload$2.run(FSDownload.java:411)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1875)
at org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:411)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer$FSDownloadWrapper.doDownloadCall(ContainerLocalizer.java:242)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer$FSDownloadWrapper.call(ContainerLocalizer.java:235)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer$FSDownloadWrapper.call(ContainerLocalizer.java:223)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
I set the log level to DEBUG, and I do see that flink-dist_2.12-1.12.1.jar is getting copied to /home/ubuntu/.flink/application_1614159836384_0045:
2021-02-24 16:19:37,768 DEBUG org.apache.flink.yarn.YarnApplicationFileUploader [] - Got modification time 1614183577000 from remote path file:/home/ubuntu/.flink/application_1614159836384_0045/TopSpeedWindowing.jar
2021-02-24 16:19:37,769 DEBUG org.apache.flink.yarn.YarnApplicationFileUploader [] - Copying from file:/home/ubuntu/build-target/flink/lib/flink-dist_2.12-1.12.1.jar to file:/home/ubuntu/.flink/application_1614159836384_0045/flink-dist_2.12-1.12.1.jar with replication factor 1
I have placed the entire DEBUG logs here.
NodeManager logs have warnings like the ones below:
2021-02-24 16:36:34,219 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: Got event CONTAINER_INIT for appId application_1614159836384_0047
2021-02-24 16:36:34,220 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService: Created localizer for container_1614159836384_0047_01_000001
2021-02-24 16:36:34,222 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService: Writing credentials to the nmPrivate file /var/lib/hadoop-yarn/cache/yarn/nm-local-dir/nmPrivate/container_1614159836384_0047_01_000001.tokens
2021-02-24 16:36:34,222 INFO org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: Initializing user ubuntu
2021-02-24 16:36:34,224 INFO org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: Copying from /var/lib/hadoop-yarn/cache/yarn/nm-local-dir/nmPrivate/container_1614159836384_0047_01_000001.tokens to /var/lib/hadoop-yarn/cache/yarn/nm-local-dir/usercache/ubuntu/appcache/application_1614159836384_0047/container_1614159836384_0047_01_000001.tokens
2021-02-24 16:36:34,224 INFO org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: Localizer CWD set to /var/lib/hadoop-yarn/cache/yarn/nm-local-dir/usercache/ubuntu/appcache/application_1614159836384_0047 = file:/var/lib/hadoop-yarn/cache/yarn/nm-local-dir/usercache/ubuntu/appcache/application_1614159836384_0047
2021-02-24 16:36:34,247 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer: Disk Validator: yarn.nodemanager.disk-validator is loaded.
2021-02-24 16:36:34,268 WARN org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService: { file:/home/ubuntu/.flink/application_1614159836384_0047/flink-dist_2.12-1.12.1.jar, 1614184593000, FILE, null } failed: File file:/home/ubuntu/.flink/application_1614159836384_0047/flink-dist_2.12-1.12.1.jar does not exist
java.io.FileNotFoundException: File file:/home/ubuntu/.flink/application_1614159836384_0047/flink-dist_2.12-1.12.1.jar does not exist
at org.apache.hadoop.fs.RawLocalFileSystem.deprecatedGetFileStatus(RawLocalFileSystem.java:641)
at org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:867)
at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:631)
at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:442)
at org.apache.hadoop.yarn.util.FSDownload.verifyAndCopy(FSDownload.java:269)
at org.apache.hadoop.yarn.util.FSDownload.access$000(FSDownload.java:67)
at org.apache.hadoop.yarn.util.FSDownload$2.run(FSDownload.java:414)
at org.apache.hadoop.yarn.util.FSDownload$2.run(FSDownload.java:411)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1875)
at org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:411)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer$FSDownloadWrapper.doDownloadCall(ContainerLocalizer.java:242)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer$FSDownloadWrapper.call(ContainerLocalizer.java:235)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer$FSDownloadWrapper.call(ContainerLocalizer.java:223)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
The entire NodeManager logs are here.
Can someone let me know what is going wrong? Does Flink not support a single-node YARN cluster for development?
Flink version: 1.12.1
There was a configuration issue in my setup: hadoop-yarn-nodemanager is running as the yarn user.
ubuntu@vrni-platform:/tmp/flink$ ps -ef | grep nodemanager
yarn 4953 1 2 05:53 ? 00:11:26 /usr/lib/jvm/java-8-openjdk/bin/java -Dproc_nodemanager -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/var/lib/heap-dumps/yarn -XX:+ExitOnOutOfMemoryError -Dyarn.log.dir=/var/log/hadoop-yarn -Dyarn.log.file=hadoop-yarn-nodemanager-vrni-platform.log -Dyarn.home.dir=/usr/lib/hadoop-yarn -Dyarn.root.logger=INFO,console -Djava.library.path=/usr/lib/hadoop/lib/native -Xmx512m -Dhadoop.log.dir=/var/log/hadoop-yarn -Dhadoop.log.file=hadoop-yarn-nodemanager-vrni-platform.log -Dhadoop.home.dir=/usr/lib/hadoop -Dhadoop.id.str=yarn -Dhadoop.root.logger=INFO,RFA -Dhadoop.policy.file=hadoop-policy.xml -Dhadoop.security.logger=INFO,NullAppender org.apache.hadoop.yarn.server.nodemanager.NodeManager
I was executing the ./bin/flink command as the ubuntu user, and the yarn user does not have permission to write to ubuntu's home folder in my setup.
ubuntu@vrni-platform:/tmp/flink$ echo ~ubuntu
/home/ubuntu
ubuntu@vrni-platform:/tmp/flink$ echo ~yarn
/var/lib/hadoop-yarn
It appears Flink needs permission to write to the submitting user's home directory to create a .flink folder, even when the job is submitted to YARN. It works fine for me if I run Flink as the yarn user in my setup.
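As a minimal sketch of that workaround (assuming sudo access and that the yarn user is allowed to run the Flink CLI; the paths are from my setup):
sudo -u yarn HADOOP_CONF_DIR=/etc/hadoop/conf ./bin/flink run-application -t yarn-application ./examples/streaming/TopSpeedWindowing.jar
Alternatively, granting the yarn user write access to the submitting user's home directory should have the same effect.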

Apache flink 1.6 HA standalone cluster: Fatal error in the cluster entrypoint

I am trying to set up an Apache Flink standalone cluster consisting of two master nodes and one worker node, using Flink 1.6 and ZooKeeper. To start and stop the cluster I used the process described in Flink's 1.6 documentation, i.e. to start the cluster I ran start-zookeeper-quorum.sh and then start-cluster.sh,
and to stop the cluster I ran stop-cluster.sh.
After running one job (which failed), then stopping and restarting the cluster, I noticed an error where neither of the two job managers could start, because they are looking for the directory job_e44fdee88a931200953fed45883ee3f1, which does not exist (I am assuming this is the directory for my failed job, but I am not sure).
How do I recover the cluster from this error?
2018-09-06 14:58:04,065 ERROR org.apache.flink.runtime.entrypoint.ClusterEntrypoint - Fatal error occurred in the cluster entrypoint.
java.lang.RuntimeException: org.apache.flink.runtime.client.JobExecutionException: Could not set up JobManager
at org.apache.flink.util.ExceptionUtils.rethrow(ExceptionUtils.java:199)
at org.apache.flink.util.function.ConsumerWithException.accept(ConsumerWithException.java:40)
at org.apache.flink.runtime.dispatcher.Dispatcher.lambda$waitForTerminatingJobManager$29(Dispatcher.java:820)
at java.util.concurrent.CompletableFuture.uniRun(CompletableFuture.java:705)
at java.util.concurrent.CompletableFuture$UniRun.tryFire(CompletableFuture.java:687)
at java.util.concurrent.CompletableFuture$Completion.run(CompletableFuture.java:442)
at org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRunAsync(AkkaRpcActor.java:332)
at org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcMessage(AkkaRpcActor.java:158)
at org.apache.flink.runtime.rpc.akka.FencedAkkaRpcActor.handleRpcMessage(FencedAkkaRpcActor.java:70)
at org.apache.flink.runtime.rpc.akka.AkkaRpcActor.onReceive(AkkaRpcActor.java:142)
at org.apache.flink.runtime.rpc.akka.FencedAkkaRpcActor.onReceive(FencedAkkaRpcActor.java:40)
at akka.actor.UntypedActor$$anonfun$receive$1.applyOrElse(UntypedActor.scala:165)
at akka.actor.Actor$class.aroundReceive(Actor.scala:502)
at akka.actor.UntypedActor.aroundReceive(UntypedActor.scala:95)
at akka.actor.ActorCell.receiveMessage(ActorCell.scala:526)
at akka.actor.ActorCell.invoke(ActorCell.scala:495)
at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:257)
at akka.dispatch.Mailbox.run(Mailbox.scala:224)
at akka.dispatch.Mailbox.exec(Mailbox.scala:234)
at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
Caused by: org.apache.flink.runtime.client.JobExecutionException: Could not set up JobManager
at org.apache.flink.runtime.jobmaster.JobManagerRunner.<init>(JobManagerRunner.java:176)
at org.apache.flink.runtime.dispatcher.Dispatcher$DefaultJobManagerRunnerFactory.createJobManagerRunner(Dispatcher.java:936)
at org.apache.flink.runtime.dispatcher.Dispatcher.createJobManagerRunner(Dispatcher.java:291)
at org.apache.flink.runtime.dispatcher.Dispatcher.runJob(Dispatcher.java:281)
at org.apache.flink.util.function.ConsumerWithException.accept(ConsumerWithException.java:38)
... 21 more
Caused by: java.lang.Exception: Cannot set up the user code libraries: /hastorage/default/blob/job_e44fdee88a931200953fed45883ee3f1/blob_p-f655414c973995e93709acbd22c1c162c9c43a98-75bd4e71882f988a6c337222efadba7b (No such file or directory)
at org.apache.flink.runtime.jobmaster.JobManagerRunner.<init>(JobManagerRunner.java:134)
... 25 more
Caused by: java.io.FileNotFoundException: /hastorage/default/blob/job_e44fdee88a931200953fed45883ee3f1/blob_p-f655414c973995e93709acbd22c1c162c9c43a98-75bd4e71882f988a6c337222efadba7b (No such file or directory)
at java.io.FileInputStream.open0(Native Method)
at java.io.FileInputStream.open(FileInputStream.java:195)
at java.io.FileInputStream.<init>(FileInputStream.java:138)
at org.apache.flink.core.fs.local.LocalDataInputStream.<init>(LocalDataInputStream.java:50)
at org.apache.flink.core.fs.local.LocalFileSystem.open(LocalFileSystem.java:142)
at org.apache.flink.runtime.blob.FileSystemBlobStore.get(FileSystemBlobStore.java:102)
at org.apache.flink.runtime.blob.FileSystemBlobStore.get(FileSystemBlobStore.java:84)
at org.apache.flink.runtime.blob.BlobServer.getFileInternal(BlobServer.java:493)
at org.apache.flink.runtime.blob.BlobServer.getFileInternal(BlobServer.java:444)
at org.apache.flink.runtime.blob.BlobServer.getFile(BlobServer.java:417)
at org.apache.flink.runtime.execution.librarycache.BlobLibraryCacheManager.registerTask(BlobLibraryCacheManager.java:120)
at org.apache.flink.runtime.execution.librarycache.BlobLibraryCacheManager.registerJob(BlobLibraryCacheManager.java:91)
at org.apache.flink.runtime.jobmaster.JobManagerRunner.<init>(JobManagerRunner.java:131)
... 25 more
2018-09-06 14:58:04,069 INFO org.apache.flink.runtime.blob.TransientBlobCache - Shutting down BLOB cache
The problem you are observing is caused by a bug in Flink; you can find more details about it here. It will be fixed in the next bug-fix release.
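Until the fix is released, one workaround (an assumption about your setup: the paths below are the ZooKeeper HA defaults, high-availability.zookeeper.path.root=/flink with cluster-id default, and rmr is the ZooKeeper 3.4 CLI delete command) is to remove the stale job graph from ZooKeeper so the job managers stop trying to recover the failed job:
bin/zkCli.sh -server <zookeeper-host>:2181
ls /flink/default/jobgraphs
rmr /flink/default/jobgraphs/e44fdee88a931200953fed45883ee3f1
Then restart the cluster as before.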

solr tutorial fails to create collection

I'm trying to run the Solr 6.6.0 tutorial, and after running:
bin/solr start -e cloud -noprompt
it starts Solr on ports 8983 and 7574, but fails to create the gettingstarted collection with the following error:
ERROR: Failed to create collection 'gettingstarted' due to: {10.1.20.105:7574_solr=org.apache.solr.client.solrj.SolrServerException:IOException occured when talking to server at: http://10.1.20.105:7574/solr}
ERROR: Failed to create collection using command: [-name, gettingstarted, -shards, 2, -replicationFactor, 2, -confname, gettingstarted, -confdir, data_driven_schema_configs, -configsetsDir, /Users/rcarey/solr-6.6.0/server/solr/configsets, -solrUrl, http://localhost:8983/solr]
It looks like it's trying to create each replica on a different IP, rather than on a different port of the same IP. 10.1.20.105 is not the IP that the 8983 replica is using. I'm not sure if there's something additional I need to configure so that it uses the one IP for both. I have the host set to localhost.
The Solr Admin is available on both http://localhost:8983/solr and http://localhost:7574/solr
I get the following in the log:
24/08/2017, 11:38:36 ERROR null OverseerCollectionMessageHandler Error from shard: http://10.1.20.105:7574/solr
24/08/2017, 11:38:36 ERROR null OverseerCollectionMessageHandler Error from shard: http://10.1.20.105:7574/solr
org.apache.solr.client.solrj.SolrServerException: IOException occured when talking to server at: http://10.1.20.105:7574/solr
at org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:624)
at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:279)
at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:268)
at org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1219)
at org.apache.solr.handler.component.HttpShardHandler.lambda$submit$0(HttpShardHandler.java:163)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at com.codahale.metrics.InstrumentedExecutorService$InstrumentedRunnable.run(InstrumentedExecutorService.java:176)
at org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:229)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.http.conn.ConnectTimeoutException: Connect to 10.1.20.105:7574 timed out
at org.apache.http.conn.scheme.PlainSocketFactory.connectSocket(PlainSocketFactory.java:119)
at org.apache.http.impl.conn.DefaultClientConnectionOperator.openConnection(DefaultClientConnectionOperator.java:177)
at org.apache.http.impl.conn.ManagedClientConnectionImpl.open(ManagedClientConnectionImpl.java:304)
at org.apache.http.impl.client.DefaultRequestDirector.tryConnect(DefaultRequestDirector.java:611)
at org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:446)
at org.apache.http.impl.client.AbstractHttpClient.doExecute(AbstractHttpClient.java:882)
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:82)
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:55)
at org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:515)
... 12 more
24/08/2017, 11:38:36 ERROR null OverseerCollectionMessageHandler Cleaning up collection [gettingstarted].
24/08/2017, 11:39:06 ERROR null CollectionsHandler Timed out waiting for new collection's replicas to become ACTIVE with timeout=30
Please help me fix this.
I had the same issue. In bin/solr.in.sh, I uncommented and set the following:
SOLR_HOST="localhost"
Then things worked, because Solr communicated with the server via "localhost" instead of an IP address that timed out. This fixes the error:
SolrServerException:IOException occured when talking to server at: http://YOUR_IP/solr
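For reference, a minimal sketch of the change and restart (assuming the cloud example from the question; bin/solr stop -all stops both nodes):
# in bin/solr.in.sh, uncomment and set:
SOLR_HOST="localhost"
bin/solr stop -all
bin/solr start -e cloud -noprompt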

Unable to access SOLR server admin page

I am new to SOLR. I am building SOLR from source using solr-5.0.0-src.tgz. After running
ant compile
at solr-5.0.0/, I run
bin/solr start
at solr-5.0.0/solr/. And it says
Waiting to see Solr listening on port 8983 [/]
Started Solr server on port 8983 (pid=20151). Happy searching!
However, when visiting http://localhost:8983/solr/, I receive HTTP ERROR
HTTP ERROR: 503
Problem accessing /solr/. Reason:
Service Unavailable
Powered by Jetty://
And
bin/solr status
gives
Found 1 Solr nodes:
Solr process 20151 running on port 8983
Error: Could not find or load main class org.apache.solr.util.SolrCLI
I wonder if this is the reason the admin page is unavailable. If so, how could I solve the problem? If not, what is it?
Thanks.
Change to the solr directory and run:
ant server
Then restart the server
bin/solr stop && bin/solr start
Check that everything is working:
bin/solr status
You have not mentioned the full stack trace.
Here it is:
Exception in thread "main" java.lang.UnsupportedClassVersionError: org/apache/solr/util/SolrCLI : Unsupported major.minor version 51.0
at java.lang.ClassLoader.defineClass1(Native Method)
at java.lang.ClassLoader.defineClassCond(ClassLoader.java:631)
at java.lang.ClassLoader.defineClass(ClassLoader.java:615)
at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:141)
at java.net.URLClassLoader.defineClass(URLClassLoader.java:283)
at java.net.URLClassLoader.access$000(URLClassLoader.java:58)
at java.net.URLClassLoader$1.run(URLClassLoader.java:197)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
at java.lang.ClassLoader.loadClass(ClassLoader.java:306)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
at java.lang.ClassLoader.loadClass(ClassLoader.java:247)
Could not find the main class: org.apache.solr.util.SolrCLI. Program will exit.
To fix the problem you need to upgrade Java to J2SE 7 or newer.
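For context, "Unsupported major.minor version 51.0" means the class was compiled for Java 7 (class-file version 51), so the JVM being used is older. A quick check of which JVM the scripts pick up:
java -version
After installing a Java 7 (or newer) JDK, make sure JAVA_HOME and PATH point at it before rerunning bin/solr.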

SOLR HTTP 500 Can't find resource 'solrconfig.xml'

I have Apache SOLR working with ColdFusion on my local machine; however, when I try to make the move to production (the environments are different), I keep getting the HTTP 500 message below. The production environment uses Ubuntu Lucid, Apache, and ColdFusion 9.0.1, with the version of SOLR installed with ColdFusion.
The path for solrconfig.xml in the error message, "/opt/jrun4/servers/prod-autofeed1/cfusion.ear/cfusion.war/WEB-INF/cfusion/collections/autofeed/conf/" is correct.
Any suggestions? Thank you.
HTTP ERROR: 500
Severe errors in solr configuration.
Check your log files for more detailed information on what may be wrong.
If you want solr to continue after configuration errors, change:
<abortOnConfigurationError>false</abortOnConfigurationError>
in solr.xml
-------------------------------------------------------------
java.lang.RuntimeException: Can't find resource 'solrconfig.xml' in classpath or '/opt/jrun4/servers/prod-autofeed1/cfusion.ear/cfusion.war/WEB-INF/cfusion/collections/autofeed/conf/', cwd=/opt/jrun4/servers/cfusion/cfusion-ear/cfusion-war/WEB-INF/cfusion/solr
at org.apache.solr.core.SolrResourceLoader.openResource(SolrResourceLoader.java:260)
at org.apache.solr.core.SolrResourceLoader.openConfig(SolrResourceLoader.java:228)
at org.apache.solr.core.Config.<init>(Config.java:101)
at org.apache.solr.core.SolrConfig.<init>(SolrConfig.java:130)
at org.apache.solr.core.CoreContainer.create(CoreContainer.java:405)
at org.apache.solr.core.CoreContainer.load(CoreContainer.java:278)
at org.apache.solr.core.CoreContainer$Initializer.initialize(CoreContainer.java:117)
at org.apache.solr.servlet.SolrDispatchFilter.init(SolrDispatchFilter.java:83)
at org.mortbay.jetty.servlet.FilterHolder.doStart(FilterHolder.java:99)
at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:40)
at org.mortbay.jetty.servlet.ServletHandler.initialize(ServletHandler.java:594)
at org.mortbay.jetty.servlet.Context.startContext(Context.java:139)
at org.mortbay.jetty.webapp.WebAppContext.startContext(WebAppContext.java:1218)
at org.mortbay.jetty.handler.ContextHandler.doStart(ContextHandler.java:500)
at org.mortbay.jetty.webapp.WebAppContext.doStart(WebAppContext.java:448)
at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:40)
at org.mortbay.jetty.handler.HandlerCollection.doStart(HandlerCollection.java:147)
at org.mortbay.jetty.handler.ContextHandlerCollection.doStart(ContextHandlerCollection.java:161)
at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:40)
at org.mortbay.jetty.handler.HandlerCollection.doStart(HandlerCollection.java:147)
at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:40)
at org.mortbay.jetty.handler.HandlerWrapper.doStart(HandlerWrapper.java:117)
at org.mortbay.jetty.Server.doStart(Server.java:210)
at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:40)
at org.mortbay.xml.XmlConfiguration.main(XmlConfiguration.java:929)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
at java.lang.reflect.Method.invoke(Unknown Source)
at org.mortbay.start.Main.invokeMain(Main.java:183)
at org.mortbay.start.Main.start(Main.java:497)
at org.mortbay.start.Main.main(Main.java:115)
RequestURI=/solr/
Powered by Jetty://
Double-check permissions on the directory /opt/jrun4/servers/prod-autofeed1/cfusion.ear/cfusion.war/WEB-INF/cfusion/collections/autofeed/conf and the file /opt/jrun4/servers/prod-autofeed1/cfusion.ear/cfusion.war/WEB-INF/cfusion/collections/autofeed/conf/solrconfig.xml. If the user Solr runs as can't read the directory or file, that would do it. To test, you might even su to the user in question and simply try to cat the config file.
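A quick sketch of that test (the solr user name below is an assumption; substitute whatever account the JRun/ColdFusion process actually runs as, which you can find with ps aux):
ls -ld /opt/jrun4/servers/prod-autofeed1/cfusion.ear/cfusion.war/WEB-INF/cfusion/collections/autofeed/conf
sudo -u solr cat /opt/jrun4/servers/prod-autofeed1/cfusion.ear/cfusion.war/WEB-INF/cfusion/collections/autofeed/conf/solrconfig.xml
If the cat fails with a permission error, chown or chmod the collections tree so the service account can read it.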
