SonarQube doesn't start - database

I'm trying to install SonarQube on my CentOS 7 server.
When I try to access it via the web, I see this:
web_page
It's supposed to show a SonarQube page, right?
Here are my logs and configs:
web.log:
2019.08.29 17:24:54 INFO web[][o.s.s.a.EmbeddedTomcat] HTTP connector enabled on port 9000
2019.08.29 17:25:05 INFO web[][o.s.p.ProcessEntryPoint] Starting web
2019.08.29 17:25:05 INFO web[][o.a.t.u.n.NioSelectorPool] Using a shared selector for servlet write/read
2019.08.29 17:25:06 INFO web[][o.e.plugins] [Kurt Wagner] modules [], plugins [], sites []
2019.08.29 17:25:06 INFO web[][o.s.s.e.EsClientProvider] Connected to local Elasticsearch: [127.0.0.1:9001]
2019.08.29 17:25:06 INFO web[][o.s.s.p.LogServerVersion] SonarQube Server / 6.4.0.25310 / ad64a17b531c0e1f6fef0ce7e4d0d0b060977754
2019.08.29 17:25:06 INFO web[][o.sonar.db.Database] Create JDBC data source for jdbc:postgresql://localhost/sonar
2019.08.29 17:25:06 ERROR web[][o.s.s.p.Platform] Web server startup failed
java.lang.IllegalStateException: Can not connect to database. Please check connectivity and settings (see the properties prefixed by 'sonar.jdbc.').
at org.sonar.db.DefaultDatabase.checkConnection(DefaultDatabase.java:108)
at org.sonar.db.DefaultDatabase.start(DefaultDatabase.java:75)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.picocontainer.lifecycle.ReflectionLifecycleStrategy.invokeMethod(ReflectionLifecycleStrategy.java:110)
at org.picocontainer.lifecycle.ReflectionLifecycleStrategy.start(ReflectionLifecycleStrategy.java:89)
at org.sonar.core.platform.ComponentContainer$1.start(ComponentContainer.java:320)
at org.picocontainer.injectors.AbstractInjectionFactory$LifecycleAdapter.start(AbstractInjectionFactory.java:84)
at org.picocontainer.behaviors.AbstractBehavior.start(AbstractBehavior.java:169)
at org.picocontainer.behaviors.Stored$RealComponentLifecycle.start(Stored.java:132)
at org.picocontainer.behaviors.Stored.start(Stored.java:110)
at org.picocontainer.DefaultPicoContainer.potentiallyStartAdapter(DefaultPicoContainer.java:1016)
at org.picocontainer.DefaultPicoContainer.startAdapters(DefaultPicoContainer.java:1009)
at org.picocontainer.DefaultPicoContainer.start(DefaultPicoContainer.java:767)
at org.sonar.core.platform.ComponentContainer.startComponents(ComponentContainer.java:143)
at org.sonar.server.platform.platformlevel.PlatformLevel.start(PlatformLevel.java:88)
at org.sonar.server.platform.Platform.start(Platform.java:231)
at org.sonar.server.platform.Platform.startLevel1Container(Platform.java:190)
at org.sonar.server.platform.Platform.init(Platform.java:86)
at org.sonar.server.platform.web.PlatformServletContextListener.contextInitialized(PlatformServletContextListener.java:43)
at org.apache.catalina.core.StandardContext.listenerStart(StandardContext.java:4727)
at org.apache.catalina.core.StandardContext.startInternal(StandardContext.java:5189)
at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:150)
at org.apache.catalina.core.ContainerBase$StartChild.call(ContainerBase.java:1419)
at org.apache.catalina.core.ContainerBase$StartChild.call(ContainerBase.java:1409)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.commons.dbcp.SQLNestedException: Cannot create PoolableConnectionFactory (FATAL: password authentication failed for user "sonar")
at org.apache.commons.dbcp.BasicDataSource.createPoolableConnectionFactory(BasicDataSource.java:1549)
at org.apache.commons.dbcp.BasicDataSource.createDataSource(BasicDataSource.java:1388)
at org.apache.commons.dbcp.BasicDataSource.getConnection(BasicDataSource.java:1044)
at org.sonar.db.profiling.NullConnectionInterceptor.getConnection(NullConnectionInterceptor.java:31)
at org.sonar.db.profiling.ProfiledDataSource.getConnection(ProfiledDataSource.java:323)
at org.sonar.db.DefaultDatabase.checkConnection(DefaultDatabase.java:106)
... 30 common frames omitted
Caused by: org.postgresql.util.PSQLException: FATAL: password authentication failed for user "sonar"
at org.postgresql.core.v3.ConnectionFactoryImpl.doAuthentication(ConnectionFactoryImpl.java:451)
at org.postgresql.core.v3.ConnectionFactoryImpl.openConnectionImpl(ConnectionFactoryImpl.java:223)
at org.postgresql.core.ConnectionFactory.openConnection(ConnectionFactory.java:66)
at org.postgresql.jdbc.PgConnection.<init>(PgConnection.java:211)
at org.postgresql.Driver.makeConnection(Driver.java:407)
at org.postgresql.Driver.connect(Driver.java:275)
at org.apache.commons.dbcp.DriverConnectionFactory.createConnection(DriverConnectionFactory.java:38)
at org.apache.commons.dbcp.PoolableConnectionFactory.makeObject(PoolableConnectionFactory.java:582)
at org.apache.commons.dbcp.BasicDataSource.validateConnectionFactory(BasicDataSource.java:1556)
at org.apache.commons.dbcp.BasicDataSource.createPoolableConnectionFactory(BasicDataSource.java:1545)
... 35 common frames omitted
sonar.log:
2019.08.29 17:28:37 INFO app[][o.s.a.SchedulerImpl] Process [web] is stopped
2019.08.29 17:28:37 INFO app[][o.s.a.SchedulerImpl] Process [es] is stopped
2019.08.29 17:28:37 INFO app[][o.s.a.SchedulerImpl] SonarQube is stopped
<-- Wrapper Stopped
--> Wrapper Started as Daemon
Launching a JVM...
Wrapper (Version 3.2.3) http://wrapper.tanukisoftware.org
Copyright 1999-2006 Tanuki Software, Inc. All Rights Reserved.
2019.08.29 17:28:40 INFO app[][o.s.a.AppFileSystem] Cleaning or creating temp directory /opt/sonarqube/temp
2019.08.29 17:28:40 INFO app[][o.s.a.p.JavaProcessLauncherImpl] Launch process[es]: /usr/java/jdk1.8.0_131/jre/bin/java -Djava.awt.headless=true -Xmx1G -Xms256m -Xss256k -Djna.nosys=true -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -XX:+HeapDumpOnOutOfMemoryError -Djava.io.tmpdir=/opt/sonarqube/temp -cp ./lib/common/*:./lib/search/* org.sonar.search.SearchServer /opt/sonarqube/temp/sq-process6725369398987844378properties
2019.08.29 17:28:45 INFO app[][o.s.a.SchedulerImpl] Process[es] is up
2019.08.29 17:28:45 INFO app[][o.s.a.p.JavaProcessLauncherImpl] Launch process[web]: /usr/java/jdk1.8.0_131/jre/bin/java -Djava.awt.headless=true -Dfile.encoding=UTF-8 -Xmx512m -Xms128m -XX:+HeapDumpOnOutOfMemoryError -Djava.io.tmpdir=/opt/sonarqube/temp -cp ./lib/common/*:./lib/server/*:/opt/sonarqube/lib/jdbc/postgresql/postgresql-9.4.1209.jre7.jar org.sonar.server.app.WebServer /opt/sonarqube/temp/sq-process3428525909039640490properties
systemctl status httpd:
[oksmart@CLOUDSVRSONAR01 logs]$ systemctl status httpd -l
● httpd.service - The Apache HTTP Server
Loaded: loaded (/usr/lib/systemd/system/httpd.service; enabled; vendor preset: disabled)
Active: active (running) since Thu 2019-08-29 16:59:00 CEST; 33min ago
Docs: man:httpd(8)
man:apachectl(8)
Process: 5697 ExecStop=/bin/kill -WINCH ${MAINPID} (code=exited, status=0/SUCCESS)
Main PID: 5834 (httpd)
Status: "Total requests: 50; Current requests/sec: 0; Current traffic: 0 B/sec"
CGroup: /system.slice/httpd.service
├─5834 /usr/sbin/httpd -DFOREGROUND
├─5835 /usr/sbin/httpd -DFOREGROUND
├─5836 /usr/sbin/httpd -DFOREGROUND
├─5837 /usr/sbin/httpd -DFOREGROUND
├─5839 /usr/sbin/httpd -DFOREGROUND
├─5900 /usr/sbin/httpd -DFOREGROUND
├─5901 /usr/sbin/httpd -DFOREGROUND
├─5903 /usr/sbin/httpd -DFOREGROUND
├─5904 /usr/sbin/httpd -DFOREGROUND
├─5905 /usr/sbin/httpd -DFOREGROUND
└─5906 /usr/sbin/httpd -DFOREGROUND
Aug 29 16:59:00 CLOUDSVRSONAR01 systemd[1]: Starting The Apache HTTP Server...
Aug 29 16:59:00 CLOUDSVRSONAR01 httpd[5834]: AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using fe80::250:56ff:fe01:114a. Set the 'ServerName' directive globally to suppress this message
Aug 29 16:59:00 CLOUDSVRSONAR01 systemd[1]: Started The Apache HTTP Server.
/etc/httpd/conf.d/sonar.oksmart.es.conf:
<VirtualHost *:80>
ServerName localhost
# ProxyPreserveHost On
# ProxyPass / http://localhost:9000/
# ProxyPassReverse / http://localhost:9000/
TransferLog /var/log/httpd/sonar.oksmart.es_access.log
ErrorLog /var/log/httpd/sonar.oksmart.es_error.log
</VirtualHost>
NOTE: I commented out the proxy options because when they are enabled, I get an error on the web page.
sonar.sh:
DEF_APP_NAME="SonarQube"
DEF_APP_LONG_NAME="SonarQube"
APP_NAME="${DEF_APP_NAME}"
APP_LONG_NAME="${DEF_APP_LONG_NAME}"
WRAPPER_CMD="./wrapper"
WRAPPER_CONF="../../conf/wrapper.conf"
PRIORITY=
PIDDIR="."
RUN_AS_USER=root
...
sonar.service:
[Unit]
Description=SonarQube service
After=syslog.target network.target
[Service]
Type=forking
ExecStart=/opt/sonarqube/bin/linux-x86-64/sonar.sh start
ExecStop=/opt/sonarqube/bin/linux-x86-64/sonar.sh stop
User=root
Group=sonar
Restart=always
[Install]
WantedBy=multi-user.target
Any idea what I'm doing wrong?
The firewall is disabled.
Thanks, all!

It looks like the web container failed to start because of database authentication: PostgreSQL rejected the password for the user "sonar".
Besides that, you commented out your proxy config; that's why you're seeing the default Apache homepage.
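If the credentials really are out of sync, the usual fix (a sketch, assuming the database and user names from your JDBC URL, jdbc:postgresql://localhost/sonar, and a placeholder password) is to reset the password in PostgreSQL and make sonar.properties match:

# Test the password SonarQube is configured with (psql will prompt for it):
psql -h localhost -U sonar -d sonar -c 'SELECT 1;'
# If that fails, reset it...
sudo -u postgres psql -c "ALTER USER sonar WITH ENCRYPTED PASSWORD 'your_password_here';"
# ...and make conf/sonar.properties agree (placeholder values):
#   sonar.jdbc.username=sonar
#   sonar.jdbc.password=your_password_here
#   sonar.jdbc.url=jdbc:postgresql://localhost/sonar
sudo systemctl restart sonar

Once the web process is up, uncomment the ProxyPreserveHost/ProxyPass/ProxyPassReverse lines in your vhost so Apache forwards to port 9000 instead of serving its default page.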

Related

flink 1.12.1 example application failing on a single node yarn cluster

I am trying out the Flink example as explained in the Flink docs on a single-node YARN cluster.
As mentioned in this discussion, HADOOP_CONF_DIR is also set as below before executing the yarn command.
export HADOOP_CONF_DIR=/etc/hadoop/conf
On executing the below command
ubuntu@vrni-platform:~/build-target/flink$ ./bin/flink run-application -t yarn-application ./examples/streaming/TopSpeedWindowing.jar
It fails with the errors below.
The program finished with the following exception:
org.apache.flink.client.deployment.ClusterDeploymentException: Couldn't deploy Yarn Application Cluster
at org.apache.flink.yarn.YarnClusterDescriptor.deployApplicationCluster(YarnClusterDescriptor.java:465)
at org.apache.flink.client.deployment.application.cli.ApplicationClusterDeployer.run(ApplicationClusterDeployer.java:67)
at org.apache.flink.client.cli.CliFrontend.runApplication(CliFrontend.java:213)
at org.apache.flink.client.cli.CliFrontend.parseAndRun(CliFrontend.java:1061)
at org.apache.flink.client.cli.CliFrontend.lambda$main$10(CliFrontend.java:1136)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1875)
at org.apache.flink.runtime.security.contexts.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41)
at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:1136)
Caused by: org.apache.flink.yarn.YarnClusterDescriptor$YarnDeploymentException: The YARN application unexpectedly switched to state FAILED during deployment.
Diagnostics from YARN: Application application_1614159836384_0045 failed 1 times (global limit =2; local limit is =1) due to AM Container for appattempt_1614159836384_0045_000001 exited with exitCode: -1000
Failing this attempt.Diagnostics: [2021-02-24 16:19:39.409]File file:/home/ubuntu/.flink/application_1614159836384_0045/flink-dist_2.12-1.12.1.jar does not exist
java.io.FileNotFoundException: File file:/home/ubuntu/.flink/application_1614159836384_0045/flink-dist_2.12-1.12.1.jar does not exist
at org.apache.hadoop.fs.RawLocalFileSystem.deprecatedGetFileStatus(RawLocalFileSystem.java:641)
at org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:867)
at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:631)
at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:442)
at org.apache.hadoop.yarn.util.FSDownload.verifyAndCopy(FSDownload.java:269)
at org.apache.hadoop.yarn.util.FSDownload.access$000(FSDownload.java:67)
at org.apache.hadoop.yarn.util.FSDownload$2.run(FSDownload.java:414)
at org.apache.hadoop.yarn.util.FSDownload$2.run(FSDownload.java:411)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1875)
at org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:411)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer$FSDownloadWrapper.doDownloadCall(ContainerLocalizer.java:242)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer$FSDownloadWrapper.call(ContainerLocalizer.java:235)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer$FSDownloadWrapper.call(ContainerLocalizer.java:223)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
I set the log level to DEBUG, and I do see that flink-dist_2.12-1.12.1.jar is getting copied to /home/ubuntu/.flink/application_1614159836384_0045.
2021-02-24 16:19:37,768 DEBUG org.apache.flink.yarn.YarnApplicationFileUploader [] - Got modification time 1614183577000 from remote path file:/home/ubuntu/.flink/application_1614159836384_0045/TopSpeedWindowing.jar
2021-02-24 16:19:37,769 DEBUG org.apache.flink.yarn.YarnApplicationFileUploader [] - Copying from file:/home/ubuntu/build-target/flink/lib/flink-dist_2.12-1.12.1.jar to file:/home/ubuntu/.flink/application_1614159836384_0045/flink-dist_2.12-1.12.1.jar with replication factor 1
I have placed the entire DEBUG logs here.
The NodeManager logs have warnings like the ones below.
2021-02-24 16:36:34,219 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: Got event CONTAINER_INIT for appId application_1614159836384_0047
2021-02-24 16:36:34,220 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService: Created localizer for container_1614159836384_0047_01_000001
2021-02-24 16:36:34,222 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService: Writing credentials to the nmPrivate file /var/lib/hadoop-yarn/cache/yarn/nm-local-dir/nmPrivate/container_1614159836384_0047_01_000001.tokens
2021-02-24 16:36:34,222 INFO org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: Initializing user ubuntu
2021-02-24 16:36:34,224 INFO org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: Copying from /var/lib/hadoop-yarn/cache/yarn/nm-local-dir/nmPrivate/container_1614159836384_0047_01_000001.tokens to /var/lib/hadoop-yarn/cache/yarn/nm-local-dir/usercache/ubuntu/appcache/application_1614159836384_0047/container_1614159836384_0047_01_000001.tokens
2021-02-24 16:36:34,224 INFO org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: Localizer CWD set to /var/lib/hadoop-yarn/cache/yarn/nm-local-dir/usercache/ubuntu/appcache/application_1614159836384_0047 = file:/var/lib/hadoop-yarn/cache/yarn/nm-local-dir/usercache/ubuntu/appcache/application_1614159836384_0047
2021-02-24 16:36:34,247 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer: Disk Validator: yarn.nodemanager.disk-validator is loaded.
2021-02-24 16:36:34,268 WARN org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService: { file:/home/ubuntu/.flink/application_1614159836384_0047/flink-dist_2.12-1.12.1.jar, 1614184593000, FILE, null } failed: File file:/home/ubuntu/.flink/application_1614159836384_0047/flink-dist_2.12-1.12.1.jar does not exist
java.io.FileNotFoundException: File file:/home/ubuntu/.flink/application_1614159836384_0047/flink-dist_2.12-1.12.1.jar does not exist
at org.apache.hadoop.fs.RawLocalFileSystem.deprecatedGetFileStatus(RawLocalFileSystem.java:641)
at org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:867)
at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:631)
at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:442)
at org.apache.hadoop.yarn.util.FSDownload.verifyAndCopy(FSDownload.java:269)
at org.apache.hadoop.yarn.util.FSDownload.access$000(FSDownload.java:67)
at org.apache.hadoop.yarn.util.FSDownload$2.run(FSDownload.java:414)
at org.apache.hadoop.yarn.util.FSDownload$2.run(FSDownload.java:411)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1875)
at org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:411)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer$FSDownloadWrapper.doDownloadCall(ContainerLocalizer.java:242)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer$FSDownloadWrapper.call(ContainerLocalizer.java:235)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer$FSDownloadWrapper.call(ContainerLocalizer.java:223)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
The entire NodeManager logs are here.
Can someone let me know what is going wrong? Does Flink not support a single-node YARN cluster for development?
Flink version: 1.12.1
There was a configuration issue in my setup: the hadoop-yarn-nodemanager is running as the yarn user.
ubuntu@vrni-platform:/tmp/flink$ ps -ef | grep nodemanager
yarn 4953 1 2 05:53 ? 00:11:26 /usr/lib/jvm/java-8-openjdk/bin/java -Dproc_nodemanager -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/var/lib/heap-dumps/yarn -XX:+ExitOnOutOfMemoryError -Dyarn.log.dir=/var/log/hadoop-yarn -Dyarn.log.file=hadoop-yarn-nodemanager-vrni-platform.log -Dyarn.home.dir=/usr/lib/hadoop-yarn -Dyarn.root.logger=INFO,console -Djava.library.path=/usr/lib/hadoop/lib/native -Xmx512m -Dhadoop.log.dir=/var/log/hadoop-yarn -Dhadoop.log.file=hadoop-yarn-nodemanager-vrni-platform.log -Dhadoop.home.dir=/usr/lib/hadoop -Dhadoop.id.str=yarn -Dhadoop.root.logger=INFO,RFA -Dhadoop.policy.file=hadoop-policy.xml -Dhadoop.security.logger=INFO,NullAppender org.apache.hadoop.yarn.server.nodemanager.NodeManager
I was executing the ./bin/flink command as the ubuntu user, and the yarn user does not have permission to write to ubuntu's home folder in my setup.
ubuntu@vrni-platform:/tmp/flink$ echo ~ubuntu
/home/ubuntu
ubuntu@vrni-platform:/tmp/flink$ echo ~yarn
/var/lib/hadoop-yarn
It appears Flink needs permission to write to the user's home directory to create a .flink folder, even when the job is submitted to YARN. It works fine for me if I run Flink as the yarn user in my setup.
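Two workarounds that follow from this diagnosis (sketches for a development box, not official guidance; the paths are the ones from the logs above):

# Option 1: submit the job as the user the NodeManager runs as.
sudo -u yarn ./bin/flink run-application -t yarn-application ./examples/streaming/TopSpeedWindowing.jar
# Option 2: let the yarn user traverse and read the staging area in ubuntu's home
# (coarse permissions, acceptable for development only).
chmod o+x /home/ubuntu
chmod -R o+rX /home/ubuntu/.flink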

Apache flink 1.6 HA standalone cluster: Fatal error in the cluster entrypoint

I am trying to set up an Apache Flink standalone cluster consisting of 2 master nodes and one worker node, using Flink 1.6 and ZooKeeper. To start and stop the cluster I used the process described in the Flink 1.6 documentation, i.e. to start the cluster I ran start-zookeeper-quorum.sh and then start-cluster.sh,
and to stop the cluster I ran stop-cluster.sh.
After running one job (which failed), then stopping and restarting the cluster, I noticed an error where neither of the 2 job managers could start, because they were looking for the directory job_e44fdee88a931200953fed45883ee3f1, which does not exist (I am assuming this is the directory for my failed job, but I'm not sure).
How do I recover the cluster from this error?
2018-09-06 14:58:04,065 ERROR org.apache.flink.runtime.entrypoint.ClusterEntrypoint - Fatal error occurred in the cluster entrypoint.
java.lang.RuntimeException: org.apache.flink.runtime.client.JobExecutionException: Could not set up JobManager
at org.apache.flink.util.ExceptionUtils.rethrow(ExceptionUtils.java:199)
at org.apache.flink.util.function.ConsumerWithException.accept(ConsumerWithException.java:40)
at org.apache.flink.runtime.dispatcher.Dispatcher.lambda$waitForTerminatingJobManager$29(Dispatcher.java:820)
at java.util.concurrent.CompletableFuture.uniRun(CompletableFuture.java:705)
at java.util.concurrent.CompletableFuture$UniRun.tryFire(CompletableFuture.java:687)
at java.util.concurrent.CompletableFuture$Completion.run(CompletableFuture.java:442)
at org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRunAsync(AkkaRpcActor.java:332)
at org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcMessage(AkkaRpcActor.java:158)
at org.apache.flink.runtime.rpc.akka.FencedAkkaRpcActor.handleRpcMessage(FencedAkkaRpcActor.java:70)
at org.apache.flink.runtime.rpc.akka.AkkaRpcActor.onReceive(AkkaRpcActor.java:142)
at org.apache.flink.runtime.rpc.akka.FencedAkkaRpcActor.onReceive(FencedAkkaRpcActor.java:40)
at akka.actor.UntypedActor$$anonfun$receive$1.applyOrElse(UntypedActor.scala:165)
at akka.actor.Actor$class.aroundReceive(Actor.scala:502)
at akka.actor.UntypedActor.aroundReceive(UntypedActor.scala:95)
at akka.actor.ActorCell.receiveMessage(ActorCell.scala:526)
at akka.actor.ActorCell.invoke(ActorCell.scala:495)
at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:257)
at akka.dispatch.Mailbox.run(Mailbox.scala:224)
at akka.dispatch.Mailbox.exec(Mailbox.scala:234)
at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
Caused by: org.apache.flink.runtime.client.JobExecutionException: Could not set up JobManager
at org.apache.flink.runtime.jobmaster.JobManagerRunner.<init>(JobManagerRunner.java:176)
at org.apache.flink.runtime.dispatcher.Dispatcher$DefaultJobManagerRunnerFactory.createJobManagerRunner(Dispatcher.java:936)
at org.apache.flink.runtime.dispatcher.Dispatcher.createJobManagerRunner(Dispatcher.java:291)
at org.apache.flink.runtime.dispatcher.Dispatcher.runJob(Dispatcher.java:281)
at org.apache.flink.util.function.ConsumerWithException.accept(ConsumerWithException.java:38)
... 21 more
Caused by: java.lang.Exception: Cannot set up the user code libraries: /hastorage/default/blob/job_e44fdee88a931200953fed45883ee3f1/blob_p-f655414c973995e93709acbd22c1c162c9c43a98-75bd4e71882f988a6c337222efadba7b (No such file or directory)
at org.apache.flink.runtime.jobmaster.JobManagerRunner.<init>(JobManagerRunner.java:134)
... 25 more
Caused by: java.io.FileNotFoundException: /hastorage/default/blob/job_e44fdee88a931200953fed45883ee3f1/blob_p-f655414c973995e93709acbd22c1c162c9c43a98-75bd4e71882f988a6c337222efadba7b (No such file or directory)
at java.io.FileInputStream.open0(Native Method)
at java.io.FileInputStream.open(FileInputStream.java:195)
at java.io.FileInputStream.<init>(FileInputStream.java:138)
at org.apache.flink.core.fs.local.LocalDataInputStream.<init>(LocalDataInputStream.java:50)
at org.apache.flink.core.fs.local.LocalFileSystem.open(LocalFileSystem.java:142)
at org.apache.flink.runtime.blob.FileSystemBlobStore.get(FileSystemBlobStore.java:102)
at org.apache.flink.runtime.blob.FileSystemBlobStore.get(FileSystemBlobStore.java:84)
at org.apache.flink.runtime.blob.BlobServer.getFileInternal(BlobServer.java:493)
at org.apache.flink.runtime.blob.BlobServer.getFileInternal(BlobServer.java:444)
at org.apache.flink.runtime.blob.BlobServer.getFile(BlobServer.java:417)
at org.apache.flink.runtime.execution.librarycache.BlobLibraryCacheManager.registerTask(BlobLibraryCacheManager.java:120)
at org.apache.flink.runtime.execution.librarycache.BlobLibraryCacheManager.registerJob(BlobLibraryCacheManager.java:91)
at org.apache.flink.runtime.jobmaster.JobManagerRunner.<init>(JobManagerRunner.java:131)
... 25 more
2018-09-06 14:58:04,069 INFO org.apache.flink.runtime.blob.TransientBlobCache - Shutting down BLOB cache
The problem you are observing is caused by a bug in Flink. You can find more details about the problem here. The problem will be fixed with the next bug fix release.
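Until you can upgrade, one way to get the cluster starting again (my assumption, not from the Flink docs, and it throws away the recovery state of the failed job) is to delete the stale job entry from ZooKeeper so the job managers stop trying to recover it:

# Paths assume the default high-availability.zookeeper.path.root (/flink) and cluster-id (default).
bin/zkCli.sh -server <zookeeper-host>:2181
# inside the ZooKeeper shell (older CLIs use 'rmr' instead of 'deleteall'):
deleteall /flink/default/jobgraphs/e44fdee88a931200953fed45883ee3f1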

Atlassian Bamboo installation failing to start [closed]

I've installed Bamboo on my machine and I'm trying to run it. Every time I call bin\start-bamboo.bat from the command line it fails to start, and I can't figure out why. Here's what I have in my catalina.log:
23-May-2017 17:34:41.938 WARNING [main] org.apache.tomcat.util.digester.SetPropertiesRule.begin [SetPropertiesRule]{Server/Service/Engine/Valve} Setting property 'resolveHosts' to 'false' did not find a matching property.
23-May-2017 17:34:41.943 INFO [main] org.apache.catalina.core.AprLifecycleListener.lifecycleEvent The APR based Apache Tomcat Native library which allows optimal performance in production environments was not found on the java.library.path: C:\Program Files (x86)\Java\jdk1.7.0_55\bin;C:\WINDOWS\Sun\Java\bin;C:\WINDOWS\system32;C:\WINDOWS;C:\WINDOWS\system32;C:\WINDOWS;C:\WINDOWS\System32\Wbem;C:\WINDOWS\System32\WindowsPowerShell\v1.0\;C:\Program Files\Microsoft SQL Server\120\Tools\Binn\;C:\Program Files\Microsoft SQL Server\130\Tools\Binn\;C:\Program Files (x86)\Windows Kits\10\Windows Performance Toolkit\;C:\Program Files\Microsoft SQL Server\Client SDK\ODBC\110\Tools\Binn\;C:\Program Files (x86)\Microsoft SQL Server\120\Tools\Binn\;C:\Program Files\Microsoft SQL Server\120\DTS\Binn\;C:\Program Files (x86)\Microsoft SQL Server\120\Tools\Binn\ManagementStudio\;C:\Program Files (x86)\Microsoft SQL Server\120\DTS\Binn\;C:\Users\matth.dnx\bin;C:\Program Files\Microsoft DNX\Dnvm\;C:\Program Files (x86)\Microsoft Emulator Manager\1.0\;C:\Program Files (x86)\nodejs\;C:\Program Files\Microsoft\Web Platform Installer\;C:\Program Files\TortoiseGit\bin;C:\Program Files\Git\cmd;C:\Program Files\Intel\WiFi\bin\;C:\Program Files\Common Files\Intel\WirelessCommon\;C:\Program Files (x86)\Skype\Phone\;C:\Program Files\Intel\WiFi\bin\;C:\Program Files\Common Files\Intel\WirelessCommon\;C:\Users\matth\AppData\Roaming\npm;C:\Program Files (x86)\MSBuild\12.0\Bin;C:\Users\matth\Downloads\mysql-connector-java-5.1.42.tar\mysql-connector-java-5.1.42;.
23-May-2017 17:34:41.989 INFO [main] org.apache.coyote.AbstractProtocol.init Initializing ProtocolHandler ["http-nio-8085"]
23-May-2017 17:34:42.003 INFO [main] org.apache.tomcat.util.net.NioSelectorPool.getSharedSelector Using a shared selector for servlet write/read
23-May-2017 17:34:42.005 INFO [main] org.apache.catalina.startup.Catalina.load Initialization processed in 299 ms
23-May-2017 17:34:42.014 INFO [main] org.apache.catalina.core.StandardService.startInternal Starting service Catalina
23-May-2017 17:34:42.014 INFO [main] org.apache.catalina.core.StandardEngine.startInternal Starting Servlet Engine: Apache Tomcat/8.0.36
23-May-2017 17:34:53.384 INFO [localhost-startStop-1] org.apache.jasper.servlet.TldScanner.scanJars At least one JAR was scanned for TLDs yet contained no TLDs. Enable debug logging for this logger for a complete list of JARs that were scanned but no TLDs were found in them. Skipping unneeded JARs during scanning can improve startup time and JSP compilation time.
23-May-2017 17:34:53.404 SEVERE [localhost-startStop-1] org.apache.catalina.core.StandardContext.startInternal One or more listeners failed to start. Full details will be found in the appropriate container log file
23-May-2017 17:34:53.585 INFO [localhost-startStop-1] org.apache.catalina.util.SessionIdGeneratorBase.createSecureRandom Creation of SecureRandom instance for session ID generation using [SHA1PRNG] took [179] milliseconds.
23-May-2017 17:34:53.585 SEVERE [localhost-startStop-1] org.apache.catalina.core.StandardContext.startInternal Context [] startup failed due to previous errors
23-May-2017 17:34:53.643 INFO [main] org.apache.coyote.AbstractProtocol.start Starting ProtocolHandler ["http-nio-8085"]
23-May-2017 17:34:53.649 INFO [main] org.apache.catalina.startup.Catalina.start Server startup in 11644 ms
And here's what I have in my localhost log:
23-May-2017 17:34:53.401 INFO [localhost-startStop-1] org.apache.catalina.core.ApplicationContext.log No Spring WebApplicationInitializer types detected on classpath
23-May-2017 17:34:53.402 SEVERE [localhost-startStop-1] org.apache.catalina.core.StandardContext.listenerStart Error configuring application listener of class com.atlassian.bamboo.setup.BootstrapLoaderListener
java.lang.UnsupportedClassVersionError: com/atlassian/bamboo/setup/BootstrapLoaderListener : Unsupported major.minor version 52.0 (unable to load class com.atlassian.bamboo.setup.BootstrapLoaderListener)
at org.apache.catalina.loader.WebappClassLoaderBase.findClassInternal(WebappClassLoaderBase.java:2544)
at org.apache.catalina.loader.WebappClassLoaderBase.findClass(WebappClassLoaderBase.java:858)
at org.apache.catalina.loader.WebappClassLoaderBase.loadClass(WebappClassLoaderBase.java:1301)
at org.apache.catalina.loader.WebappClassLoaderBase.loadClass(WebappClassLoaderBase.java:1166)
at org.apache.catalina.core.DefaultInstanceManager.loadClass(DefaultInstanceManager.java:518)
at org.apache.catalina.core.DefaultInstanceManager.loadClassMaybePrivileged(DefaultInstanceManager.java:499)
at org.apache.catalina.core.DefaultInstanceManager.newInstance(DefaultInstanceManager.java:118)
at org.apache.catalina.core.StandardContext.listenerStart(StandardContext.java:4764)
at org.apache.catalina.core.StandardContext.startInternal(StandardContext.java:5303)
at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:147)
at org.apache.catalina.core.ContainerBase$StartChild.call(ContainerBase.java:1407)
at org.apache.catalina.core.ContainerBase$StartChild.call(ContainerBase.java:1397)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
23-May-2017 17:34:53.403 SEVERE [localhost-startStop-1] org.apache.catalina.core.StandardContext.listenerStart Error configuring application listener of class com.atlassian.bamboo.ww2.actions.setup.BambooContextLoaderListener
java.lang.UnsupportedClassVersionError: com/atlassian/bamboo/ww2/actions/setup/BambooContextLoaderListener : Unsupported major.minor version 52.0 (unable to load class com.atlassian.bamboo.ww2.actions.setup.BambooContextLoaderListener)
at org.apache.catalina.loader.WebappClassLoaderBase.findClassInternal(WebappClassLoaderBase.java:2544)
at org.apache.catalina.loader.WebappClassLoaderBase.findClass(WebappClassLoaderBase.java:858)
at org.apache.catalina.loader.WebappClassLoaderBase.loadClass(WebappClassLoaderBase.java:1301)
at org.apache.catalina.loader.WebappClassLoaderBase.loadClass(WebappClassLoaderBase.java:1166)
at org.apache.catalina.core.DefaultInstanceManager.loadClass(DefaultInstanceManager.java:518)
at org.apache.catalina.core.DefaultInstanceManager.loadClassMaybePrivileged(DefaultInstanceManager.java:499)
at org.apache.catalina.core.DefaultInstanceManager.newInstance(DefaultInstanceManager.java:118)
at org.apache.catalina.core.StandardContext.listenerStart(StandardContext.java:4764)
at org.apache.catalina.core.StandardContext.startInternal(StandardContext.java:5303)
at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:147)
at org.apache.catalina.core.ContainerBase$StartChild.call(ContainerBase.java:1407)
at org.apache.catalina.core.ContainerBase$StartChild.call(ContainerBase.java:1397)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
23-May-2017 17:34:53.404 SEVERE [localhost-startStop-1] org.apache.catalina.core.StandardContext.listenerStart Error configuring application listener of class com.atlassian.bamboo.upgrade.UpgradeLauncher
java.lang.UnsupportedClassVersionError: com/atlassian/bamboo/upgrade/UpgradeLauncher : Unsupported major.minor version 52.0 (unable to load class com.atlassian.bamboo.upgrade.UpgradeLauncher)
at org.apache.catalina.loader.WebappClassLoaderBase.findClassInternal(WebappClassLoaderBase.java:2544)
at org.apache.catalina.loader.WebappClassLoaderBase.findClass(WebappClassLoaderBase.java:858)
at org.apache.catalina.loader.WebappClassLoaderBase.loadClass(WebappClassLoaderBase.java:1301)
at org.apache.catalina.loader.WebappClassLoaderBase.loadClass(WebappClassLoaderBase.java:1166)
at org.apache.catalina.core.DefaultInstanceManager.loadClass(DefaultInstanceManager.java:518)
at org.apache.catalina.core.DefaultInstanceManager.loadClassMaybePrivileged(DefaultInstanceManager.java:499)
at org.apache.catalina.core.DefaultInstanceManager.newInstance(DefaultInstanceManager.java:118)
at org.apache.catalina.core.StandardContext.listenerStart(StandardContext.java:4764)
at org.apache.catalina.core.StandardContext.startInternal(StandardContext.java:5303)
at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:147)
at org.apache.catalina.core.ContainerBase$StartChild.call(ContainerBase.java:1407)
at org.apache.catalina.core.ContainerBase$StartChild.call(ContainerBase.java:1397)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
23-May-2017 17:34:53.404 SEVERE [localhost-startStop-1] org.apache.catalina.core.StandardContext.listenerStart Error configuring application listener of class com.atlassian.bamboo.session.SeraphHttpSessionDestroyedListener
java.lang.UnsupportedClassVersionError: com/atlassian/bamboo/session/SeraphHttpSessionDestroyedListener : Unsupported major.minor version 52.0 (unable to load class com.atlassian.bamboo.session.SeraphHttpSessionDestroyedListener)
at org.apache.catalina.loader.WebappClassLoaderBase.findClassInternal(WebappClassLoaderBase.java:2544)
at org.apache.catalina.loader.WebappClassLoaderBase.findClass(WebappClassLoaderBase.java:858)
at org.apache.catalina.loader.WebappClassLoaderBase.loadClass(WebappClassLoaderBase.java:1301)
at org.apache.catalina.loader.WebappClassLoaderBase.loadClass(WebappClassLoaderBase.java:1166)
at org.apache.catalina.core.DefaultInstanceManager.loadClass(DefaultInstanceManager.java:518)
at org.apache.catalina.core.DefaultInstanceManager.loadClassMaybePrivileged(DefaultInstanceManager.java:499)
at org.apache.catalina.core.DefaultInstanceManager.newInstance(DefaultInstanceManager.java:118)
at org.apache.catalina.core.StandardContext.listenerStart(StandardContext.java:4764)
at org.apache.catalina.core.StandardContext.startInternal(StandardContext.java:5303)
at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:147)
at org.apache.catalina.core.ContainerBase$StartChild.call(ContainerBase.java:1407)
at org.apache.catalina.core.ContainerBase$StartChild.call(ContainerBase.java:1397)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
23-May-2017 17:34:53.404 SEVERE [localhost-startStop-1] org.apache.catalina.core.StandardContext.listenerStart Skipped installing application listeners due to previous error(s)
If anyone could help that'd be most appreciated.
Thanks,
Matt
The hint is within the stack trace:
Unsupported major.minor version 52.0
Check the Java versions - 52.0 is Java 8: https://en.wikipedia.org/wiki/Java_class_file#General_layout
Latest versions of Bamboo require JDK 1.8 (Java 8)
https://confluence.atlassian.com/bamboo/supported-platforms-289276764.html
But Bamboo is starting with Java 7:
...C:\Program Files (x86)\Java\jdk1.7.0_55\bin...
So you need to either upgrade your current JDK, or install a Java 8 JDK alongside it and point Bamboo to that.
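For example (a sketch; the JDK path is a placeholder for wherever your Java 8 install lives):

rem Point Bamboo at a Java 8 JDK before starting it (affects the current cmd session only)
set JAVA_HOME=C:\Program Files\Java\jdk1.8.0_131
set PATH=%JAVA_HOME%\bin;%PATH%
bin\start-bamboo.bat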

Apache2 is not starting after apt upgrade

I updated Apache2 on my Raspberry Pi (using: apt install apache2 --only-upgrade) and now it is not starting:
root@pi:/etc/apache2 # service apache2 start
Job for apache2.service failed. See 'systemctl status apache2.service' and 'journalctl -xn' for details.
root@pi:/etc/apache2 # systemctl status apache2.service
● apache2.service - The Apache HTTP Server
Loaded: loaded (/lib/systemd/system/apache2.service; enabled)
Active: failed (Result: resources) since Sun 2017-02-05 16:19:48 CET; 28min ago
Feb 05 16:47:44 pi systemd[1]: Starting The Apache HTTP Server...
Feb 05 16:47:44 pi systemd[1]: apache2.service failed to run 'start' task: No such file or directory
Feb 05 16:47:44 pi systemd[1]: Failed to start The Apache HTTP Server.
Version of apache2:
Server version: Apache/2.4.25 (Raspbian)
Server built: 2017-01-25T22:59:26
apache2ctl -t shows:
Syntax OK
I tried disabling all virtual hosts (only default left) but it didn't change anything.
Output of just apache2:
[Mon Feb 06 01:25:09.079790 2017] [core:warn] [pid 2954] AH00111: Config variable ${APACHE_RUN_DIR} is not defined
apache2: Syntax error on line 80 of /etc/apache2/apache2.conf: DefaultRuntimeDir must be a valid directory, absolute or relative to ServerRoot
I had the same issue after upgrading a Dockerfile from 14.04 to 17.04.
The solution for me was to manually create the apache2 directory in /var/run.
So the fix was:
mkdir /var/run/apache2
The DefaultRuntimeDir was set to /var/run/apache2 but the folder was missing.
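One caveat: /var/run is normally a tmpfs, so a plain mkdir disappears on reboot. A persistent variant using systemd-tmpfiles (a sketch; owner and mode may differ on your distro):

echo 'd /var/run/apache2 0755 root root -' | sudo tee /etc/tmpfiles.d/apache2.conf
sudo systemd-tmpfiles --create /etc/tmpfiles.d/apache2.conf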

unable to load solrconfig.xml

I am using Apache Solr on my Drupal website.
Tomcat 6 is installed, and I have replaced the schema.xml, solr-config.xml and protwords.txt files with the new files that were present in the module installation directory.
When I browse to localhost:8983, I get this error.
Log4j (org.slf4j.impl.Log4jLoggerFactory)
2528 [coreLoadExecutor-3-thread-1] ERROR org.apache.solr.core.CoreContainer – Failed to load file /opt/solr-4.5.1/example/solr/collection1/conf/solrconfig.xml
2529 [coreLoadExecutor-3-thread-1] ERROR org.apache.solr.core.CoreContainer – Unable to create core: egitraining-dev.esc.rl.ac.uk
org.apache.solr.common.SolrException: Could not load config file /opt/solr-4.5.1/example/solr/collection1/conf/solrconfig.xml
at org.apache.solr.core.CoreContainer.createFromLocal(CoreContainer.java:490)
at org.apache.solr.core.CoreContainer.create(CoreContainer.java:557)
at org.apache.solr.core.CoreContainer$1.call(CoreContainer.java:247)
at org.apache.solr.core.CoreContainer$1.call(CoreContainer.java:239)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
at java.util.concurrent.FutureTask.run(FutureTask.java:166)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
at java.util.concurrent.FutureTask.run(FutureTask.java:166)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1146)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:679)
Caused by: java.io.IOException: Can't find resource 'solrconfig.xml' in classpath or 'solr/collection1/conf/conf/', cwd=/opt/solr-4.5.1/example
at org.apache.solr.core.SolrResourceLoader.openResource(SolrResourceLoader.java:322)
at org.apache.solr.core.SolrResourceLoader.openConfig(SolrResourceLoader.java:287)
at org.apache.solr.core.Config.<init>(Config.java:116)
at org.apache.solr.core.Config.<init>(Config.java:86)
at org.apache.solr.core.SolrConfig.<init>(SolrConfig.java:129)
at org.apache.solr.core.CoreContainer.createFromLocal(CoreContainer.java:487)
... 11 more
2531 [coreLoadExecutor-3-thread-1] ERROR org.apache.solr.core.CoreContainer – null:org.apache.solr.common.SolrException: Unable to create core: egitraining-dev.esc.rl.ac.uk
at org.apache.solr.core.CoreContainer.recordAndThrow(CoreContainer.java:934)
at org.apache.solr.core.CoreContainer.create(CoreContainer.java:566)
at org.apache.solr.core.CoreContainer$1.call(CoreContainer.java:247)
at org.apache.solr.core.CoreContainer$1.call(CoreContainer.java:239)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
at java.util.concurrent.FutureTask.run(FutureTask.java:166)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
at java.util.concurrent.FutureTask.run(FutureTask.java:166)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1146)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:679)
Caused by: org.apache.solr.common.SolrException: Could not load config file /opt/solr-4.5.1/example/solr/collection1/conf/solrconfig.xml
at org.apache.solr.core.CoreContainer.createFromLocal(CoreContainer.java:490)
at org.apache.solr.core.CoreContainer.create(CoreContainer.java:557)
... 10 more
Caused by: java.io.IOException: Can't find resource 'solrconfig.xml' in classpath or 'solr/collection1/conf/conf/', cwd=/opt/solr-4.5.1/example
at org.apache.solr.core.SolrResourceLoader.openResource(SolrResourceLoader.java:322)
at org.apache.solr.core.SolrResourceLoader.openConfig(SolrResourceLoader.java:287)
at org.apache.solr.core.Config.<init>(Config.java:116)
at org.apache.solr.core.Config.<init>(Config.java:86)
at org.apache.solr.core.SolrConfig.<init>(SolrConfig.java:129)
at org.apache.solr.core.CoreContainer.createFromLocal(CoreContainer.java:487)
... 11 more
2533 [main] INFO org.apache.solr.servlet.SolrDispatchFilter – user.dir=/opt/solr-4.5.1/example
2533 [main] INFO org.apache.solr.servlet.SolrDispatchFilter – SolrDispatchFilter.init() done
2576 [main] INFO org.eclipse.jetty.server.AbstractConnector – Started SocketConnector@0.0.0.0:8983
Can anyone help, please?
Thanks
This might have something to do with the default Solr config files provided by the Search API Solr module. Try removing the following lines from solrconfig.xml:
<useCompoundFile>false</useCompoundFile>
<ramBufferSizeMB>32</ramBufferSizeMB>
<mergeFactor>10</mergeFactor>
Patch found at https://drupal.org/comment/7945999#comment-7945999.
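Before patching anything, it may also be worth confirming the file exists where Solr is looking; note the doubled 'conf/conf/' in the error, which can indicate an extra "conf" segment in the core's configuration (a quick check, with paths taken from the log above):

ls -l /opt/solr-4.5.1/example/solr/collection1/conf/solrconfig.xml
grep -n instanceDir /opt/solr-4.5.1/example/solr/solr.xml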
