It gives me this error pointing to one of the POM files. I checked the plugin configuration and it's not null. I even updated the plugin to the latest version; that still doesn't fix it.
Execution default-site of goal org.apache.maven.plugins:maven-site-plugin:3.4:site failed: Anchor name cannot be null
It looks like that exception surfaces in a lot of cases and is incredibly unhelpful.
There was an issue a while back where not having a license name defined in the POM would throw this exception.
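If that is the cause, declaring a license name in the POM should avoid it. A minimal sketch (the license shown is just an example; use whatever license your project actually has):

<licenses>
  <license>
    <name>Apache License, Version 2.0</name>
    <url>https://www.apache.org/licenses/LICENSE-2.0</url>
  </license>
</licenses>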
I ran into this exception (with version 3.4) when I had a typo in my changes.xml file (verison instead of version). I was only tipped off by seeing that the failure came from a ChangesReportGenerator class, and even then I glanced over the typo several times before spotting it.
For anyone else running into this, take the time to check for even the most random typos - it might just be the problem.
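For comparison, a minimal changes.xml release entry with the attribute spelled correctly (the version number, date, and developer id here are made up):

<document xmlns="http://maven.apache.org/changes/1.0.0">
  <body>
    <release version="1.0.0" date="2016-09-02" description="First release">
      <action dev="jdoe" type="add">Initial import.</action>
    </release>
  </body>
</document>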
For reference, here's the complete error message from the log:
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO]
[INFO] example.products.parent ...................... SUCCESS [0.709s]
[INFO] Example Product .............................. FAILURE [2.199s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 13.162s
[INFO] Finished at: Fri Sep 02 12:53:19 CDT 2016
[INFO] Final Memory: 26M/64M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-site-plugin:3.4:site (default-site) on project example.product: Execution default-site of goal org.apache.maven.plugins:maven-site-plugin:3.4:site failed: Anchor name cannot be null! -> [Help 1]
org.apache.maven.lifecycle.LifecycleExecutionException: Failed to execute goal org.apache.maven.plugins:maven-site-plugin:3.4:site (default-site) on project example.product: Execution default-site of goal org.apache.maven.plugins:maven-site-plugin:3.4:site failed: Anchor name cannot be null!
at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:225)
at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:153)
at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:145)
at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:84)
at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:59)
at org.apache.maven.lifecycle.internal.LifecycleStarter.singleThreadedBuild(LifecycleStarter.java:183)
at org.apache.maven.lifecycle.internal.LifecycleStarter.execute(LifecycleStarter.java:161)
at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:320)
at org.apache.maven.DefaultMaven.execute(DefaultMaven.java:156)
at org.apache.maven.cli.MavenCli.execute(MavenCli.java:537)
at org.apache.maven.cli.MavenCli.doMain(MavenCli.java:196)
at org.apache.maven.cli.MavenCli.main(MavenCli.java:141)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.codehaus.plexus.classworlds.launcher.Launcher.launchEnhanced(Launcher.java:290)
at org.codehaus.plexus.classworlds.launcher.Launcher.launch(Launcher.java:230)
at org.codehaus.plexus.classworlds.launcher.Launcher.mainWithExitCode(Launcher.java:409)
at org.codehaus.plexus.classworlds.launcher.Launcher.main(Launcher.java:352)
Caused by: org.apache.maven.plugin.PluginExecutionException: Execution default-site of goal org.apache.maven.plugins:maven-site-plugin:3.4:site failed: Anchor name cannot be null!
at org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo(DefaultBuildPluginManager.java:110)
at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:209)
... 19 more
Caused by: java.lang.NullPointerException: Anchor name cannot be null!
at org.apache.maven.doxia.sink.XhtmlBaseSink.anchor(XhtmlBaseSink.java:1545)
at org.apache.maven.doxia.siterenderer.sink.SiteRendererSink.anchor(SiteRendererSink.java:253)
at org.apache.maven.doxia.sink.XhtmlBaseSink.anchor(XhtmlBaseSink.java:1533)
at org.apache.maven.plugin.issues.AbstractIssuesReportGenerator.sinkSectionTitle2Anchor(AbstractIssuesReportGenerator.java:181)
at org.apache.maven.plugin.changes.ChangesReportGenerator.constructRelease(ChangesReportGenerator.java:528)
at org.apache.maven.plugin.changes.ChangesReportGenerator.constructReleases(ChangesReportGenerator.java:511)
at org.apache.maven.plugin.changes.ChangesReportGenerator.doGenerateReport(ChangesReportGenerator.java:230)
at org.apache.maven.plugin.changes.ChangesMojo.executeReport(ChangesMojo.java:356)
at org.apache.maven.reporting.AbstractMavenReport.generate(AbstractMavenReport.java:196)
at org.apache.maven.plugins.site.render.ReportDocumentRenderer.renderDocument(ReportDocumentRenderer.java:224)
at org.apache.maven.doxia.siterenderer.DefaultSiteRenderer.renderModule(DefaultSiteRenderer.java:311)
at org.apache.maven.doxia.siterenderer.DefaultSiteRenderer.render(DefaultSiteRenderer.java:129)
at org.apache.maven.plugins.site.render.SiteMojo.renderLocale(SiteMojo.java:182)
at org.apache.maven.plugins.site.render.SiteMojo.execute(SiteMojo.java:141)
at org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo(DefaultBuildPluginManager.java:101)
... 20 more
[ERROR]
[ERROR]
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/PluginExecutionException
[ERROR]
[ERROR] After correcting the problems, you can resume the build with the command
Related
I am seeking help with the error below, which keeps being thrown. I have tried increasing the heap size, but it still complains.
The file endpoint is decrypting files of 3GB and larger, and it fails on those; with other (smaller) files it works fine.
I am also wondering why, if this is a heap issue, the OutOfMemoryError message is null (see the log below).
I have tried raising the heap size up to 8GB and the process still complains, so I suspect it may not be related to the JVM heap at all. Could you suggest what the cause might be and what remediation is possible?
[//some/folder/Receive] FileConsumer WARN Consumer[file:///some folder/Receive?delete=true] failed polling endpoint: Endpoint[file:///somefolder/Receive?delete=true]. Will try again at next poll. Caused by: [java.lang.OutOfMemoryError - null]
java.lang.OutOfMemoryError
at java.io.ByteArrayOutputStream.hugeCapacity(ByteArrayOutputStream.java:123)
at java.io.ByteArrayOutputStream.grow(ByteArrayOutputStream.java:117)
at java.io.ByteArrayOutputStream.ensureCapacity(ByteArrayOutputStream.java:93)
at java.io.ByteArrayOutputStream.write(ByteArrayOutputStream.java:153)
at org.apache.camel.converter.crypto.PGPKeyAccessDataFormat.unmarshal(PGPKeyAccessDataFormat.java:401)
at org.apache.camel.processor.UnmarshalProcessor.process(UnmarshalProcessor.java:67)
at org.apache.camel.management.InstrumentationProcessor.process(InstrumentationProcessor.java:72)
at org.apache.camel.processor.CamelInternalProcessor.process(CamelInternalProcessor.java:191)
at org.apache.camel.processor.Pipeline.process(Pipeline.java:118)
at org.apache.camel.processor.Pipeline.process(Pipeline.java:80)
at org.apache.camel.processor.TryProcessor.process(TryProcessor.java:111)
at org.apache.camel.processor.TryProcessor.process(TryProcessor.java:82)
at org.apache.camel.management.InstrumentationProcessor.process(InstrumentationProcessor.java:72)
at org.apache.camel.processor.CamelInternalProcessor.process(CamelInternalProcessor.java:191)
at org.apache.camel.processor.CamelInternalProcessor.process(CamelInternalProcessor.java:191)
at org.apache.camel.component.file.GenericFileConsumer.processExchange(GenericFileConsumer.java:423)
at org.apache.camel.component.file.GenericFileConsumer.processBatch(GenericFileConsumer.java:211)
at org.apache.camel.component.file.GenericFileConsumer.poll(GenericFileConsumer.java:175)
at org.apache.camel.impl.ScheduledPollConsumer.doRun(ScheduledPollConsumer.java:187)
at org.apache.camel.impl.ScheduledPollConsumer.run(ScheduledPollConsumer.java:114)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
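A note on why a bigger heap does not help here: the trace ends in ByteArrayOutputStream, which means the unmarshal step buffers the whole decrypted payload in a single byte[], and Java arrays are capped at roughly Integer.MAX_VALUE bytes (about 2GB). Once the stream's internal count overflows that limit, ByteArrayOutputStream throws an OutOfMemoryError with no message, which would explain the null in the log. A minimal sketch reproducing the symptom outside Camel (run with a large heap, e.g. -Xmx8g, to confirm heap size is not the limiting factor):

import java.io.ByteArrayOutputStream;

public class TwoGigLimit {
    public static void main(String[] args) {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] chunk = new byte[64 * 1024 * 1024]; // 64MB per write
        for (int i = 0; i < 48; i++) {             // 48 x 64MB = 3GB total
            // Fails near the 2GB mark with OutOfMemoryError (message: null),
            // regardless of how much heap the JVM has.
            out.write(chunk, 0, chunk.length);
        }
    }
}

If that is what is happening, no heap setting will fix a 3GB file; the remediation would be to avoid materializing the whole payload in memory (for example, streaming the decrypted output to disk) rather than tuning the JVM.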
The error occurs intermittently; after restarting Kylin (kylin.sh stop and then kylin.sh start), it finds the conf dir location and gets past this step.
I am using Kylin version 2.6.2, and KYLIN_CONF="/opt/kylin/conf" is already set correctly.
The error messages vary; I have encountered the following:
1.
java.lang.RuntimeException: Failed to read kylin_hive_conf.xml at '/opt/apache-kylin-2.6.2-bin-hadoop3/bin/meta/kylin_hive_conf.xml'
at org.apache.kylin.common.util.SourceConfigurationUtil.loadXmlConfiguration(SourceConfigurationUtil.java:88)
at org.apache.kylin.common.util.SourceConfigurationUtil.loadHiveConfiguration(SourceConfigurationUtil.java:61)
at org.apache.kylin.common.util.HiveCmdBuilder.<init>(HiveCmdBuilder.java:48)
at org.apache.kylin.source.hive.GarbageCollectionStep.cleanUpIntermediateFlatTable(GarbageCollectionStep.java:63)
at org.apache.kylin.source.hive.GarbageCollectionStep.doWork(GarbageCollectionStep.java:49)
at org.apache.kylin.job.execution.AbstractExecutable.execute(AbstractExecutable.java:167)
at org.apache.kylin.job.execution.DefaultChainedExecutable.doWork(DefaultChainedExecutable.java:71)
at org.apache.kylin.job.execution.AbstractExecutable.execute(AbstractExecutable.java:167)
at org.apache.kylin.job.impl.threadpool.DefaultScheduler$JobRunner.run(DefaultScheduler.java:114)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
2.
java.lang.RuntimeException: Failed to read kylin_hive_conf.xml at '/opt/apache-kylin-2.6.2-bin-hadoop3/bin/meta/kylin_hive_conf.xml'
3.
java.lang.RuntimeException: Failed to read kylin_hive_conf.xml at '/opt/apache-kylin-2.6.2-bin-hadoop3/conf/meta/kylin_hive_conf.xml'
Can anyone kindly help me find the root cause and fix this problem?
Thanks in advance.
I hope you have already solved the issue. I encountered the same problem and investigated it.
Refer to https://github.com/apache/kylin/blob/kylin-2.6.2/engine-mr/src/main/java/org/apache/kylin/engine/mr/common/AbstractHadoopJob.java#L481
When Kylin runs a MapReduce job, KYLIN_CONF is reset to a different folder:
System.setProperty(KylinConfig.KYLIN_CONF, metaDir.getAbsolutePath());
I think the workaround is to create symlinks for all the XML configuration files in that folder; see the sketch after the log example below.
Check your Kylin log:
cat YOUR_PATH/apache-kylin-2.6.3-bin-hbase1x/logs/kylin.log | grep "The absolute path"
You will likely see a result like:
2019-10-14 23:47:04,438 INFO [LocalJobRunner Map Task Executor #0] common.AbstractHadoopJob:482 : The absolute path for meta dir is /SOME_FOLDER/meta
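As a sketch of that workaround (both paths here are placeholders; the meta dir is whatever your "The absolute path" log line reports, and the conf dir is your real KYLIN_CONF):

# link every Kylin XML config into the per-job meta dir that MapReduce uses
META_DIR=/SOME_FOLDER/meta   # taken from the "The absolute path" log line
for f in /opt/kylin/conf/*.xml; do
    ln -sf "$f" "$META_DIR/$(basename "$f")"
done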
The Nutch crawler successfully indexed the documents up to a particular time. At some point it stopped abruptly; I don't know the reason. I am posting the logs, can someone tell me what caused this?
java.lang.Exception: org.apache.hadoop.mapreduce.task.reduce.Shuffle$ShuffleError: error in shuffle in localfetcher#1
at org.apache.hadoop.mapred.LocalJobRunner$Job.runTasks(LocalJobRunner.java:462)
at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:529)
Caused by: org.apache.hadoop.mapreduce.task.reduce.Shuffle$ShuffleError: error in shuffle in localfetcher#1
at org.apache.hadoop.mapreduce.task.reduce.Shuffle.run(Shuffle.java:134)
at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:376)
at org.apache.hadoop.mapred.LocalJobRunner$Job$ReduceTaskRunnable.run(LocalJobRunner.java:319)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.OutOfMemoryError: Java heap space
at org.apache.hadoop.io.BoundedByteArrayOutputStream.<init>(BoundedByteArrayOutputStream.java:56)
at org.apache.hadoop.io.BoundedByteArrayOutputStream.<init>(BoundedByteArrayOutputStream.java:46)
at org.apache.hadoop.mapreduce.task.reduce.InMemoryMapOutput.<init>(InMemoryMapOutput.java:63)
at org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl.unconditionalReserve(MergeManagerImpl.java:309)
at org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl.reserve(MergeManagerImpl.java:299)
at org.apache.hadoop.mapreduce.task.reduce.LocalFetcher.copyMapOutput(LocalFetcher.java:134)
at org.apache.hadoop.mapreduce.task.reduce.LocalFetcher.doCopy(LocalFetcher.java:102)
at org.apache.hadoop.mapreduce.task.reduce.LocalFetcher.run(LocalFetcher.java:85)
2018-08-30 03:15:54,758 ERROR indexer.IndexingJob - Indexer: java.io.IOException: Job failed!
at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:873)
at org.apache.nutch.indexer.IndexingJob.index(IndexingJob.java:147)
at org.apache.nutch.indexer.IndexingJob.run(IndexingJob.java:230)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.nutch.indexer.IndexingJob.main(IndexingJob.java:239)
Caused by: java.lang.OutOfMemoryError: Java heap space
It's a memory error.
Try adjusting this in solr.in.sh:
SOLR_JAVA_MEM="-Xms512m -Xmx5120m"
For me it worked after this change.
I just configured my Flink cluster (version 1.0.3) and I wanted to run TeraSort on the Flink framework, but it returned the error below. I used this command to run my program:
/bin/flink run -c --class es.udc.gac.flinkbench.ScalaTeraSort flinkbench-assembly-1.0.jar hdfs://192.168.3.89:8020/teragen/10mo hdfs://192.168.3.89:8020/teragen/rstflink
and it returned this:
Cluster configuration: Standalone cluster with JobManager at localhost/127.0.0.1:6123
Using address localhost:6123 to connect to JobManager.
JobManager web interface address http://localhost:8081
Starting execution of program
2017-06-23 10:28:43,692 INFO org.apache.hadoop.mapreduce.lib.input.FileInputFormat - Total input paths to process : 2
Spent 2278ms computing base-splits.
Spent 2ms computing TeraScheduler splits.
Computing input splits took 2281ms
Sampling 2 splits of 2
Making -1 from 100000 sampled records
------------------------------------------------------------
The program finished with the following exception:
org.apache.flink.client.program.ProgramInvocationException: The main method caused an error.
at org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:545)
at org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:419)
at org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:339)
at org.apache.flink.client.CliFrontend.executeProgram(CliFrontend.java:831)
at org.apache.flink.client.CliFrontend.run(CliFrontend.java:256)
at org.apache.flink.client.CliFrontend.parseParameters(CliFrontend.java:1073)
at org.apache.flink.client.CliFrontend$2.call(CliFrontend.java:1120)
at org.apache.flink.client.CliFrontend$2.call(CliFrontend.java:1117)
at org.apache.flink.runtime.security.HadoopSecurityContext$1.run(HadoopSecurityContext.java:43)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1656)
at org.apache.flink.runtime.security.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:40)
at org.apache.flink.client.CliFrontend.main(CliFrontend.java:1116)
Caused by: java.lang.NegativeArraySizeException
at es.udc.gac.flinkbench.terasort.TeraInputFormat$TextSampler.createPartitions(TeraInputFormat.java:103)
at es.udc.gac.flinkbench.terasort.TeraInputFormat.writePartitionFile(TeraInputFormat.java:183)
at es.udc.gac.flinkbench.ScalaTeraSort$.main(ScalaTeraSort.scala:49)
at es.udc.gac.flinkbench.ScalaTeraSort.main(ScalaTeraSort.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:528)
... 13 more
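Two things stand out. First, -c and --class are the same flink run option, so in "-c --class es.udc.gac.flinkbench.ScalaTeraSort" the CLI takes the literal string --class as the class name; only one of the two flags should be passed. Second, the log line "Making -1 from 100000 sampled records" shows that the partition count reaching the sampler is -1, which is exactly what then fails as a NegativeArraySizeException in createPartitions. In Flink, -1 is the "parallelism not set" default, so if this benchmark derives its partition count from the job parallelism (an assumption about the benchmark's code, not something the trace proves), setting it explicitly may help:

# hypothetical corrected invocation: one class flag, explicit parallelism (-p)
bin/flink run -p 4 -c es.udc.gac.flinkbench.ScalaTeraSort \
    flinkbench-assembly-1.0.jar \
    hdfs://192.168.3.89:8020/teragen/10mo \
    hdfs://192.168.3.89:8020/teragen/rstflink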
I am trying to use Sonar. The server is up and running (using the default embedded Derby database), serving a web page with no projects. Then I was planning to use sonar-runner to examine the source code, but I cannot get it to work even with the simplest hello-world program. Any hints on what is missing or what I am doing wrong?
(hlovdal) localhost:/work/sonar>wget -q http://repository.codehaus.org/org/codehaus/sonar-plugins/sonar-runner/1.1/sonar-runner-1.1.zip
(hlovdal) localhost:/work/sonar>unzip sonar-runner-1.1.zip
Archive: sonar-runner-1.1.zip
creating: sonar-runner-1.1/
...
inflating: sonar-runner-1.1/lib/sonar-runner.jar
(hlovdal) localhost:/work/sonar>export SONAR_RUNNER_HOME=/work/sonar/sonar-runner-1.1
(hlovdal) localhost:/work/sonar>$SONAR_RUNNER_HOME/bin/sonar-runner -h
usage: sonar-runner [options]
Options:
-h,--help Display help information
-X,--debug Produce execution debug output
-D,--define <arg> Define property
(hlovdal) localhost:/work/sonar>
But trying to run sonar-runner fails:
(hlovdal) localhost:/work/sonar>cd helloworld/
(hlovdal) localhost:/work/sonar/helloworld>cat main.c
#include <stdio.h>
#include <stdlib.h>
int main(int argc, char *argv[])
{
printf("hello world\n");
return EXIT_SUCCESS;
}
(hlovdal) localhost:/work/sonar/helloworld>grep -v -E '^#|^$' sonar-runner.properties
sonar.projectKey=helloworld
sonar.projectName=Hello world
sonar.projectVersion=1.0
sonar.language=c
sources=/work/sonar/helloworld
sonar.sourceEncoding=UTF-8
(hlovdal) localhost:/work/sonar/helloworld>$SONAR_RUNNER_HOME/bin/sonar-runner
Runner settings: /work/sonar/sonar-runner-1.1/conf/sonar-runner.properties
Runner version: 1.1
Server: http://localhost:9000
Work directory: /work/sonar/helloworld/.sonar
[INFO] Database dialect class org.sonar.jpa.dialect.Derby
[INFO] Initializing Hibernate
Exception in thread "main" org.sonar.batch.bootstrapper.BootstrapException: org.picocontainer.PicoLifecycleException: PicoLifecycleException: method 'public void org.sonar.batch.index.DefaultIndex.start()', instance 'org.sonar.batch.index.DefaultIndex#3d57211f, java.lang.RuntimeException: wrapper
at org.sonar.runner.Runner.delegateExecution(Runner.java:155)
at org.sonar.runner.Runner.execute(Runner.java:58)
at org.sonar.runner.Main.main(Main.java:52)
Caused by: org.picocontainer.PicoLifecycleException: PicoLifecycleException: method 'public void org.sonar.batch.index.DefaultIndex.start()', instance 'org.sonar.batch.index.DefaultIndex#3d57211f, java.lang.RuntimeException: wrapper
at org.picocontainer.monitors.NullComponentMonitor.lifecycleInvocationFailed(NullComponentMonitor.java:77)
at org.picocontainer.lifecycle.ReflectionLifecycleStrategy.monitorAndThrowReflectionLifecycleException(ReflectionLifecycleStrategy.java:132)
at org.picocontainer.lifecycle.ReflectionLifecycleStrategy.invokeMethod(ReflectionLifecycleStrategy.java:115)
at org.picocontainer.lifecycle.ReflectionLifecycleStrategy.start(ReflectionLifecycleStrategy.java:89)
at org.picocontainer.injectors.AbstractInjectionFactory$LifecycleAdapter.start(AbstractInjectionFactory.java:84)
at org.picocontainer.behaviors.AbstractBehavior.start(AbstractBehavior.java:169)
at org.picocontainer.behaviors.Stored$RealComponentLifecycle.start(Stored.java:132)
at org.picocontainer.behaviors.Stored.start(Stored.java:110)
at org.picocontainer.DefaultPicoContainer.potentiallyStartAdapter(DefaultPicoContainer.java:996)
at org.picocontainer.DefaultPicoContainer.startAdapters(DefaultPicoContainer.java:989)
at org.picocontainer.DefaultPicoContainer.start(DefaultPicoContainer.java:746)
at org.sonar.batch.bootstrap.Module.start(Module.java:88)
at org.sonar.batch.bootstrap.BootstrapModule.doStart(BootstrapModule.java:96)
at org.sonar.batch.bootstrap.Module.start(Module.java:89)
at org.sonar.batch.Batch.execute(Batch.java:74)
at org.sonar.runner.Launcher.executeBatch(Launcher.java:60)
at org.sonar.runner.Launcher.execute(Launcher.java:53)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:616)
at org.sonar.runner.Runner.delegateExecution(Runner.java:152)
... 2 more
Caused by: java.lang.RuntimeException: wrapper
at org.picocontainer.lifecycle.ReflectionLifecycleStrategy.monitorAndThrowReflectionLifecycleException(ReflectionLifecycleStrategy.java:130)
... 22 more
Caused by: java.lang.NullPointerException
at org.sonar.api.resources.Resource.hashCode(Resource.java:242)
at java.util.HashMap.put(HashMap.java:389)
at org.sonar.batch.index.DefaultIndex.doStart(DefaultIndex.java:98)
at org.sonar.batch.index.DefaultIndex.start(DefaultIndex.java:93)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:616)
at org.picocontainer.lifecycle.ReflectionLifecycleStrategy.invokeMethod(ReflectionLifecycleStrategy.java:110)
... 21 more
(hlovdal) localhost:/work/sonar/helloworld>
The projectKey does not look correct; it should be something like xxx:yyy.
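For illustration, the same properties file with a key in that style (the group-like prefix and key here are made up; everything else is unchanged from the file above):

sonar.projectKey=org.example:helloworld
sonar.projectName=Hello world
sonar.projectVersion=1.0
sonar.language=c
sources=/work/sonar/helloworld
sonar.sourceEncoding=UTF-8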