Issues running the Pub/Sub Kafka connector in standalone mode - google-cloud-pubsub

So, I have been trying to get a Pub/Sub Kafka connector running for about a month now, with various problems. I have reviewed many questions here about Kafka Connect and the Pub/Sub connector, which have helped me get this far, but I am stuck again. When I run this command:
.\bin\windows\connect-standalone.bat .\etc\kafka\WorkerConfig.properties .\etc\kafka\configSink.properties .\etc\kafka\configSource.properties
I get a long list of errors linked here:
Right after it tries to start the REST server is when the "could not scan file [file name]..." errors start. I am unsure whether I need to set rest.host.name and rest.port, because currently, in the StandaloneConfig values, it reads
rest.host.name = null
Edit: After reviewing the log file for a while, I found the following messages:
Kafka consumer created
Created connector CPSConnector
Initializing task CPSConnector-0 with config {connector.class=com.google.pubsub.kafka.sink.CloudPubSubSinkConnector, task.class=com.google.pubsub.kafka.sink.CloudPubSubSinkTask, tasks.max=1, topics=, cps.project=kohls-sis-sandbox, name=CPSConnector, cps.topic=test-pubsub}
Task CPSConnector-0 threw an uncaught and unrecoverable exception
org.apache.kafka.connect.errors.ConnectException: Sink tasks require a list of topics.
at org.apache.kafka.connect.runtime.WorkerSinkTask.initializeAndStart(WorkerSinkTask.java:202)
at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:139)
at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:140)
at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:175)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Edit: So, I fixed the above issue by adding topics=test to my configSink. The current error message is below. Does this indicate that you can only run either a sink connector or a source connector?
Failed to create job for .\etc\kafka\configSource.properties
Stopping after connector error
java.util.concurrent.ExecutionException: org.apache.kafka.connect.errors.AlreadyExistsException: Connector CPSConnector already exists
at org.apache.kafka.connect.util.ConvertingFutureCallback.result(ConvertingFutureCallback.java:80)
at org.apache.kafka.connect.util.ConvertingFutureCallback.get(ConvertingFutureCallback.java:67)
at org.apache.kafka.connect.cli.ConnectStandalone.main(ConnectStandalone.java:97)
Caused by: org.apache.kafka.connect.errors.AlreadyExistsException: Connector CPSConnector already exists
at org.apache.kafka.connect.runtime.standalone.StandaloneHerder.putConnectorConfig(StandaloneHerder.java:145)
at org.apache.kafka.connect.cli.ConnectStandalone.main(ConnectStandalone.java:94)
In my WorkerConfig.properties, I have bootstrap.servers=localhost:2181. My property files are here.
I am not sure how to fix this, since I have my properties files set and made sure the cps-kafka-connector.jar is on the classpath. I also set plugin.path=\share\java\kafka\kafka-connect-pubsub.
If anyone can point me in the right direction to fix this issue, that would be great. I followed the directions here: https://github.com/GoogleCloudPlatform/pubsub/tree/master/kafka-connector

Each Connector instance, whether it's a source or a sink, needs to have a unique name when you submit its configuration properties to a Kafka Connect cluster or standalone worker.
In the above example, just name your Source differently from your Sink.
For instance:
$ head -n 1 configSource.properties
name=CPSSourceConnector
$ head -n 1 configSink.properties
name=CPSSinkConnector
or, might as well:
$ head -n 1 configSource.properties
name=Tom
$ head -n 1 configSink.properties
name=Jerry
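Putting that together with the topics fix from the question's edit, the two files might end up looking roughly like this. This is only a sketch: the sink values mirror the task config logged above, while the source keys (the connector.class in the ...source... package, kafka.topic and cps.subscription) and the subscription name are taken from the connector's README as far as I recall, so verify them against the documentation before use.
# configSink.properties (sketch)
name=CPSSinkConnector
connector.class=com.google.pubsub.kafka.sink.CloudPubSubSinkConnector
tasks.max=1
# Kafka topic(s) to read from
topics=test
cps.project=kohls-sis-sandbox
cps.topic=test-pubsub

# configSource.properties (sketch)
name=CPSSourceConnector
connector.class=com.google.pubsub.kafka.source.CloudPubSubSourceConnector
tasks.max=1
# Kafka topic to write to
kafka.topic=test
cps.project=kohls-sis-sandbox
# Cloud Pub/Sub subscription to pull from (placeholder name)
cps.subscription=test-pubsub-subscription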

Related

Failed to create collection 'techproducts' due to: Underlying core creation failed while creating collection: techproducts

I just started to learn Solr with the official documentation, and during the first exercise, "Index Techproducts Example Data", I failed with the following error: "Failed to create collection 'techproducts' due to: Underlying core creation failed while creating collection: techproducts".
I tried changing the Java version from 13 to 8, but it didn't help.
Here is link to the documentation: https://lucene.apache.org/solr/guide/8_5/solr-tutorial.html#exercise-1
Stack trace from the Solr Admin console
Collection: techproducts operation: create failed:org.apache.solr.common.SolrException: Underlying core creation failed while creating collection: techproducts
at org.apache.solr.cloud.api.collections.CreateCollectionCmd.call(CreateCollectionCmd.java:304)
at org.apache.solr.cloud.api.collections.OverseerCollectionMessageHandler.processMessage(OverseerCollectionMessageHandler.java:263)
at org.apache.solr.cloud.OverseerTaskProcessor$Runner.run(OverseerTaskProcessor.java:504)
at org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:210)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
I had run into a similar situation while following Solr's official tutorial:
➜ solr-8.7.0 ERROR: Failed to create collection 'techproducts' due to: Underlying core creation failed while creating collection: techproducts
The problem was solved by turning off my VPN. I guess the VPN routing probably messed with Solr's localhost setting somehow.
I had the same Underlying core creation failed... error too, using Java 11 on Windows 10.
The log file was ${solr-home}\example\cloud\node1\logs\solr.log. Inside it had:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error from server at http://192.168.1.16:7574/solr: Error CREATEing SolrCore 'techproducts_shard1_replica_n1': Unable to create core [techproducts_shard1_replica_n1] Caused by: no segments* file found in LockValidatingDirectoryWrapper(NRTCachingDirectory(MMapDirectory#{solr_home}\example\cloud\node2\solr\techproducts_shard1_replica_n1\data\index lockFactory=org.apache.lucene.store.NativeFSLockFactory#16326253; maxCacheMB=48.0 maxMergeSizeMB=4.0)): files: [write.lock] at org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:681) ~[?:?]
at (etc. etc.)
But this was the second time I had launched Solr. The first time, it timed out trying to contact one of the nodes and the tutorial script aborted, but the nodes were still running. I killed them off using the Windows Task Manager rather than with solr stop. So I suspect I left an unstable mess behind, and the second time the tutorial ran it crashed into this mess.
I erased everything and started over from unzipping and this third time there were no timeouts and the tutorial completed without error.
File: /opt/solr/server/etc/jetty.xml
(1) In the Set name="requestHeaderSize" element, set the Property name="solr.jetty.request.header.size" default to "81920"
(2) In the Set name="responseHeaderSize" element, set the Property name="solr.jetty.response.header.size" default to "81920"
(3) Restart Solr
Hm, tried this, still getting the exact same error.
After Change:
<Set name="requestHeaderSize"><Property name="solr.jetty.request.header.size" default="81920" /></Set>
<Set name="responseHeaderSize"><Property name="solr.jetty.response.header.size" default="81920" /></Set>
I stopped everything and retried; then Windows Firewall prompted me to authorize 'SAP Machine' for Java 11. I accepted it and retried, and then it worked. It seems to have been Windows Firewall related.

flink job submission org.apache.flink.runtime.messages.FlinkJobNotFoundException: Could not find Flink job

I am getting the following Flink job submission error:
#centos1 flink-1.10.0]$ ./bin/flink run -m 10.0.2.4:8081 ./examples/batch/WordCount.jar --input file:///storage/flink-1.10.0/test.txt --output file:///storage/flink-1.10.0/wordcount_out
Job has been submitted with JobID 33d489aee848401e08c425b053c854f9
------------------------------------------------------------
The program finished with the following exception:
org.apache.flink.client.program.ProgramInvocationException: The main method caused an error: org.apache.flink.runtime.rest.util.RestClientException: [org.apache.flink.runtime.rest.handler.RestHandlerException: org.apache.flink.runtime.messages.FlinkJobNotFoundException: Could not find Flink job (33d489aee848401e08c425b053c854f9)
....
Caused by: java.util.concurrent.CompletionException: org.apache.flink.runtime.messages.FlinkJobNotFoundException: Could not find Flink job (33d489aee848401e08c425b053c854f9)
Caused by: org.apache.flink.runtime.messages.FlinkJobNotFoundException: Could not find Flink job (33d489aee848401e08c425b053c854f9)
at org.apache.flink.runtime.dispatcher.Dispatcher.getJobMasterGatewayFuture(Dispatcher.java:776)
at org.apache.flink.runtime.dispatcher.Dispatcher.requestJobStatus(Dispatcher.java:505)
... 27 more
]
Logs from the taskmanager nodes say the file was not found. Is this the correct way of pointing to files in a Flink cluster setup?
2020-03-19 13:15:29,843 ERROR org.apache.flink.runtime.operators.BatchTask - Error in task code: CHAIN DataSource (at main(WordCount.java:69) (org.apache.flink.api.java.io.TextInputFormat)) -> FlatMap (FlatMap at main(WordCount.java:84)) -> Combine (SUM(1), at main(WordCount.java:87) (1/2)
java.io.IOException: Error opening the Input Split file:/storage/flink-1.10.0/test.txt [0,19]: /storage/flink-1.10.0/test.txt (No such file or directory)
at org.apache.flink.api.common.io.FileInputFormat.open(FileInputFormat.java:824)
at org.apache.flink.api.common.io.DelimitedInputFormat.open(DelimitedInputFormat.java:470)
How do I troubleshoot the above error, and what should I check? There are very few clues in the Flink logs.
The reason this is happening is that you are submitting a job to a distributed cluster, and the location you have specified is probably only accessible by the JobManager or the machine from which you submitted the job. However, the actual program and job execution take place in the TaskManagers. A better approach would be to specify a location that is accessible by all the nodes, such as HDFS or NFS.
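For example, something along these lines (a sketch only: the hdfs:/// paths are placeholders, and it assumes Flink can actually reach your HDFS, e.g. the Hadoop dependencies/HADOOP_CLASSPATH are set up on all nodes). An NFS mount present at the same path on every node would work the same way:
./bin/flink run -m 10.0.2.4:8081 ./examples/batch/WordCount.jar --input hdfs:///data/flink/test.txt --output hdfs:///data/flink/wordcount_out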

[AWS Glue]: org.apache.thrift.TApplicationException: Internal error processing createInterpreter

I'm trying to use zeppelin-0.8.0 to connect to an AWS Glue development endpoint, and when executing a cell the error below occurs.
There is no helpful message to understand what the problem could be. Any leads appreciated.
172318_1906434757 is finished, status: ERROR, exception: java.lang.RuntimeException: org.apache.thrift.TApplicationException: Internal error processing createInterpreter, result: %text org.apache.thrift.TApplicationException: Internal error processing createInterpreter
at org.apache.thrift.TApplicationException.read(TApplicationException.java:111)
at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:71)
at org.apache.zeppelin.interpreter.thrift.RemoteInterpreterService$Client.recv_createInterpreter(RemoteInterpreterService.java:209)
at org.apache.zeppelin.interpreter.thrift.RemoteInterpreterService$Client.createInterpreter(RemoteInterpreterService.java:192)
at org.apache.zeppelin.interpreter.remote.RemoteInterpreter$2.call(RemoteInterpreter.java:169)
at org.apache.zeppelin.interpreter.remote.RemoteInterpreter$2.call(RemoteInterpreter.java:165)
at org.apache.zeppelin.interpreter.remote.RemoteInterpreterProcess.callRemoteFunction(RemoteInterpreterProcess.java:135)
at org.apache.zeppelin.interpreter.remote.RemoteInterpreter.internal_create(RemoteInterpreter.java:165)
at org.apache.zeppelin.interpreter.remote.RemoteInterpreter.open(RemoteInterpreter.java:132)
at org.apache.zeppelin.interpreter.remote.RemoteInterpreter.getFormType(RemoteInterpreter.java:299)
at org.apache.zeppelin.notebook.Paragraph.jobRun(Paragraph.java:407)
at org.apache.zeppelin.scheduler.Job.run(Job.java:188)
at org.apache.zeppelin.scheduler.RemoteScheduler$JobRunner.run(RemoteScheduler.java:307)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
UPDATE: As in the answer below, it looks like 0.8.0 doesn't work with Glue yet. I had problems running 0.7.x as well, with the javax.ws.rs package throwing a bunch of MethodNotFoundExceptions when running with Java 8 (switching to Java 7 via update-alternatives did not help either). But when running inside a JDK 7 Docker container it worked with no problems and was able to connect to my dev endpoint. I would highly appreciate it if anyone could clarify the root cause.
Could you please provide more information, such as the Zeppelin instance location? Is it running on your desktop/laptop, or is it running as an AWS notebook server? Also, did you try connecting with Zeppelin version 0.7.3, as mentioned in this AWS forum link:
https://forums.aws.amazon.com/thread.jspa?threadID=285128
As per the above link, dated Jul 2018, it seems AWS Glue doesn't yet support Zeppelin 0.8.
I am assuming all other configuration and environment settings are done as needed. I can help more if you can provide additional info.
UPDATE:
Anyway, please refer here and to "setting up zeppelin on windows" for any help with setting up a local development environment and a Zeppelin notebook.
Once you set up the Zeppelin notebook, establish an SSH connection (using the AWS Glue DevEndpoint URL) so that you have access to the data catalog/crawlers, etc., and also to the S3 bucket where your data resides. Then you can create your Python scripts in the Zeppelin notebook and run them from Zeppelin.
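The tunnel usually looks something like the command below. This is a hedged sketch: the key file, user, ports and endpoint address must be taken from the SSH command that the Glue console shows for your DevEndpoint, and the forwarded port should match what your local Zeppelin interpreter is configured to use.
# -N: no remote command, -T: no TTY, -L: forward the local interpreter port to the DevEndpoint
ssh -i /path/to/dev-endpoint-private-key.pem -NTL 9007:169.254.76.1:9007 glue@<dev-endpoint-public-address>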
You can use the dev instance provided by Glue, but you may incur additional costs for it (EC2 instance charges).
Environment settings (updated in response to comments):
JAVA_HOME=E:\Java7\jre7
Path=E:\Python27;E:\Python27\Lib;E:\Python27\Scripts;
PYTHONPATH=E:\spark-2.1.0-bin-hadoop2.7\python;E:\spark-2.1.0-bin-hadoop2.7\python\lib\py4j-0.10.4-src.zip;E:\spark-2.1.0-bin-hadoop2.7\python\lib\pyspark.zip
SPARK_HOME=E:\spark-2.1.0-bin-hadoop2.7
Change the drive names/folders accordingly. Let me know if any help is needed.

Collecting Metrics with Graphite Plugin leads to "A metric named [..] already exists" error

When I configure flink-conf.yaml to collect metrics with the Graphite plugin, most of the time only incomplete metrics are sent. In the TaskManager output, multiple errors like the following occur:
2018-08-15 00:58:59,016 WARN org.apache.flink.runtime.metrics.MetricRegistryImpl - Error while registering metric.
java.lang.IllegalArgumentException: A metric named mycomputer.taskmanager.8ceab4c3dfbf9fc5fa2af0447f1373a1.State machine job.Source: Custom Source.0.numRecordsOut already exists
at com.codahale.metrics.MetricRegistry.register(MetricRegistry.java:91)
at org.apache.flink.dropwizard.ScheduledDropwizardReporter.notifyOfAddedMetric(ScheduledDropwizardReporter.java:131)
at org.apache.flink.runtime.metrics.MetricRegistryImpl.register(MetricRegistryImpl.java:329)
at org.apache.flink.runtime.metrics.groups.AbstractMetricGroup.addMetric(AbstractMetricGroup.java:379)
at org.apache.flink.runtime.metrics.groups.AbstractMetricGroup.counter(AbstractMetricGroup.java:312)
at org.apache.flink.runtime.metrics.groups.AbstractMetricGroup.counter(AbstractMetricGroup.java:302)
at org.apache.flink.runtime.metrics.groups.OperatorIOMetricGroup.<init>(OperatorIOMetricGroup.java:41)
at org.apache.flink.runtime.metrics.groups.OperatorMetricGroup.<init>(OperatorMetricGroup.java:48)
at org.apache.flink.runtime.metrics.groups.TaskMetricGroup.addOperator(TaskMetricGroup.java:146)
at org.apache.flink.streaming.api.operators.AbstractStreamOperator.setup(AbstractStreamOperator.java:174)
at org.apache.flink.streaming.api.operators.AbstractUdfStreamOperator.setup(AbstractUdfStreamOperator.java:82)
at org.apache.flink.streaming.runtime.tasks.OperatorChain.<init>(OperatorChain.java:143)
at org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:267)
at org.apache.flink.runtime.taskmanager.Task.run(Task.java:711)
at java.lang.Thread.run(Thread.java:748)
I've tried this on a completely freshly prepared flink-1.6.0 release with the following config and the precompiled "State machine job" in the examples folder:
metrics.reporters: grph
metrics.reporter.grph.class: org.apache.flink.metrics.graphite.GraphiteReporter
metrics.reporter.grph.host: localhost
metrics.reporter.grph.port: 2003
metrics.reporter.grph.interval: 1 SECONDS
metrics.reporter.grph.protocol: TCP
I use the official Graphite Docker image (https://hub.docker.com/r/graphiteapp/docker-graphite-statsd/) running with the default configuration.
Does anybody have an idea how I can fix this issue?
Thanks and best regards
Update
To rule out that a specific local setting is responsible for this behaviour, I repeated the process on a clean EC2 instance. I get exactly the same error there.
How to reproduce:
start an EC2 t2.xlarge instance
install Java
download Flink from https://www.apache.org/dyn/closer.lua/flink/flink-1.6.0/flink-1.6.0-bin-scala_2.11.tgz
add flink-metrics-graphite-1.6.0.jar to lib
configure flink-conf.yaml as mentioned in my previous post
./bin/start-cluster.sh
./bin/flink run examples/streaming/StateMachineExample.jar
I have not set up Graphite in this case, because the error obviously already occurs before that point.
After the job has been started, you can view the error in the Flink dashboard under Task Manager -> Logs.

Sonarqube 5.6 database copy fails (Exception sending context initialized event to listener...)

I need to move a Sonarqube 5.6 installation from one server to another.
The new server will also run with a new database so my plan is to copy
the old data to the new database and then start the new Sonarqube instance
against the new database containing the copied data.
Both new and old are Sonarqube 5.6 with Oracle. The old database is
Oracle 11g and the new will be Oracle 12c, but I am using the Oracle
Express 11g and another local Sonarqube 5.6 installation in order to test the procedure.
I proceed as follows:
(1) Export old database with SQL Developer as DDL (insert format)
(2) Make some small changes to the resulting SQL:
    - the tablespace name is hard-coded and differs in the target database, so adapt it
    - the clause "SEGMENT CREATION DEFERRED" is not supported in the target database, so I simply deleted it
(3) Import sql to new target database
(4) Start new Sonarqube instance connecting to new database
After this, unfortunately, the Sonarqube server stops, and in the logs I see the error/exception:
Exception sending context initialized event to listener instance of class org.sonar.server.platform.PlatformServletContextListener
(full text below).
Further tests:
If I start the new Sonarqube instance against the new database with no imported data,
fresh tables are created and all is well. After doing that I can also export the new database,
drop and recreate the new sonarqube database user, and re-import the data from the new environment;
that also works fine.
That is to say, the new installation in stand-alone mode works fine, and the export/import also works
fine (at least with minimal data and exported from the same environment / database).
The problem therefore seems to be caused by something in the data I am importing from the old
Sonarqube installation.
I have also tried, after the import, rebuilding all indexes (no change), and deleting all rows
from all tables (Sonarqube then tries to create new tables and runs into an error because
table projects already exists).
One thing that does occur to me is that the old installation has many plugins. I have tried
to get the new installation to the same state, but it is not totally identical: there are a few
version differences, and the old installation had some licensed plugins (Swift and Objective-C)
that I do not have for my local test installation. There are also a few error messages in the log
to that effect, but these don't seem to be the critical problem.
2017.01.21 00:07:53 ERROR web[cpp] No license for cpp
2017.01.21 00:07:53 ERROR web[objc] No license for objc
I have also tried deleting the logs, data, temp directories in Sonarqube before starting the
new server against the new database.
I have of course searched for this error message, but it seems to mostly occur when migrating
from one Sonar version to another, which is not the case here.
Does anyone have any thoughts?
Should this procedure theoretically work or have I missed something?
Many thanks for any ideas!
2017.01.21 00:08:29 INFO web[o.s.s.n.NotificationService] Notification service stopped
2017.01.21 00:08:29 ERROR web[o.a.c.c.C.[.[.[/]] Exception sending context initialized event to listener instance of class org.sonar.server.platform.PlatformServletContextListener
java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.lang.NullPointerException
at com.google.common.base.Throwables.propagate(Throwables.java:160) ~[guava-17.0.jar:na]
at org.sonar.server.es.BaseIndexer.index(BaseIndexer.java:82) ~[sonar-server-5.6.jar:na]
at org.sonar.server.es.BaseIndexer.index(BaseIndexer.java:88) ~[sonar-server-5.6.jar:na]
at org.sonar.server.es.IndexerStartupTask.execute(IndexerStartupTask.java:71) ~[sonar-server-5.6.jar:na]
at org.sonar.server.platform.platformlevel.PlatformLevelStartup$1.doPrivileged(PlatformLevelStartup.java:81) ~[sonar-server-5.6.jar:na]
at org.sonar.server.user.DoPrivileged.execute(DoPrivileged.java:44) ~[sonar-server-5.6.jar:na]
at org.sonar.server.platform.platformlevel.PlatformLevelStartup.start(PlatformLevelStartup.java:77) ~[sonar-server-5.6.jar:na]
at org.sonar.server.platform.Platform.executeStartupTasks(Platform.java:201) ~[sonar-server-5.6.jar:na]
at org.sonar.server.platform.Platform.doStart(Platform.java:114) ~[sonar-server-5.6.jar:na]
at org.sonar.server.platform.Platform.doStart(Platform.java:99) ~[sonar-server-5.6.jar:na]
at org.sonar.server.platform.PlatformServletContextListener.contextInitialized(PlatformServletContextListener.java:44) ~[sonar-server-5.6.jar:na]
at org.apache.catalina.core.StandardContext.listenerStart(StandardContext.java:4812) [tomcat-embed-core-8.0.30.jar:8.0.30]
at org.apache.catalina.core.StandardContext.startInternal(StandardContext.java:5255) [tomcat-embed-core-8.0.30.jar:8.0.30]
at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:150) [tomcat-embed-core-8.0.30.jar:8.0.30]
at org.apache.catalina.core.ContainerBase$StartChild.call(ContainerBase.java:1408) [tomcat-embed-core-8.0.30.jar:8.0.30]
at org.apache.catalina.core.ContainerBase$StartChild.call(ContainerBase.java:1398) [tomcat-embed-core-8.0.30.jar:8.0.30]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) [na:1.8.0_65]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [na:1.8.0_65]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_65]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_65]
Caused by: java.util.concurrent.ExecutionException: java.lang.NullPointerException
at java.util.concurrent.FutureTask.report(FutureTask.java:122) [na:1.8.0_65]
at java.util.concurrent.FutureTask.get(FutureTask.java:192) [na:1.8.0_65]
at com.google.common.util.concurrent.Uninterruptibles.getUninterruptibly(Uninterruptibles.java:135) ~[guava-17.0.jar:na]
at org.sonar.server.es.BaseIndexer.index(BaseIndexer.java:80) ~[sonar-server-5.6.jar:na]
... 18 common frames omitted
Caused by: java.lang.NullPointerException: null
at java.io.FilterInputStream.close(FilterInputStream.java:181) ~[na:1.8.0_65]
at org.apache.commons.io.IOUtils.closeQuietly(IOUtils.java:303) ~[commons-io-2.4.jar:2.4]
at org.apache.commons.io.IOUtils.closeQuietly(IOUtils.java:246) ~[commons-io-2.4.jar:2.4]
at org.sonar.db.source.FileSourceDto.decodeTestData(FileSourceDto.java:169) ~[sonar-db-5.6.jar:na]
at org.sonar.server.test.index.TestResultSetIterator.read(TestResultSetIterator.java:79) ~[sonar-server-5.6.jar:na]
at org.sonar.server.test.index.TestResultSetIterator.read(TestResultSetIterator.java:60) ~[sonar-server-5.6.jar:na]
at org.sonar.db.ResultSetIterator.next(ResultSetIterator.java:82) ~[sonar-db-5.6.jar:na]
at org.sonar.server.test.index.TestIndexer.doIndex(TestIndexer.java:93) ~[sonar-server-5.6.jar:na]
at org.sonar.server.test.index.TestIndexer.doIndex(TestIndexer.java:80) ~[sonar-server-5.6.jar:na]
at org.sonar.server.test.index.TestIndexer.doIndex(TestIndexer.java:70) ~[sonar-server-5.6.jar:na]
at org.sonar.server.es.BaseIndexer$2.index(BaseIndexer.java:91) ~[sonar-server-5.6.jar:na]
at org.sonar.server.es.BaseIndexer$1.run(BaseIndexer.java:73) ~[sonar-server-5.6.jar:na]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) ~[na:1.8.0_65]
... 4 common frames omitted
2017.01.21 00:08:29 ERROR web[o.a.c.c.StandardContext] One or more listeners failed to start. Full details will be found in the appropriate container log file
2017.01.21 00:08:29 ERROR web[o.a.c.c.StandardContext] Context [] startup failed due to previous errors
2017.01.21 00:08:29 WARN web[o.a.c.l.WebappClassLoaderBase] The web application [ROOT] appears to have started a thread named [Thread-4] but has failed to stop it. This is very likely to create a memory leak. Stack trace of thread:
java.net.SocketInputStream.socketRead0(Native Method)
java.net.SocketInputStream.socketRead(SocketInputStream.java:116)
java.net.SocketInputStream.read(SocketInputStream.java:170)
java.net.SocketInputStream.read(SocketInputStream.java:141)
java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
java.io.BufferedInputStream.read1(BufferedInputStream.java:286)
java.io.BufferedInputStream.read(BufferedInputStream.java:345)
com.sun.jndi.ldap.Connection.run(Connection.java:860)
java.lang.Thread.run(Thread.java:745)
2017.01.21 00:08:29 WARN web[o.a.c.l.WebappClassLoaderBase] The web application [ROOT] appears to have started a thread named [Progress[BulkIndexer[tests]]] but has failed to stop it. This is very likely to create a memory leak. Stack trace of thread:
java.lang.Object.wait(Native Method)
java.util.TimerThread.mainLoop(Timer.java:552)
java.util.TimerThread.run(Timer.java:505)
2017.01.21 00:08:29 INFO web[o.a.c.h.Http11NioProtocol] Starting ProtocolHandler ["http-nio-0.0.0.0-9000"]
2017.01.21 00:08:29 INFO web[o.s.s.a.TomcatAccessLog] Web server is started
2017.01.21 00:08:29 INFO web[o.s.s.a.EmbeddedTomcat] HTTP connector enabled on port 9000
2017.01.21 00:08:29 WARN web[o.s.p.ProcessEntryPoint] Fail to start web
java.lang.IllegalStateException: Webapp did not start
at org.sonar.server.app.EmbeddedTomcat.isUp(EmbeddedTomcat.java:84) ~[sonar-server-5.6.jar:na]
at org.sonar.server.app.WebServer.isUp(WebServer.java:47) [sonar-server-5.6.jar:na]
at org.sonar.process.ProcessEntryPoint.launch(ProcessEntryPoint.java:105) ~[sonar-process-5.6.jar:na]
at org.sonar.server.app.WebServer.main(WebServer.java:68) [sonar-server-5.6.jar:na]
2017.01.21 00:08:29 INFO web[o.a.c.h.Http11NioProtocol] Pausing ProtocolHandler ["http-nio-0.0.0.0-9000"]
2017.01.21 00:08:30 INFO web[o.a.c.h.Http11NioProtocol] Stopping ProtocolHandler ["http-nio-0.0.0.0-9000"]
2017.01.21 00:08:30 INFO web[o.a.c.h.Http11NioProtocol] Destroying ProtocolHandler ["http-nio-0.0.0.0-9000"]
2017.01.21 00:08:30 INFO web[o.s.s.a.TomcatAccessLog] Web server is stopped
2017.01.21 00:08:30 INFO app[o.s.p.m.Monitor] Process[es] is stopping
2017.01.21 00:08:31 INFO es[o.s.p.StopWatcher] Stopping process
2017.01.21 00:08:31 INFO es[o.elasticsearch.node] [sonar-1484953654097] stopping ...
The server fails to start if the target database is not an exact copy of the source database. You should double-check that all tables and sequences have exactly the same content, values of primary keys included. A strategy is to start a fresh install on the target DB so that SonarQube creates the schema; then a data backup can be restored.
OK, it is working now, so just a quick update; maybe it will help others. It seems it is necessary to let the new Sonar instance initialise the new database and then to do a "hard" copy, by which I mean, in SQL Developer, the options: copy objects, replace existing target objects, truncate target data before copy.
I couldn't quite figure this out, because the initial startup must do something that makes the error I was getting go away, so something of it must be left in the database even after the hard copy. A soft copy, not replacing objects, allowed Sonar to start but with problems, e.g. key violations when creating users or groups. The latter could be fixed by rebuilding indexes and/or dropping and reactivating constraints; the former was the result of differing initial values of the sequences used to set user IDs. But the hard copy circumvented all these problems, so that is the route I would recommend. I also deleted the data, temp, and logs directories from SONAR_HOME; I'm not 100% sure whether this is necessary.
