Exporting data using Solr streaming expressions randomly fails - solr

I am using Solr version 7.4 and am trying to export data using streaming expressions.
Below is the simple curl query I am using; I index my data and then test it:
curl --data-urlencode 'expr=search(collection1,
q="text:solar",
fl="id",
sort="id asc",
qt="/export")' http://localhost:8983/solr/collection1/stream
When I run the above query with only the initial few documents indexed, it works fine. After indexing a few thousand documents (around , I get the following error for the same query:
EXCEPTION":"java.util.concurrent.ExecutionException: java.io.IOException: --> http://localhost:8983/solr/collection1/: An exception has occurred on the server, refer to server log for details."
I've looked into my Solr logs, and the following is the error info:
java.io.IOException: java.util.concurrent.ExecutionException: java.io.IOException: --> http://localhost:8983/solr/collection1/: An exception has occurred on the server, refer to server log for details.
at org.apache.solr.client.solrj.io.stream.CloudSolrStream.openStreams(CloudSolrStream.java:400)
at org.apache.solr.client.solrj.io.stream.CloudSolrStream.open(CloudSolrStream.java:275)
at org.apache.solr.client.solrj.io.stream.ExceptionStream.open(ExceptionStream.java:54)
at org.apache.solr.handler.StreamHandler$TimerStream.open(StreamHandler.java:397)
at org.apache.solr.client.solrj.io.stream.TupleStream.writeMap(TupleStream.java:83)
at org.apache.solr.response.JSONWriter.writeMap(JSONResponseWriter.java:539)
at org.apache.solr.response.TextResponseWriter.writeVal(TextResponseWriter.java:181)
at org.apache.solr.response.JSONWriter.writeNamedListAsMapWithDups(JSONResponseWriter.java:209)
at org.apache.solr.response.JSONWriter.writeNamedList(JSONResponseWriter.java:325)
at org.apache.solr.response.JSONWriter.writeResponse(JSONResponseWriter.java:120)
at org.apache.solr.response.JSONResponseWriter.write(JSONResponseWriter.java:71)
at org.apache.solr.response.QueryResponseWriterUtil.writeQueryResponse(QueryResponseWriterUtil.java:65)
at org.apache.solr.servlet.HttpSolrCall.writeResponse(HttpSolrCall.java:787)
at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:524)
at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:377)
at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:323)
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1634)
at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:533)
at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:146)
at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)
at org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:257)
at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1595)
at org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:255)
at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1253)
at org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:203)
at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:473)
at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1564)
at org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:201)
at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1155)
at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:144)
at org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:219)
at org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:126)
at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)
at org.eclipse.jetty.rewrite.handler.RewriteHandler.handle(RewriteHandler.java:335)
at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)
at org.eclipse.jetty.server.Server.handle(Server.java:531)
at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:352)
at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:260)
at org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:281)
at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:102)
at org.eclipse.jetty.io.ChannelEndPoint$2.run(ChannelEndPoint.java:118)
at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.runTask(EatWhatYouKill.java:333)
at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:310)
at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:168)
at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.run(EatWhatYouKill.java:126)
at org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:366)
at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:760)
at org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:678)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.util.concurrent.ExecutionException: java.io.IOException: --> http://localhost:8983/solr/collection1/: An exception has occurred on the server, refer to server log for details.
at java.util.concurrent.FutureTask.report(FutureTask.java:122)
at java.util.concurrent.FutureTask.get(FutureTask.java:192)
at org.apache.solr.client.solrj.io.stream.CloudSolrStream.openStreams(CloudSolrStream.java:394)
... 49 more
Caused by: java.io.IOException: --> http://localhost:8983/solr/collection1/: An exception has occurred on the server, refer to server log for details.
at org.apache.solr.client.solrj.io.stream.SolrStream.read(SolrStream.java:225)
at org.apache.solr.client.solrj.io.stream.CloudSolrStream$TupleWrapper.next(CloudSolrStream.java:484)
at org.apache.solr.client.solrj.io.stream.CloudSolrStream$StreamOpener.call(CloudSolrStream.java:507)
at org.apache.solr.client.solrj.io.stream.CloudSolrStream$StreamOpener.call(CloudSolrStream.java:494)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:209)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
... 1 more
Caused by: java.io.IOException: JSONTupleStream: expected OBJECT_START but got EOF
at org.apache.solr.client.solrj.io.stream.JSONTupleStream.expect(JSONTupleStream.java:99)
at org.apache.solr.client.solrj.io.stream.JSONTupleStream.advanceToDocs(JSONTupleStream.java:179)
at org.apache.solr.client.solrj.io.stream.JSONTupleStream.next(JSONTupleStream.java:77)
at org.apache.solr.client.solrj.io.stream.SolrStream.read(SolrStream.java:194)
... 8 more
Is it because the new documents have something unusual in their content? Even if that were the case, I am requesting only one field, id, and there is nothing unusual about it.
Also, to mention: if I use q="*:*" it works fine (which is kind of weird).
Any idea what the issue is here?

Related

The configuration does not specify the checkpoint directory 'state.checkpoints.dir'

While submitting the Flink job on the Dataproc cluster I am getting the error below. Please find the code and the error. I am using Flink version 1.9.3.
The program finished with the following exception:
org.apache.flink.client.program.ProgramInvocationException: Could not retrieve the execution result. (JobID: f064ceaa5b318fdad9a77b2723b9ee64)
at org.apache.flink.client.program.rest.RestClusterClient.submitJob(RestClusterClient.java:255)
at org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:338)
at org.apache.flink.streaming.api.environment.StreamContextEnvironment.execute(StreamContextEnvironment.java:60)
at org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.execute(StreamExecutionEnvironment.java:1507)
at org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.execute(StreamExecutionEnvironment.java:1489)
at org.flink.ReadFromPubsub.main(ReadFromPubsub.java:28)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:604)
at org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:466)
at org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:274)
at org.apache.flink.client.cli.CliFrontend.executeProgram(CliFrontend.java:746)
at org.apache.flink.client.cli.CliFrontend.runProgram(CliFrontend.java:273)
at org.apache.flink.client.cli.CliFrontend.run(CliFrontend.java:205)
at org.apache.flink.client.cli.CliFrontend.parseParameters(CliFrontend.java:1008)
at org.apache.flink.client.cli.CliFrontend.lambda$main$10(CliFrontend.java:1081)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1926)
at org.apache.flink.runtime.security.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41)
at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:1081)
Caused by: org.apache.flink.runtime.client.JobSubmissionException: Failed to submit JobGraph.
at org.apache.flink.client.program.rest.RestClusterClient.lambda$submitJob$8(RestClusterClient.java:391)
at java.util.concurrent.CompletableFuture.uniExceptionally(CompletableFuture.java:884)
at java.util.concurrent.CompletableFuture$UniExceptionally.tryFire(CompletableFuture.java:866)
at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)
at java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:1990)
at org.apache.flink.runtime.concurrent.FutureUtils.lambda$retryOperationWithDelay$8(FutureUtils.java:263)
at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:774)
at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:750)
at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)
at java.util.concurrent.CompletableFuture.postFire(CompletableFuture.java:575)
at java.util.concurrent.CompletableFuture$UniCompose.tryFire(CompletableFuture.java:943)
at java.util.concurrent.CompletableFuture$Completion.run(CompletableFuture.java:456)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:750)
Caused by: org.apache.flink.runtime.rest.util.RestClientException: [Internal server error., <Exception on server side:
org.apache.flink.runtime.client.JobSubmissionException: Failed to submit job.
at org.apache.flink.runtime.dispatcher.Dispatcher.lambda$internalSubmitJob$2(Dispatcher.java:333)
at java.util.concurrent.CompletableFuture.uniHandle(CompletableFuture.java:836)
at java.util.concurrent.CompletableFuture$UniHandle.tryFire(CompletableFuture.java:811)
at java.util.concurrent.CompletableFuture$Completion.run(CompletableFuture.java:456)
at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:40)
at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:44)
at akka.dispatch.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
at akka.dispatch.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
at akka.dispatch.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
at akka.dispatch.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
Caused by: java.lang.RuntimeException: org.apache.flink.runtime.client.JobExecutionException: Could not set up JobManager
at org.apache.flink.util.function.CheckedSupplier.lambda$unchecked$0(CheckedSupplier.java:36)
at java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1604)
... 6 more
Caused by: org.apache.flink.runtime.client.JobExecutionException: Could not set up JobManager
at org.apache.flink.runtime.jobmaster.JobManagerRunner.<init>(JobManagerRunner.java:152)
at org.apache.flink.runtime.dispatcher.DefaultJobManagerRunnerFactory.createJobManagerRunner(DefaultJobManagerRunnerFactory.java:83)
at org.apache.flink.runtime.dispatcher.Dispatcher.lambda$createJobManagerRunner$5(Dispatcher.java:376)
at org.apache.flink.util.function.CheckedSupplier.lambda$unchecked$0(CheckedSupplier.java:34)
... 7 more
Caused by: org.apache.flink.runtime.client.JobExecutionException: Could not instantiate configured state backend
at org.apache.flink.runtime.executiongraph.ExecutionGraphBuilder.buildGraph(ExecutionGraphBuilder.java:303)
at org.apache.flink.runtime.executiongraph.ExecutionGraphBuilder.buildGraph(ExecutionGraphBuilder.java:106)
at org.apache.flink.runtime.scheduler.LegacyScheduler.createExecutionGraph(LegacyScheduler.java:207)
at org.apache.flink.runtime.scheduler.LegacyScheduler.createAndRestoreExecutionGraph(LegacyScheduler.java:184)
at org.apache.flink.runtime.scheduler.LegacyScheduler.<init>(LegacyScheduler.java:176)
at org.apache.flink.runtime.scheduler.LegacySchedulerFactory.createInstance(LegacySchedulerFactory.java:70)
at org.apache.flink.runtime.jobmaster.JobMaster.createScheduler(JobMaster.java:278)
at org.apache.flink.runtime.jobmaster.JobMaster.<init>(JobMaster.java:266)
at org.apache.flink.runtime.jobmaster.factories.DefaultJobMasterServiceFactory.createJobMasterService(DefaultJobMasterServiceFactory.java:98)
at org.apache.flink.runtime.jobmaster.factories.DefaultJobMasterServiceFactory.createJobMasterService(DefaultJobMasterServiceFactory.java:40)
at org.apache.flink.runtime.jobmaster.JobManagerRunner.<init>(JobManagerRunner.java:146)
... 10 more
Caused by: org.apache.flink.configuration.IllegalConfigurationException: Cannot create the RocksDB state backend: The configuration does not specify the checkpoint directory 'state.checkpoints.dir'
at org.apache.flink.contrib.streaming.state.RocksDBStateBackendFactory.createFromConfig(RocksDBStateBackendFactory.java:44)
at org.apache.flink.contrib.streaming.state.RocksDBStateBackendFactory.createFromConfig(RocksDBStateBackendFactory.java:32)
at org.apache.flink.runtime.state.StateBackendLoader.loadStateBackendFromConfig(StateBackendLoader.java:154)
at org.apache.flink.runtime.state.StateBackendLoader.fromApplicationOrConfigOrDefault(StateBackendLoader.java:219)
at org.apache.flink.runtime.executiongraph.ExecutionGraphBuilder.buildGraph(ExecutionGraphBuilder.java:299)
... 20 more
End of exception on server side>]
at org.apache.flink.runtime.rest.RestClient.parseResponse(RestClient.java:389)
at org.apache.flink.runtime.rest.RestClient.lambda$submitRequest$3(RestClient.java:373)
at java.util.concurrent.CompletableFuture.uniCompose(CompletableFuture.java:966)
at java.util.concurrent.CompletableFuture$UniCompose.tryFire(CompletableFuture.java:940)
... 4 more
Code snippet I executed:
import org.apache.flink.api.common.serialization.DeserializationSchema;
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.functions.source.SourceFunction;
import org.apache.flink.streaming.connectors.gcp.pubsub.PubSubSource;

public class ReadFromPubsub
{
    public static void main(String[] args) throws Exception
    {
        System.out.println("Flink Pubsub Code Read 1");
        StreamExecutionEnvironment streamExecEnv = StreamExecutionEnvironment.getExecutionEnvironment();

        System.out.println("Flink Pubsub Code Read 2");
        DeserializationSchema<String> deserializer = new SimpleStringSchema();

        System.out.println("Flink Pubsub Code Read 3");
        SourceFunction<String> pubsubSource = PubSubSource.newBuilder()
                .withDeserializationSchema(deserializer)
                .withProjectName("vz-it-np-gudv-dev-vzntdo-0")
                .withSubscriptionName("subscription1")
                .build();

        System.out.println("Flink Pubsub Code Read 4");
        streamExecEnv.addSource(pubsubSource).print();
        streamExecEnv.enableCheckpointing(10);

        System.out.println("Flink Pubsub Code Read 5");
        streamExecEnv.execute();
    }
}
I can see all the print statements during execution of the code; after the last print statement I get the error.
These are all the exceptions that need to be resolved.
Generally, you would supply the appropriate savepoint/checkpointing directory in your Flink configuration via flink-conf.yaml, as detailed in the docs. If you aren't currently setting it, you can do so in the following ways:
Via flink-conf.yaml (Preferred)
state.checkpoints.dir: "file:///example/checkpoints"
Programmatically (within the job):
Configuration configuration = new Configuration();
configuration.setString("state.checkpoints.dir", "file:///example/checkpoints");
StreamExecutionEnvironment streamExecEnv =
    StreamExecutionEnvironment.getExecutionEnvironment(configuration);
Via the CLI:
flink run -Dstate.checkpoints.dir="/example/checkpoints" your-job.jar
Additionally, if you didn't actually want to perform checkpointing, you could likely remove the following configuration within your job:
streamExecEnv.enableCheckpointing(10);
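Since the cluster configuration evidently selects the RocksDB backend already, one more option is to construct the backend directly in the Flink 1.9 job, which sidesteps the 'state.checkpoints.dir' lookup entirely. A minimal sketch, assuming the flink-statebackend-rocksdb dependency is on the classpath; the path and class name are placeholders:
import org.apache.flink.contrib.streaming.state.RocksDBStateBackend;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class CheckpointSetupSketch
{
    public static void main(String[] args) throws Exception
    {
        StreamExecutionEnvironment streamExecEnv = StreamExecutionEnvironment.getExecutionEnvironment();
        // RocksDBStateBackend(String) takes the checkpoint data URI directly,
        // so nothing needs to be resolved from flink-conf.yaml.
        streamExecEnv.setStateBackend(new RocksDBStateBackend("file:///example/checkpoints"));
        streamExecEnv.enableCheckpointing(10);
        // ... add sources/sinks here and call streamExecEnv.execute() as in the job above
    }
}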

SymmetricDS: Extract Frequency by Channel

I'm trying to limit the extract frequency and found two ways in the user guide (https://www.symmetricds.org/doc/3.9/html/user-guide.html#_extract_frequency_by_channel). Neither of them is working.
1) By setting extract_period_millis.
update sym_channel set extract_period_millis = 30000, last_update_time=CURRENT_TIMESTAMP where channel_id='channel_bps2swd'
After that, I received the following error message on the console:
[server] - SymmetricServlet - Error while processing GET request for node: 0001 at 127.0.0.1 with path: /server/pull
org.jumpmind.db.sql.SqlException: Failed to execute sql: null
at org.jumpmind.db.sql.AbstractSqlTemplate.translate(AbstractSqlTemplate.java:302)
at org.jumpmind.db.sql.JdbcSqlReadCursor.<init>(JdbcSqlReadCursor.java:120)
at org.jumpmind.db.sql.JdbcSqlTemplate.queryForCursor(JdbcSqlTemplate.java:140)
at org.jumpmind.db.sql.AbstractSqlTemplate.query(AbstractSqlTemplate.java:199)
at org.jumpmind.db.sql.AbstractSqlTemplate.query(AbstractSqlTemplate.java:195)
at org.jumpmind.db.sql.AbstractSqlTemplate.query(AbstractSqlTemplate.java:185)
at org.jumpmind.db.sql.AbstractSqlTemplate.query(AbstractSqlTemplate.java:121)
at org.jumpmind.symmetric.service.impl.ConfigurationService.getNodeChannels(ConfigurationService.java:436)
at org.jumpmind.symmetric.service.impl.ConfigurationService.getSuspendIgnoreChannelLists(ConfigurationService.java:531)
at org.jumpmind.symmetric.web.PullUriHandler.handlePull(PullUriHandler.java:112)
at org.jumpmind.symmetric.web.PullUriHandler.handleWithCompression(PullUriHandler.java:100)
at org.jumpmind.symmetric.web.AbstractCompressionUriHandler.handle(AbstractCompressionUriHandler.java:84)
at org.jumpmind.symmetric.web.SymmetricServlet.service(SymmetricServlet.java:114)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:833)
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1650)
at org.jumpmind.symmetric.web.HttpMethodFilter.doFilter(HttpMethodFilter.java:62)
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1637)
at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:533)
at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)
at org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:190)
at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1595)
at org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:188)
at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1253)
at org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:168)
at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:473)
at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1564)
at org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:166)
at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1155)
at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)
at org.eclipse.jetty.server.Server.handle(Server.java:561)
at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:334)
at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:251)
at org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:279)
at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:104)
at org.eclipse.jetty.io.ChannelEndPoint$2.run(ChannelEndPoint.java:124)
at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:247)
at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:140)
at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.run(EatWhatYouKill.java:131)
at org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:243)
at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:679)
at org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:597)
at java.lang.Thread.run(Unknown Source)
Caused by: java.sql.SQLException: Parameter arg '0001' type: -2147483648 caused exception: Invalid parameter index 1.
at org.jumpmind.db.sql.JdbcSqlTemplate.setValues(JdbcSqlTemplate.java:1027)
at org.jumpmind.db.sql.JdbcSqlReadCursor.<init>(JdbcSqlReadCursor.java:92)
... 44 more
Caused by: java.sql.SQLException: Invalid parameter index 1.
at net.sourceforge.jtds.jdbc.JtdsPreparedStatement.getParameter(JtdsPreparedStatement.java:543)
at net.sourceforge.jtds.jdbc.JtdsPreparedStatement.setParameter(JtdsPreparedStatement.java:612)
at net.sourceforge.jtds.jdbc.JtdsPreparedStatement.setString(JtdsPreparedStatement.java:892)
at org.apache.commons.dbcp.DelegatingPreparedStatement.setString(DelegatingPreparedStatement.java:135)
at org.apache.commons.dbcp.DelegatingPreparedStatement.setString(DelegatingPreparedStatement.java:135)
at org.springframework.jdbc.core.StatementCreatorUtils.setValue(StatementCreatorUtils.java:453)
at org.springframework.jdbc.core.StatementCreatorUtils.setParameterValueInternal(StatementCreatorUtils.java:241)
at org.springframework.jdbc.core.StatementCreatorUtils.setParameterValue(StatementCreatorUtils.java:172)
at org.jumpmind.db.sql.JdbcSqlTemplate.setValues(JdbcSqlTemplate.java:1023)
... 45 more
[server] - NodeConcurrencyInterceptor - Error building response headers
I had to set extract_period_millis back to 0 to clear this error message.
Note: table SYM_NODE_CHANNEL_CTL is empty.
2) By setting start_time/end_time on table SYM_NODE_GROUP_CHANNEL_WND for the associated channel.
It didn't have any effect; changes were synced shortly after they occurred.
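For clarity, the kind of window row I set looks roughly like this (the times and the enabled flag here are only illustrative):
update sym_node_group_channel_wnd set start_time = '01:00:00', end_time = '03:00:00', enabled = 1 where channel_id = 'channel_bps2swd';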
System:
Symmetric 3.9.12, Windows 10 x64, mssql
Any solutions for setting the sync frequency?
If you're just looking to change the sync frequency, you could use the following parameters, as they both default to 1 min (60000 ms):
job.pull.period.time.ms
job.push.period.time.ms
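For example, to pull and push every 30 seconds instead, both could be set in the engine's .properties file (the interval below is only an example):
job.pull.period.time.ms=30000
job.push.period.time.ms=30000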
In the class org.jumpmind.symmetric.service.impl.ConfigurationServiceSqlMap, add this:
putSql("selectNodeChannelControlLastExtractTimeSql", "select channel_id, last_extract_time from sym_node_channel_ctl where node_id=?");

NUTCH 1.13 fetch of url failed with: org.apache.nutch.protocol.ProtocolNotFound: protocol not found for url=http

fetch of httpurl failed with: org.apache.nutch.protocol.ProtocolNotFound: protocol not found for url=http
at org.apache.nutch.protocol.ProtocolFactory.getProtocol(ProtocolFactory.java:85)
at org.apache.nutch.fetcher.FetcherThread.run(FetcherThread.java:285)
Using queue mode : byHost
fetch of httpsurl failed with: org.apache.nutch.protocol.ProtocolNotFound: protocol not found for url=https
at org.apache.nutch.protocol.ProtocolFactory.getProtocol(ProtocolFactory.java:85)
at org.apache.nutch.fetcher.FetcherThread.run(FetcherThread.java:285)
I get the above result while running Nutch 1.13 with Solr 6.6.0. The command I used is:
bin/crawl -i -D solr.server.url=http://myip/solr/nutch/ urls/ crawl 2
Below is the plugin section in my nutch-site.xml:
<name>plugin.includes</name>
<value>
protocol-(http|httpclient)|urlfilter-regex|parse-(html)|index-(basic|anchor)|indexer-solr|query-(basic|site|url)|response-(json|xml)|summary-basic|scoring-opic|urlnormalizer-(pass|regex|basic)
</value>
Below are the contents of my plugins directory:
[root@localhost apache-nutch-1.13]# ls plugins
creativecommons index-more nutch-extensionpoints protocol-file scoring-similarity urlnormalizer-ajax
feed index-replace parse-ext protocol-ftp subcollection urlnormalizer-basic
headings index-static parsefilter-naivebayes protocol-htmlunit tld urlnormalizer-host
index-anchor language-identifier parsefilter-regex protocol-http urlfilter-automaton urlnormalizer-pass
index-basic lib-htmlunit parse-html protocol-httpclient urlfilter-domain urlnormalizer-protocol
indexer-cloudsearch lib-http parse-js protocol-interactiveselenium urlfilter-domainblacklist urlnormalizer-querystring
indexer-dummy lib-nekohtml parse-metatags protocol-selenium urlfilter-ignoreexempt urlnormalizer-regex
indexer-elastic lib-regex-filter parse-replace publish-rabbitmq urlfilter-prefix urlnormalizer-slash
indexer-solr lib-selenium parse-swf publish-rabitmq urlfilter-regex
index-geoip lib-xml parse-tika scoring-depth urlfilter-suffix
index-links microformats-reltag parse-zip scoring-link urlfilter-validator
index-metadata mimetype-filter plugin scoring-opic urlmeta
I'm stuck on this issue. As you can see, I have included both protocol-(http|httpclient), but fetching the URLs still fails. Thanks in advance.
NEWER ISSUE: hadoop.log
2017-09-01 14:35:07,172 INFO solr.SolrIndexWriter - SolrIndexer: deleting 1/1 documents
2017-09-01 14:35:07,321 WARN output.FileOutputCommitter - Output Path is null in cleanupJob()
2017-09-01 14:35:07,323 WARN mapred.LocalJobRunner - job_local1176811933_0001
java.lang.Exception: java.lang.IllegalStateException: Connection pool shut down
at org.apache.hadoop.mapred.LocalJobRunner$Job.runTasks(LocalJobRunner.java:462)
at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:529)
Caused by: java.lang.IllegalStateException: Connection pool shut down
at org.apache.http.util.Asserts.check(Asserts.java:34)
at org.apache.http.pool.AbstractConnPool.lease(AbstractConnPool.java:169)
at org.apache.http.pool.AbstractConnPool.lease(AbstractConnPool.java:202)
at org.apache.http.impl.conn.PoolingClientConnectionManager.requestConnection(PoolingClientConnectionManager.java:184)
at org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:415)
at org.apache.http.impl.client.AbstractHttpClient.doExecute(AbstractHttpClient.java:863)
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:82)
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:106)
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:57)
at org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:481)
at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:240)
at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:229)
at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:149)
at org.apache.solr.client.solrj.SolrClient.commit(SolrClient.java:482)
at org.apache.solr.client.solrj.SolrClient.commit(SolrClient.java:463)
at org.apache.nutch.indexwriter.solr.SolrIndexWriter.commit(SolrIndexWriter.java:191)
at org.apache.nutch.indexwriter.solr.SolrIndexWriter.close(SolrIndexWriter.java:179)
at org.apache.nutch.indexer.IndexWriters.close(IndexWriters.java:117)
at org.apache.nutch.indexer.CleaningJob$DeleterReducer.close(CleaningJob.java:122)
at org.apache.hadoop.io.IOUtils.cleanup(IOUtils.java:244)
at org.apache.hadoop.mapred.ReduceTask.runOldReducer(ReduceTask.java:459)
at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:392)
at org.apache.hadoop.mapred.LocalJobRunner$Job$ReduceTaskRunnable.run(LocalJobRunner.java:319)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
2017-09-01 14:35:07,679 ERROR indexer.CleaningJob - CleaningJob: java.io.IOException: Job failed!
at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:865)
at org.apache.nutch.indexer.CleaningJob.delete(CleaningJob.java:174)
at org.apache.nutch.indexer.CleaningJob.run(CleaningJob.java:197)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.nutch.indexer.CleaningJob.main(CleaningJob.java:208)
I somehow solved the issue. I think the whitespace in the plugin.includes value in nutch-site.xml was causing the problem. Here is the new plugin.includes section for others coming here:
<name>plugin.includes</name>
<value>protocol-http|protocol-httpclient|urlfilter-regex|parse-(html)|index-(basic|anchor)|indexer-solr|query-(basic|site|url)|response-(json|xml)|summary-basic|scoring-opic|urlnormalizer-(pass|regex|basic)</value>

How to diagnose problems (Failed to route and batch data on XXX channel)?

I am using SymmetricDS and have stumbled across a problem on one of my clients.
Log says:
2015-11-11 09:10:57,688 ERROR [blagajna_XXX] [RouterService] [blagajna_XXX-job-17] Failed to route and batch data on 'cFpromet' channel
java.lang.NullPointerException
There is no further explanation for the NullPointerException, so I can't debug it myself. Replication works for some time, then this error appears and replication stops. An identical system works without any problems.
select * from sym_outgoing_batch where error_flag=1; returns 0 rows, so how can I debug this problem?
GregaJ
EDIT:
java.lang.NullPointerException
at org.jumpmind.db.platform.AbstractJdbcDdlReader.getTableNamePattern(AbstractJdbcDdlReader.java:638)
at org.jumpmind.db.platform.AbstractJdbcDdlReader$3.execute(AbstractJdbcDdlReader.java:574)
at org.jumpmind.db.platform.AbstractJdbcDdlReader$3.execute(AbstractJdbcDdlReader.java:563)
at org.jumpmind.db.sql.JdbcSqlTemplate.execute(JdbcSqlTemplate.java:432)
at org.jumpmind.db.platform.AbstractJdbcDdlReader.readTable(AbstractJdbcDdlReader.java:563)
at org.jumpmind.db.platform.AbstractDatabasePlatform.readTableFromDatabase(AbstractDatabasePlatform.java:239)
at org.jumpmind.db.platform.AbstractDatabasePlatform.getTableFromCache(AbstractDatabasePlatform.java:314)
at org.jumpmind.symmetric.db.AbstractSymmetricDialect.getTable(AbstractSymmetricDialect.java:377)
at org.jumpmind.symmetric.service.impl.RouterService.routeData(RouterService.java:689)
at org.jumpmind.symmetric.service.impl.RouterService.selectDataAndRoute(RouterService.java:634)
at org.jumpmind.symmetric.service.impl.RouterService.routeDataForChannel(RouterService.java:436)
at org.jumpmind.symmetric.service.impl.RouterService.routeDataForEachChannel(RouterService.java:328)
at org.jumpmind.symmetric.service.impl.RouterService.routeData(RouterService.java:175)
at org.jumpmind.symmetric.job.RouterJob.doJob(RouterJob.java:40)
at org.jumpmind.symmetric.job.AbstractJob.invoke(AbstractJob.java:180)
at org.jumpmind.symmetric.job.AbstractJob.run(AbstractJob.java:224)
at org.springframework.scheduling.support.DelegatingErrorHandlingRunnable.run(DelegatingErrorHandlingRunnable.java:54)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)

How to solve a data config error when running a full import on a Solr instance?

I created a Solr instance and am trying to run a full import, but I am getting a data config problem.
The URL is: http://localhost:8080/apache-solr-4.0.0/collection1/dataimport?command=full-import&clean=true
The response (status 500, for data-config.xml, command full-import) reports: Data Config problem: The processing instruction target matching "[xX][mM][lL]" is not allowed.
org.apache.solr.handler.dataimport.DataImportHandlerException: Data Config problem: The processing instruction target matching "[xX][mM][lL]" is not allowed.
at org.apache.solr.handler.dataimport.DataImporter.loadDataConfig(DataImporter.java:233)
at org.apache.solr.handler.dataimport.DataImporter.maybeReloadConfiguration(DataImporter.java:131)
at org.apache.solr.handler.dataimport.DataImportHandler.handleRequestBody(DataImportHandler.java:167)
at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:129)
at org.apache.solr.core.SolrCore.execute(SolrCore.java:1699)
at org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:455)
at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:276)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:256)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:217)
at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:279)
at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:175)
at org.apache.catalina.core.StandardPipeline.doInvoke(StandardPipeline.java:655)
at org.apache.catalina.core.StandardPipeline.invoke(StandardPipeline.java:595)
at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:161)
at org.apache.catalina.connector.CoyoteAdapter.doService(CoyoteAdapter.java:331)
at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:231)
at com.sun.enterprise.v3.services.impl.ContainerMapper$AdapterCallable.call(ContainerMapper.java:317)
at com.sun.enterprise.v3.services.impl.ContainerMapper.service(ContainerMapper.java:195)
at com.sun.grizzly.http.ProcessorTask.invokeAdapter(ProcessorTask.java:849)
at com.sun.grizzly.http.ProcessorTask.doProcess(ProcessorTask.java:746)
at com.sun.grizzly.http.ProcessorTask.process(ProcessorTask.java:1045)
at com.sun.grizzly.http.DefaultProtocolFilter.execute(DefaultProtocolFilter.java:228)
at com.sun.grizzly.DefaultProtocolChain.executeProtocolFilter(DefaultProtocolChain.java:137)
at com.sun.grizzly.DefaultProtocolChain.execute(DefaultProtocolChain.java:104)
at com.sun.grizzly.DefaultProtocolChain.execute(DefaultProtocolChain.java:90)
at com.sun.grizzly.http.HttpProtocolChain.execute(HttpProtocolChain.java:79)
at com.sun.grizzly.ProtocolChainContextTask.doCall(ProtocolChainContextTask.java:54)
at com.sun.grizzly.SelectionKeyContextTask.call(SelectionKeyContextTask.java:59)
at com.sun.grizzly.ContextTask.run(ContextTask.java:71)
at com.sun.grizzly.util.AbstractThreadPool$Worker.doWork(AbstractThreadPool.java:532)
at com.sun.grizzly.util.AbstractThreadPool$Worker.run(AbstractThreadPool.java:513)
at java.lang.Thread.run(Thread.java:722)
Caused by: org.xml.sax.SAXParseException; systemId: solrres:/data-config.xml; lineNumber: 2; columnNumber: 6; The processing instruction target matching "[xX][mM][lL]" is not allowed.
at com.sun.org.apache.xerces.internal.util.ErrorHandlerWrapper.createSAXParseException(ErrorHandlerWrapper.java:198)
at com.sun.org.apache.xerces.internal.util.ErrorHandlerWrapper.fatalError(ErrorHandlerWrapper.java:177)
at com.sun.org.apache.xerces.internal.impl.XMLErrorReporter.reportError(XMLErrorReporter.java:441)
at com.sun.org.apache.xerces.internal.impl.XMLErrorReporter.reportError(XMLErrorReporter.java:368)
at com.sun.org.apache.xerces.internal.impl.XMLScanner.reportFatalError(XMLScanner.java:1375)
at com.sun.org.apache.xerces.internal.impl.XMLScanner.scanPIData(XMLScanner.java:662)
at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl.scanPIData(XMLDocumentFragmentScannerImpl.java:979)
at com.sun.org.apache.xerces.internal.impl.XMLScanner.scanPI(XMLScanner.java:630)
at com.sun.org.apache.xerces.internal.impl.XMLDocumentScannerImpl$PrologDriver.next(XMLDocumentScannerImpl.java:913)
at com.sun.org.apache.xerces.internal.impl.XMLDocumentScannerImpl.next(XMLDocumentScannerImpl.java:607)
at com.sun.org.apache.xerces.internal.impl.XMLNSDocumentScannerImpl.next(XMLNSDocumentScannerImpl.java:116)
at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl.scanDocument(XMLDocumentFragmentScannerImpl.java:488)
at com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:835)
at com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:764)
at com.sun.org.apache.xerces.internal.parsers.XMLParser.parse(XMLParser.java:123)
at com.sun.org.apache.xerces.internal.parsers.DOMParser.parse(DOMParser.java:240)
at com.sun.org.apache.xerces.internal.jaxp.DocumentBuilderImpl.parse(DocumentBuilderImpl.java:300)
at org.apache.solr.handler.dataimport.DataImporter.loadDataConfig(DataImporter.java:224)
... 31 more
Can anyone help me with this? Thanks in advance.
A couple of resources (here and here) seem to indicate that this error can be caused by whitespace preceding '<?xml ...' in the XML being read.
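In other words, the XML declaration has to be the very first characters of data-config.xml, with no blank line or spaces before it. A minimal well-formed skeleton (element contents are illustrative):
<?xml version="1.0" encoding="UTF-8"?>
<dataConfig>
    <!-- dataSource, document and entity definitions go here -->
</dataConfig>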
