Apache Camel: how to implement an optional consumer for a wire tap - apache-camel

I set up some routes (Camel 2.22.1) that use a wire tap to log some data into a MongoDB:
from(DIRECT_NEXT).process(sendFile)
.wireTap( "direct:count-fetch?failIfNoConsumers=false" )
As you can see, I am using failIfNoConsumers=false.
from(COUNT_FETCH)
.routeId( MONGO_COUNT_FETCH_ROUTEID )
.autoStartup( false )
.process(countFetchProcessor)
.to(persistenceEndpoints.updateImage())
.log(LoggingLevel.DEBUG, "Counted fetch.");
MongoDB is an optional component; the whole application will run without it.
I am using Mongo's ServerMonitorListener to check whether MongoDB is available, and I suspend or resume the route via Camel's ControlBus accordingly.
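For illustration, a minimal sketch of what such a ControlBus call could look like; the ProducerTemplate field and the listener callbacks are my assumptions, not part of the question:
// Hypothetical callbacks driven by the ServerMonitorListener: the ControlBus
// component suspends or resumes the Mongo route by its route id.
producerTemplate.sendBody(
        "controlbus:route?routeId=" + MONGO_COUNT_FETCH_ROUTEID + "&action=suspend", null);
// ...and once MongoDB is reachable again:
producerTemplate.sendBody(
        "controlbus:route?routeId=" + MONGO_COUNT_FETCH_ROUTEID + "&action=resume", null);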
All is running fine!
My problem is that Camel keeps trying to send the exchanges to the suspended route for 30 seconds:
...
[DEBUG] 2019-01-03 14:02:45.848 [Camel (camel-1) thread #23 - WireTap] DirectBlockingProducer - Waited 20025 for consumer to be ready
...
Why does the producer block? Shouldn't the default value for "block" be false?
And after that, of course, we see an exception:
No consumers available on endpoint: direct://count-fetch?failIfNoConsumers=false
What is the best approach to make Camel discard the exchange immediately (how do I set the timeout?) and not throw any exception? Since this is normal application behavior, the exception would only slow things down.
UPDATE:
Here is the complete exception:
[ERROR] 2019-01-07 10:21:22.702 [Camel (camel-1) thread #4 - WireTap] DefaultErrorHandler - Failed delivery for (MessageId: ID-moritz-1546852848013-0-3 on ExchangeId: ID-moritz-1546852848013-0-2). Exhausted after delivery attempt: 1 caught: org.apache.camel.component.direct.DirectConsumerNotAvailableException: No consumers available on endpoint: direct://update-all?failIfNoConsumers=false. Exchange[ID-moritz-1546852848013-0-2]
Message History
---------------------------------------------------------------------------------------------------------------------------------------
RouteId ProcessorId Processor Elapsed (ms)
[route4 ] [route4 ] [timer://updateAll ] [ 30065]
[route4 ] [log1 ] [log ] [ 1]
[route4 ] [to3 ] [direct:updateAll ] [ 19]
[route5 ] [process2 ] [Processor#0x4e92466a ] [ 9]
[route5 ] [process3 ] [Processor#0x1b29d52b ] [ 7]
[route5 ] [wireTap1 ] [wireTap[direct:update-all?failIfNoConsumers=false] ] [ 1]
Stacktrace
---------------------------------------------------------------------------------------------------------------------------------------
org.apache.camel.component.direct.DirectConsumerNotAvailableException: No consumers available on endpoint: direct://update-all?failIfNoConsumers=false. Exchange[ID-moritz-1546852848013-0-2]
at org.apache.camel.component.direct.DirectBlockingProducer.getConsumer(DirectBlockingProducer.java:67) ~[camel-core-2.22.1.jar:2.22.1]
at org.apache.camel.component.direct.DirectBlockingProducer.process(DirectBlockingProducer.java:53) ~[camel-core-2.22.1.jar:2.22.1]
at org.apache.camel.processor.SendDynamicProcessor$1.doInAsyncProducer(SendDynamicProcessor.java:178) ~[camel-core-2.22.1.jar:2.22.1]
at org.apache.camel.impl.ProducerCache.doInAsyncProducer(ProducerCache.java:445) ~[camel-core-2.22.1.jar:2.22.1]
at org.apache.camel.processor.SendDynamicProcessor.process(SendDynamicProcessor.java:160) ~[camel-core-2.22.1.jar:2.22.1]
at org.apache.camel.processor.RedeliveryErrorHandler.process(RedeliveryErrorHandler.java:548) [camel-core-2.22.1.jar:2.22.1]
at org.apache.camel.processor.CamelInternalProcessor.process(CamelInternalProcessor.java:201) [camel-core-2.22.1.jar:2.22.1]
at org.apache.camel.processor.DelegateAsyncProcessor.process(DelegateAsyncProcessor.java:97) [camel-core-2.22.1.jar:2.22.1]
at org.apache.camel.processor.WireTapProcessor$1.call(WireTapProcessor.java:160) [camel-core-2.22.1.jar:2.22.1]
at org.apache.camel.processor.WireTapProcessor$1.call(WireTapProcessor.java:155) [camel-core-2.22.1.jar:2.22.1]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:1.8.0_181]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_181]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_181]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_181]

Make sure to check the documentation for the version of Camel you use, which is 2.22.x.
There you can see that block is enabled by default: https://github.com/apache/camel/blob/camel-2.22.x/camel-core/src/main/docs/direct-component.adoc
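The direct component's timeout option defaults to 30000 ms, which matches the 30 s wait you observed. Below is a minimal sketch (untested, based only on the documented direct-component options) that disables the blocking so the wire tap gives up immediately:
from(DIRECT_NEXT).process(sendFile)
    // block=false: do not wait for a consumer to become ready; together with
    // failIfNoConsumers=false the exchange is simply dropped while the tapped
    // route is suspended, instead of waiting out the 30 s timeout.
    .wireTap("direct:count-fetch?failIfNoConsumers=false&block=false");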

Related

Apache Camel - split and aggregation bug

I tried to create a bug report in Camel's issue tracker, but it's not easy to get access there now, so maybe someone will be able to help me here.
I'm migrating gradually to the newest Camel version. Currently I'm going from 3.7.3 to 3.11.7, but I checked that this bug also happens on 3.20.1.
OK, so to the point.
When I have a pipeline like this:
.to(SPLIT_WORKER_ROUTE_ID, OTHER_ROUTE_ID)
it should execute in sequence. But when somewhere inside SPLIT_WORKER_ROUTE_ID I have aggregation code like this:
.split(body())
.process(splitWorkerProcessor)
.aggregate(exchangeProperty(CORRELATION_ID), new SplitAggregator()).completionSize(exchangeProperty(SPLIT_SIZE))
.to(AFTER_SPLIT_ROUTE_ID)
OTHER_ROUTE_ID kicks in before the exchange reaches AFTER_SPLIT_ROUTE_ID, and starts running in parallel with SPLIT_WORKER_ROUTE_ID.
When I rewrite the code like this (or go back to Camel 3.7.3):
.split(body(), new SplitAggregator()).parallelProcessing()
.process(splitWorkerProcessor)
.end()
.to(AFTER_SPLIT_ROUTE_ID)
everything runs sequentially, as it should. Unfortunately, I have to use more complex aggregation conditions, so I'm afraid I cannot use this workaround, since that style does not allow the aggregation to be configured.
I guess that, according to
https://camel.apache.org/manual/camel-3x-upgrade-guide-3_11.html#_aggregate_eip
something has changed in this area. (EDIT: I've just checked Camel 3.10 and it works properly, so I'm 99.99% sure this change introduced the bug.)
The problem is that the order of execution is disturbed: with
.to(SPLIT_WORKER_ROUTE_ID, OTHER_ROUTE_ID)
OTHER_ROUTE_ID can complete before the sequence SPLIT_WORKER_ROUTE_ID -> AFTER_SPLIT_ROUTE_ID does.
Here is the log presenting the problem:
2023-02-02T18:45:41,229 [main] INFO direct://Main [...] [] [] [] [] - MAIN START
2023-02-02T18:45:41,230 [Camel (camel-1) thread #1 - Threads] INFO direct://splitWorker [...] [] [] [] [] - SPLIT_WORKER_ROUTE_ID START
2023-02-02T18:45:41,399 [Camel (camel-1) thread #1 - Threads] INFO direct://other [...] [] [] [] [] - OTHER_ROUTE_ID START
2023-02-02T18:45:41,399 [Camel (camel-1) thread #3 - Aggregator] INFO direct://splitWorker [...] [] [] [] [] - Aggregation just finished inside SPLIT_WORKER_ROUTE_ID START!
2023-02-02T18:45:41,399 [Camel (camel-1) thread #3 - Aggregator] INFO direct://splitWorker [...] [] [] [] [] - SPLIT_WORKER_ROUTE_ID FINISH
2023-02-02T18:45:41,400 [Camel (camel-1) thread #3 - Aggregator] INFO direct://afterSplit [...] [] [] [] [] - AFTER_SPLIT_ROUTE_ID START
2023-02-02T18:45:42,404 [Camel (camel-1) thread #1 - Threads] INFO direct://other [...] [] [] [] [] - OTHER_ROUTE_ID FINISH
2023-02-02T18:45:43,406 [Camel (camel-1) thread #3 - Aggregator] INFO direct://afterSplit [...] [] [] [] [] - AFTER_SPLIT_ROUTE_ID FINISH
2023-02-02T18:45:47,417 [Camel (camel-1) thread #6 - Delay] INFO direct://Main [...] [] [] [] [] - MAIN FINISH
I would appreciate any help, thanks a lot!
By default, the output of the aggregator is executed on a thread from the aggregator's thread pool. However, you can have the aggregator output run in the same thread as the calling route:
.aggregate(exchangeProperty(CORRELATION_ID), new SplitAggregator())
.completionSize(exchangeProperty(SPLIT_SIZE))
.executorService(new SynchronousExecutorService())
This technique is briefly described here.
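For context, a minimal sketch of the worker route with the synchronous executor wired in. The endpoint names mirror the ones in the log above; SplitAggregator is the question's own class, the property names are assumptions, and the SynchronousExecutorService import path may differ between Camel versions:
import org.apache.camel.Processor;
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.util.concurrent.SynchronousExecutorService;
public class SplitWorkerRoute extends RouteBuilder {
    private static final String CORRELATION_ID = "correlationId"; // property name: assumption
    private static final String SPLIT_SIZE = "splitSize";         // property name: assumption
    private final Processor splitWorkerProcessor = exchange -> { /* per-element work */ };
    @Override
    public void configure() {
        from("direct:splitWorker")
            .split(body())
                .process(splitWorkerProcessor)
            .end()
            .aggregate(exchangeProperty(CORRELATION_ID), new SplitAggregator())
                .completionSize(exchangeProperty(SPLIT_SIZE))
                // Run the aggregator's output on the calling thread instead of the
                // aggregator's own thread pool, restoring sequential behaviour.
                .executorService(new SynchronousExecutorService())
            .to("direct:afterSplit");
    }
}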

TimerException in Flink Process

We are running a Flink job using several operators, including map, windowing, and flatMap(), and the job fails with the following error. I'm just wondering what causes it:
2021-05-27 07:00:07,023 WARN org.apache.flink.runtime.taskmanager.Task [] - Collect Inventory (1/1)#0 (34d81cf2e59f350886f93a1e0f734d38) switched from RUNNING to FAILED with failure cause: org.apache.flink.streaming.runtime.tasks.AsynchronousException: Caught exception while processing timer.
at org.apache.flink.streaming.runtime.tasks.StreamTask$StreamTaskAsyncExceptionHandler.handleAsyncException(StreamTask.java:1282)
at org.apache.flink.streaming.runtime.tasks.StreamTask.handleAsyncException(StreamTask.java:1258)
at org.apache.flink.streaming.runtime.tasks.StreamTask.invokeProcessingTimeCallback(StreamTask.java:1397)
at org.apache.flink.streaming.runtime.tasks.StreamTask.lambda$null$16(StreamTask.java:1386)
at org.apache.flink.streaming.runtime.tasks.StreamTaskActionExecutor$1.runThrowing(StreamTaskActionExecutor.java:50)
at org.apache.flink.streaming.runtime.tasks.mailbox.Mail.run(Mail.java:90)
at org.apache.flink.streaming.runtime.tasks.mailbox.MailboxProcessor.processMailsWhenDefaultActionUnavailable(MailboxProcessor.java:344)
at org.apache.flink.streaming.runtime.tasks.mailbox.MailboxProcessor.processMail(MailboxProcessor.java:330)
at org.apache.flink.streaming.runtime.tasks.mailbox.MailboxProcessor.runMailboxLoop(MailboxProcessor.java:202)
at org.apache.flink.streaming.runtime.tasks.StreamTask.runMailboxLoop(StreamTask.java:661)
at org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:623)
at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:776)
at org.apache.flink.runtime.taskmanager.Task.run(Task.java:563)
at java.lang.Thread.run(Thread.java:748)
Caused by: TimerException{java.lang.RuntimeException: Assigned key must not be null!}
... 12 more
Caused by: java.lang.RuntimeException: Assigned key must not be null!
at org.apache.flink.streaming.runtime.io.RecordWriterOutput.pushToRecordWriter(RecordWriterOutput.java:109)
at org.apache.flink.streaming.runtime.io.RecordWriterOutput.collect(RecordWriterOutput.java:93)
at org.apache.flink.streaming.runtime.io.RecordWriterOutput.collect(RecordWriterOutput.java:44)
at org.apache.flink.streaming.api.operators.CountingOutput.collect(CountingOutput.java:50)
at org.apache.flink.streaming.api.operators.CountingOutput.collect(CountingOutput.java:28)
at org.apache.flink.streaming.api.operators.TimestampedCollector.collect(TimestampedCollector.java:50)
at org.apache.flink.streaming.api.functions.windowing.PassThroughWindowFunction.apply(PassThroughWindowFunction.java:35)
at org.apache.flink.streaming.runtime.operators.windowing.functions.InternalSingleValueWindowFunction.process(InternalSingleValueWindowFunction.java:48)
at org.apache.flink.streaming.runtime.operators.windowing.WindowOperator.emitWindowContents(WindowOperator.java:577)
at org.apache.flink.streaming.runtime.operators.windowing.WindowOperator.onProcessingTime(WindowOperator.java:533)
at org.apache.flink.streaming.api.operators.InternalTimerServiceImpl.onProcessingTime(InternalTimerServiceImpl.java:284)
at org.apache.flink.streaming.runtime.tasks.StreamTask.invokeProcessingTimeCallback(StreamTask.java:1395)
... 11 more
Caused by: java.lang.NullPointerException: Assigned key must not be null!
at org.apache.flink.util.Preconditions.checkNotNull(Preconditions.java:76)
at org.apache.flink.runtime.state.KeyGroupRangeAssignment.assignKeyToParallelOperator(KeyGroupRangeAssignment.java:51)
at org.apache.flink.streaming.runtime.partitioner.KeyGroupStreamPartitioner.selectChannel(KeyGroupStreamPartitioner.java:63)
at org.apache.flink.streaming.runtime.partitioner.KeyGroupStreamPartitioner.selectChannel(KeyGroupStreamPartitioner.java:35)
at org.apache.flink.runtime.io.network.api.writer.ChannelSelectorRecordWriter.emit(ChannelSelectorRecordWriter.java:54)
at org.apache.flink.streaming.runtime.io.RecordWriterOutput.pushToRecordWriter(RecordWriterOutput.java:107)
... 22 more
2021-05-27 07:00:07,023 INFO org.apache.flink.runtime.taskmanager.Task [] - Triggering cancellation of task code Collect Inventory (1/1)#0 (34d81cf2e59f350886f93a1e0f734d38).
{eventTime:2021-05-27T07:00:05.878Z, batchId:4a0c09ad-1e28-4f74-a19a-8e1422c5bc5a, serialDetail:null, inventoryCount:{"location": "tmobile01", "sku": "190198496225", "state": "LOST", "inventoryStatus": "Adjusted-Out", "inventoryType": "ACC", "sellableFlag": false, "version": 1, "updateTimestamp": 2021-05-27T07:00:07.118Z, "globalQuantity": -1, "localQuantity": -1, "eventId": null, "movementSource": "", "locationType": "Store"
It seems that you are using a keyBy operation and the extracted key is null. You can't really have a null key in Flink.
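For illustration, a minimal sketch of guarding the key extraction before keyBy; the InventoryEvent type and the getSerialDetail() accessor are hypothetical stand-ins for whatever field the job keys on (the serialDetail:null in the logged record is a plausible suspect):
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.datastream.KeyedStream;
// Drop (or reroute) records whose key field is null before keyBy,
// so the partitioner never sees a null key.
DataStream<InventoryEvent> safe = events.filter(e -> e.getSerialDetail() != null);
KeyedStream<InventoryEvent, String> keyed = safe.keyBy(e -> e.getSerialDetail());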

Exception not getting handled by onException(Throwable.class) after enabling bridgeErrorHandler on SEDA

I have the following route:
onException(Throwable.class)
    .handled(true)
    .process(...)                           // (A1) shutdown code here
    .rollback();
from("file:D:/data/input?fileName=in.txt")
    .transacted("required")                 // (A2) JPA txn manager and Hikari pooled datasource behind the scenes
    .split(...)
    .to("seda:DUMMY?blockWhenFull=true");
from("seda:DUMMY?bridgeErrorHandler=true")
    .transacted("required")                 // (A3) creates a new transaction due to SEDA
    .process(...)                           // (A4) reads from the database and does some computation
    .to("file:D:/data/output?fileName=out.txt");
I managed to shut down the DB in the middle of processing the file (in.txt), and so the HikariDataSource started throwing exceptions. However, even after enabling bridgeErrorHandler on the SEDA consumer side, these exceptions were not handled by the onException() clause.
In the logs, I found that these exceptions are simply logged by the TransactionErrorHandler. Could you please help me figure out how to trigger onException() in this case, so that the application can shut down?
Please find the logs below:
2020-06-10 18:25:14,314 WARN com.zaxxer.hikari.pool.ProxyConnection [157] [Camel (camel-1) thread #1 - file://D:/data/input] -[TEST1]-[RW_TEST_ROUTE_2]-[3019]-[]- HikariPool-1 - Connection oracle.jdbc.driver.T4CConnection#6eeade6c marked as broken because of SQLSTATE(08006), ErrorCode(17002)
java.sql.SQLRecoverableException: IO Error: Connection reset by peer: socket write error
at oracle.jdbc.driver.T4CConnection.doCommit(T4CConnection.java:965) ~[ojdbc7-customized-12.1.0.2.0.jar!/:12.1.0.2.0]
at oracle.jdbc.driver.PhysicalConnection.commit(PhysicalConnection.java:2401) ~[ojdbc7-customized-12.1.0.2.0.jar!/:12.1.0.2.0]
at oracle.jdbc.driver.PhysicalConnection.commit(PhysicalConnection.java:2407) ~[ojdbc7-customized-12.1.0.2.0.jar!/:12.1.0.2.0]
at com.zaxxer.hikari.pool.ProxyConnection.commit(ProxyConnection.java:366) ~[HikariCP-3.4.1.jar!/:?]
at com.zaxxer.hikari.pool.HikariProxyConnection.commit(HikariProxyConnection.java) ~[HikariCP-3.4.1.jar!/:?]
at org.hibernate.resource.jdbc.internal.AbstractLogicalConnectionImplementor.commit(AbstractLogicalConnectionImplementor.java:81) ~[hibernate-core-5.4.9.Final.jar!/:5.4.9.Final]
at org.hibernate.resource.transaction.backend.jdbc.internal.JdbcResourceLocalTransactionCoordinatorImpl$TransactionDriverControlImpl.commit(JdbcResourceLocalTransactionCoordinatorImpl.java:282) ~[hibernate-core-5.4.9.Final.jar!/:5.4.9.Final]
at org.hibernate.engine.transaction.internal.TransactionImpl.commit(TransactionImpl.java:101) ~[hibernate-core-5.4.9.Final.jar!/:5.4.9.Final]
at org.springframework.orm.jpa.JpaTransactionManager.doCommit(JpaTransactionManager.java:534) ~[spring-orm-5.2.2.RELEASE.jar!/:5.2.2.RELEASE]
at org.springframework.transaction.support.AbstractPlatformTransactionManager.processCommit(AbstractPlatformTransactionManager.java:744) ~[spring-tx-5.2.2.RELEASE.jar!/:5.2.2.RELEASE]
at org.springframework.transaction.support.AbstractPlatformTransactionManager.commit(AbstractPlatformTransactionManager.java:712) ~[spring-tx-5.2.2.RELEASE.jar!/:5.2.2.RELEASE]
at org.springframework.transaction.support.TransactionTemplate.execute(TransactionTemplate.java:152) ~[spring-tx-5.2.2.RELEASE.jar!/:5.2.2.RELEASE]
at org.apache.camel.spring.spi.TransactionErrorHandler.doInTransactionTemplate(TransactionErrorHandler.java:182) ~[camel-spring-3.0.0.jar!/:3.0.0]
at org.apache.camel.spring.spi.TransactionErrorHandler.processInTransaction(TransactionErrorHandler.java:140) ~[camel-spring-3.0.0.jar!/:3.0.0]
at org.apache.camel.spring.spi.TransactionErrorHandler.process(TransactionErrorHandler.java:107) ~[camel-spring-3.0.0.jar!/:3.0.0]
at org.apache.camel.spring.spi.TransactionErrorHandler.process(TransactionErrorHandler.java:116) ~[camel-spring-3.0.0.jar!/:3.0.0]
at org.apache.camel.processor.CamelInternalProcessor.process(CamelInternalProcessor.java:228) ~[camel-base-3.0.0.jar!/:3.0.0]
at org.apache.camel.processor.Pipeline.doProcess(Pipeline.java:103) ~[camel-base-3.0.0.jar!/:3.0.0]
at org.apache.camel.processor.Pipeline.lambda$process$1(Pipeline.java:87) ~[camel-base-3.0.0.jar!/:3.0.0]
at org.apache.camel.impl.engine.DefaultReactiveExecutor$3.run(DefaultReactiveExecutor.java:116) [camel-base-3.0.0.jar!/:3.0.0]
at org.apache.camel.impl.engine.DefaultReactiveExecutor$Worker.schedule(DefaultReactiveExecutor.java:185) [camel-base-3.0.0.jar!/:3.0.0]
at org.apache.camel.impl.engine.DefaultReactiveExecutor.scheduleMain(DefaultReactiveExecutor.java:59) [camel-base-3.0.0.jar!/:3.0.0]
at org.apache.camel.processor.Pipeline.process(Pipeline.java:87) [camel-base-3.0.0.jar!/:3.0.0]
at org.apache.camel.processor.CamelInternalProcessor.process(CamelInternalProcessor.java:228) [camel-base-3.0.0.jar!/:3.0.0]
at org.apache.camel.component.file.GenericFileConsumer.processExchange(GenericFileConsumer.java:454) [camel-file-3.0.0.jar!/:3.0.0]
at org.apache.camel.component.file.GenericFileConsumer.processBatch(GenericFileConsumer.java:223) [camel-file-3.0.0.jar!/:3.0.0]
at org.apache.camel.component.file.GenericFileConsumer.poll(GenericFileConsumer.java:186) [camel-file-3.0.0.jar!/:3.0.0]
at org.apache.camel.support.ScheduledPollConsumer.doRun(ScheduledPollConsumer.java:183) [camel-support-3.0.0.jar!/:3.0.0]
at org.apache.camel.support.ScheduledPollConsumer.run(ScheduledPollConsumer.java:102) [camel-support-3.0.0.jar!/:3.0.0]
at java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source) [?:1.8.0_251]
at java.util.concurrent.FutureTask.runAndReset(Unknown Source) [?:1.8.0_251]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(Unknown Source) [?:1.8.0_251]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(Unknown Source) [?:1.8.0_251]
at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source) [?:1.8.0_251]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source) [?:1.8.0_251]
at java.lang.Thread.run(Unknown Source) [?:1.8.0_251]
Caused by: java.net.SocketException: Connection reset by peer: socket write error
at java.net.SocketOutputStream.socketWrite0(Native Method) ~[?:1.8.0_251]
at java.net.SocketOutputStream.socketWrite(Unknown Source) ~[?:1.8.0_251]
at java.net.SocketOutputStream.write(Unknown Source) ~[?:1.8.0_251]
at oracle.net.ns.DataPacket.send(DataPacket.java:209) ~[ojdbc7-customized-12.1.0.2.0.jar!/:12.1.0.2.0]
at oracle.net.ns.NetOutputStream.flush(NetOutputStream.java:215) ~[ojdbc7-customized-12.1.0.2.0.jar!/:12.1.0.2.0]
at oracle.net.ns.NetInputStream.getNextPacket(NetInputStream.java:302) ~[ojdbc7-customized-12.1.0.2.0.jar!/:12.1.0.2.0]
at oracle.net.ns.NetInputStream.read(NetInputStream.java:249) ~[ojdbc7-customized-12.1.0.2.0.jar!/:12.1.0.2.0]
at oracle.net.ns.NetInputStream.read(NetInputStream.java:171) ~[ojdbc7-customized-12.1.0.2.0.jar!/:12.1.0.2.0]
at oracle.net.ns.NetInputStream.read(NetInputStream.java:89) ~[ojdbc7-customized-12.1.0.2.0.jar!/:12.1.0.2.0]
at oracle.jdbc.driver.T4CSocketInputStreamWrapper.readNextPacket(T4CSocketInputStreamWrapper.java:123) ~[ojdbc7-customized-12.1.0.2.0.jar!/:12.1.0.2.0]
at oracle.jdbc.driver.T4CSocketInputStreamWrapper.read(T4CSocketInputStreamWrapper.java:79) ~[ojdbc7-customized-12.1.0.2.0.jar!/:12.1.0.2.0]
at oracle.jdbc.driver.T4CMAREngineStream.unmarshalUB1(T4CMAREngineStream.java:429) ~[ojdbc7-customized-12.1.0.2.0.jar!/:12.1.0.2.0]
at oracle.jdbc.driver.T4CTTIfun.receive(T4CTTIfun.java:397) ~[ojdbc7-customized-12.1.0.2.0.jar!/:12.1.0.2.0]
at oracle.jdbc.driver.T4CTTIfun.doRPC(T4CTTIfun.java:257) ~[ojdbc7-customized-12.1.0.2.0.jar!/:12.1.0.2.0]
at oracle.jdbc.driver.T4C7Ocommoncall.doOCOMMIT(T4C7Ocommoncall.java:73) ~[ojdbc7-customized-12.1.0.2.0.jar!/:12.1.0.2.0]
at oracle.jdbc.driver.T4CConnection.doCommit(T4CConnection.java:910) ~[ojdbc7-customized-12.1.0.2.0.jar!/:12.1.0.2.0]
... 35 more
2020-06-10 18:25:14,318 WARN org.apache.camel.spring.spi.TransactionErrorHandler [276] [Camel (camel-1) thread #1 - file://D:/data/input] -[TEST1]-[RW_TEST_ROUTE_2]-[3019]-[]- Transaction rollback (0x21f9764) redelivered(false) for (MessageId: ID-RAJUP-1591793597060-0-1 on ExchangeId: ID-RAJUP-1591793597060-0-2) caught: Unable to commit against JDBC Connection; nested exception is org.hibernate.TransactionException: Unable to commit against JDBC Connection
2020-06-10 18:25:14,319 WARN org.apache.camel.component.file.GenericFileOnCompletion [144] [Camel (camel-1) thread #1 - file://D:/data/input] -[TEST1]-[RW_TEST_ROUTE_2]-[3019]-[RW_TEST_ROUTE_2-1]- Rollback file strategy: org.apache.camel.component.file.strategy.GenericFileRenameProcessStrategy#6834b2f3 for file: GenericFile[D:\data\input\in.txt]
2020-06-10 18:25:27,306 WARN org.hibernate.engine.jdbc.spi.SqlExceptionHelper [137] [Camel (camel-1) thread #2 - seda://ADPIN] -[TEST1]-[RW_TEST_ROUTE_2]-[3019]-[RW_TEST_ROUTE_2-4]- SQL Error: 0, SQLState: null
2020-06-10 18:25:27,306 ERROR org.hibernate.engine.jdbc.spi.SqlExceptionHelper [142] [Camel (camel-1) thread #2 - seda://ADPIN] -[TEST1]-[RW_TEST_ROUTE_2]-[3019]-[RW_TEST_ROUTE_2-4]- HikariPool-1 - Connection is not available, request timed out after 30000ms.
2020-06-10 18:25:27,307 WARN org.apache.camel.spring.spi.TransactionErrorHandler [276] [Camel (camel-1) thread #2 - seda://ADPIN] -[TEST1]-[RW_TEST_ROUTE_2]-[3019]-[RW_TEST_ROUTE_2-4]- Transaction rollback (0x21f9764) redelivered(false) for (MessageId: ID-RAJUP-1591793597060-0-140 on ExchangeId: ID-RAJUP-1591793597060-0-5227) caught: Could not open JPA EntityManager for transaction; nested exception is org.hibernate.exception.JDBCConnectionException: Unable to acquire JDBC Connection
I believe I found the reason for the above behaviour:
Enabling bridgeErrorHandler on SEDA consumers (or any capable consumer) tells the error handler (in our case the TransactionErrorHandler, not the onException() clause) to handle the exception. Error handlers are the last resort, i.e. if an exception is not caught by an onException() clause, the error handler is there to handle it.
As the name 'bridgeErrorHandler' implies bridging to the error handler, any exception during message consumption (in our case from SEDA) directly triggers the error handler.
Fortunately, the 'exceptionHandler' parameter of the SEDA consumer-side URL gives us a hook to achieve the desired functionality.
NOTE: the JMS consumer-side URL used to have a 'bridgeErrorHandler' option, but it was later removed.
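For illustration, a minimal sketch of such an exceptionHandler; the class, the bean name shutdownHandler, and the shutdown logic are hypothetical placeholders for the code at (A1):
import org.apache.camel.Exchange;
import org.apache.camel.spi.ExceptionHandler;
public class ShutdownExceptionHandler implements ExceptionHandler {
    @Override
    public void handleException(Throwable exception) {
        handleException(exception.getMessage(), null, exception);
    }
    @Override
    public void handleException(String message, Throwable exception) {
        handleException(message, null, exception);
    }
    @Override
    public void handleException(String message, Exchange exchange, Throwable exception) {
        // Placeholder for the shutdown code from (A1).
        System.err.println("Fatal consumer error, shutting down: " + message);
    }
}
Registered in the registry under the name shutdownHandler, it can then be referenced from the consumer URI (with bridgeErrorHandler left at its default of false, since exceptionHandler is not used when the bridge is enabled):
from("seda:DUMMY?exceptionHandler=#shutdownHandler")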

camel-akka and response from actor

Dear Akka/Camel Masters!
I have the following route:
(netty4:tcp) -> (akka:actor)
I'm using the akka-camel module, where:
akka:actor is of type UntypedConsumerActor
netty4:tcp is an endpoint defined in the getEndpointUri method of akka:actor:
netty4:tcp://localhost:8000?textline=true
When I send bytes to the TCP socket, I receive an exception telling me that the socket channel is closed:
Caused by: java.nio.channels.ClosedChannelException: null
at io.netty.channel.AbstractChannel$AbstractUnsafe.write(...)(Unknown Source) [netty-all-4.1.4.Final.jar:4.1.4.Final]
Message History
---------------------------------------------------------------------------------------------------------------------------------------
RouteId ProcessorId Processor Elapsed (ms)
[akka://FileDaemonS] [akka://FileDaemonS] [tcp://localhost:8000 ] [ 60061]
[akka://FileDaemonS] [to1 ] [akka://FileDaemonSystem/user/FileDaemonTcpEndpoint?autoAck=false&replyTimeout=] [ 60037]
java.util.concurrent.TimeoutException: Failed to get response from the actor [ActorEndpointPath(akka://FileDaemonSystem/user/FileDaemonTcpEndpoint)] within timeout [1 minute]. Check replyTimeout and blocking settings [Endpoint[akka://FileDaemonSystem/user/FileDaemonTcpEndpoint?autoAck=false&replyTimeout=60000+milliseconds]]
at akka.camel.internal.component.ActorProducer$$anonfun$1.applyOrElse(ActorComponent.scala:151) ~[akka-camel_2.11-2.4.9.jar:na]
at akka.camel.internal.component.ActorProducer$$anonfun$1.applyOrElse(ActorComponent.scala:148) ~[akka-camel_2.11-2.4.9.jar:na]
at scala.runtime.AbstractPartialFunction.apply(AbstractPartialFunction.scala:36) ~[scala-library-2.11.8.jar:na]
at scala.PartialFunction$AndThen.apply(PartialFunction.scala:186) [scala-library-2.11.8.jar:na]
at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:32) [scala-library-2.11.8.jar:na]
What am I doing wrong?
I found a solution: configuring the Netty endpoint as one-way solves the problem.
netty4:tcp://localhost:8000?textline=true&sync=false
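In context, a minimal sketch of the consumer actor with the one-way URI; the class name matches the endpoint in the logs, but the body is a hypothetical illustration:
import akka.camel.CamelMessage;
import akka.camel.javaapi.UntypedConsumerActor;
public class FileDaemonTcpEndpoint extends UntypedConsumerActor {
    @Override
    public String getEndpointUri() {
        // sync=false makes the Netty consumer one-way (fire-and-forget), so no
        // reply has to travel back through the actor and the exchange no longer
        // fails when the actor does not respond within the replyTimeout.
        return "netty4:tcp://localhost:8000?textline=true&sync=false";
    }
    @Override
    public void onReceive(Object message) {
        if (message instanceof CamelMessage) {
            // process the received text line here
        } else {
            unhandled(message);
        }
    }
}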

Lily-Solr-HBase Integration

I'm trying to integrate Lily, Solr, and HBase.
I'm using:
Hortonworks distribution (version 2.4)
Solr (version 5.2.1)
Lily (version 2.0)
I'm following the Lily documentation from the official link:
http://docs.ngdata.com/lily-docs-2_0/414-lily/432-lily.html
But while running the steps as described to start the Lily server, I'm getting the exception below.
[root@BAN-SDU-OVS2-VM7 lily-2.0]# bin/lily-server
[INFO ][11:32:41,606][main ] org.kauriproject.runtime.info - Starting the Kauri Runtime.
[INFO ][11:32:42,163][main ] org.kauriproject.runtime.info - Reading module configurations of 12 modules.
[INFO ][11:32:43,152][main ] org.kauriproject.runtime.info - Starting the modules.
[INFO ][11:32:43,162][main ] org.kauriproject.runtime.info - Starting module pluginregistry - /root/lily-2.0/lily-2.0/lily-2.0/lib/org/lilyproject/lily-pluginregistry-impl/2.0/lily-pluginregistry-impl-2.0.jar
[INFO ][11:32:44,203][main ] org.kauriproject.runtime.info - Starting module general - /root/lily-2.0/lily-2.0/lily-2.0/lib/org/lilyproject/lily-general-module/2.0/lily-general-module-2.0.jar
org.kauriproject.runtime.KauriRTException: Error constructing module defined at /root/lily-2.0/lily-2.0/lily-2.0/lib/org/lilyproject/lily-general-module/2.0/lily-general-module-2.0.jar
at org.kauriproject.runtime.module.build.ModuleBuilder.buildInt(ModuleBuilder.java:152)
at org.kauriproject.runtime.module.build.ModuleBuilder.build(ModuleBuilder.java:55)
at org.kauriproject.runtime.KauriRuntime.start(KauriRuntime.java:240)
at org.kauriproject.runtime.cli.KauriRuntimeCli.run(KauriRuntimeCli.java:292)
at org.kauriproject.runtime.cli.KauriRuntimeCli.main(KauriRuntimeCli.java:63)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.kauriproject.launcher.RuntimeCliLauncher.run(RuntimeCliLauncher.java:79)
at org.kauriproject.launcher.RuntimeCliLauncher.launch(RuntimeCliLauncher.java:58)
at org.kauriproject.launcher.RuntimeCliLauncher.main(RuntimeCliLauncher.java:54)
Caused by: org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'hbaseConfiguration' defined in KAURI-INF/spring/services.xml in /root/lily-2.0/lily-2.0/lily-2.0/lib/org/lilyproject/lily-general-module/2.0/lily-general-module-2.0.jar: Instantiation of bean failed; nested exception is org.springframework.beans.BeanInstantiationException: Could not instantiate bean class [org.lilyproject.server.modules.general.HadoopConfigurationFactoryImpl]: Constructor threw exception; nested exception is org.apache.hadoop.hbase.client.NoServerForRegionException: Unable to find region for after 1 tries.
at org.springframework.beans.factory.support.ConstructorResolver.autowireConstructor(ConstructorResolver.java:254)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.autowireConstructor(AbstractAutowireCapableBeanFactory.java:925)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBeanInstance(AbstractAutowireCapableBeanFactory.java:835)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:440)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory$1.run(AbstractAutowireCapableBeanFactory.java:409)
at java.security.AccessController.doPrivileged(Native Method)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:380)
at org.springframework.beans.factory.support.AbstractBeanFactory$1.getObject(AbstractBeanFactory.java:264)
at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:222)
at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:261)
at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:185)
at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:164)
at org.springframework.beans.factory.support.DefaultListableBeanFactory.preInstantiateSingletons(DefaultListableBeanFactory.java:429)
at org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:728)
at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:380)
at org.kauriproject.runtime.module.build.ModuleBuilder.buildInt(ModuleBuilder.java:89)
... 11 more
Caused by: org.springframework.beans.BeanInstantiationException: Could not instantiate bean class [org.lilyproject.server.modules.general.HadoopConfigurationFactoryImpl]: Constructor threw exception; nested exception is org.apache.hadoop.hbase.client.NoServerForRegionException: Unable to find region for after 1 tries.
at org.springframework.beans.BeanUtils.instantiateClass(BeanUtils.java:115)
at org.springframework.beans.factory.support.SimpleInstantiationStrategy.instantiate(SimpleInstantiationStrategy.java:87)
at org.springframework.beans.factory.support.ConstructorResolver.autowireConstructor(ConstructorResolver.java:248)
... 26 more
Caused by: org.apache.hadoop.hbase.client.NoServerForRegionException: Unable to find region for after 1 tries.
at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:914)
at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:820)
at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:788)
at org.apache.hadoop.hbase.client.HTable.finishSetup(HTable.java:249)
at org.apache.hadoop.hbase.client.HTable.<init>(HTable.java:213)
at org.lilyproject.server.modules.general.HadoopConfigurationFactoryImpl.waitOnHBase(HadoopConfigurationFactoryImpl.java:67)
at org.lilyproject.server.modules.general.HadoopConfigurationFactoryImpl.<init>(HadoopConfigurationFactoryImpl.java:53)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at org.springframework.beans.BeanUtils.instantiateClass(BeanUtils.java:100)
... 28 more
Startup failed. Will try to shutdown and exit.
[INFO ][11:33:45,632][main ] org.kauriproject.runtime.info - Shutting down the modules.
[INFO ][11:33:45,633][main ] org.kauriproject.runtime.info - Stopping the restservice manager.
Can anyone help with how to fix this and proceed?
Thanks in advance.
