Camel - forceShutdown & rollback - apache-camel

I have route for file processing something like below,
fromF("file:in?recursive=false&noop=true&maxMessagesPerPoll=10&readLock=idempotent&idempotentRepository=#fileRepo&readLockRemoveOnCommit=true&readLockRemoveOnRollback=true&delete=true&moveFailed=failedDir")
    .onCompletion()
        .process("CompletionProcess")
    .end()
    .threads(5)
    .process("fileProcess")
    .end();
Currently, when the route receives a shutdown signal it waits 45 seconds; if processing has not completed within that time it forces a shutdown, and at that point it calls onCompletion (CompletionProcess). CompletionProcess updates the database, and because the ApplicationContext has already been closed it throws a ConfigurationPropertiesBindException ("...ApplicationContext has been closed already"), and the file gets moved to failedDir.
My goal is to stop the route after 45 seconds with no rollback, no onCompletion call, and the file left as-is in the directory.
I am pretty new to Camel and am trying to understand the rollback strategy/shutdown strategy, but I can't find a solution for the above yet. Please guide me.
Thanks!
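(A possible starting point, not a verified fix: the 45-second wait described above matches Camel's default graceful-shutdown timeout, which is configurable via the context's ShutdownStrategy. Raising it gives in-flight files more time to finish before the forced shutdown, onCompletion call, and rollback kick in; the 120-second value below is only an illustration.)

```java
// Sketch: raise the graceful-shutdown timeout (default unit: seconds)
// so in-flight exchanges can complete instead of being force-stopped.
// Assumes this runs where a CamelContext is available, e.g. inside a RouteBuilder.
getContext().getShutdownStrategy().setTimeout(120);
```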

Related

Aggregate after exception from ftp consumer: FatalFallbackErrorHandler

My camel route tries to pick up some files from sftp, transfer them to network, and delete them from sftp. If the sftp is unreachable after 3 attempts, I want the route to send an email warning the admin about the problem.
For this reason my sftp address has the following parameters:
maximumReconnectAttempts=2&throwExceptionOnConnectFailed=true&consumer.bridgeErrorHandler=true
In case the network location is not available, I want the route to notify the admin and not delete the files from sftp.
For this reason I have set .handled(false) in onException.
However, when connecting to sftp fails, the aggregation also fails and no emails are sent. I have made a minimalist example below:
//configure
onException(Throwable.class)
    .retryAttemptedLogLevel(LoggingLevel.WARN)
    .redeliveryDelay(1000)
    .handled(false)
    .log(LoggingLevel.ERROR, LOG, "XXX - Error moving files")
    .to(AGGREGATEROUTE)
    .end();

from(downloadFrom)
    .to(to)
    .log(LoggingLevel.INFO, LOG, "XXX - Moving file OK")
    .to(AGGREGATEROUTE);

from(AGGREGATEROUTE)
    .log(LoggingLevel.INFO, LOG, "XXX - Starting aggregation.")
    .aggregate(constant(true), new GroupedExchangeAggregationStrategy())
        .completionFromBatchConsumer()
        .completionTimeout(10000)
    .log(LoggingLevel.INFO, LOG, "XXX - Aggregation completed, sending mail.");
In the logs I see:
16:02| ERROR | CamelLogger.java 156 | XXX - Error moving files
Then the logs for the Exception occurring during connection.
And then this:
16:02| ERROR | FatalFallbackErrorHandler.java 174 | Exception occurred while trying to handle previously thrown exception on exchangeId: ID-LP0641-1552662095664-0-2 using: [Pipeline[[Channel[Log(proefjes.camel_cursus.routebuilders.MoveWithPickupExceptions)[XXX - Error moving files]], Channel[sendTo(direct://aggregate)]]]].
16:02| ERROR | FatalFallbackErrorHandler.java 172 | \--> New exception on exchangeId: ID-LP0641-1552662095664-0-2
org.apache.camel.component.file.GenericFileOperationFailedException: Cannot connect to sftp://user#mycompany.nl:22
at org.apache.camel.component.file.remote.SftpOperations.connect(SftpOperations.java:149)
I do not see "XXX - Starting aggregation.", which I would expect to see in the log. Does some kind of error occur before aggregation? The breakpoint on aggregate(*, *) is never reached.
First, I just want to clarify something. You write "In case the network location is not available, i want the route to notify the admin and not delete the files from sftp", but shouldn't that be obvious anyhow? I mean, if the network location is not available, wouldn't deleting the files from sftp be impossible?
It's a little confusing that your exception handler is also routing .to(AGGREGATEROUTE). Given that you want to email an admin, shouldn't that be in the exception handler, not in the happy path? Why would you and how would you "aggregate" a connection failure?
Finally, and here I think is the real problem with your implementation, you may have misunderstood what handled(false) does. Setting it to false means routing stops and the exception is propagated back to the caller. I'm not sure what the .to(AGGREGATEROUTE) would do in this case, but I'm not surprised it's not being reached.
I suggest trying a few things. I don't have your code so I'm not sure which will work best. These are all related and any might work:
Change handled(false) to handled(true).
Replace handled with continued(true).
Use a Dead Letter Channel.
Reference:
Handle and Continue Exceptions
Dead Letter Channel
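To illustrate the difference between the first two suggestions (a sketch reusing the LOG and AGGREGATEROUTE constants from the question, not tested against the actual routes): handled(true) marks the exception as dealt with and stops routing the original exchange, while continued(true) lets the exchange resume the original route after the onException block runs.

```java
// handled(true): the exception is considered dealt with; the original
// route stops, and the onException block's output becomes the result.
onException(Throwable.class)
    .handled(true)
    .log(LoggingLevel.ERROR, LOG, "XXX - Error moving files")
    .to(AGGREGATEROUTE);

// continued(true): after the onException block runs, the exchange
// continues in the original route from the point where it failed.
onException(Throwable.class)
    .continued(true)
    .log(LoggingLevel.WARN, LOG, "XXX - Continuing after failure");
```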
Since error handling differs depending on which endpoint causes the error, I have solved this by having two different versions of onException:
//configure exception on sftp end
onException(Throwable.class)
    .maximumRedeliveries(2)
    .retryAttemptedLogLevel(LoggingLevel.WARN)
    .redeliveryDelay(1000)
    .onWhen(new hasSFTPErrorPredicate())
    // .continued(true) // tries to connect once, mails and continues to aggregation with empty exchange
    // .handled(false)  // tries to connect twice but does not reach mail
    .handled(true)      // tries to connect once, does reach mail
    // handled not defined: tries to connect twice but does not reach mail
    .log(LoggingLevel.INFO, LOG, "XXX - SFTP exception")
    .to(MAIL_ROUTE)
    .end();

// exception anywhere else
onException(Throwable.class)
    .maximumRedeliveries(2)
    .retryAttemptedLogLevel(LoggingLevel.WARN)
    .redeliveryDelay(1000)
    .log(LoggingLevel.ERROR, LOG, "XXX - Error moving file ${file:name}: ${exception}")
    .to(AGGREGATEROUTE)
    .handled(false)
    .end();
Exceptions occurring at the sftp end are handled in the first onException, because there the hasSFTPErrorPredicate returns true. All this predicate does is check whether the exception or any of its causes has "Cannot connect to sftp:" in the message.
No rollback is required in this case because nothing has happened yet.
Any other exception is handled by the second onException.
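The cause-chain check described above could be sketched as plain Java (the class and method names here are hypothetical; in the actual predicate this logic would live inside hasSFTPErrorPredicate's matches(Exchange) method):

```java
public class SftpErrorCheck {
    // Walk the exception and its cause chain, looking for the
    // SFTP connect-failure text anywhere in a message.
    static boolean hasSftpError(Throwable t) {
        while (t != null) {
            String msg = t.getMessage();
            if (msg != null && msg.contains("Cannot connect to sftp:")) {
                return true;
            }
            t = t.getCause();
        }
        return false;
    }

    public static void main(String[] args) {
        Throwable nested = new RuntimeException("wrapper",
                new RuntimeException("Cannot connect to sftp://host:22"));
        System.out.println(hasSftpError(nested));                    // true
        System.out.println(hasSftpError(new RuntimeException("x"))); // false
    }
}
```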

FaspManager embedded client stops prematurely while sending multiple files

I am using FaspManager as an embedded client in my Java application. My program works fine when sending just a single file. When I try to send multiple files (each with its own session and jobId), they start well and progress for some time. However, after several minutes, when one or two of the transfers complete, the rest of the transfers stop without completing.
In the aspera log I can see below messages:
2019-02-11 20:48:22.985 INFO 11120 --- [il.SelectThread] c.c.e.t.aspera.FaspTransferListener : Client session: 149aaa9b-d632-43e4-9653-fbbf768c69b5 | PROGRESS | Rate: 353.6 Kb/s | Target rate: 1.0 Gb/s
2019-02-11 20:48:23.024 INFO 11120 --- [il.SelectThread] com.asperasoft.faspmanager.Session : 149aaa9b-d632-43e4-9653-fbbf768c69b5 - cancel sent
I have not been able to find out who/how a cancel request has been sent. I have tried searching in Google for possible cause but have not been able to resolve it yet. So, I will really appreciate any help on this.
Thank you,
Sourav
The "cancel sent" message in Session is logged if the user explicitly calls FaspManager#cancelTransfer(String sessionId) or FaspManager#stop(), or if an error occurs while reading an input stream in FileTransferSession#addSource(StreamReader, String).
I'd guess you're calling stop on the FaspManager after the first session finishes, but I'd need a more complete log, or a snippet of your code to see.

Camel JPA component route is not executing fully

I have created a simple route using JPA component
fromF("jpa:%s?consumer.namedQuery=step1&delay=5s&consumeDelete=false&consumeLockEntity=false", Event.class.getName())
    .log("Query Fired")
    .process(exchange -> System.out.println(exchange.getIn().getBody()))
    .end();
in console I can see the query being fired
Hibernate: select event0_.eventId as eventId1_3_, event0_.event_desc as event_desc2_3_, event0_.event_name as event_name3_3_, event0_.event_type as event_type4_3_, event0_.valid_from_date as valid_from_date5_3_, event0_.insight as insight6_3_, event0_.is_processed as is_processed7_3_, event0_.severity as severity8_3_, event0_.source_system_name as source_system_name9_3_, event0_.valid_to_date as valid_to_date10_3_ from OV90PLFM.event event0_ where event0_.is_processed=0
but after this I cannot see the log being printed, and the processor is not executed either.
After the delay the query keeps firing, but there is no exception and the route never completes its processing; the log is not printed and the processor is not called. I have changed the log level and there is still no exception.
I just want my route to complete its execution so I can do something in the processor. The same query returns all the rows when run directly against the database.
Please suggest what is going wrong.

Camel errorHandler / deadLetterChannel REST response

I have a Camel rest endpoint (Jetty) which validates and processes incoming requests. Besides specific Exception handlers (onException) it uses a DLQ error handler (errorHandler(deadLetterChannel...)) which is setup to retry 3 times - if unsuccessful the message is moved to the DLQ.
My question is, how do I still return a user friendly error message back to the client if an unexpected Exception occurs rather than the full Exception body? Is there some config I'm missing on the errorHandler?
I've tried to find some examples in the Camel unit tests (DeadLetterChannelHandledExampleTest) and in Camel in Action, 2nd edition (Chapter 11), but neither seemed to have a specific example for this scenario.
Code is:
from(ROUTE_URI)
    .errorHandler(deadLetterChannel("{{activemq.webhook.dlq.queue}}")
        .onPrepareFailure(new FailureProcessor())
        .maximumRedeliveries(3)
        .redeliveryDelay(1000))
    .bean(ParcelProcessor.class, "process");
Thank you for your help!
Use a second route as the DLQ, e.g. direct:dead, and in it send the message first to the real DLQ, then do the message transformation afterwards to return a friendly response:
errorHandler(deadLetterChannel("direct:dead"));

from("direct:dead")
    .to("{{activemq.webhook.dlq.queue}}")
    .transform(constant("Sorry, something was wrong"));

Camel routing issue

I have created some routes. Following is the code which is having issues.
Following is the expected behavior:
The exchange first gets processed at the hourlyFeedParts queue and is then passed to dailyProcessor.
In dailyProcessor, the property currHour is checked for whether it is 23. If not, the exchange just passes through.
If currHour == 23, the code inside that branch is processed. This branch in turn works as follows:
If the property feedsLeft is not zero, all the code inside the currHour == 23 choice is executed. This is fine.
If the property feedsLeft is zero, the code inside that branch is processed. It looks for any further messages; if there are any, they are sent to hourlyFeedParts. Here comes the issue: if there is any message to be processed, the code beyond to("direct:hourlyFeedParts") is not executed. If nothing is returned, though, the code works fine.
I guess the issue could be that the route ends at the to(). So what would be the alternative?
from("direct:dailyProcessor")
    .choice()
        .when(simple("${property.currHour} == 23"))
            .choice()
                .when(simple("${property.feedsLeft} == 0"))
                    .split(beanExpression(APNProcessor.class, "recheckFeeds"))
                        .to("direct:hourlyFeedParts")
                .endChoice()
            .end()
            .split(beanExpression(new S3FileKeyProcessorFactory(), "setAPNS3Header"))
                .parallelProcessing()
                .id("APN Daily PreProcessor / S3 key generator ")
                .log("Uploading file ${file:name}")
                .to("{{apn.destination}}")
                .id("APN Daily S3 > uploader")
                .log("Uploaded file ${file:name} to S3")
    .endChoice()
    .end()
I believe the issue is the nested choice().
Try extracting the inner choice into a separate route, e.g.:
from("direct:dailyProcessor")
    .choice()
        .when(simple("${property.currHour} == 23"))
            .to("direct:inner")
            .split(beanExpression(new S3FileKeyProcessorFactory(), "setAPNS3Header"))
                .parallelProcessing()
                .id("APN Daily PreProcessor / S3 key generator ")
                .log("Uploading file ${file:name}")
                .to("{{apn.destination}}")
                .id("APN Daily S3 > uploader")
                .log("Uploaded file ${file:name} to S3");

from("direct:inner")
    .choice()
        .when(simple("${property.feedsLeft} == 0"))
            .split(beanExpression(APNProcessor.class, "recheckFeeds"))
                .to("direct:hourlyFeedParts");
I haven't tested it, but I guess you get the point.
