I have created some routes. Below is the code that is having issues.
The expected behavior is:
The exchange first gets processed by the hourlyFeedParts queue and is then passed to dailyProcessor.
In dailyProcessor the property currHour is checked to see whether it is 23. If not, the exchange just passes on.
If currHour == 23, the code inside that branch is processed. This part in turn has the following functionality:
If the property feedsLeft is not zero, all the code inside the currHour == 23 choice is executed. This is fine.
If the property feedsLeft is zero, the code inside that branch is processed. It looks for any further messages; if there are any, they are sent to hourlyFeedParts. Here comes the issue: if there is any message to be processed, the code beyond to("direct:hourlyFeedParts") is not executed. If nothing is returned, though, the code works fine.
I guess the issue could be that the route ends at the to(). So what is the alternative?
from("direct:dailyProcessor")
.choice()
.when(simple("${property.currHour} == 23"))
.choice()
.when(simple("${property.feedsLeft} == 0"))
.split(beanExpression(APNProcessor.class, "recheckFeeds"))
.to("direct:hourlyFeedParts")
.endChoice()
.end()
.split(beanExpression(new S3FileKeyProcessorFactory(), "setAPNS3Header"))
.parallelProcessing()
.id("APN Daily PreProcessor / S3 key generator ")
.log("Uploading file ${file:name}")
.to("{{apn.destination}}")
.id("APN Daily S3 > uploader")
.log("Uploaded file ${file:name} to S3")
.endChoice()
.end()
I believe that the issue is the nested choice().
Try extracting the inner choice to a separate route, e.g.:
from("direct:dailyProcessor")
.choice()
.when(simple("${property.currHour} == 23"))
.to("direct:inner")
.split(beanExpression(new S3FileKeyProcessorFactory(), "setAPNS3Header"))
.parallelProcessing()
.id("APN Daily PreProcessor / S3 key generator ")
.log("Uploading file ${file:name}")
.to("{{apn.destination}}")
.id("APN Daily S3 > uploader")
.log("Uploaded file ${file:name} to S3");
from("direct:inner")
.choice()
.when(simple("${property.feedsLeft} == 0"))
.split(beanExpression(APNProcessor.class, "recheckFeeds"))
.to("direct:hourlyFeedParts");
I haven't tested it, but I guess you get the point.
I'm trying to add error handling to my parallel processing:
...
.multicast(new GroupedMessageAggregationStrategy())
.parallelProcessing()
.to("direct:getAndSaveRoute1")
.to("direct:getAndSaveRoute2")
.end()
.split(body())
.choice()
.when(simple("${body.errorOcurred} == true"))
//TODO:: end route returning current body
.endChoice()
.otherwise()
.log(...)
.endChoice()
.end()
//after split, if no error occurred
.to("direct:nextRoute")
.end()
I can't seem to figure out, though, how to return/end the route (and pass back the current body as the REST response body) within the choice inside the split. end() and endRest() seem to cause issues...
It is also not clear how many end()s I need; adding an end() for the split causes an exception and makes Spring fail to boot.
For those in the future: I ended up making a bean to turn the list into a single message, and then doing a choice based on that.
Not very 'Camel', but it needed to be wrapped up.
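For reference, a rough sketch of that idea (the ErrorCollapser name and the "errorOccurred" header are illustrative; it assumes the multicast used GroupedMessageAggregationStrategy, so the aggregated body is a List of Messages):
import java.util.List;
import org.apache.camel.Message;

public class ErrorCollapser {

    // Collapse the grouped messages into a single body: return the first body
    // that reported an error, otherwise hand back the list unchanged.
    public Object collapse(List<Message> messages) {
        for (Message message : messages) {
            Boolean error = message.getHeader("errorOccurred", Boolean.class);
            if (Boolean.TRUE.equals(error)) {
                return message.getBody();
            }
        }
        return messages;
    }
}
The route then calls .bean(ErrorCollapser.class, "collapse") after the multicast and does a single choice() on the collapsed body instead of a choice inside the split.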
My Camel route tries to pick up some files from SFTP, transfer them to a network location, and delete them from SFTP. If the SFTP server is unreachable after 3 attempts, I want the route to send an email warning the admin about the problem.
For this reason my SFTP address has the following parameters:
maximumReconnectAttempts=2&throwExceptionOnConnectFailed=true&consumer.bridgeErrorHandler=true
In case the network location is not available, I want the route to notify the admin and not delete the files from SFTP.
For this reason I have set .handled(false) in onException.
However, when connecting to SFTP fails, aggregation also fails and no emails are sent. I have made a minimal example below:
// configure
onException(Throwable.class)
.retryAttemptedLogLevel(LoggingLevel.WARN)
.redeliveryDelay(1000)
.handled(false)
.log(LoggingLevel.ERROR, LOG, "XXX - Error moving files")
.to(AGGREGATEROUTE)
.end();
from(downloadFrom)
.to(to)
.log(LoggingLevel.INFO, LOG, "XXX - Moving file OK")
.to(AGGREGATEROUTE);
from(AGGREGATEROUTE)
.log(LoggingLevel.INFO, LOG, "XXX - Starting aggregation.")
.aggregate(constant(true), new GroupedExchangeAggregationStrategy())
.completionFromBatchConsumer()
.completionTimeout(10000)
.log(LoggingLevel.INFO, LOG, "XXX - Aggregation completed, sending mail.");
In the logs I see:
16:02| ERROR | CamelLogger.java 156 | XXX - Error moving files
Then the logs for the Exception occurring during connection.
And then this:
16:02| ERROR | FatalFallbackErrorHandler.java 174 | Exception occurred while trying to handle previously thrown exception on exchangeId: ID-LP0641-1552662095664-0-2 using: [Pipeline[[Channel[Log(proefjes.camel_cursus.routebuilders.MoveWithPickupExceptions)[XXX - Error moving files]], Channel[sendTo(direct://aggregate)]]]].
16:02| ERROR | FatalFallbackErrorHandler.java 172 | \--> New exception on exchangeId: ID-LP0641-1552662095664-0-2
org.apache.camel.component.file.GenericFileOperationFailedException: Cannot connect to sftp://user#mycompany.nl:22
at org.apache.camel.component.file.remote.SftpOperations.connect(SftpOperations.java:149)
I do not see "XXX - Starting aggregation.", which I would expect to see in the log. Does some kind of error occur before aggregation? A breakpoint on aggregate(*, *) is never reached.
First, I just want to clarify something. You write "In case the network location is not available, I want the route to notify the admin and not delete the files from SFTP", but shouldn't that be obvious anyhow? I mean, if the network location is not available, wouldn't deleting the files from SFTP be impossible anyway?
It's a little confusing that your exception handler is also routing .to(AGGREGATEROUTE). Given that you want to email an admin, shouldn't that be in the exception handler, not in the happy path? Why, and how, would you "aggregate" a connection failure?
Finally, and here I think is a real problem with your implementation, you may have misunderstood what handled(false) does. Setting this to false means routing stops and the exception is propagated back to the caller. I'm not sure what the .to(AGGREGATEROUTE) would do in this case, but I'm not surprised it's not being called.
I suggest trying a few things. I don't have your code so I'm not sure which will work best. These are all related and any might work:
Change handled(false) to handled(true).
Replace handled with continued(true).
Use a Dead Letter Channel.
Reference:
Handle and Continue Exceptions
Dead Letter Channel
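For the Dead Letter Channel option, a minimal sketch could look like this (the "direct:deadLetter" endpoint name is illustrative; LOG and AGGREGATEROUTE are the constants from the question), placed in configure() in place of the onException block:
errorHandler(deadLetterChannel("direct:deadLetter")
    .maximumRedeliveries(2)
    .redeliveryDelay(1000));

from("direct:deadLetter")
    .log(LoggingLevel.ERROR, LOG, "XXX - Error moving files")
    .to(AGGREGATEROUTE);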
Since error handling is different depending on which endpoint causes the error, I have solved this by having two different versions of onException:
// configure exception on sftp end
onException(Throwable.class)
.maximumRedeliveries(2)
.retryAttemptedLogLevel(LoggingLevel.WARN)
.redeliveryDelay(1000)
.onWhen(new hasSFTPErrorPredicate())
// .continued(true) // tries to connect once, mails and continues to aggregation with empty exchange
//.handled(false) // tries to connect twice but does not reach mail
.handled(true) // tries to connect once, does reach mail
// handled not defined: tries to connect twice but does not reach mail
.log(LoggingLevel.INFO, LOG, "XXX - SFTP exception")
.to(MAIL_ROUTE)
.end();
// exception anywhere else
onException(Throwable.class)
.maximumRedeliveries(2)
.retryAttemptedLogLevel(LoggingLevel.WARN)
.redeliveryDelay(1000)
.log(LoggingLevel.ERROR, LOG, "XXX - Error moving file ${file:name}: ${exception}")
.to(AGGREGATEROUTE)
.handled(false)
.end();
Exceptions occurring at the SFTP end are handled by the first onException, because there the hasSFTPErrorPredicate returns true. All this predicate does is check whether the exception or any of its causes has "Cannot connect to sftp:" in the message.
No rollback is required in this case because nothing has happened yet.
Any other exception is handled by the second onException.
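A rough sketch of such a predicate (shown here with a conventionally capitalised class name; the check is exactly the message search described above):
import org.apache.camel.Exchange;
import org.apache.camel.Predicate;

public class HasSFTPErrorPredicate implements Predicate {

    // Walk the caught exception and its cause chain; match when any message
    // mentions the SFTP connect failure.
    @Override
    public boolean matches(Exchange exchange) {
        Throwable t = exchange.getProperty(Exchange.EXCEPTION_CAUGHT, Throwable.class);
        while (t != null) {
            String message = t.getMessage();
            if (message != null && message.contains("Cannot connect to sftp:")) {
                return true;
            }
            t = t.getCause();
        }
        return false;
    }
}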
I am trying to do an ETL job (read, convert, and store an Excel file).
This route works fine:
from("file:src/data?move=processed&moveFailed=error&idempotent=true")
.bean(ExcelTransformer.class,"process")
.to("jpa:cie.service.receiver.DatiTelefonici?entityType=java.util.ArrayList");
But I need a customization in case of an exception: I need to move the file to another folder, so I applied:
onException(Throwable.class).maximumRedeliveries(0).to("file:src/data?move=error");
But the file component can't move the file because it is locked by the first file component instance.
Then I tried to use doTry/doCatch, but it doesn't work (probably the move operation inside the catch is unaware of the correct file name?):
from("file:src/data?noop=true")
.doTry()
.bean(ExcelTransformer.class,"process")
.to("jpa:cie.service.receiver.DatiTelefonici?entityType=java.util.ArrayList")
.to("file:src/data?move=processed")
.doCatch(Throwable.class)
.to("file:src/data?move=error")
.end();
Thanks.
After many comments my current code looks like:
from("file:src/data?noop=false&delete=true")
.doTry()
.bean(ExcelTransformer.class,"process")
.to("jpa:cie.service.receiver.DatiTelefonici?entityType=java.util.ArrayList")
.to("file:src/data/processed")
.doCatch(Throwable.class)
.to("file:src/data/error")
/*
.doFinally()
.to("file:src/data:delete=true")
*/
.end();
It correctly moves the file into the processed and error folders, but the file also remains in the main folder and keeps being processed again and again.
If I understood your question well, you need to remove idempotent=true from the parameters; then it should work:
from("file:src/data?move=processed&moveFailed=error")
.bean(ExcelTransformer.class,"process")
.to("jpa:cie.service.receiver.DatiTelefonici?entityType=java.util.ArrayList");
The previous route moves the file to the processed folder if the routing was successful; otherwise (if any exception happens) it moves the file to the error folder. The filename won't be changed.
Another solution, with doTry/doCatch:
from("file://src/data?delete=true")
.doTry()
.bean(ExcelTransformer.class,"process")
.to("jpa:cie.service.receiver.DatiTelefonici?entityType=java.util.ArrayList")
.to("file://src/data/processed")
.doCatch(Throwable.class)
.to("file://src/data/error")
.end();
I've got a fairly simple-looking route:
sftp://hostname:22//incoming/folder/location/?username=username&password=xxxxx
&localWorkDirectory=/tmp&readLock=changed&readLockCheckInterval=2000
&move=processed/$simple{date:now:yyyy}/$simple{date:now:MM}/$simple{date:now:dd}${file:name}
&consumer.delay=450000&stepwise=false&streamDownload=true&disconnect=true
I also have an onException clause
onException(ValidationException.class)
.handled(true)
.logStackTrace(true)
.filter(header("VALIDATION_ERROR").isEqualTo(true))
.choice()
.when(header("CamelFileName").contains("Param1"))
.to("sftp://hostname:22//One/error/folder?password=xxxxxx&username=username")
.when(header("CamelFileName").contains("Param2"))
.to("sftp://hostname:22//Two/error/folder?password=xxxxxx&username=username")
.endChoice();
When I have a single file, the route seems to work as expected. When there is more than one file and an exception occurs, I get many different exceptions, like:
org.apache.camel.component.file.GenericFileOperationFailedException: Cannot list directory: incoming/folder/location
Caused by: java.lang.IndexOutOfBoundsException
I tried using all the attributes mentioned in the route, viz. streamDownload, stepwise, readLock, localWorkDirectory et al. However, error handling with multiple files is not working. I see the first file getting processed; however, it doesn't move to the processed folder once the exception occurs, and then incoming/folder/location becomes non-listable. I tried using continued(true) as well instead of handled(true).
The problem was with multiple files being handled in the same exchange. On exception, the route was trying to FTP the error file back to the same server. The solution was to split the body into multiple exchanges so that each file gets its own exchange and is processed separately.
from("sftp://hostname:22//incoming/folder/location/?username=username&password=xxxxx"
    + "&localWorkDirectory=/tmp&readLock=changed&disconnect=true&stepwise=false"
    + "&move=processed/$simple{date:now:yyyy}/$simple{date:now:MM}/$simple{date:now:dd}${file:name}"
    + "&consumer.delay=450000")
    .split(body()).processRef("incomingProcessor").end();
After transforming a file with XSLT I have to append a file downloaded from FTP. So I did the following:
from("direct:adobe_productList_incremental")
.id("routeADOBESPtransformPI_productList")
.log(LoggingLevel.INFO, "---------Starting file: ${body}")
.convertBodyTo(InputStream.class)
.to("xslt:classpath:" + xsltTransformationProductList)
.log(LoggingLevel.INFO, "---------Transformed file: ${body}")
.pollEnrich(ftpType+"://"+ftpUsername+"@"+ ftpUrl +":" + ftpPort + ftpPath_incrementalComplete +"?password="+ftpPassword+"&fileName="+ftpFilename_incrementalComplete+"&passiveMode=true&binary=true&delete=false",10000)
.log(LoggingLevel.INFO, "---------After poll enrich: ${body}")
.to("file:{{file.root}}{{file.outbox.products_list_incremental}}?fileName={{file.outbox.products_list_incremental.name}}.final");
Until the pollEnrich everything works (the transformation is done correctly), but after the pollEnrich the current body is overridden by the FTP content (and not appended as it should be).
Any help?
No, it works as designed.
By default the content will be overridden. If you need to append/merge or whatever, you need to use a custom aggregation strategy, and implement code logic that does this.
See the Camel docs at: http://camel.apache.org/content-enricher.html about the ExampleAggregationStrategy.
The Camel docs say:
The aggregation strategy is optional. If you do not provide it
Camel will by default just use the body obtained from the resource.
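For illustration, a minimal sketch of an appending strategy (the class name is made up; the import shown is the Camel 2.x location of AggregationStrategy, in Camel 3.x it is org.apache.camel.AggregationStrategy):
import org.apache.camel.Exchange;
import org.apache.camel.processor.aggregate.AggregationStrategy;

public class AppendBodyStrategy implements AggregationStrategy {

    // Append the polled FTP content to the already transformed body instead of
    // letting it replace the body.
    @Override
    public Exchange aggregate(Exchange original, Exchange resource) {
        if (resource == null) {
            return original; // pollEnrich timed out: keep the transformed body
        }
        String transformed = original.getIn().getBody(String.class);
        String ftpContent = resource.getIn().getBody(String.class);
        original.getIn().setBody(transformed + ftpContent);
        return original;
    }
}
It is passed as the extra argument to pollEnrich, e.g. .pollEnrich(ftpUri, 10000, new AppendBodyStrategy()).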