I am using FaspManager as an embedded client in my Java application. My program works fine when I send just a single file. When I send multiple files (each with its own session and jobId), they start well and progress for some time. However, after several minutes, when one or two of the transfers complete, the rest of the transfers stop without completing.
In the Aspera log I can see the following messages:
2019-02-11 20:48:22.985 INFO 11120 --- [il.SelectThread] c.c.e.t.aspera.FaspTransferListener : Client session: 149aaa9b-d632-43e4-9653-fbbf768c69b5 | PROGRESS | Rate: 353.6 Kb/s | Target rate: 1.0 Gb/s
2019-02-11 20:48:23.024 INFO 11120 --- [il.SelectThread] com.asperasoft.faspmanager.Session : 149aaa9b-d632-43e4-9653-fbbf768c69b5 - cancel sent
I have not been able to find out who sent the cancel request, or how. I have searched Google for possible causes but have not been able to resolve this yet, so I would really appreciate any help.
Thank you,
Sourav
The cancel sent message in Session is logged if the user explicitly calls FaspManager#cancelTransfer(String sessionId) or FaspManager#stop(), or if an error occurs while reading an input stream in FileTransferSession#addSource(StreamReader, String).
I'd guess you're calling stop() on the FaspManager after the first session finishes, but I'd need a more complete log or a snippet of your code to tell.
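If that is what's happening, one way to avoid it is to defer the stop until every session has reported completion. A rough sketch (the sessionStarted/sessionFinished callback names are illustrative, not exact SDK signatures; invoke them from whatever events your FaspTransferListener already receives):
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

import com.asperasoft.faspmanager.FaspManager;

// Sketch: only call FaspManager#stop() once every session you started has
// finished. The callback names are placeholders, not SDK methods.
public class SessionTracker {
    private final Set<String> activeSessions = ConcurrentHashMap.newKeySet();
    private final FaspManager faspManager;

    public SessionTracker(FaspManager faspManager) {
        this.faspManager = faspManager;
    }

    public void sessionStarted(String sessionId) {
        activeSessions.add(sessionId);
    }

    public void sessionFinished(String sessionId) {
        activeSessions.remove(sessionId);
        if (activeSessions.isEmpty()) {
            faspManager.stop(); // safe only after the last transfer completes
        }
    }
}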
Related
I have a route for file processing, something like below:
fromF("file:in?recursive=false&noop=true&maxMessagePerPoll=10&readLock=idempotent&idempotentRepository=#fileRepo#readLockRemoveOnCommit=readLockRemoveOnRollback=true&delete=true&moveFailed=failedDir")
.onCompletion()
.process("CompletionProcess")
.end()
.threads(5)
.processe("fileProcess")
.nd();
Currently, when it receives a shutdown signal it waits 45 seconds, and if processing is not completed within 45 seconds it forces shutdown, at which point it calls onCompletion (CompletionProcess). I have code that updates the DB in CompletionProcess, and because of that it throws a ConfigurationPropertiesBindException saying ....ApplicationContext has been closed already, and the file gets moved to failedDir.
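For context, the 45-second window comes from Camel's graceful shutdown strategy, which is configured roughly like this (a sketch; camelContext is assumed to be available, e.g. injected in Spring Boot):
// Sketch: the graceful-shutdown window after which Camel forces shutdown.
camelContext.getShutdownStrategy().setTimeUnit(java.util.concurrent.TimeUnit.SECONDS);
camelContext.getShutdownStrategy().setTimeout(45);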
My goal is to stop the route after 45 seconds without any rollback, no onCompletion call, and the file left as it is in the directory.
I am pretty new to Camel and am trying to understand the rollback strategy/shutdown strategy, but I can't find a solution for the above yet. Please guide me.
Thanks!
My Camel route tries to pick up some files from SFTP, transfer them to a network location, and delete them from SFTP. If the SFTP server is unreachable after 3 attempts, I want the route to send an email warning the admin about the problem.
For this reason my sftp address has the following parameters:
maximumReconnectAttempts=2&throwExceptionOnConnectFailed=true&consumer.bridgeErrorHandler=true
In case the network location is not available, I want the route to notify the admin and not delete the files from SFTP.
For this reason I have set .handled(false) in onException.
However, when connecting to SFTP fails, the aggregation also fails and no emails are sent. I have made a minimalist example below:
//configure
onException(Throwable.class)
.retryAttemptedLogLevel(LoggingLevel.WARN)
.redeliveryDelay(1000)
.handled(false)
.log(LoggingLevel.ERROR, LOG, "XXX - Error moving files")
.to(AGGREGATEROUTE)
.end();
from(downloadFrom)
.to(to)
.log(LoggingLevel.INFO, LOG, "XXX - Moving file OK")
.to(AGGREGATEROUTE);
from(AGGREGATEROUTE)
.log(LoggingLevel.INFO, LOG, "XXX - Starting aggregation.")
.aggregate(constant(true), new GroupedExchangeAggregationStrategy())
.completionFromBatchConsumer()
.completionTimeout(10000)
.log(LoggingLevel.INFO, LOG, "XXX - Aggregation completed, sending mail.");
In the logs I see:
16:02| ERROR | CamelLogger.java 156 | XXX - Error moving files
Then the logs for the Exception occurring during connection.
And then this:
16:02| ERROR | FatalFallbackErrorHandler.java 174 | Exception occurred while trying to handle previously thrown exception on exchangeId: ID-LP0641-1552662095664-0-2 using: [Pipeline[[Channel[Log(proefjes.camel_cursus.routebuilders.MoveWithPickupExceptions)[XXX - Error moving files]], Channel[sendTo(direct://aggregate)]]]].
16:02| ERROR | FatalFallbackErrorHandler.java 172 | \--> New exception on exchangeId: ID-LP0641-1552662095664-0-2
org.apache.camel.component.file.GenericFileOperationFailedException: Cannot connect to sftp://user#mycompany.nl:22
at org.apache.camel.component.file.remote.SftpOperations.connect(SftpOperations.java:149)
I do not see "XXX - Starting aggregation.", which I would expect to see in the log. Does some kind of error occur before aggregation? The breakpoint on aggregate(*, *) is never reached.
First, I just want to clarify something. You write "In case the network location is not available, i want the route to notify the admin and not delete the files from sftp", but shouldn't that be obvious anyhow? I mean, if the network location is not available, wouldn't deleting the files from sftp be impossible?
It's a little confusing that your exception handler is also routing .to(AGGREGATEROUTE). Given that you want to email an admin, shouldn't that be in the exception handler, not in the happy path? Why would you and how would you "aggregate" a connection failure?
Finally, and here I think is the real problem with your implementation, you may have misunderstood what handled(false) does. Setting this to false means routing should stop and the exception should be propagated back to the caller. I'm not sure what the .to(AGGREGATEROUTE) after it would do in this case, but I'm not surprised it's not being called.
I suggest trying a few things. I don't have your code, so I'm not sure which will work best. These are all related and any might work; a sketch of the first option follows the list:
Change handled(false) to handled(true).
Replace handled with continued(true).
Use a Dead Letter Channel.
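For illustration, here is the first option applied to the onException block from your example (a sketch; .continued(true) would slot into the same place):
onException(Throwable.class)
    .maximumRedeliveries(2)
    .redeliveryDelay(1000)
    // handled(true): the exception is considered dealt with, so it is not
    // propagated to the caller and the exchange continues along this
    // handler's route, i.e. on to the aggregator. Swap in .continued(true)
    // to instead resume the original route at the point of failure.
    .handled(true)
    .log(LoggingLevel.ERROR, LOG, "XXX - Error moving files")
    .to(AGGREGATEROUTE)
    .end();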
Reference:
Handle and Continue Exceptions
Dead Letter Channel
Since error handling is different depending on which endpoint causes the error, I have solved this by having two different versions of onException:
//configure exception on sftp end
onException(Throwable.class)
.maximumRedeliveries(2)
.retryAttemptedLogLevel(LoggingLevel.WARN)
.redeliveryDelay(1000)
.onWhen(new hasSFTPErrorPredicate())
// .continued(true) // tries to connect once, mails and continues to aggregation with empty exchange
//.handled(false) // tries to connect twice but does not reach mail
.handled(true) // tries to connect once, does reach mail
// handled not defined: tries to connect twice but does not reach mail
.log(LoggingLevel.INFO, LOG, "XXX - SFTP exception")
.to(MAIL_ROUTE)
.end();
// exception anywhere else
onException(Throwable.class)
.maximumRedeliveries(2)
.retryAttemptedLogLevel(LoggingLevel.WARN)
.redeliveryDelay(1000)
.log(LoggingLevel.ERROR, LOG, "XXX - Error moving file ${file:name}: ${exception}")
.to(AGGREGATEROUTE)
.handled(false)
.end();
Exceptions occurring at the SFTP end are handled in the first onException, because there the hasSFTPErrorPredicate returns 'true'. All this predicate does is check whether any exception or its cause has "Cannot connect to sftp:" in the message; a sketch follows below.
No rollback is required in this case because nothing has happened yet.
Any other exception is handled by the second onException.
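For completeness, here is roughly what that predicate looks like (my real class may differ in detail):
import org.apache.camel.Exchange;
import org.apache.camel.Predicate;

// Sketch: walk the caught exception's cause chain looking for the
// sftp connect-failure message.
public class hasSFTPErrorPredicate implements Predicate {
    @Override
    public boolean matches(Exchange exchange) {
        Throwable t = exchange.getProperty(Exchange.EXCEPTION_CAUGHT, Throwable.class);
        while (t != null) {
            if (t.getMessage() != null && t.getMessage().contains("Cannot connect to sftp:")) {
                return true;
            }
            t = t.getCause();
        }
        return false;
    }
}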
When I try to synchronize my CalDAV server implementation with Thunderbird 45.4.0 and Lightning 4.7.4 (one particular calendar collection), it doesn't show any data or events in the calendar, though the last call of the sequence provided the data.
In the Thunderbird error log I can see one error:
Timestamp: 07.11.16, 14:21:12
Error: [calCachedCalendar] replay action failed: null,
uri=http://127.0.0.1:8003/sap/sports/webdav/appsvc/webdav/services/
server.xsjs/cal/_D043133/, result=2147500037, op=[xpconnect wrapped
calIOperation]
Source file:
file:///Users/d043133/Library/Thunderbird/Profiles/hfbvuk9f.default/
extensions/%7Be2fda1a4-762b-4020-b5ad-a41df1933103%7D/calendar-
js/calCachedCalendar.js
Line: 327
The call sequence is as follows (detailed content via gist links):
Propfind Request - Response
Options Request - Response
Propfind Request - Response
Report Request - Response - Response Raw
The synchronization with other clients like macOS Calendar and iOS Calendar works in principle and shows the data. Does anyone have a clue what is going wrong here?
Not sure whether that is the cause but I can see two incorrect things:
a) Your <href/> property has trailing spaces:
<d:href>/sap/sports/webdav/appsvc/webdav/services/server.xsjs/cal/_D043133/EVENT%3A070768ba5dd78ff15458f1985cdaabb1.ics
</d:href>
b) Your ORGANIZER property is not a valid URI:
ORGANIZER:_D043133
I was able to find the cause of the above issue by debugging Thunderbird as proposed by Philipp. The Report response has HTTP status code 200, but as it is a multistatus response, Thunderbird/Lightning expects status code 207 ;-)
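For anyone hitting the same thing, the fix is simply to send the multistatus body with status 207 instead of 200; illustrated here with a plain Java servlet, since the service in question is an XSJS script:
import java.io.IOException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Illustration only (the server in the question is an XSJS service, not a
// servlet): a WebDAV REPORT multistatus body must go out with status 207.
public class ReportServlet extends HttpServlet {
    @Override
    protected void service(HttpServletRequest req, HttpServletResponse resp)
            throws IOException {
        resp.setStatus(207); // Multi-Status (RFC 4918), not 200
        resp.setContentType("application/xml; charset=utf-8");
        resp.getWriter().write("<d:multistatus xmlns:d=\"DAV:\">...</d:multistatus>");
    }
}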
Thanks for the hints!
I'm using Camel within a Spring Boot application and integrating with RabbitMQ, but am encountering strange behaviour.
My app has RESTful endpoints which convert the HTTP request to a RabbitMQ message and publish this to a predefined exchange. There is a separate consumer app which listens to a queue and processes the messages.
I have deliberately entered an incorrect RabbitMQ exchange name (invalidxchangename) to check that the application will fail if the exchange does not exist; however, the Camel context starts without error, and when I send in a first request it does not report any error. This message gets lost as there is no matching RabbitMQ exchange. When I submit a second request I receive the following exception, which I would have expected on route startup:
com.rabbitmq.client.AlreadyClosedException: channel is already closed due to channel error; protocol method: #method<channel.close>(reply-code=404, reply-text=NOT_FOUND - no exchange 'invalidxchangename' in vhost
EDIT:
I've tried a more simple example to show the issue in Camel.
I've created a simple route as follows:
from("file:in?fileName=in.txt").log(LoggingLevel.DEBUG, "in here!").to("rabbitmq://localhost:5762/invalidexchange?declare=false");
where there is an existing RabbitMQ exchange called validexchange (so I have deliberately made a typo in the RabbitMQ uri). I would expect the camel route to fail at startup since the exchange doesn't exist, or even the first time it tries to process a new in.txt file.
What I am actually seeing in the logs is that on start up it reports no error and only on the 2nd invocation of the route does it report an error.
2015-03-11 16:17:04.356 INFO 9756 : ID-SBMELW7W-06220-59960-1426051020468-0-2 >>> (route2) from(file://in?fileName=in.txt) --> log[in here!] <<< Pattern:InOnly, Headers:...
2015-03-11 16:17:04.360 INFO 9756 : ID-SBMELW7W-06220-59960-1426051020468-0-2 >>> (route2) log[in here!] --> rabbitmq://localhost:5762/customerchannel.exchang?declare=false <<< Pattern:InOnly, Headers:...
2015-03-11 16:17:45.073 INFO 9756 : ID-SBMELW7W-06220-59960-1426051020468-0-4 >>> (route2) from(file://in?fileName=in.txt) --> log[in here!] <<< Pattern:InOnly, Headers: ...
2015-03-11 16:17:45.079 INFO 9756 : ID-SBMELW7W-06220-59960-1426051020468-0-4 >>> (route2) log[in here!] --> rabbitmq://localhost:5762/customerchannel.exchang?declare=false <<< Pattern:InOnly, Headers:...
2015-03-11 16:17:45.092 ERROR 9756 : Failed delivery for (MessageId: ID-SBMELW7W-06220-59960-1426051020468-0-3 on ExchangeId: ID-SBMELW7W-06220-59960-1426051020468-0-4). Exhausted after delivery attempt: 1 caught: com.rabbitmq.client.AlreadyClosedException: channel is already closed due to channel error; protocol method: #method<channel.close>(reply-code=404, reply-text=NOT_FOUND - no exchange 'customerchannel.exchang' in vhost '/', class-id=60, method-id=40)
It looks like the first request is causing an error which closes the connection and logs the reason, and when you try to use the channel the second time it's returning an AlreadyClosedException with the message that caused the channel to close in the first call.
You can test this by trying to publish the second message to a different exchange name in the same channel and checking which exchange is in the error. E.g. publish the second message to invalidxchangename2 and you should still see invalidxchangename as the exchange in the error.
To fix, you should handle the publish result when you publish and re-establish the connection if there's an error.
If you want to be sure that a message got delivered to a RabbitMQ queue, then you have to use publisher confirms: https://www.rabbitmq.com/confirms.html
Being able to publish a message doesn't mean that the message will reach a queue. You could go to a mailbox and leave a letter inside, but between the time you left the letter there and the time a postman picked it up, many things could have happened: the mailbox catching fire, and so on.
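With the plain RabbitMQ Java client, publisher confirms look roughly like this (a sketch; the host, exchange, and routing key are placeholders):
import java.nio.charset.StandardCharsets;

import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;

public class ConfirmedPublish {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost");
        try (Connection connection = factory.newConnection();
             Channel channel = connection.createChannel()) {
            channel.confirmSelect(); // enable publisher confirms on this channel
            channel.basicPublish("customerchannel.exchange", "routing.key", null,
                    "hello".getBytes(StandardCharsets.UTF_8));
            // Blocks until the broker confirms the message, or throws if it
            // is nacked or the timeout (in ms) elapses.
            channel.waitForConfirmsOrDie(5_000);
        }
    }
}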
I'm running into the following error when running an export-to-CSV job on App Engine using the new Google Cloud Storage client library (appengine-gcs-client). I have about ~30 MB of data I need to export on a nightly basis. Occasionally, I will need to rebuild the entire table. Today, I had to rebuild everything (~800 MB total) and I only actually pushed across ~300 MB of it. I checked the logs and found this exception:
/task/bigquery/ExportVisitListByDayTask
java.lang.RuntimeException: Unexpected response code 200 on non-final chunk: Request: PUT https://storage.googleapis.com/moose-sku-data/visit_day_1372392000000_1372898225040.csv?upload_id=AEnB2UrQ1cw0-Jbt7Kr-S4FD2fA3LkpYoUWrD3ZBkKdTjMq3ICGP4ajvDlo9V-PaKmdTym-zOKVrtVVTrFWp9np4Z7jrFbM-gQ
x-goog-api-version: 2
Content-Range: bytes 4718592-4980735/*
262144 bytes of content
Response: 200 with 0 bytes of content
ETag: "f87dbbaf3f7ac56c8b96088e4c1747f6"
x-goog-generation: 1372898591905000
x-goog-metageneration: 1
x-goog-hash: crc32c=72jksw==
x-goog-hash: md5=+H27rz96xWyLlgiOTBdH9g==
Vary: Origin
Date: Thu, 04 Jul 2013 00:43:17 GMT
Server: HTTP Upload Server Built on Jun 28 2013 13:27:54 (1372451274)
Content-Length: 0
Content-Type: text/html; charset=UTF-8
X-Google-Cache-Control: remote-fetch
Via: HTTP/1.1 GWA
at com.google.appengine.tools.cloudstorage.oauth.OauthRawGcsService.put(OauthRawGcsService.java:254)
at com.google.appengine.tools.cloudstorage.oauth.OauthRawGcsService.continueObjectCreation(OauthRawGcsService.java:206)
at com.google.appengine.tools.cloudstorage.GcsOutputChannelImpl$2.run(GcsOutputChannelImpl.java:147)
at com.google.appengine.tools.cloudstorage.GcsOutputChannelImpl$2.run(GcsOutputChannelImpl.java:144)
at com.google.appengine.tools.cloudstorage.RetryHelper.doRetry(RetryHelper.java:78)
at com.google.appengine.tools.cloudstorage.RetryHelper.runWithRetries(RetryHelper.java:123)
at com.google.appengine.tools.cloudstorage.GcsOutputChannelImpl.writeOut(GcsOutputChannelImpl.java:144)
at com.google.appengine.tools.cloudstorage.GcsOutputChannelImpl.waitForOutstandingWrites(GcsOutputChannelImpl.java:186)
at com.moose.task.bigquery.ExportVisitListByDayTask.doPost(ExportVisitListByDayTask.java:196)
The task is pretty straightforward, but I'm wondering if there is something wrong with the way I'm using waitForOutstandingWrites() or the way I'm serializing my outputChannel for the next task run. One thing to note is that each task is broken into daily groups, each outputting its own individual file. The day tasks are scheduled to run 10 minutes apart, concurrently, to push out all 60 days.
In the task, I create a PrintWriter like so:
OutputStream outputStream = Channels.newOutputStream( outputChannel );
PrintWriter printWriter = new PrintWriter( outputStream );
and then write data out to it 50 lines at a time and call the waitForOutstandingWrites() function to push everything over to GCS. When I'm coming up to the open-file limit (~22 seconds) I put the outputChannel into Memcache and then reschedule the task with the data iterator's cursor.
printWriter.print( outputString.toString() );
printWriter.flush();
outputChannel.waitForOutstandingWrites();
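The hand-off between task runs looks roughly like this (sketched; the memcache key and the dayKey variable are placeholder names, not my actual code):
// Sketch: GcsOutputChannel is Serializable, so it can be stashed in Memcache
// and picked up again by the rescheduled task.
// Uses com.google.appengine.api.memcache.{MemcacheService, MemcacheServiceFactory}.
MemcacheService memcache = MemcacheServiceFactory.getMemcacheService();
memcache.put("export-channel-" + dayKey, outputChannel);

// ...in the rescheduled task:
GcsOutputChannel resumedChannel =
        (GcsOutputChannel) memcache.get("export-channel-" + dayKey);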
This seems to be working most of the time, but I'm getting these errors, which are creating corrupted and incomplete files on GCS. Is there anything obvious I'm doing wrong in these calls? Can I only have one channel open to GCS at a time per application? Is there some other issue going on?
Appreciate any tips you could lend!
Thanks!
Evan
A 200 response indicates that the file has been finalized. If this occurs on an API other than close, the library throws an error, as this is not expected.
This is likely occurring due to the way you are rescheduling the task. It may be that when you reschedule the task, the task queue is duplicating the delivery of the task for some reason. (This can happen.) If there are no checks to prevent this, there could be two instances attempting to write to the same file at the same time; when one closes the file, the other sees an error. The net result is a corrupt file.
The simple solution is not to re-schedule the task. There is no time limit on how long a file can be held open with the GCS client. (Unlike the deprecated Files API.)
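If it helps, a sketch of the single-run approach (the bucket name follows your logs; the object name and data source are placeholders):
import java.io.IOException;
import java.io.PrintWriter;
import java.nio.channels.Channels;

import com.google.appengine.tools.cloudstorage.GcsFileOptions;
import com.google.appengine.tools.cloudstorage.GcsFilename;
import com.google.appengine.tools.cloudstorage.GcsOutputChannel;
import com.google.appengine.tools.cloudstorage.GcsService;
import com.google.appengine.tools.cloudstorage.GcsServiceFactory;

// Sketch: open the channel once, write everything, close once. No
// rescheduling, no Memcache hand-off.
public class SingleTaskExport {
    public static void export(Iterable<String> lines) throws IOException {
        GcsService gcsService = GcsServiceFactory.createGcsService();
        GcsFilename file = new GcsFilename("moose-sku-data", "visit_day.csv");
        GcsOutputChannel channel =
                gcsService.createOrReplace(file, GcsFileOptions.getDefaultInstance());
        try (PrintWriter writer = new PrintWriter(Channels.newOutputStream(channel))) {
            for (String line : lines) {
                writer.println(line);
            }
        } // closing the writer closes the channel and finalizes the object
    }
}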