Here's the problem I've been grappling with for a while... I'm using Camel (v2.10.2) to set up many file routes to move data across file systems, servers, and in/out of the organisation (B2B). There are data and signal files in their respective dirs, with some of the routes being short-lived while others run as services on different VMs/servers. These processes (routes) run under different unix 'functional' ids, but there is an attempt to make them belong to the same unix group(s) where possible...
Of course on unix there is always the potential for file/dir permission problems...and that is the issue I'm facing/trying to solve.
I use the DefaultErrorHandler and log success or failure for an exchange via a custom RoutePolicy, within onExchangeDone(...), checking Exchange.isFailed(). The signal file is either moved to the destination on success or moved to the .error dir on failure, with an alert written to a system-wide alert log (checked by Tivoli).
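For reference, the policy is roughly shaped like this (class name and the alert-logging call are simplified placeholders, not my exact code):

import org.apache.camel.Exchange;
import org.apache.camel.Route;
import org.apache.camel.impl.RoutePolicySupport;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class AlertingRoutePolicy extends RoutePolicySupport {
    private static final Logger ALERT = LoggerFactory.getLogger("alertLog");

    @Override
    public void onExchangeDone(Route route, Exchange exchange) {
        if (exchange.isFailed()) {
            // write to the system-wide alert log; the signal file goes to the .error dir
            ALERT.error("routing failure on " + route.getId(), exchange.getException());
        }
        // on success the signal file is moved to the destination
    }
}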
The file route is configured to propagate errors that occur while picking up files, etc., via consumer.bridgeErrorHandler=true.
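In route-builder terms the consumer looks roughly like this (URI simplified; the destination endpoint and validationProcessor are placeholders):

from("file:/FTROOT/fileTransfer/outbound/signal?moveFailed=.error&consumer.bridgeErrorHandler=true")
    .routeId("App_A.outboundReceipt")
    .process(validationProcessor)   // throws IllegalStateException when the data file is missing
    .to("file:/FTROOT/fileTransfer/outbound/destination");   // placeholder destination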
Basically, if I have any unix permission-related errors, then I want to stop (and maybe remove) the affected route, indicating clearly that this has happened and why - a permission issue is not easily solvable programmatically, so stop and alert is the only option.
So I'll illustrate a test case that causes an issue...
App_A creates some data files in ./data/. Then App_A creates the same number of signal files in ./signal/, but due to some 'data' related bug it also creates a signal file ./signal/acc_xyz.csv that doesn't have a corresponding data file.
Route starts to process ./signal/acc_xyz.csv and the 'validation process' finds that ./data/acc_xyz.csv doesn't exist and throws an exception to indicate this, hence stopping the exchange being processed further.
The File component is configured with moveFailed=.error to move the signal file to ./signal/.error/, but this dir is locked against the functional user id executing the Java process (don't worry about why), so internal Camel processing throws a GenericFileOperationFailedException whose cause is an underlying 'Permission denied' error.
Oh dear, the same signal file is then processed again, and again, and...
I have tried to get this 'secondary error' propagated to my code, but have failed, hence I can't stop the route.
How can I get this and other internal Camel errors propagated to my code/exception handler/whatever, rather than just seeing them logged and swallowed?
thanks in advance
OK, more detail from log4j... note the sequence of timestamps.
Camel DefaultErrorHandler:
2013-04-25 15:06:26,001 [Camel (camel-1) thread #0 - file:///FTROOT/fileTransfer/outbound/signal] ERROR (MarkerIgnoringBase.java:161) - Failed delivery for (MessageId: ID-rwld601-rw-discoverfinancial-com-60264-1366902384246-0-1 on ExchangeId: ID-rwld601-rw-discoverfinancial-com-60264-1366902384246-0-2). Exhausted after delivery attempt: 1 caught: java.lang.IllegalStateException: missingFile: route [App_A.outboundReceipt] has missing file at /FTROOT/fileTransfer/outbound/data/stuff.log
java.lang.IllegalStateException: missingFile: route [App_A.outboundReceipt] has missing file at /FTROOT/fileTransfer/outbound/data/stuff.log
at com.myco.mft.process.BaseFileRouteBuilder.checkFile(BaseFileRouteBuilder.java:934)
My alert logger via the RoutePolicy.onExchangeDone(...) - at this point the exchange has completed with a failure:
2013-04-25 15:06:26,011|Camel (camel-1) thread #0 - file:///FTROOT/fileTransfer/outbound/signal|exchange|App_A.outboundReceipt|signalFile=/FTROOT/fileTransfer/outbound/signal/stuff.log|there has been a routing failure|missingFile: route [App_A.outboundReceipt] has missing file at /FTROOT/fileTransfer/outbound/data/stuff.log
Camel endpoint post-processing - this is the stuff that Camel doesn't propagate to me:
2013-04-25 15:06:26,027 [Camel (camel-1) thread #0 - file:///FTROOT/fileTransfer/outbound/signal] WARN (GenericFileOnCompletion.java:149) - Rollback file strategy: org.apache.camel.component.file.strategy.GenericFileDeleteProcessStrategy#104e28b for file: GenericFile[/FTROOT/fileTransfer/outbound/signal/stuff.log]
2013-04-25 15:06:28,038 [Camel (camel-1) thread #0 - file:///FTROOT/fileTransfer/outbound/signal] WARN (MarkerIgnoringBase.java:136) - Caused by: [org.apache.camel.component.file.GenericFileOperationFailedException - Error renaming file from /FTROOT/fileTransfer/outbound/signal/stuff.log to /FTROOT/fileTransfer/outbound/signal/.error/stuff.log]
org.apache.camel.component.file.GenericFileOperationFailedException: Error renaming file from /FTROOT/fileTransfer/outbound/signal/stuff.log to /FTROOT/fileTransfer/outbound/signal/.error/stuff.log
at org.apache.camel.component.file.FileOperations.renameFile(FileOperations.java:72)
...
Caused by: java.io.FileNotFoundException: /FTROOT/fileTransfer/outbound/signal/stuff.log (Permission denied)
at java.io.FileInputStream.open(Native Method)
When loading OBJs into an MDLAsset using the MDLAsset(url:) initializer (to eventually get the model into SceneKit), the operation fails frequently and inconsistently on iOS 14. It works fine for the same files on previous iOS versions. I've also observed the bug on iPadOS, although perhaps less frequently. Not sure if it's relevant, but these OBJs are pulled from a server and stored locally; the bug occurs after the files have already been downloaded. Sometimes the same file will fail multiple times before randomly working, and vice versa.
The console output seems to indicate a failure to communicate with the ModelIO XPC service. I tried restarting my device, but the bug continues to occur. Console output:
connection to com.apple.ModelIO.AssetLoader was interrupted
AssetLoader.loadURL errorHandler: Error Domain=NSCocoaErrorDomain Code=4097 "connection to service on pid 0 named com.apple.ModelIO.AssetLoader" UserInfo={NSDebugDescription=connection to service on pid 0 named com.apple.ModelIO.AssetLoader}
Couldn’t communicate with a helper application.
connection to com.apple.ModelIO.AssetLoader was interrupted
Has anyone else run into this issue on iOS14?
Alternatively, are there any workarounds anyone has tried in the meantime? As far as I know, loading an OBJ (that is downloaded from a server) into SceneKit can only be done through ModelIO, short of writing an OBJ parser myself.
This seems to be fixed in 14.3.
2020-10-13 18:31:36.989282+0300 Studia3D Viewer[1452:348335] connection to com.apple.ModelIO.AssetLoader was interrupted
2020-10-13 18:31:36.989368+0300 Studia3D Viewer[1452:347676] AssetLoader.loadURL errorHandler: Error Domain=NSCocoaErrorDomain Code=4097 “connection to service on pid 0 named com.apple.ModelIO.AssetLoader” UserInfo={NSDebugDescription=connection to service on pid 0 named com.apple.ModelIO.AssetLoader}
2020-10-13 18:31:36.989404+0300 Studia3D Viewer[1452:348332] connection to com.apple.ModelIO.AssetLoader was interrupted
2020-10-13 18:31:36.997352+0300 Studia3D Viewer[1452:347676] Не удалось установить связь с приложением-помощником. [translation: "Couldn't communicate with a helper application."]
The same thing happens with local files
No solution yet
I'm using Camel Exec for automated shutdowns on some of our devices.
The shutdown command is pretty simple, and it mostly works fine:
from(START_DEEP_SLEEP)
.setBody(constant(null)) // we don't want stdin for exec
.setHeader(ExecBinding.EXEC_COMMAND_ARGS, constant("""shutdown $shutdownDelay "starting deep sleep shutdown" """))
.to("exec:sudo")
Obviously, this command will also shut down the application executing it. That too isn't much of an issue, except that sometimes it produces an exit value of 143 (128 + 15, i.e. terminated by SIGTERM). I know the meaning of the return value, and it makes sense to see it here, but it only happens on some devices; most others just return 0. They are all of the same type, so I really don't know where this discrepancy comes from, but it's not even that big an issue. The shutdown works nonetheless.
The problem is that camel exec logs this as an error:
ERROR 549 --- [Camel (camel-1) thread #1 - seda://start-deepsleep] o.a.camel.component.exec.ExecProducer : The command ExecCommand [args=[shutdown, now, starting deep sleep shutdown], executable=sudo, timeout=9223372036854775807, outFile=null, workingDir=null, useStderrOnEmptyStdout=false] returned exit value 143
This produces undesired noise in our monitoring, and I would rather not have it logged.
The core issue here is that Camel Exec does not throw, so there's no exception I could handle. It just logs the error, which then gets picked up by our log analysis.
I would like to handle that exit code gracefully without Camel Exec logging an error. The return value is already logged separately anyway. How can I do that?
According to the docs (http://camel.apache.org/exec.html) there is a header ExecBinding.EXEC_EXIT_VALUE filled with the exit value. Yours should be 143 (the docs state that this depends on the OS).
That could be a "hook" for handling the log entry, e.g. by deleting the last entry with the same error number.
Of course this is only a cosmetic fix. The implementation could look like this:
from(START_DEEP_SLEEP)
    .setBody(constant(null)) // we don't want stdin for exec
    .setHeader(ExecBinding.EXEC_COMMAND_ARGS, constant("""shutdown $shutdownDelay "starting deep sleep shutdown" """))
    .to("exec:sudo")
    .choice()
        .when(header(ExecBinding.EXEC_EXIT_VALUE).isEqualTo(143))
            .to("direct:edit_the_log")
    .end()
Please note that I did not test that code. Maybe you need to access that header with
.when(header(EXEC_EXIT_VALUE).isEqualTo(143))
instead (i.e. with a static import of EXEC_EXIT_VALUE).
Please let me know whether or not this is a proper solution.
I am using Camel version 2.17.1 in a clustered environment with 2 nodes to process files. I use an idempotent readLock in my file consumer endpoint, with a JdbcMessageIdRepository as the idempotent repository, to stop multiple servers trying to process the same file.
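The endpoint is configured roughly as follows (reconstructed from the endpoint URI in the warning below; the repository wiring is only a sketch, and the bean is registered elsewhere as "filesInIdempotentRepository"):

// org.apache.camel.processor.idempotent.jdbc.JdbcMessageIdRepository, shared by both nodes
// ("filesIn" is a placeholder processor name; dataSource wiring omitted)
JdbcMessageIdRepository filesInIdempotentRepository = new JdbcMessageIdRepository(dataSource, "filesIn");

from("file://drop?delay=30000"
        + "&preMove=processing"
        + "&move=done/${date:now:yyyyMMdd}/${date:now:yyyyMMddHHmmssSSS}-${file:name}"
        + "&moveFailed=error/${date:now:yyyyMMdd}/${date:now:yyyyMMddHHmmssSSS}-${file:name}"
        + "&readLock=idempotent&readLockRemoveOnCommit=true"
        + "&idempotentRepository=#filesInIdempotentRepository"
        + "&idempotentKey=${file:name}-${file:modified}-${file:size}"
        + "&scheduler=quartz2&scheduler.cron=*+*+*+*+*+?"
        + "&runLoggingLevel=DEBUG")
    .to("direct:processFile");   // placeholder for the actual processing route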
Occasionally I see warnings in the SystemOut logs of one node saying a file could not be renamed while the other node processes the same file successfully.
Warning example:
[12/10/16 10:21:03:312 BST] 0000008f SystemOut O 12 Oct 2016 10:21:03 [r-inboundCamelContext_Worker-1] FileConsumer WARN Endpoint[file://drop?delay=30000&idempotentKey=${file:name}-${file:modified}-${file:size}&idempotentRepository=#filesInIdempotentRepository&move=done/${date:now:yyyyMMdd}/${date:now:yyyyMMddHHmmssSSS}-${file:name}&moveFailed=error/${date:now:yyyyMMdd}/${date:now:yyyyMMddHHmmssSSS}-${file:name}&preMove=processing&readLock=idempotent&readLockRemoveOnCommit=true&runLoggingLevel=DEBUG&scheduler=quartz2&scheduler.cron=*+*+*+*+*+?] cannot begin processing file: GenericFile[drop\file-20161012_101320.xml] due to: Cannot rename file: GenericFile[drop\file-20161012_101320.xml] to: GenericFile[drop\processing\file-20161012_101320.xml]. Caused by: [org.apache.camel.component.file.GenericFileOperationFailedException - Cannot rename file: GenericFile[drop\file-20161012_101320.xml] to: GenericFile[drop\processing\file-20161012_101320.xml]]
org.apache.camel.component.file.GenericFileOperationFailedException: Cannot rename file: GenericFile[drop\file-20161012_101320.xml] to: GenericFile[drop\processing\file-20161012_101320.xml]
at org.apache.camel.component.file.strategy.GenericFileProcessStrategySupport.renameFile(GenericFileProcessStrategySupport.java:115)
at org.apache.camel.component.file.strategy.GenericFileRenameProcessStrategy.begin(GenericFileRenameProcessStrategy.java:43)
at org.apache.camel.component.file.GenericFileConsumer.processExchange(GenericFileConsumer.java:367)
at org.apache.camel.component.file.GenericFileConsumer.processBatch(GenericFileConsumer.java:226)
at org.apache.camel.component.file.GenericFileConsumer.poll(GenericFileConsumer.java:190)
at org.apache.camel.impl.ScheduledPollConsumer.doRun(ScheduledPollConsumer.java:175)
at org.apache.camel.impl.ScheduledPollConsumer.run(ScheduledPollConsumer.java:102)
at org.apache.camel.pollconsumer.quartz2.QuartzScheduledPollConsumerJob.execute(QuartzScheduledPollConsumerJob.java:61)
at org.quartz.core.JobRunShell.run(JobRunShell.java:202)
at org.quartz.simpl.SimpleThreadPool$WorkerThread.run(SimpleThreadPool.java:573)
Is there a way to stop both nodes trying to process the same file?
I need to implement a handler that reacts to ZipException to move away corrupted gz files, otherwise the route will endlessly retry unmarshalling the gz.
The problem is that at the moment the exception is thrown there is a lock on the file (on Linux, canWrite() returns false) and the Camel lock file is still present.
Is there an elegant Camel way to tell/configure the onException so that the lock is released (i.e. get write access back and remove the lock file, if there is one)?
At the moment my code looks like this:
onException(ZipException.class)
    .handled(true)
    .process(corruptedFileProcessor)
    .stop();
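For context, corruptedFileProcessor does little more than try to move the file aside; a minimal sketch (the error directory and the use of the Exchange.FILE_PATH header are illustrative assumptions, not my exact code):

import java.io.File;
import org.apache.camel.Exchange;
import org.apache.camel.Processor;

public class CorruptedFileProcessor implements Processor {
    @Override
    public void process(Exchange exchange) throws Exception {
        String path = exchange.getIn().getHeader(Exchange.FILE_PATH, String.class);
        File source = new File(path);
        File target = new File("/data/error", source.getName());   // placeholder error dir
        if (!source.renameTo(target)) {
            // this is where the read lock / lock file gets in the way
            throw new IllegalStateException("could not move corrupted file: " + path);
        }
    }
}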
Thanks in advance.
The following route reads gzipped files from srcDir, writes unzipped files to destDir (without the .gz extension) and when a ZipException occurs, sends the file to errorDir.
from("file://srcDir/?delete=true")
.onException(ZipException.class)
.handled(true).useOriginalMessage()
.to("file://errorDir?autoCreate=true")
.end()
.unmarshal().gzip()
.to("file://destDir?autoCreate=true&fileName=${file:name.noext}");
We are working on a POC to use Spring Integration and RabbitMQ. We have two modules, a producer module and a consumer module, which run in different JVMs. The producer module listens on a folder (the input folder); as soon as new files arrive, it creates a message, pushes it to the incoming.q.in queue, and also moves the file to the process folder.
In the producer module we have the code below. When I dropped about 100 files into the incoming folder, about 90 were processed and moved to the process folder, but 10 files were never moved.
For the failed cases, these are the messages in the log file:
....
[07/30/13 07:34:23:023 EDT] [taskExecutor-3] DEBUG org.springframework.integration.file.FileReadingMessageSource Added to queue: [test.xml]
[07/30/13 07:34:23:023 EDT] [taskExecutor-3] DEBUG org.springframework.integration.endpoint.SourcePollingChannelAdapter Received no Message during the poll, returning 'false'
....
For a successful case:
....
[07/30/13 07:34:32:032 EDT] [taskExecutor-1] DEBUG org.springframework.integration.file.FileReadingMessageSource Added to queue: [test_0.xml]
[07/30/13 07:34:32:032 EDT] [taskExecutor-1] INFO org.springframework.integration.file.FileReadingMessageSource Created message: [[Payload=/apps/incoming/test_0.xml][Headers={timestamp=1375184072466, id=d8d4cea4-a25d-4869-b287-e76cfb76f554}]]
....
Here is the code
<file:inbound-channel-adapter id="inboundAdapter" channel="inboundChannel"
        directory="file:${incoming_folder}" prevent-duplicates="true"
        filename-pattern="*.*" auto-startup="true">
    <int:poller id="fileInboudPoller" fixed-rate="3" receive-timeout="3"
            time-unit="SECONDS" max-messages-per-poll="1" task-executor="taskExecutor"/>
    <file:nio-locker/>
</file:inbound-channel-adapter>
It generally means the locker couldn't lock the file (presumably because the file is in use elsewhere).
BTW, a common error with applications like these is copying files "in place" such that the consumer might see an incomplete file.
A common technique to avoid these issues is to copy the file with a temporary name and rename it only when it is completely written.
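A minimal sketch of that technique on the producer side (file names are illustrative; it assumes the temporary and final names live on the same file system so the rename is atomic):

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;

public class SafeFileDrop {
    public static void main(String[] args) throws IOException {
        Path incoming = Paths.get("/apps/incoming");
        Path tmp = incoming.resolve("test_0.xml.writing");      // temporary name while writing
        Path target = incoming.resolve("test_0.xml");           // final name the consumer picks up

        Files.write(tmp, "<payload/>".getBytes());              // write the full content first
        Files.move(tmp, target, StandardCopyOption.ATOMIC_MOVE); // then publish it atomically
    }
}

With the adapter above you would also need a filename-pattern (or filter) that excludes the temporary suffix, so the poller never sees half-written files.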