In my application, multiple nodes share a file system, so an idempotentRepository is required as the locking mechanism.
Question:
In Apache Camel, why does "readLock=idempotent" work only for the File component but not for FTP?
readLock=idempotent is not compatible with FTP; it is compatible only with the File component.
<from uri="ftp://XXX:xxxxxx#localhost/var/opt/irs/message?delete=true&readLock=idempotent&idempotentRepository=#idempotentRepo&readLockLoggingLevel=WARN&shuffle=true" />
readLock=rename is compatible with FTP, and I achieved my goal with this:
<from uri="ftp://XXX:xxxxxx#localhost/var/opt/irs/message?delete=true&readLock=rename&idempotentRepository=#idempotentRepo&readLockLoggingLevel=WARN&shuffle=true" />
Can anybody explain the reason behind this?
As you can clearly see in the documentation here, it states:
idempotent -- Camel 2.16 (only file component) is for using an idempotentRepository as the read-lock.
You can achieve this for FTP by doing something like this:
from("ftps://{{ftp.username}}#{{ftp.host}}/{{ftp.importDirectory}}?password="
+ "{{ftp.password}}&readLock=changed&move={{ftp.processed}}"
+ "&moveFailed={{ftp.failed}}"
+ "&securityProtocol=SSL&execProt=P&execPbsz=0&passiveMode=true")
.idempotentConsumer(header("camelFileAbsolutePath"),
MemoryIdempotentRepository.memoryIdempotentRepository(200))
.to("bean:ftpConsumer?method=consumeMethod")
I'm running into some trouble using OrcTableSource to fetch an ORC file from cloud object storage (IBM COS); the code fragment is provided below:
OrcTableSource soORCTableSource = OrcTableSource.builder()
    // path to ORC
    .path("s3://orders/so.orc") // s3://orders/so.csv
    // schema of ORC files
    .forOrcSchema(OrderHeaderORCSchema)
    .withConfiguration(orcconfig)
    .build();
It seems this path is incorrect, but can anyone help out? Appreciate it a lot!
Caused by: java.io.FileNotFoundException: File /so.orc does not exist
    at org.apache.hadoop.fs.RawLocalFileSystem.deprecatedGetFileStatus(RawLocalFileSystem.java:611)
    at org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:824)
    at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:601)
    at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:428)
    at org.apache.hadoop.fs.ChecksumFileSystem$ChecksumFSInputChecker.<init>(ChecksumFileSystem.java:142)
    at org.apache.hadoop.fs.ChecksumFileSystem.open(ChecksumFileSystem.java:346)
    at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:768)
    at org.apache.orc.impl.ReaderImpl.extractFileTail(ReaderImpl.java:528)
    at org.apache.orc.impl.ReaderImpl.<init>(ReaderImpl.java:370)
    at org.apache.orc.OrcFile.createReader(OrcFile.java:342)
    at org.apache.flink.orc.OrcRowInputFormat.open(OrcRowInputFormat.java:225)
    at org.apache.flink.orc.OrcRowInputFormat.open(OrcRowInputFormat.java:63)
    at org.apache.flink.runtime.operators.DataSourceTask.invoke(DataSourceTask.java:170)
    at org.apache.flink.runtime.taskmanager.Task.run(Task.java:711)
    at java.lang.Thread.run(Thread.java:748)
By the way, I've already set up flink-s3-fs-presto-1.6.2 and have the following code running correctly, so the question is limited to OrcTableSource only.
DataSet<Tuple5<String, String, String, String, String>> orderinfoSet =
    env.readCsvFile("s3://orders/so.csv")
        .types(String.class, String.class, String.class,
               String.class, String.class);
The problem is that Flink's OrcRowInputFormat uses two different file systems: one for generating the input splits and one for reading the actual input splits. For the former it uses Flink's FileSystem abstraction, and for the latter it uses Hadoop's FileSystem. Therefore, you need Hadoop's configuration core-site.xml to contain the following snippet:
<property>
  <name>fs.s3.impl</name>
  <value>org.apache.hadoop.fs.s3a.S3AFileSystem</value>
</property>
See this link for more information about setting up S3 for Hadoop.
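As an untested sketch, the same property can presumably also be set programmatically on the Hadoop Configuration that is already passed to the builder (the credential keys mentioned in the comment are standard S3A settings, but which ones you need depends on your COS setup):

import org.apache.hadoop.conf.Configuration;

// Same property as in core-site.xml, set on the Configuration
// handed to OrcTableSource.builder().withConfiguration(...).
Configuration orcconfig = new Configuration();
orcconfig.set("fs.s3.impl", "org.apache.hadoop.fs.s3a.S3AFileSystem");
// S3A credentials and the IBM COS endpoint would also need to be configured,
// e.g. fs.s3a.access.key / fs.s3a.secret.key / fs.s3a.endpoint.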
This is a limitation of Flink's OrcRowInputFormat and should be fixed. I've created the corresponding issue.
I'm seeing 100% utilisation of ActiveMQ's temp storage (configured to be 100 MB), and the ActiveMQ client is blocking. This 100% usage remains permanently, and I have no idea what's going on.
I have a Camel route which consumes from a queue (QUEUE.IN) using the JmsTransactionManager.
public final class RouteUnderTest extends RouteBuilder {
    @Override
    public void configure() throws Exception {
        from("activemq-transacted:QUEUE.IN")
            .bean(myBean)
            .to("activemq:QUEUE.OUT");
    }
}
While processing the message from this queue, I'm invoking a Spring Integration client (myBean) which is configured as follows:
<int:gateway id="myBean" service-interface="MyBean">
<int:method name="request" request-channel="channel"/>
</int:gateway>
<int:chain input-channel="channel">
<int:transformer ref="transformedToJsonHere"/>
<jms:outbound-gateway request-destination-name="QUEUE.MYBEAN"
receive-timeout="5000"
explicit-qos-enabled="true"
time-to-live="5000"
delivery-persistent="false"/>
<int:transformer ref="transformedToAnObjectHere"/>
</int:chain>
My broker is configured to use LevelDB, with the following usage limits:
<persistenceAdapter>
    <levelDB directory="${activemq.data}/leveldb"/>
</persistenceAdapter>

<systemUsage>
    <systemUsage>
        <memoryUsage>
            <memoryUsage percentOfJvmHeap="70"/>
        </memoryUsage>
        <storeUsage>
            <storeUsage limit="500 mb"/>
        </storeUsage>
        <tempUsage>
            <tempUsage limit="100 mb"/>
        </tempUsage>
    </systemUsage>
</systemUsage>
When my route consumes the message and then attempts to put a non-persistent message on QUEUE.OUT, the client is blocked and my broker shows 100% usage of temp storage, and I see the following ActiveMQ logs:
2015-07-28 15:44:59,678 | INFO | Usage(default:temp:queue://QUEUE.MYBEAN:temp) percentUsage=0%, usage=104857600, limit=104857600, percentUsageMinDelta=1%;Parent:Usage(default:temp) percentUsage=100%, usage=104857600, limit=104857600, percentUsageMinDelta=1%: Temp Store is Full (0% of 104857600). Stopping producer (ID:orbit-vm-55561-1438094698190-1:1:3:1) to prevent flooding queue://QUEUE.MYBEAN. See http://activemq.apache.org/producer-flow-control.html for more info (blocking for: 1s) | org.apache.activemq.broker.region.Queue | ActiveMQ NIO Worker 6
The queues look like this (you can see that the QUEUE.IN message has not been dequeued, because it's still being processed transactionally, and no message has gone to QUEUE.MYBEAN).
I can fix this problem with any one of the following approaches:
Use KahaDB instead of LevelDB
Increase temp storage limit (150MB seems to do it but I haven't experimented a great deal)
Configure tempDataStore in activemq.xml (see below)
When configuring the tempDataStore, it looks like this:
<tempDataStore>
    <bean xmlns="http://www.springframework.org/schema/beans" class="org.apache.activemq.leveldb.LevelDBStore">
        <property name="directory" value="${activemq.data}/tmp"/>
    </bean>
</tempDataStore>
I should add that we were using KahaDB previously and this worked fine, but the upgrade to LevelDB has exposed this issue. Reverting to KahaDB is not an option.
I'm hoping someone can explain what we're seeing here, as the results are really difficult to understand. Why does using LevelDB necessitate a higher temp usage limit, and why does configuring the tempDataStore explicitly also fix the problem?
I don't fully understand what's going on here so I'm worried that simply increasing the temp usage limit a little will just hide the problem until a later date.
Versions:
ActiveMQ: 5.11.1
Camel: 2.14.0
Spring: 4.0.8.RELEASE
Spring Integration: 4.0.5.RELEASE
We ran into exactly the same issue with ActiveMQ 5.13.2.
The solution when using LevelDB is to explicitly configure a dedicated tempDataStore as you did.
If not, the broker uses the same store (LevelDB) for both persistent messages (store usage) and non-persistent messages (temp usage). You may therefore end up in situations where the broker doesn't accept any non-persistent message anymore just because the store already holds persistent ones up to the configured tempUsage limit. It will, however, accept persistent ones if your storeUsage limit is set higher...
When using KahaDB, the broker automatically uses another store for the non-persistent messages (created in the tmp directory), so you don't have the problem.
Look at the following code for more in-depth information: https://github.com/apache/activemq/blob/activemq-5.13.2/activemq-broker/src/main/java/org/apache/activemq/broker/BrokerService.java#L1739
When reading that code, remember that LevelDBStore implements PListStore, but KahaDBStore doesn't...
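That relationship is easy to verify in isolation; a minimal sketch, assuming activemq-broker and activemq-leveldb-store are on the classpath:

import org.apache.activemq.leveldb.LevelDBStore;
import org.apache.activemq.store.PListStore;

public class TempStoreCheck {
    public static void main(String[] args) {
        // LevelDBStore is-a PListStore, so without an explicit tempDataStore
        // the broker can reuse the persistence adapter for non-persistent
        // (temp) messages, which is why the two usages end up sharing a store.
        System.out.println(PListStore.class.isAssignableFrom(LevelDBStore.class)); // prints true
    }
}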
I am struggling with type conversion in my Camel route for handling FTP files. My route looks like this (in Spring DSL):
<route id="processIncomingFtpFile" errorHandlerRef="parkingCitErrHandler">
<from uri="{{ftp.parkingcit.input.path}}"/>
<bean ref="ftpZipFileHandler"/>
<unmarshal ref="bindyCsvFormat"/>
<bean ref="parkingTicketsHandler"/>
<split>
<simple>${body}</simple>
<marshal ref="jaxbFormatter"/>
<convertBodyTo type="java.io.File"/>
<to uri="{{ftp.parkingcit.output.path}}"/>
</split>
</route>
And my handler signature looks like this:
public File handleIncomingFile(File incomingfile)...
However, this yields the following type conversion problem:
org.apache.camel.InvalidPayloadException: No body available of type: java.io.File but has value: RemoteFile[test.zip] of type: org.apache.camel.component.file.remote.RemoteFile on: test.zip. Caused by: No type converter available to convert from type: org.apache.camel.component.file.remote.RemoteFile to the required type: java.io.File with value RemoteFile[test.zip]. Exchange[test.zip]. Caused by: [org.apache.camel.NoTypeConversionAvailableException - No type converter available to convert from type: org.apache.camel.component.file.remote.RemoteFile to the required type: java.io.File with value RemoteFile[test.zip]]
My question is: should I be able to handle my FTP file in-memory, without explicitly telling Camel to write it to disk, with type converters doing the work automagically behind the scenes for me? Or is what I am trying to do senseless, given that my handler wants a java.io.File as its input parameter, i.e. must I write the data to disk for this to work?
java.io.File is for file systems, not for FTP files.
You would need to change the type in your bean method signature to either Camel's GenericFile or the actual type, which is from the commons-net FTP client library that Camel uses.
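For example, a minimal sketch of what such a bean could look like (class and method names here are illustrative, not from the question):

import org.apache.camel.component.file.GenericFile;

public class FtpZipFileHandler {

    // Option 1: accept Camel's wrapper type; RemoteFile extends GenericFile,
    // so this works for both file and ftp endpoints.
    public String describe(GenericFile<?> incomingFile) {
        return incomingFile.getFileName() + " (" + incomingFile.getFileLength() + " bytes)";
    }

    // Option 2: let Camel convert the remote content in-memory instead,
    // avoiding java.io.File entirely.
    public byte[] handleIncomingFile(byte[] incomingFile) {
        return incomingFile; // process the bytes without touching the local disk
    }
}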
Hmm, I would recommend working with a local file (I always do that with the FTP endpoint). There is an option on the FTP endpoint called localWorkDirectory that lets you define a directory (typically /tmp) to download the file to before further processing. The idea is to avoid any error due to a network issue or being disconnected during processing.
Can you try it? It's easy enough (just add &localWorkDirectory=mydir to the URI) and it will download the file for you.
See https://camel.apache.org/ftp2.html.
Just make sure you have write permission on the directory, of course.
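A minimal sketch in Java DSL (endpoint values are hypothetical):

// localWorkDirectory makes Camel download the remote file to /tmp first and
// route the local copy; the local work file is removed once the exchange is done.
from("ftp://user@localhost/inbox?password=secret&localWorkDirectory=/tmp")
    .to("bean:ftpZipFileHandler");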
After transforming a file with XSLT, I have to append a file downloaded from FTP. So I did the following:
from("direct:adobe_productList_incremental")
.id("routeADOBESPtransformPI_productList")
.log(LoggingLevel.INFO, "---------Starting file: ${body}")
.convertBodyTo(InputStream.class)
.to("xslt:classpath:" + xsltTransformationProductList)
.log(LoggingLevel.INFO, "---------Transformed file: ${body}")
.pollEnrich(ftpType+"://"+ftpUsername+"#"+ ftpUrl +":" + ftpPort + ftpPath_incrementalComplete +"?password="+ftpPassword+"&fileName="+ftpFilename_incrementalComplete+"&passiveMode=true&binary=true&delete=false",10000)
.log(LoggingLevel.INFO, "---------After poll enrich: ${body}")
.to("file:{{file.root}}{{file.outbox.products_list_incremental}}?fileName={{file.outbox.products_list_incremental.name}}.final");
Until the poll everything works (the transformation is done correctly), but after the pollEnrich the current body is overridden by the FTP content (and not appended as it should be).
Any help?
No, it works as designed.
By default the content will be overridden. If you need to append/merge or whatever, you need to use a custom aggregation strategy and implement code logic that does this.
See the Camel docs at http://camel.apache.org/content-enricher.html about the ExampleAggregationStrategy.
The Camel docs say:
The aggregation strategy is optional. If you do not provide it
Camel will by default just use the body obtained from the resource.
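A minimal sketch of such a strategy for Camel 2.x (the class name is hypothetical, and it assumes both bodies can be read as strings):

import org.apache.camel.Exchange;
import org.apache.camel.processor.aggregate.AggregationStrategy;

public class AppendAggregationStrategy implements AggregationStrategy {
    @Override
    public Exchange aggregate(Exchange original, Exchange resource) {
        if (resource == null) {
            return original; // poll timed out: keep the transformed body
        }
        String transformed = original.getIn().getBody(String.class);
        String polled = resource.getIn().getBody(String.class);
        original.getIn().setBody(transformed + polled);
        return original;
    }
}

It would then be passed as the third argument to pollEnrich, e.g. .pollEnrich(ftpUri, 10000, new AppendAggregationStrategy()).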
I've been unable to find any easy way of figuring out the version string for a WAR file deployed with Tomcat 7's versioned naming (i.e. app##version.war). You can read about it here and what it enables here.
It'd be nice if there were a somewhat more supported approach than the usual Swiss army knife of reflection-powered ribcage cracking:
final ServletContextEvent event ...
final ServletContext applicationContextFacade = event.getServletContext();
// Unwrap Tomcat's ApplicationContextFacade to get the ApplicationContext
final Field applicationContextField = applicationContextFacade.getClass().getDeclaredField("context");
applicationContextField.setAccessible(true);
final Object applicationContext = applicationContextField.get(applicationContextFacade);
// Unwrap the ApplicationContext to get the StandardContext
final Field standardContextField = applicationContext.getClass().getDeclaredField("context");
standardContextField.setAccessible(true);
final Object standardContext = standardContextField.get(applicationContext);
final Method webappVersion = standardContext.getClass().getMethod("getWebappVersion");
System.err.println("WAR version: " + webappVersion.invoke(standardContext));
I think the simplest solution is to use the same version (an SVN revision plus padding, for example) in the .war file name, web.xml, and the META-INF/MANIFEST.MF properties, so you can later retrieve the version in your app or with any standard tool that reads the version from a JAR/WAR.
See MANIFEST.MF version-number
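For instance, if the version is recorded as Implementation-Version in the manifest, it can be read back at runtime like this (a sketch; it assumes this class is loaded from the WAR whose manifest carries the entry):

public class VersionInfo {
    // Reads Implementation-Version from META-INF/MANIFEST.MF of the artifact
    // this class was loaded from; returns null if the entry is absent.
    public static String implementationVersion() {
        return VersionInfo.class.getPackage().getImplementationVersion();
    }
}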
Another solution described here uses the path name of the deployed WAR on the server: you'd extract the version number from the string between the "##" and the "/".
runningVersion = StringUtils.substringBefore(
        StringUtils.substringAfter(
                servletConfig.getServletContext().getRealPath("/"),
                "##"),
        "/");
Starting from Tomcat versions 9.0.32, 8.5.52 and 7.0.101, the webapp version is exposed as a ServletContext attribute with the name org.apache.catalina.webappVersion.
Link to the closed enhancement request: https://bz.apache.org/bugzilla/show_bug.cgi?id=64189
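With one of those versions, reading it is a one-liner (a sketch; servletContext stands for however your code obtains the ServletContext):

// Returns null on Tomcat versions older than 9.0.32 / 8.5.52 / 7.0.101.
String version = (String) servletContext.getAttribute("org.apache.catalina.webappVersion");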
The easiest way would be for Tomcat to make the version available via a ServletContext attribute (org.apache.catalina.core.StandardContext.webappVersion) or similar. The patch to do that would be trivial. I'd suggest opening an enhancement request in Tomcat's Bugzilla. If you include a patch then it should get applied fairly quickly.