We use a splitter to iterate through the files inside a zipped file, together with a custom aggregation strategy that gives us a list of the bodies of the files contained in that zip. After the split, I'd like to extract the headers that were set during the aggregation block (the processing that runs on the aggregator's result). But the aggregator's output seems to get lost, and I don't get anything back after the split block.
I'm sure I'm missing something basic here. I'd appreciate it if someone could help.
<route id="main-route">
<split streaming="true">
<ref>zipSplitter</ref>
<choice>
<when>
<method bean="fileHandler" method="isY" />
<to uri="direct:y" />
</when>
<otherwise>
<to uri="direct:x" />
</otherwise>
</choice>
<to uri="direct:aggregate" />
</split>
<!--Do something by extracting the headers set during the processing underneath in the aggregation block i.e. process-data -->
</route>
<route id="aggregate-data">
<from uri="direct:aggregate" />
<camel:aggregate strategyRef="aggregationStrategy" completionSize="2">
<camel:correlationExpression>
<constant>true</constant>
</camel:correlationExpression>
<to uri="direct:process-data"/>
</camel:aggregate>
</route>
Aggregator:
public Exchange aggregate(Exchange oldExchange, Exchange newExchange) {
    Object newBody = newExchange.getIn().getBody();
    Map<String, Object> newHeaders = newExchange.getIn().getHeaders();
    ArrayList<Object> list = null;
    if (oldExchange == null) {
        // first exchange in the group: start the list
        list = new ArrayList<Object>();
        list.add(newBody);
        newExchange.getIn().setBody(list);
        return newExchange;
    } else {
        // merge the new exchange's headers into the accumulated exchange and append its body
        Map<String, Object> olderHeaders = oldExchange.getIn().getHeaders();
        olderHeaders.putAll(newHeaders);
        list = oldExchange.getIn().getBody(ArrayList.class);
        list.add(newBody);
        return oldExchange;
    }
}
You have to keep the aggregation logic inside the split's scope. There should be a single aggregation strategy instance doing the aggregation for your split, as below:
<route id="main-route">
<split streaming="true" strategyRef="aggregationStrategy">
<ref>zipSplitter</ref>
<choice>
<when>
<method bean="fileHandler" method="isY" />
<to uri="direct:y" />
</when>
<otherwise>
<to uri="direct:x" />
</otherwise>
</choice>
</split>
</route>
You have to specify your aggregation strategy as an attribute on the split tag, as in the code above. That way, the exchange returned by every iteration is passed to the aggregation strategy bean to be aggregated, and the aggregated result is what continues after the split.
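For reference, roughly how the same wiring looks in the Java DSL. This is only a sketch: it assumes zipSplitter is camel-zipfile's ZipSplitter expression, and that the aggregate() method posted in the question lives in a class I'm calling HeaderMergingAggregationStrategy (that class name is an assumption). Because the strategy is attached to the split itself, its aggregated exchange, including the headers merged in the strategy, is what flows to the steps after the split.

import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.dataformat.zipfile.ZipSplitter;

public class MainRoute extends RouteBuilder {
    @Override
    public void configure() {
        from("direct:main")
            // the strategy attached to the split receives each inner exchange;
            // its aggregated result (list body + merged headers) is what the
            // route continues with after end()
            .split(new ZipSplitter()).streaming()
                .aggregationStrategy(new HeaderMergingAggregationStrategy())
                .to("direct:process-entry")   // inner processing reduced to one step for brevity
            .end()
            // headers merged by the strategy are visible here, after the split
            .log("after split: ${headers}");
    }
}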
Hope it helps :)
I have a project with Camel, and my route calls itself recursively in order to implement the logic "call the stored procedure while it returns a data set":
<route id="trxnReader">
<from uri="direct:query"/>
<to uri="sql-stored:classpath:sql/getTrxnsProcedure.sql?dataSource=myDataSource"
id="storedprocGetTrxns"/>
<choice>
<when>
<simple>${body} != null</simple>
<split>
<simple>${body.transactions}</simple>
<filter>
<method ref="trnxFilter" method="filter"/>
<to uri="direct:processTrxn"/>
</filter>
</split>
<to uri="direct:query"/>
</when>
<otherwise>
<log id="failUploadInfo" message="Transactions don't exist" loggingLevel="INFO"/>
</otherwise>
</choice>
</route>
The problem with this code is that if the stored procedure keeps returning data for a long time, the recursion never unwinds and I get a java.lang.StackOverflowError. I need something like a loop. What is the best way to implement such logic with Camel? I'm using Camel 2.15.3.
I found a workaround with a custom bean and a loop exit condition (the isLoopDone property):
import org.apache.camel.Exchange;
import org.apache.camel.ExchangeProperty;
import org.apache.camel.Handler;
import org.apache.camel.ProducerTemplate;

public class LoopBean {

    @Handler
    public void loop(@ExchangeProperty("loopEndpoint") String endpoint, Exchange exchange) {
        ProducerTemplate producerTemplate = exchange.getContext().createProducerTemplate();
        boolean isLoopDone = false;
        Exchange currentExchange = exchange;
        do {
            // keep sending the exchange to the loop endpoint until it sets isLoopDone=true
            currentExchange = producerTemplate.send(endpoint, currentExchange);
            Object isLoopDoneProperty = currentExchange.getProperty("isLoopDone");
            if (isLoopDoneProperty != null) {
                isLoopDone = (boolean) isLoopDoneProperty;
            }
        } while (!isLoopDone);
    }
}
<route id="trxnReader">
<from uri="direct:query"/>
<setProperty propertyName="loopEndpoint">
<simple>direct:callStoredProcWhileItHasTransactions</simple>
</setProperty>
<bean ref="loopBean"/>
</route>
<route id="storedProcCallingLoop">
<from uri="direct:callStoredProcWhileItHasTransactions"/>
<to uri="sql-stored:classpath:sql/getTrxnsProcedure.sql?dataSource=myDataSource"
id="storedprocGetTrxns"/>
<choice>
<when>
<simple>${body} != null</simple>
<split>
<simple>${body.transactions}</simple>
<filter>
<method ref="trnxFilter" method="filter"/>
<to uri="direct:processTrxn"/>
</filter>
</split>
</when>
<otherwise>
<log id="failUploadInfo" message="Transactions don't exist" loggingLevel="INFO"/>
<setProperty propertyName="isLoopDone">
<simple resultType="java.lang.Boolean">true</simple>
</setProperty>
</otherwise>
</choice>
</route>
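One refinement worth considering for the LoopBean above (a sketch, not required for correctness): a ProducerTemplate is intended to be created once and reused, so creating a new one on every invocation adds avoidable overhead. For example:

import org.apache.camel.Exchange;
import org.apache.camel.ExchangeProperty;
import org.apache.camel.Handler;
import org.apache.camel.ProducerTemplate;

public class LoopBean {

    // created on first use and then reused; ProducerTemplate is thread-safe,
    // and the worst case of the unsynchronized check is one extra template
    private volatile ProducerTemplate producerTemplate;

    @Handler
    public void loop(@ExchangeProperty("loopEndpoint") String endpoint, Exchange exchange) {
        if (producerTemplate == null) {
            producerTemplate = exchange.getContext().createProducerTemplate();
        }
        boolean isLoopDone = false;
        Exchange currentExchange = exchange;
        do {
            currentExchange = producerTemplate.send(endpoint, currentExchange);
            Object done = currentExchange.getProperty("isLoopDone");
            if (done != null) {
                isLoopDone = (boolean) done;
            }
        } while (!isLoopDone);
    }
}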
Hi, I have a route which is:
<route id="invokeGetMortgageAccountDetails">
<from uri="direct:invokeGetMortgageAccountDetails" />
<removeHeaders pattern="operationNamespace" />
<setHeader headerName="operationName">
<constant>getMortgageDetailsRequest</constant>
</setHeader>
<to uri="cxf:bean:getBastionAcctDetailsClient" />
<removeHeaders pattern="*" />
</route>
Now I want to change the 'to uri' when the length of the account parameter is equal to 8.
I am new to Apache Camel and there is not much helpful information on the internet.
I am using Camel version 2.15. I tried passing an extra exchange property holding the length of the account number and matching against that value in the route, but it did not work.
Processor:
public void processMortgage(final Exchange exchange) throws ServiceException {
    MessageContentsList messageContentsList = (MessageContentsList) exchange.getIn().getBody();
    List paramsList = new ArrayList();
    String systemID = messageContentsList.get(0).toString().trim();
    String brandID = messageContentsList.get(1).toString().trim();
    String account = messageContentsList.get(2).toString().trim();
    String len = Integer.toString(account.length());
    paramsList.add(Constants.HUB);
    paramsList.add(brandID.toUpperCase());
    paramsList.add(account);
    exchange.setProperty(Constants.SystemID, systemID);
    // note: this registers the property under the *value* of len (e.g. "8"), not under the name "len"
    exchange.setProperty(len, len);
    exchange.setProperty(Constants.ErrorCode, null);
    exchange.setProperty("mortgageAccountNumber", Integer.parseInt(account));
    exchange.getIn().setBody(paramsList);
}
Route Config:
<route id="invokeGetMortgageAccountDetails">
<from uri="direct:invokeGetMortgageAccountDetails" /> <removeHeaders pattern="operationNamespace" />
<setHeader headerName="operationName">
<constant>getMortgageDetailsRequest</constant> </setHeader> <choice>
<when>
<simple>${body.len} == '8'</simple>
<to uri="cxf:bean:getPhoebusClient" />
</when>
<otherwise>
<to uri="cxf:bean:getBastionAcctDetailsClient" />
</otherwise>
</choice>
<removeHeaders pattern="*" />
</route>
If you are using Apache Camel version 2.16 or later, you can use the Dynamic To endpoint (toD).
You will probably need an expression language such as Spring Expression Language to build your dynamic URI.
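For illustration, roughly how that could look in the Java DSL on Camel 2.16+ (a sketch only: the accountNumber and targetClient headers are placeholders I'm assuming, not part of the original route, and the simple language is used here instead of SpEL):

import org.apache.camel.builder.RouteBuilder;

public class MortgageRouteSketch extends RouteBuilder {
    @Override
    public void configure() {
        from("direct:invokeGetMortgageAccountDetails")
            .removeHeaders("operationNamespace")
            .setHeader("operationName", constant("getMortgageDetailsRequest"))
            // assumption: an earlier step stored the account number as a String
            // in the "accountNumber" header
            .choice()
                .when(simple("${header.accountNumber.length()} == 8"))
                    .setHeader("targetClient", constant("getPhoebusClient"))
                .otherwise()
                    .setHeader("targetClient", constant("getBastionAcctDetailsClient"))
            .end()
            // Dynamic To: the endpoint URI is evaluated per message
            .toD("cxf:bean:${header.targetClient}")
            .removeHeaders("*");
    }
}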
I am using an aggregator to aggregate exchanges into a list and then pass that list for batch insertion into a database. Once it is inserted into the first table, all objects in the list are modified and then sent to another DB. The issue I am facing is that sometimes an object that was already sent by the aggregator as an element of an earlier list is sent again in the next list, with a modified value.
The aggregation strategy I am using is:
public class ArrayListAggregationStrategy implements AggregationStrategy {

    public Exchange aggregate(Exchange oldExchange, Exchange newExchange) {
        Object newBody = newExchange.getIn().getBody();
        ArrayList<Object> list = null;
        if (oldExchange == null) {
            list = new ArrayList<Object>();
            list.add(newBody);
            newExchange.getIn().setBody(list);
            return newExchange;
        } else {
            list = oldExchange.getIn().getBody(ArrayList.class);
            list.add(newBody);
            return oldExchange;
        }
    }
}
And the route for it is:
<routeContext id="coreCdrRoute" xmlns="http://camel.apache.org/schema/spring">
<route errorHandlerRef="CoreErrorHandler" id="coreEngineRoute">
<from
uri="activemq:queue:{{coreEngine.queue}}?concurrentConsumers={{core.consumer}}" />
<log loggingLevel="DEBUG"
message="Message received from core Engine queue is ${body}"></log>
<multicast>
<pipeline>
<setHeader headerName="CamelRedis.Command">
<constant>RPUSH</constant>
</setHeader>
<setHeader headerName="CamelRedis.Key">
<constant>CoreCdrBatch</constant>
</setHeader>
<setHeader headerName="CamelRedis.Value">
<simple>${body}</simple>
</setHeader>
<log loggingLevel="DEBUG" message="Adding to redis value : ${body}"></log>
<to uri="spring-redis://localhost:6379?serializer=#stringSerializer" />
</pipeline>
<pipeline>
<unmarshal ref="gsonCoreEngine"></unmarshal>
<bean beanType="com.bng.upload.processors.GetCdr"
method="process(com.bng.upload.beans.CoreEngineEvent,${exchange})" />
<transform>
<method ref="insertionBean" method="finalCoreCdr"></method>
</transform>
<aggregate strategyRef="aggregatorRef" completionInterval="300000">
<correlationExpression>
<constant>true</constant>
</correlationExpression>
<completionSize>
<simple>${properties:core.batch.size}</simple>
</completionSize>
<to uri="mybatis:batchInsertCore?statementType=InsertList"></to>
<log message="Inserted in masterCallLogs : ${in.header.CamelMyBatisResult}"></log>
<multicast>
<pipeline>
<choice>
<when>
<simple>${properties:coreEngine.write.file} == true</simple>
<setHeader headerName="path">
<simple>${properties:coreEngine.cdr.folder}</simple>
</setHeader>
<bean beanType="com.bng.upload.processors.SaveToFile"
method="processCore(${exchange})" />
<log
message="Going to write ivr cdrs to file : ${in.header.CamelFileName}" />
<to uri="file://?fileExist=Append&bufferSize=32768"></to>
</when>
</choice>
</pipeline>
<pipeline>
<transform>
<method ref="logCounter" method="callConferenceLogs(${exchange})"></method>
</transform>
<choice>
<when>
<simple>${body.size()} != 0</simple>
<to uri="mybatis:callConfCounter?statementType=InsertList"></to>
</when>
</choice>
</pipeline>
</multicast>
</aggregate>
</pipeline>
<bean beanType="com.bng.upload.processors.RedisImpl" method="removeOnInsertion(CoreCdrBatch)" />
<log message="Reached end of transaction for CoreEngine successfully" />
</multicast>
</route>
</routeContext>
I am unable to figure out what I am doing wrong here.
My XML is given below:
<camelContext trace="false" xmlns="http://camel.apache.org/schema/spring">
<propertyPlaceholder id="placeholder" location="classpath:application.properties" />
<!--Route:1 for POLLUX Data Processing -->
<route id="processPolluxData-Route" startupOrder="1">
<from uri="{{POLLUX_INPUT_PATH}}?noop=true"/>
<unmarshal ref="csvBindyDataformatForPolluxData"/>
<camel:bean ref="polluxDataController" method="processPolluxData"/>
<camel:log message="Line:${body}" loggingLevel="INFO"/>
<to uri="sqlComponent:{{sql.insertPolluxData}}?batch=true" />
</route>
<!-- Route:2 for RSI Data Processing -->
<route id="processRsiData-Route" startupOrder="2">
<from uri="{{RSI_INPUT_PATH}}?noop=true"/>
<unmarshal ref="csvBindyDataformatForRsiData"/>
<camel:bean ref="rsiDataController" method="processRsiData"/>
<camel:log message="Line:${body}" loggingLevel="INFO"/>
<to uri="sqlComponent:{{sql.insertRsiData}}?batch=true" />
</route>
<!-- Route for Global Data Processing -->
<route id="processGlobalData-Route" >
<from uri="sqlComponent:{{sql.selectOrder}}?consumer.useIterator=false" />
<camel:bean ref="globalDataController" method="processGlobalData" />
<marshal>
<csv delimiter=","/>
</marshal>
<log message="${body}" />
<setHeader headerName="camelFilename">
<constant>result.csv</constant>
</setHeader>
<to uri="{{GLOBAL_OUTPUT_PATH}}?fileExist=Append" />
</route>
My SQL statement is:
sql.selectOrder=select STID,CLLTR,SOURCE from GSI_DEVL.POLLUX_DATA
The bean class for processing the result set is:
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

import org.apache.camel.Exchange;

public class GlobalDataController {

    List<Map<String, Object>> globalStationProccessedList = new ArrayList<Map<String, Object>>();
    List<Map<String, Object>> globalStationMap = new ArrayList<Map<String, Object>>();

    @SuppressWarnings("unchecked")
    public List<Map<String, Object>> processGlobalData(Exchange exchange) throws Exception {
        // System.out.println("Processing " + exchange.getIn().getBody());
        globalStationMap = (List<Map<String, Object>>) exchange.getIn().getBody();
        globalStationProccessedList.addAll(globalStationMap);
        return globalStationProccessedList;
    }
}
The problem now is that Route 1's data is transferred to the CSV file with the exact number of rows that are in the database, but none of Route 2's data is appended to the CSV file.
I am using Camel 2.16.
If the problem is only the large number of files (not the file format), then here is the solution:
<route id="processOrder-route">
<from uri="sqlComponent:{{sql.selectOrder}}"/>
<camel:bean ref="controllerformarshalling" method="processGlobalData" />
<marshal >
<csv delimiter="," useMaps="true" > </csv>
</marshal>
<log message="${body}"/>
<setHeader headerName="CamelFileName">
<constant>result.csv</constant>
</setHeader>
<to uri="file://D://cameltest//output&fileExist=Append" />
</route>
For the next poll you can set another file name, based on the current time, maybe.
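For reference, a sketch of that idea in the Java DSL, using the simple language's date function to stamp each poll's output file (the endpoint URIs come from the question; the file name pattern is an assumption, and GLOBAL_OUTPUT_PATH is assumed to resolve to a file: endpoint pointing at a directory):

import org.apache.camel.Exchange;
import org.apache.camel.builder.RouteBuilder;

public class GlobalDataRouteSketch extends RouteBuilder {
    @Override
    public void configure() {
        from("sqlComponent:{{sql.selectOrder}}?consumer.useIterator=false")
            .to("bean:globalDataController?method=processGlobalData")
            .marshal().csv()
            // a new file per poll, stamped with the poll time, so Append is not needed
            .setHeader(Exchange.FILE_NAME, simple("result-${date:now:yyyyMMdd-HHmmss}.csv"))
            .to("{{GLOBAL_OUTPUT_PATH}}");
    }
}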
Have you tried setting this parameter on the sql component?
consumer.useIterator (boolean, default: true) — Camel 2.11, SQL consumer only: if true, each row returned when polling will be processed individually. If false, the entire java.util.List of data is set as the IN body.
Try setting this to false. That way you should get the entire SQL result set as a single list, and can then write the entire list to a single file.
I want to send a message in the form of a CSV file to a webservice endpoint, split the message to process each CSV row separately, aggregate the checked exceptions, and send a response with an exception summary.
My route is:
<route>
    <from uri="cxf:bean:MyEndpoint" />
    <split strategyRef="myAggregateStrategy">
        <tokenize token="\n" />
        <unmarshal>
            <csv delimiter=";" />
        </unmarshal>
        <process ref="MyProcessor" />
        <to uri="bean:myWebservice?method=process" />
    </split>
</route>
How can I do that? The response must be sent back to the webservice caller.
How about using <doTry> and <doCatch> within your logic? You could have whatever logic you want inside the catch, e.g. a bean to handle/aggregate/summarize the exceptions.
Something roughly like this:
<route>
    <from uri="cxf:bean:MyEndpoint" />
    <split strategyRef="myAggregateStrategy">
        <tokenize token="\n" />
        <doTry>
            <unmarshal>
                <csv delimiter=";" />
            </unmarshal>
            <process ref="MyProcessor" />
            <to uri="bean:myWebservice?method=process" />
            <doCatch>
                <exception>java.lang.Exception</exception>
                <handled>
                    <constant>true</constant>
                </handled>
                <bean ref="yourExceptionHandlingBean" method="aggregateException"/>
            </doCatch>
        </doTry>
    </split>
</route>
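For completeness, one possible shape for the yourExceptionHandlingBean referenced above (purely illustrative; the class name and the csvErrors property are assumptions): it reads the caught exception from the exchange and stashes its message where a later step, or the split's aggregation strategy, could pick it up to build the summary.

import java.util.ArrayList;
import java.util.List;

import org.apache.camel.Exchange;

public class CsvExceptionHandler {

    @SuppressWarnings("unchecked")
    public void aggregateException(Exchange exchange) {
        // the exception handled by doCatch is available as an exchange property
        Exception caught = exchange.getProperty(Exchange.EXCEPTION_CAUGHT, Exception.class);
        if (caught == null) {
            return;
        }
        List<String> errors = exchange.getProperty("csvErrors", List.class);
        if (errors == null) {
            errors = new ArrayList<>();
            exchange.setProperty("csvErrors", errors);
        }
        errors.add(caught.getMessage());
    }
}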
I finally found a solution to my problem. I used an aggregation strategy: in case of an exception, I collect it in the list in the old exchange's body and remove the exception from the new exchange:
import java.util.ArrayList;
import java.util.List;

import org.apache.camel.Exchange;
import org.apache.camel.processor.aggregate.AggregationStrategy;

public class ExceptionAggregationStrategy implements AggregationStrategy {

    @Override
    public Exchange aggregate(Exchange oldExchange, Exchange newExchange) {
        Object body = newExchange.getIn().getBody(String.class);
        Exception exception = newExchange.getException();
        if (exception != null) {
            newExchange.setException(null); // remove the exception so the split keeps going
            body = exception;
        }
        if (oldExchange == null) {
            List<Object> list = new ArrayList<>();
            list.add(body);
            newExchange.getIn().setBody(list);
            return newExchange;
        }
        @SuppressWarnings("unchecked")
        List<Object> list = oldExchange.getIn().getBody(List.class);
        list.add(body);
        return oldExchange;
    }
}
The list elements are of type java.lang.Object because I collect the original message too (in case there is no exception).
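Building on that, a sketch of a bean that could be placed after the split to turn the aggregated list (original bodies plus collected exceptions) into a reply body for the CXF endpoint. The bean and its wiring are assumptions for illustration, not part of the original solution:

import java.util.List;

public class SummaryBuilder {

    // turns the aggregated list of row results (String bodies or Exceptions) into a summary
    public String summarize(List<Object> results) {
        StringBuilder details = new StringBuilder();
        long failures = 0;
        for (Object result : results) {
            if (result instanceof Exception) {
                failures++;
                details.append("\n- ").append(((Exception) result).getMessage());
            }
        }
        return (results.size() - failures) + " rows processed successfully, "
                + failures + " failed" + details;
    }
}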