I have a route that multicasts to two endpoints. If an exception occurs when calling one of them, I'm not able to retain the aggregation result: inside the processor of the onException, the Map I created during aggregation is not there. I'm using Camel 2.25.
onException(RuntimeException.class)
    .process(new Processor() {
        @Override
        public void process(Exchange exchange) throws Exception {
            Map<String, String> results = exchange.getProperty(SimpleAggregationStrategy.RESULTS, Map.class);
            System.out.println(results);
        }
    });
from(DIRECT_FIRST)
    .log("First route")
    .setBody(constant("FIRST TEXT"));

from(DIRECT_SECOND)
    .log("Second route")
    .setBody(constant("SECOND TEXT"))
    .throwException(new RuntimeException("Dummy Exception"));
from(DIRECT_ENTRY)
    .multicast().stopOnException().aggregationStrategy(new AggregationStrategy() {
        public static final String RESULTS = "RESULTS";

        @Override
        public Exchange aggregate(Exchange oldExchange, Exchange newExchange) {
            System.out.println("INSIDE SimpleAggregationStrategy !!!!!!!!!!!!!!!!");
            Map<String, String> results;
            if (oldExchange != null) {
                results = oldExchange.getProperty(RESULTS, Map.class);
            } else {
                results = new HashMap<>();
            }
            results.put(newExchange.getIn().getBody(String.class), newExchange.getIn().getBody(String.class));
            return newExchange;
        }
    })
    .to(DIRECT_FIRST, DIRECT_SECOND);
I assume that the aggregator aborts processing due to stopOnException() and therefore does not return the (incomplete) result.
You could try to put the aggregation strategy into a context-managed bean and make the Map an instance variable that is accessible through a getter method.
In case of an exception you could then get the incomplete Map from the bean, though I don't know whether it still holds the data or is emptied when processing is aborted.
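A minimal plain-Java sketch of that suggestion (the class and method names `ResultHolderBean`, `record`, and `getResults` are illustrative, not from the original post): the bean holds the Map as an instance variable, the aggregation strategy would call `record(...)`, and the onException processor would read `getResults()`.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of a context-managed bean whose Map is an instance
// variable, readable through a getter even after the multicast aborts.
public class ResultHolderBean {
    private final Map<String, String> results = new HashMap<>();

    // Called from the aggregation strategy for each reply.
    public void record(String body) {
        results.put(body, body);
    }

    // Called from the onException processor to read partial results.
    public Map<String, String> getResults() {
        return results;
    }
}
```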
The solution is simpler than I thought: we only need to create an exchange property before the multicast step. That property can then store the aggregation results even when an exception occurs during the multicast.
onException(RuntimeException.class)
    .process(new Processor() {
        @Override
        public void process(Exchange exchange) throws Exception {
            Map<String, String> results = exchange.getProperty("RESULTS", Map.class);
            System.out.println(results);
        }
    });
from(DIRECT_FIRST)
    .log("First route")
    .setBody(constant("FIRST TEXT"));

from(DIRECT_SECOND)
    .log("Second route")
    .setBody(constant("SECOND TEXT"))
    .throwException(new RuntimeException("Dummy Exception"));
from(DIRECT_ENTRY)
    .process(exch -> {
        exch.setProperty("RESULTS", new HashMap<String, String>());
    })
    .multicast().stopOnException().aggregationStrategy(new AggregationStrategy() {
        public static final String RESULTS = "RESULTS";

        @Override
        public Exchange aggregate(Exchange oldExchange, Exchange newExchange) {
            // The map was stored as an exchange property before the multicast,
            // so read it from the exchange instead of creating a fresh one
            // (a new HashMap here would lose the first aggregated result).
            Map<String, String> results;
            if (oldExchange != null) {
                results = oldExchange.getProperty(RESULTS, Map.class);
            } else {
                results = newExchange.getProperty(RESULTS, Map.class);
            }
            results.put(newExchange.getIn().getBody(String.class), newExchange.getIn().getBody(String.class));
            return newExchange;
        }
    })
    .to(DIRECT_FIRST, DIRECT_SECOND);
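To see why this works, here is a plain-Java model (not Camel API code) of the assumption behind the fix: multicast gives each copy a shallow copy of the exchange's property map, so every copy holds a reference to the same Map instance created before the multicast, and entries added by a copy remain visible on the original exchange even if that copy's route later throws.

```java
import java.util.HashMap;
import java.util.Map;

public class SharedPropertyDemo {
    public static Map<String, String> run() {
        // Properties of the "parent" exchange, set before the multicast.
        Map<String, Object> parentProps = new HashMap<>();
        Map<String, String> results = new HashMap<>();
        parentProps.put("RESULTS", results);

        // A multicast "copy" gets a shallow copy of the property map,
        // so it still points at the same RESULTS map instance.
        Map<String, Object> copyProps = new HashMap<>(parentProps);
        @SuppressWarnings("unchecked")
        Map<String, String> copyResults = (Map<String, String>) copyProps.get("RESULTS");
        copyResults.put("FIRST TEXT", "FIRST TEXT");

        // Even if the copy's route later throws, the parent exchange still
        // sees the entries added so far.
        return results;
    }

    public static void main(String[] args) {
        System.out.println(run());
    }
}
```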
Can someone please help me write the unit tests for this class?
==============================================================================================
@Component("edi820AdapterRouteBuilder")
public class EDI820AdapterRouteBuilder extends BaseRouteBuilder {

    private static final Logger LOGGER = LoggerFactory.getLogger(EDI820AdapterRouteBuilder.class);
    private static final String DIRECT_Q_RECEIVER = "direct:queueReceiver";
    private static final String DIRECT_PROCESS_820 = "direct:process820";
    private static final String DIRECT_TO_HIX = "direct:toHIX";
    private static final String DIRECT_TO_CMS = "direct:toCMS";
    private static final String MDC_TRANSACTIONID = "transactionId";
    private static final String REQUEST_ID = "RequestID";
    private static final String DIRECT_PUT_MDC = "direct:putMDC";
    private static final String DIRECT_REMOVE_MDC = "direct:removeMDC";

    @Override
    public void configure() throws Exception {
        super.configure();
        LOGGER.debug("configure called.");
        String queueName = appConfig.getInboundQueueName();
        LOGGER.debug("inboundQueueName: {}", queueName);
        String toHIXendpoint = appConfig.getEndpointWithOptions("toHIXendpoint");
        LOGGER.debug("toHIXendpoint: {}", toHIXendpoint);
        String toCMSendpoint = appConfig.getEndpointWithOptions("toCMSendpoint");
        LOGGER.debug("toCMSendpoint: {}", toCMSendpoint);
        String routeDelay = appConfig.getRouteDelay();
        LOGGER.debug("routeDelay: {}", routeDelay);

        from("timer://runOnce?repeatCount=1&delay=" + routeDelay)
            .to("bean:edi820AdapterRouteBuilder?method=addRoute")
            .end();

        from(DIRECT_Q_RECEIVER)
            .to(PERSIST_EDI820_XDATA)
            .to(EDI820_REQUEST_TRANSFORMER)
            .to(DIRECT_PROCESS_820)
            .log(LoggingLevel.INFO, LOGGER, "Executed " + DIRECT_Q_RECEIVER)
            .end();

        from(DIRECT_PROCESS_820)
            .choice()
                .when(header(TRANSACTION_SOURCE_STR).isEqualTo(HIX_SOURCE_SYSTEM))
                    .log(LoggingLevel.INFO, LOGGER, "Calling route for: " + HIX_SOURCE_SYSTEM)
                    .to(DIRECT_TO_HIX)
                .when(header(TRANSACTION_SOURCE_STR).isEqualTo(CMS_SOURCE_SYSTEM))
                    .log(LoggingLevel.INFO, LOGGER, "Calling route for: " + CMS_SOURCE_SYSTEM)
                    .to(DIRECT_TO_CMS)
                .otherwise()
                    .log(LoggingLevel.INFO, LOGGER, "Invalid " + TRANSACTION_SOURCE_STR + " ${header[" + TRANSACTION_SOURCE_STR + "]}")
            .end();

        from(DIRECT_TO_HIX).routeId("edi820adapter-to-hix-producer-route")
            .log(LoggingLevel.INFO, LOGGER, "Executing edi820adapter-to-hix-producer-route")
            .marshal().json(JsonLibrary.Jackson) // convert body to a JSON string
            .to(toHIXendpoint)
            .log(LoggingLevel.DEBUG, LOGGER, "json body sent to edi820-hix: ${body}")
            .log(LoggingLevel.INFO, LOGGER, "Executed edi820adapter-to-hix-producer-route")
            .end();

        from(DIRECT_TO_CMS).routeId("edi820adapter-to-cms-producer-route")
            .log(LoggingLevel.INFO, LOGGER, "Executing edi820adapter-to-cms-producer-route")
            .marshal().json(JsonLibrary.Jackson) // convert body to a JSON string
            .to(toCMSendpoint)
            .log(LoggingLevel.DEBUG, LOGGER, "json body sent to edi820-cms: ${body}")
            .log(LoggingLevel.INFO, LOGGER, "Executed edi820adapter-to-cms-producer-route")
            .end();

        from(DIRECT_PUT_MDC).process(new Processor() {
            public void process(Exchange exchange) throws Exception {
                if (exchange.getIn().getHeader(REQUEST_ID) != null) {
                    MDC.put(MDC_TRANSACTIONID, (String) exchange.getIn().getHeader(REQUEST_ID));
                }
            }
        }).end();

        from(DIRECT_REMOVE_MDC).process(new Processor() {
            public void process(Exchange exchange) throws Exception {
                MDC.remove(MDC_TRANSACTIONID);
            }
        }).end();
    }

    public void addRoute(Exchange exchange) {
        try {
            CamelContext context = exchange.getContext();
            ModelCamelContext modelContext = context.adapt(ModelCamelContext.class);
            modelContext.addRouteDefinition(buildRouteDefinition());
        } catch (Exception e) {
            LOGGER.error("Exception in addRoute: {}", e.getMessage());
            LOGGER.error(ExceptionUtils.getFullStackTrace(e));
        }
    }

    private RouteDefinition buildRouteDefinition() {
        String queueName = appConfig.getInboundQueueName();
        RouteDefinition routeDefinition = new RouteDefinition();
        routeDefinition
            .from("jms:queue:" + queueName).routeId("edi820-adapter-jms-consumer-route")
            .to(DIRECT_PUT_MDC)
            .log(LoggingLevel.INFO, LOGGER, "Executing edi820-adapter-jms-consumer-route")
            .log(LoggingLevel.DEBUG, LOGGER, "Received Message from queue: " + queueName)
            .to(DIRECT_Q_RECEIVER)
            .log(LoggingLevel.INFO, LOGGER, "Executed edi820-adapter-jms-consumer-route")
            .to(DIRECT_REMOVE_MDC)
            .end();
        return routeDefinition;
    }
}
==============================================================================
Please let me know if any additional information is needed. Thanks for the help in advance.
It is not possible to write complete test cases referencing those classes, so I'm providing snippet code for understanding purposes.
Sample Example for Understanding:
Below is a snippet of code for the route builder class:
public class CheckRouteBuilder extends RouteBuilder {

    @Override
    public void configure() throws Exception {
        from("timer://runOnce?repeatCount=1&delay=" + routeDelay)
            .to("bean:edi820AdapterRouteBuilder?method=addRoute")
            .end();
    }
}
The test case for the above class follows:
public class CheckRouteBuilderTest extends CamelTestSupport {

    @Test
    public void testConfigure() throws Exception {
        context.addRoutes(new CheckRouteBuilder());
        RouteDefinition route = context.getRouteDefinitions().get(0);
        AdviceWith.adviceWith(route, context, new AdviceWithRouteBuilder() {
            @Override
            public void configure() throws Exception {
                // Replace timer://runOnce?repeatCount=1&delay=... with
                // direct:TestEndpoint: the timer component cannot be triggered
                // on demand during the test, so a direct endpoint is used instead.
                replaceFromWith("direct:TestEndpoint");
                weaveAddLast().to("mock:endPoint");
            }
        });
        MockEndpoint mockEndpoint = getMockEndpoint("mock:endPoint");
        mockEndpoint.expectedMessageCount(1);
        template.sendBodyAndHeaders("direct:TestEndpoint", "#", "##");
        mockEndpoint.assertIsSatisfied();
    }
}
# -> The body, e.g. a String, JSONObject, or XML. If no body is required, simply pass null.
## -> The headers, e.g. Content-Type: application/json, which need to be passed in Map<String, Object> format.
I wrote my Apache Flink (1.10) job to update records in real time like this:
public class WalletConsumeRealtimeHandler {

    public static void main(String[] args) throws Exception {
        walletConsumeHandler();
    }

    public static void walletConsumeHandler() throws Exception {
        final StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        FlinkUtil.initMQ();
        FlinkUtil.initEnv(env);
        DataStream<String> dataStreamSource = env.addSource(FlinkUtil.initDatasource("wallet.consume.report.realtime"));
        DataStream<ReportWalletConsumeRecord> consumeRecord =
            dataStreamSource.map(new MapFunction<String, ReportWalletConsumeRecord>() {
                @Override
                public ReportWalletConsumeRecord map(String value) throws Exception {
                    ObjectMapper mapper = new ObjectMapper();
                    ReportWalletConsumeRecord consumeRecord = mapper.readValue(value, ReportWalletConsumeRecord.class);
                    consumeRecord.setMergedRecordCount(1);
                    return consumeRecord;
                }
            }).assignTimestampsAndWatermarks(new BoundedOutOfOrdernessGenerator());
        consumeRecord.keyBy(
                new KeySelector<ReportWalletConsumeRecord, Tuple2<String, Long>>() {
                    @Override
                    public Tuple2<String, Long> getKey(ReportWalletConsumeRecord value) throws Exception {
                        return Tuple2.of(value.getConsumeItem(), value.getTenantId());
                    }
                })
            .timeWindow(Time.seconds(5))
            .reduce(new SumField(), new CollectionWindow())
            .addSink(new SinkFunction<List<ReportWalletConsumeRecord>>() {
                @Override
                public void invoke(List<ReportWalletConsumeRecord> reportPumps, Context context) throws Exception {
                    WalletConsumeRealtimeHandler.invoke(reportPumps);
                }
            });
        env.execute(WalletConsumeRealtimeHandler.class.getName());
    }

    private static class CollectionWindow extends ProcessWindowFunction<ReportWalletConsumeRecord,
            List<ReportWalletConsumeRecord>,
            Tuple2<String, Long>,
            TimeWindow> {
        public void process(Tuple2<String, Long> key,
                            Context context,
                            Iterable<ReportWalletConsumeRecord> minReadings,
                            Collector<List<ReportWalletConsumeRecord>> out) throws Exception {
            ArrayList<ReportWalletConsumeRecord> employees = Lists.newArrayList(minReadings);
            if (employees.size() > 0) {
                out.collect(employees);
            }
        }
    }

    private static class SumField implements ReduceFunction<ReportWalletConsumeRecord> {
        public ReportWalletConsumeRecord reduce(ReportWalletConsumeRecord d1, ReportWalletConsumeRecord d2) {
            Integer merged1 = d1.getMergedRecordCount() == null ? 1 : d1.getMergedRecordCount();
            Integer merged2 = d2.getMergedRecordCount() == null ? 1 : d2.getMergedRecordCount();
            d1.setMergedRecordCount(merged1 + merged2);
            d1.setConsumeNum(d1.getConsumeNum() + d2.getConsumeNum());
            return d1;
        }
    }

    public static void invoke(List<ReportWalletConsumeRecord> records) {
        WalletConsumeService service = FlinkUtil.InitRetrofit().create(WalletConsumeService.class);
        Call<ResponseBody> call = service.saveRecords(records);
        call.enqueue(new Callback<ResponseBody>() {
            @Override
            public void onResponse(Call<ResponseBody> call, Response<ResponseBody> response) {
            }

            @Override
            public void onFailure(Call<ResponseBody> call, Throwable t) {
                t.printStackTrace();
            }
        });
    }
}
I have now found that the Flink task only triggers the sink after receiving at least two records. Does the reduce action require this?
You need two records to trigger the window. Flink only knows when to close a window (and fire subsequent calculation) when it receives a watermark that is larger than the configured value of the end of the window.
In your case, you use BoundedOutOfOrdernessGenerator, which updates the watermark according to the incoming records. So it generates a second watermark only after having seen the second record.
You can use a different watermark generator; the Flink troubleshooting training, for example, includes a watermark generator that also emits watermarks on timeout.
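The idea behind such a timeout-based generator can be modeled in plain Java (an illustrative sketch under stated assumptions, not the Flink API or the actual training code; in Flink 1.10 this would be implemented as an `AssignerWithPeriodicWatermarks`): advance the watermark either from event timestamps or, when the source has been idle for too long, past the last seen event so pending windows can fire even with a single record.

```java
// Models a watermark generator that advances on event time, but also pushes
// the watermark forward after a processing-time idle timeout.
public class TimeoutWatermark {
    private final long maxOutOfOrdernessMs;
    private final long idleTimeoutMs;
    private long maxTimestampSeen = Long.MIN_VALUE;
    private long lastEventWallClockMs;

    public TimeoutWatermark(long maxOutOfOrdernessMs, long idleTimeoutMs) {
        this.maxOutOfOrdernessMs = maxOutOfOrdernessMs;
        this.idleTimeoutMs = idleTimeoutMs;
    }

    // Called for each incoming record.
    public void onEvent(long eventTimestampMs, long nowMs) {
        maxTimestampSeen = Math.max(maxTimestampSeen, eventTimestampMs);
        lastEventWallClockMs = nowMs;
    }

    // Called periodically; returns the current watermark.
    public long currentWatermark(long nowMs) {
        if (maxTimestampSeen == Long.MIN_VALUE) {
            return Long.MIN_VALUE; // no events yet
        }
        long eventTimeWatermark = maxTimestampSeen - maxOutOfOrdernessMs;
        if (nowMs - lastEventWallClockMs > idleTimeoutMs) {
            // Source is idle: push the watermark past the last event so
            // pending windows containing it can fire.
            return Math.max(eventTimeWatermark, maxTimestampSeen + 1);
        }
        return eventTimeWatermark;
    }
}
```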
I am trying to test a Camel route (polling messages from an SQS queue) containing
.bean("messageParserProcessor")
where messageParserProcessor is a Processor.
The test:
public class SomeTest extends CamelTestSupport {

    private final String queueName = ...;
    private final String producerTemplateUri = "aws-sqs://" + queueName + ...;

    private static final String MESSAGE_PARSER_PROCESSOR_MOCK_ENDPOINT = "mock:messageParserProcessor";

    @EndpointInject(uri = MESSAGE_PARSER_PROCESSOR_MOCK_ENDPOINT)
    protected MockEndpoint messageParserProcessor;

    @Override
    public boolean isUseAdviceWith() {
        return true;
    }

    @Before
    public void setUpContext() throws Exception {
        context.getRouteDefinitions().get(0).adviceWith(context, new AdviceWithRouteBuilder() {
            @Override
            public void configure() throws Exception {
                interceptSendToEndpoint("bean:messageParserProcessor")
                    .skipSendToOriginalEndpoint()
                    .process(MESSAGE_PARSER_PROCESSOR_MOCK_ENDPOINT);
            }
        });
    }

    @Test
    public void testParser() throws Exception {
        context.start();
        String expectedBody = "test";
        messageParserProcessor.expectedBodiesReceived(expectedBody);
        ProducerTemplate template = context.createProducerTemplate();
        template.sendBody(producerTemplateUri, expectedBody);
        messageParserProcessor.assertIsSatisfied();
        context.stop();
    }
}
When I run the test I get this error:
org.apache.camel.FailedToCreateRouteException:
Failed to create route route1 at:
>>> InterceptSendToEndpoint[bean:messageParserProcessor -> [process[ref:mock:messageParserProcessor]]] <<< in route: Route(route1)[[From[aws-sqs://xxx...
because of No bean could be found in the registry for: mock:messageParserProcessor of type: org.apache.camel.Processor
I get the same error if I replace interceptSendToEndpoint(...) with mockEndpointsAndSkip("bean:messageParserProcessor").
The test can be executed (but obviously doesn't pass) when I don't use a mock:
interceptSendToEndpoint("bean:messageParserProcessor")
    .skipSendToOriginalEndpoint()
    .process(new Processor() {
        @Override
        public void process(Exchange exchange) throws Exception {}
    });
So the problem is that the mock is not found. What is wrong with the way I create it?
So I found a workaround to retrieve mocks from the registry:
interceptSendToEndpoint("bean:messageParserProcessor")
    .skipSendToOriginalEndpoint()
    .bean(getMockEndpoint(MESSAGE_PARSER_PROCESSOR_MOCK_ENDPOINT));
// Instead of
// .process(MESSAGE_PARSER_PROCESSOR_MOCK_ENDPOINT);
But I still don't understand why using .process("mock:someBean") doesn't work...
All modifications made by onRedelivery's processor are reset on the next redelivery. Is there any way to make the modifications permanent?
Properties are kept across each redelivery. You can use them to store information that you want to use afterwards.
Code :
public class OnRedeliveryTest extends CamelTestSupport {

    public static final String PROP_TEST = "PROP_TEST";

    @Produce(uri = "direct:start")
    ProducerTemplate producerTemplate;

    @Override
    public RouteBuilder createRouteBuilder() {
        return new RouteBuilder() {
            @Override
            public void configure() throws Exception {
                onException(Exception.class)
                    .onRedelivery(new Processor() {
                        @Override
                        public void process(Exchange exchange) throws Exception {
                            final String current = (String) exchange.getProperty(PROP_TEST);
                            exchange.setProperty(PROP_TEST, "property" + current);
                            System.out.println((String) exchange.getProperty(PROP_TEST));
                        }
                    })
                    .maximumRedeliveries(3).redeliveryDelay(0)
                    .handled(true)
                    .end();

                from("direct:start")
                    .process(new Processor() {
                        @Override
                        public void process(Exchange exchange) throws Exception {
                        }
                    })
                    .throwException(new Exception("BOOM"))
                    .to("mock:end");
            }
        };
    }

    @Test
    public void smokeTest() throws Exception {
        producerTemplate.sendBody("1");
    }
}
The output will be:
propertynull
propertypropertynull
propertypropertypropertynull
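That growing string can be modeled in plain Java (an illustrative sketch): the property survives each redelivery, so each attempt prepends "property" to the previous value, and the first concatenation with the unset (null) property yields "propertynull".

```java
public class RedeliveryPropertyModel {
    // Returns the property value after the given number of redelivery attempts.
    public static String afterAttempts(int attempts) {
        String current = null; // property not set before the first redelivery
        for (int i = 0; i < attempts; i++) {
            current = "property" + current; // "property" + null -> "propertynull"
        }
        return current;
    }

    public static void main(String[] args) {
        for (int n = 1; n <= 3; n++) {
            System.out.println(afterAttempts(n));
        }
    }
}
```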
I'm trying to use the RecipientList pattern in Camel, but I think I may be missing the point. The following code displays only one entry on the screen:
@Override
protected RouteBuilder createRouteBuilder() {
    return new RouteBuilder() {
        public void configure() {
            from("direct:start").recipientList(bean(MyBean.class, "buildEndpoint"))
                .streaming()
                .process(new Processor() {
                    @Override
                    public void process(Exchange exchange) throws Exception {
                        System.out.println(exchange.getExchangeId());
                    }
                });
        }
    };
}

public static class MyBean {
    public static String[] buildEndpoint() {
        return new String[] { "exec:ls?args=-la", "exec:find?args=." };
    }
}
I also tried just returning a comma-delimited string from the buildEndpoint() method and using tokenize(",") in the expression of the recipientList() component definition but I still got the same result. What am I missing?
That is expected: the recipient list sends a copy of the same message to each of the X recipients. The processor you added afterwards runs after the recipient list is done, and is therefore executed only once.
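That behavior can be modeled in plain Java (an illustrative sketch, not Camel code): every recipient receives its own copy of the message, but the step placed after the recipient list runs once, on the single original exchange.

```java
public class RecipientListModel {
    public int recipientDeliveries = 0;
    public int afterStepRuns = 0;

    // Models from(...).recipientList(...).process(...): deliver a copy of the
    // message to each recipient, then run the trailing step exactly once.
    public void route(String message, int recipientCount) {
        for (int i = 0; i < recipientCount; i++) {
            deliver(message); // a copy goes to every recipient
        }
        afterStepRuns++; // the processor after the recipient list runs once
    }

    private void deliver(String messageCopy) {
        recipientDeliveries++;
    }
}
```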