Camel 3.17.0 stop an outer splitter from an inner splitter on exception - apache-camel

I have the following Apache Camel routes with an outer and an inner splitter. When an error happens in the inner splitter, the inner splitter stops, but the outer splitter does not. Why?
While debugging I can also see that in both the inner and the outer aggregator's onCompletion the exchange carries the error (exchange.getException()), yet the outer splitter does not stop and runs to the end. My doTry ... doCatch block only catches the exception after the outer splitter has finished. How can I make an error in either splitter stop processing of the current exchange, so that the route simply waits for the next one?
from ("direct:outer-split")
.routeId("outer-split")
.doTry()
.split(body(), new OuterAggregator())
.stopOnException()
.shareUnitOfWork()
.streaming().process(s -> s.getMessage())
.to("direct:inner-split")
.end()
.to("direct:success-outer")
.endDoTry()
.doCatch(Exception.class)
.process(s -> System.out.println("124334"))
.stop()
.endDoTry();
from("direct:inner-split")
.routeId("inner-split")
.onException(TestException.class)
.onRedelivery(this::registerError)
.useExponentialBackOff()
.backOffMultiplier(1.5)
.maximumRedeliveryDelay(40000)
.maximumRedeliveries(-1)
.handled(true)
.stop()
.end()
.split(body(), new InnerAggregator())
.stopOnException()
.shareUnitOfWork()
.to("direct:doProcessing")
.end()
.to("direct:endSuccess");
from("direct:doProcessing")
.routeId("doProcessing")
.errorHandler(noErrorHandler())
.process(this::performAction);
public void performAction(Exchange exchange){
throw new RuntimeException();
}
The content of both of my aggregators is the following:
@Override
public Exchange aggregate(Exchange oldExchange, Exchange newExchange) {
    if (oldExchange == null) {
        List<MyDto> dtos = new ArrayList<>();
        MyDto myDto = newExchange.getIn().getBody(MyDto.class);
        dtos.add(myDto);
        newExchange.getIn().setBody(dtos);
        return newExchange;
    }
    List<MyDto> dtos = castList(MyDto.class, oldExchange.getIn().getBody(List.class));
    MyDto dto = newExchange.getIn().getBody(MyDto.class);
    dtos.add(dto);
    oldExchange.getIn().setBody(dtos);
    if (newExchange.getException() != null) {
        oldExchange.setException(newExchange.getException());
    }
    return oldExchange;
}

@Override
public void onCompletion(Exchange exchange) {
    LOG.info("test");
    /* if (exchange != null && exchange.getException() != null) {
        exchange.setProperty(Exchange.EXCEPTION_CAUGHT, exchange.getException());
    } */
}
Update
I came to the conclusion that it works if I remove streaming() from the outer splitter. Sadly, that is not an option for me.
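For reference, the variant that behaved as expected while debugging is the same outer route with streaming() removed, everything else unchanged:

from("direct:outer-split")
    .routeId("outer-split")
    .doTry()
        .split(body(), new OuterAggregator())
            .stopOnException()
            .shareUnitOfWork()
            // no .streaming(): with eager (non-streaming) splitting,
            // stopOnException stops the outer split as well and the
            // exception reaches the doCatch block immediately
            .process(s -> s.getMessage())
            .to("direct:inner-split")
        .end()
        .to("direct:success-outer")
    .endDoTry()
    .doCatch(Exception.class)
        .process(s -> System.out.println("124334"))
        .stop()
    .endDoTry();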

Related

How to handle exception while parsing JSON in Flink

I am reading data from Kafka using Flink 1.4.2 and parsing it to ObjectNode using JSONDeserializationSchema. If an incoming record is not valid JSON, my Flink job fails. I would like to skip the broken record instead of failing the job.
FlinkKafkaConsumer010<ObjectNode> kafkaConsumer =
        new FlinkKafkaConsumer010<>(TOPIC, new JSONDeserializationSchema(), consumerProperties);
DataStream<ObjectNode> messageStream = env.addSource(kafkaConsumer);
messageStream.print();
I am getting the following exception if the data in Kafka is not valid JSON:
Job execution switched to status FAILING.
org.apache.flink.shaded.jackson2.com.fasterxml.jackson.core.JsonParseException: Unrecognized token 'This': was expecting ('true', 'false' or 'null')
 at [Source: [B@4f522623; line: 1, column: 6]
Job execution switched to status FAILED.
Exception in thread "main" org.apache.flink.runtime.client.JobExecutionException: Job execution failed.
The easiest solution is to implement your own DeserializationSchema and wrap JSONDeserializationSchema. You can then catch the exception and either ignore it or perform a custom action.
As suggested by @twalthr, I implemented my own DeserializationSchema by copying JSONDeserializationSchema and adding exception handling:
import org.apache.flink.api.common.serialization.AbstractDeserializationSchema;
import org.apache.flink.shaded.jackson2.com.fasterxml.jackson.databind.ObjectMapper;
import org.apache.flink.shaded.jackson2.com.fasterxml.jackson.databind.node.ObjectNode;

import java.io.IOException;

public class CustomJSONDeserializationSchema extends AbstractDeserializationSchema<ObjectNode> {
    private ObjectMapper mapper;

    @Override
    public ObjectNode deserialize(byte[] message) throws IOException {
        if (mapper == null) {
            mapper = new ObjectMapper();
        }
        ObjectNode objectNode;
        try {
            objectNode = mapper.readValue(message, ObjectNode.class);
        } catch (Exception e) {
            ObjectMapper errorMapper = new ObjectMapper();
            ObjectNode errorObjectNode = errorMapper.createObjectNode();
            errorObjectNode.put("jsonParseError", new String(message));
            objectNode = errorObjectNode;
        }
        return objectNode;
    }

    @Override
    public boolean isEndOfStream(ObjectNode nextElement) {
        return false;
    }
}
In my streaming job:
messageStream
    .filter((event) -> {
        if (event.has("jsonParseError")) {
            LOG.warn("JsonParseException was handled: " + event.get("jsonParseError").asText());
            return false;
        }
        return true;
    }).print();
Flink has since improved null record handling for FlinkKafkaConsumer.
There are two possible design choices when the DeserializationSchema encounters a corrupted message: it can either throw an IOException, which causes the pipeline to be restarted, or it can return null, in which case the Flink Kafka consumer silently skips the corrupted message.
For more details, you can see this link.
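A minimal sketch of that null-returning variant, using the same imports as the class above (the class name is illustrative, not from the original answer):

public class SkippingJSONDeserializationSchema extends AbstractDeserializationSchema<ObjectNode> {
    private transient ObjectMapper mapper;

    @Override
    public ObjectNode deserialize(byte[] message) throws IOException {
        if (mapper == null) {
            mapper = new ObjectMapper();
        }
        try {
            return mapper.readValue(message, ObjectNode.class);
        } catch (Exception e) {
            // Returning null makes the Flink Kafka consumer skip this record
            // instead of failing the job.
            return null;
        }
    }
}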

Hazelcast-- Apache Camel integration

I am new to Hazelcast and Camel.
While creating a MapLoader using Camel, I call a Camel route within the load method. Although the container shows the "dataSyncLoad" route is present, loading data on startup of the application still gives the error listed below.
Hazelcast MapLoader

@Produce(uri = "direct:dataSyncLoad")
private ProducerTemplate dataSyncLoad;

@Override
public synchronized DataSyncServiceRequest load(CellDataSyncKey key) {
    DataSyncServiceRequest dataSynchTemplateVO = dataSyncLoad.requestBody("direct:dataSyncLoad", key, DataSyncServiceRequest.class);
    if (null != dataSynchTemplateVO) {
        LOGGER.info("Cache Loaded: {}", key);
    } else {
        LOGGER.info("No data found for the key : {}", key);
    }
    return dataSynchTemplateVO;
}
Map Configuration
<hz:map name="getsToolCcaDatasyncMap" backup-count="0" max-size="100" eviction-percentage="25" eviction-policy="LRU"
read-backup-data="0">
<hz:map-store enabled="true" initial-mode="EAGER" write-delay-seconds="0" implementation="getsToolCcaDatasyncMapLoader"
/>
</hz:map>
Could not load keys from map store
org.apache.camel.CamelExecutionException: Exception occurred during execution on the exchange: Exchange[Message: CellDataSyncKey [locoid=20695, dataTemplate=3, deviceName=CCA]]
    at org.apache.camel.util.ObjectHelper.wrapCamelExecutionException(ObjectHelper.java:1379)[142:org.apache.camel.camel-core:2.12.0.redhat-610379]
    at org.apache.camel.util.ExchangeHelper.extractResultBody(ExchangeHelper.java:622)[142:org.apache.camel.camel-core:2.12.0.redhat-610379]
    at org.apache.camel.impl.DefaultProducerTemplate.extractResultBody(DefaultProducerTemplate.java:467)[142:org.apache.camel.camel-core:2.12.0.redhat-610379]
    at org.apache.camel.impl.DefaultProducerTemplate.sendBody(DefaultProducerTemplate.java:133)[142:org.apache.camel.camel-core:2.12.0.redhat-610379]
    at org.apache.camel.impl.DefaultProducerTemplate.sendBody(DefaultProducerTemplate.java:149)[142:org.apache.camel.camel-core:2.12.0.redhat-610379]
    at org.apache.camel.impl.DefaultProducerTemplate.requestBody(DefaultProducerTemplate.java:297)[142:org.apache.camel.camel-core:2.12.0.redhat-610379]
    at org.apache.camel.impl.DefaultProducerTemplate.requestBody(DefaultProducerTemplate.java:327)[142:org.apache.camel.camel-core:2.12.0.redhat-610379]
    at com.ge.trans.loader.cell.cache.mapstore.GetsToolCcaDatasyncMap.load(GetsToolCcaDatasyncMap.java:40)[295:cell-cache-service:2.0.0]
    at com.ge.trans.loader.cell.cache.mapstore.GetsToolCcaDatasyncMap.loadAll(GetsToolCcaDatasyncMap.java:57)[295:cell-cache-service:2.0.0]
    at com.hazelcast.map.impl.MapStoreWrapper.loadAll(MapStoreWrapper.java:143)[276:com.hazelcast:3.6.5]
    at com.hazelcast.map.impl.mapstore.AbstractMapDataStore.loadAll(AbstractMapDataStore.java:56)[276:com.hazelcast:3.6.5]
    at com.hazelcast.map.impl.recordstore.BasicRecordStoreLoader.loadAndGet(BasicRecordStoreLoader.java:161)[276:com.hazelcast:3.6.5]
    at com.hazelcast.map.impl.recordstore.BasicRecordStoreLoader.doBatchLoad(BasicRecordStoreLoader.java:134)[276:com.hazelcast:3.6.5]
    at com.hazelcast.map.impl.recordstore.BasicRecordStoreLoader.loadValuesInternal(BasicRecordStoreLoader.java:120)[276:com.hazelcast:3.6.5]
    at com.hazelcast.map.impl.recordstore.BasicRecordStoreLoader.access$100(BasicRecordStoreLoader.java:54)[276:com.hazelcast:3.6.5]
    at com.hazelcast.map.impl.recordstore.BasicRecordStoreLoader$GivenKeysLoaderTask.call(BasicRecordStoreLoader.java:107)[276:com.hazelcast:3.6.5]
    at com.hazelcast.util.executor.CompletableFutureTask.run(CompletableFutureTask.java:67)[276:com.hazelcast:3.6.5]
    at com.hazelcast.util.executor.CachedExecutorServiceDelegate$Worker.run(CachedExecutorServiceDelegate.java:212)[276:com.hazelcast:3.6.5]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)[:1.7.0_67]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)[:1.7.0_67]
    at java.lang.Thread.run(Thread.java:745)[:1.7.0_67]
    at com.hazelcast.util.executor.HazelcastManagedThread.executeRun(HazelcastManagedThread.java:76)[276:com.hazelcast:3.6.5]
    at com.hazelcast.util.executor.HazelcastManagedThread.run(HazelcastManagedThread.java:92)[276:com.hazelcast:3.6.5]
Caused by: org.apache.camel.component.direct.DirectConsumerNotAvailableException: No consumers available on endpoint: Endpoint[direct://dataSyncLoad]. Exchange[Message: CellDataSyncKey [locoid=20695, dataTemplate=3, deviceName=CCA]]
    at org.apache.camel.component.direct.DirectProducer.process(DirectProducer.java:47)[142:org.apache.camel.camel-core:2.12.0.redhat-610379]
    at org.apache.camel.processor.CamelInternalProcessor.process(CamelInternalProcessor.java:191)[142:org.apache.camel.camel-core:2.12.0.redhat-610379]
    at org.apache.camel.processor.UnitOfWorkProducer.process(UnitOfWorkProducer.java:73)[142:org.apache.camel.camel-core:2.12.0.redhat-610379]
    at org.apache.camel.impl.ProducerCache$2.doInProducer(ProducerCache.java:378)[142:org.apache.camel.camel-core:2.12.0.redhat-610379]
    at org.apache.camel.impl.ProducerCache$2.doInProducer(ProducerCache.java:346)[142:org.apache.camel.camel-core:2.12.0.redhat-610379]
    at org.apache.camel.impl.ProducerCache.doInProducer(ProducerCache.java:242)[142:org.apache.camel.camel-core:2.12.0.redhat-610379]
    at org.apache.camel.impl.ProducerCache.sendExchange(ProducerCache.java:346)[142:org.apache.camel.camel-core:2.12.0.redhat-610379]
    at org.apache.camel.impl.ProducerCache.send(ProducerCache.java:201)[142:org.apache.camel.camel-core:2.12.0.redhat-610379]
    at org.apache.camel.impl.DefaultProducerTemplate.send(DefaultProducerTemplate.java:128)[142:org.apache.camel.camel-core:2.12.0.redhat-610379]
    at org.apache.camel.impl.DefaultProducerTemplate.sendBody(DefaultProducerTemplate.java:132)[142:org.apache.camel.camel-core:2.12.0.redhat-610379]
    ... 19 more

Apache Flink JDBC InputFormat throwing java.net.SocketException: Socket closed

I am querying an Oracle database using the Flink DataSet API. For this I have customised the Flink JDBCInputFormat to return java.sql.ResultSet, as I need to perform further operations on the result set using Flink operators.
public static void main(String[] args) throws Exception {
    ExecutionEnvironment environment = ExecutionEnvironment.getExecutionEnvironment();
    environment.setParallelism(1);
    @SuppressWarnings("unchecked")
    DataSource<ResultSet> source = environment.createInput(
            JDBCInputFormat.buildJDBCInputFormat()
                    .setUsername("username")
                    .setPassword("password")
                    .setDrivername("driver_name")
                    .setDBUrl("jdbcUrl")
                    .setQuery("query")
                    .finish(),
            new GenericTypeInfo<ResultSet>(ResultSet.class)
    );
    source.print();
    environment.execute();
}
Following is the customised JDBCInputFormat:
public class JDBCInputFormat extends RichInputFormat<ResultSet, InputSplit> implements ResultTypeQueryable {

    @Override
    public void open(InputSplit inputSplit) throws IOException {
        Class.forName(drivername);
        dbConn = DriverManager.getConnection(dbURL, username, password);
        statement = dbConn.prepareStatement(queryTemplate, resultSetType, resultSetConcurrency);
        resultSet = statement.executeQuery();
    }

    @Override
    public void close() throws IOException {
        if (statement != null) {
            statement.close();
        }
        if (resultSet != null) {
            resultSet.close();
        }
        if (dbConn != null) {
            dbConn.close();
        }
    }

    @Override
    public boolean reachedEnd() throws IOException {
        isLastRecord = resultSet.isLast();
        return isLastRecord;
    }

    @Override
    public ResultSet nextRecord(ResultSet row) throws IOException {
        if (!isLastRecord) {
            resultSet.next();
        }
        return resultSet;
    }
}
This works with the below query, which limits the rows fetched:
SELECT a,b,c from xyz where rownum <= 10;
but when I try to fetch all rows (approx. 1 million), I get the below exception after a random number of rows has been fetched:
java.sql.SQLRecoverableException: Io exception: Socket closed
at oracle.jdbc.driver.SQLStateMapping.newSQLException(SQLStateMapping.java:101)
at oracle.jdbc.driver.DatabaseError.newSQLException(DatabaseError.java:133)
at oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:199)
at oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:263)
at oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:521)
at oracle.jdbc.driver.T4CPreparedStatement.fetch(T4CPreparedStatement.java:1024)
at oracle.jdbc.driver.OracleResultSetImpl.close_or_fetch_from_next(OracleResultSetImpl.java:314)
at oracle.jdbc.driver.OracleResultSetImpl.next(OracleResultSetImpl.java:228)
at oracle.jdbc.driver.ScrollableResultSet.cacheRowAt(ScrollableResultSet.java:1839)
at oracle.jdbc.driver.ScrollableResultSet.isValidRow(ScrollableResultSet.java:1823)
at oracle.jdbc.driver.ScrollableResultSet.isLast(ScrollableResultSet.java:349)
at JDBCInputFormat.reachedEnd(JDBCInputFormat.java:98)
at org.apache.flink.runtime.operators.DataSourceTask.invoke(DataSourceTask.java:173)
at org.apache.flink.runtime.taskmanager.Task.run(Task.java:559)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.net.SocketException: Socket closed
at java.net.SocketOutputStream.socketWrite0(Native Method)
So for my case, how can I solve this issue?
I don't think it is possible to ship a ResultSet like a regular record. It is a stateful object that internally maintains a connection to the database server. Using a ResultSet as a record that is transferred between Flink operators means it can be serialized, shipped over the network to another machine, deserialized, and handed to a different thread in a different JVM process. That does not work.
Depending on the connection, a ResultSet might as well stay on the same machine in the same thread, which might be the case that worked for you. If you want to query a database from within an operator, you could implement the function as a RichMapPartitionFunction. Otherwise, I'd read the ResultSet in the data source and forward the resulting rows.
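A rough sketch of that last suggestion, with the input format typed as RichInputFormat<Row, InputSplit> so plain rows are forwarded instead of the ResultSet itself; the column names, types, and the endReached flag are assumptions for illustration:

@Override
public Row nextRecord(Row reuse) throws IOException {
    try {
        // Advance the cursor here and emit plain values, so no stateful
        // ResultSet ever travels between operators.
        if (!resultSet.next()) {
            endReached = true; // hypothetical flag returned by reachedEnd()
            return null;
        }
        reuse.setField(0, resultSet.getString("a"));
        reuse.setField(1, resultSet.getString("b"));
        reuse.setField(2, resultSet.getString("c"));
        return reuse;
    } catch (SQLException e) {
        throw new IOException("Couldn't read next row", e);
    }
}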

How to set receiveTimeout and connection timeout for cxfEndpoint

I am trying to set receiveTimeout and connection timeout for a cxfEndpoint in the code below. I have found many Spring DSL related answers, but I am specifically using the Camel DSL.
void configure() throws Exception {
    super.configure()
    CamelContext context = getContext()
    String version = context.resolvePropertyPlaceholders('{{' + CommonConstants.VERSION_PROPERTY + '}}')
    String region = context.resolvePropertyPlaceholders('{{' + CommonConstants.REGION_PROPERTY + '}}')
    String getContextRoot = context.resolvePropertyPlaceholders('{{' + CommonConstants.CONTEXT_ROOT_PROPERTY + '}}')
    boolean validateResponse = getContextRoot

    // main route exposing a GET
    rest("/$version/$region/")
        .get("/$getContextRoot")
        .produces('application/json')
        .to('direct:validate')

    from('direct:validate')
        .routeId('validate')
        .bean(ValidatorSubRouteHelper.class, 'validate')
        .to('direct:get-deviceIdentification')

    from('direct:get-deviceIdentification')
        .routeId('get-deviceIdentification')
        // pre-processing closure
        .process {
            it.out.body = [it.properties[MessageReferenceConstants.USER_AGENT_HEADER], new CallContext()]
            it.in.headers[CxfConstants.OPERATION_NAME] = context.resolvePropertyPlaceholders('{{' + MessageReferenceConstants.PROPERTY_OPERATION_NAME + '}}')
            it.in.headers[Exchange.SOAP_ACTION] = context.resolvePropertyPlaceholders('{{' + MessageReferenceConstants.PROPERTY_SOAP_ACTION + '}}')
            Map<String, Object> reqCtx = new HashMap<String, Object>()
            HTTPClientPolicy clientHttpPolicy = new HTTPClientPolicy()
            clientHttpPolicy.setReceiveTimeout(10000)
            reqCtx.put(HTTPClientPolicy.class.getName(), clientHttpPolicy)
            it.in.headers[Client.REQUEST_CONTEXT] = reqCtx
        }
        .to(getEndpointURL())
        // In case of a SOAPFault from the device, handle the exception in processSOAPResponse
        .onException(SoapFault.class)
            .bean(ProcessResponseExceptionHelper.class, "processSOAPResponse")
        .end()
        // post-processing closure
        .process {
            log.info("processing the response retrieved from device service")
            MessageContentsList li = it.in.getBody(MessageContentsList.class)
            DeviceFamily deviceFamily = (DeviceFamily) li.get(0)
            log.debug('device type is ' + deviceFamily.deviceType.value)
            it.properties[MessageReferenceConstants.PROPERTY_RESPONSE_BODY] = deviceFamily.deviceType.value
        }
        .to('direct:transform')

    from('direct:transform')
        .routeId('transform')
        // transform closure
        .process {
            log.info("Entering the FilterTransformSubRoute(transform)")
            Device device = new Device()
            log.debug('device type ' + it.properties[MessageReferenceConstants.PROPERTY_RESPONSE_BODY])
            device.familyName = it.properties[MessageReferenceConstants.PROPERTY_RESPONSE_BODY]
            it.out.body = device
        }
        .choice()
            .when(simple('{{validateResponse}}'))
                .to('direct:validateResponse')

    if (validateResponse) {
        from('direct:validateResponse')
            .bean(DataValidator.getInstance('device.json'))
    }
}

/**
 * Constructs the endpoint url.
 * Formatting end point URL for device identification service call
 * @return the endpoint url
 */
private String getEndpointURL() {
    CamelContext context = getContext()
    def serviceURL = context.resolvePropertyPlaceholders('{{' + MessageReferenceConstants.SERVICE_URL + '}}')
    def wsdlURL = context.resolvePropertyPlaceholders('{{' + MessageReferenceConstants.WSDL_URL + '}}')
    boolean isGZipEnable = CommonConstants.TRUE.equalsIgnoreCase(context.resolvePropertyPlaceholders('{{' + MessageReferenceConstants.GZIP_ENABLED + '}}'))
    def serviceClass = context.resolvePropertyPlaceholders('{{' + MessageReferenceConstants.PROPERTY_SERVICE_CLASS + '}}')
    def serviceName = context.resolvePropertyPlaceholders('{{' + MessageReferenceConstants.PROPERTY_SERVICE_NAME + '}}')
    def url = "cxf:$serviceURL?" +
            "wsdlURL=$wsdlURL" +
            "&serviceClass=$serviceClass" +
            "&serviceName=$serviceName"
    if (isGZipEnable) {
        url += "&cxfEndpointConfigurer=#deviceIdentificationServiceCxfConfigurer"
    }
    log.debug("endpoint url is " + url)
    url
}
You have already found the cxfEndpointConfigurer option for this.
Now you just need to implement the configurer interface, like this:
public static class MyCxfEndpointConfigurer implements CxfEndpointConfigurer {

    @Override
    public void configure(AbstractWSDLBasedEndpointFactory factoryBean) {
        // Do nothing here
    }

    @Override
    public void configureClient(Client client) {
        // reset the timeout options to override the Spring-configured ones
        HTTPConduit conduit = (HTTPConduit) client.getConduit();
        HTTPClientPolicy policy = new HTTPClientPolicy();
        // You can set up the timeout options here
        policy.setReceiveTimeout(60000);
        policy.setConnectionTimeout(30000);
        conduit.setClient(policy);
    }

    @Override
    public void configureServer(Server server) {
        // Do nothing here
    }
}
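The configurer then has to be bound in the registry under the name the endpoint URI references with the #-notation (getEndpointURL() above uses deviceIdentificationServiceCxfConfigurer). A minimal sketch, assuming a plain Camel 2.x setup with a SimpleRegistry; in Spring you would declare a bean with that id instead:

// Hypothetical wiring; the bean name must match the #-reference in the URI.
SimpleRegistry registry = new SimpleRegistry();
registry.put("deviceIdentificationServiceCxfConfigurer", new MyCxfEndpointConfigurer());
CamelContext camelContext = new DefaultCamelContext(registry);
// getEndpointURL() appends "&cxfEndpointConfigurer=#deviceIdentificationServiceCxfConfigurer",
// so configureClient() runs when the CXF client is created.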

Setting a header based on an XQuery filter

I have a route that's set to run in batched mode, polling several thousand XML files. Each is timestamped inside the XML structure and this dateTime element is used to determine whether the XML should be included in the batch's further processing (an XQuery transform). As this is a batch route it self-terminates after execution.
Because the route needs to close itself I have to ensure that it also closes if every message is filtered out, which is why I don't use a filter but a .choice() statement instead and set a custom header on the exchange which is later used in a bean that groups matches and prepares a single source document for the XQuery.
However, my current approach requires a second route that both branches of the .choice() forward to. This is necessary because I can't seem to force both paths to simply continue. So my question is: how can I get rid of this second route? One approach would be to set the filter header in a bean instead, but I'm worried about the overhead involved. I assume the XQuery filter inside Camel would greatly outperform a POJO that builds an XML document from a string and runs an XQuery against it.
from(sourcePath + "?noop=true" + "&include=.*.xml")
    .choice()
        .when()
            .xquery("[XQuery Filter]")
            .setHeader("Filtered", constant(false))
            .to("direct:continue")
        .otherwise()
            .setHeader("Filtered", constant(true))
            .to("direct:continue")
    .end();
from("direct:continue")
.routeId(forwarderRouteID)
.aggregate(aggregationExpression)
.completionFromBatchConsumer()
.completionTimeout(DEF_COMPLETION_TIMEOUT)
.groupExchanges()
.bean(new FastQueryMerger(), "group")
.to("xquery:" + xqueryPath)
.bean(new FileModifier(interval), "setFileName")
.to(targetPath)
.process(new Processor() {
#Override
public void process(Exchange exchange) throws Exception {
new RouteTerminator(routeID, exchange.getContext()).start();
new RouteTerminator(forwarderRouteID, exchange.getContext()).start();
}
})
.end();
Wouldn't .end() help here?
I mean the following:
from(sourcePath + "?noop=true" + "&include=.*.xml")
    .choice()
        .when()
            .xquery("[XQuery Filter]")
            .setHeader("Filtered", constant(false)).end()
        .otherwise()
            .setHeader("Filtered", constant(true)).end()
    .aggregate(aggregationExpression)
        .completionFromBatchConsumer()
        .completionTimeout(DEF_COMPLETION_TIMEOUT)
        .groupExchanges()
    .bean(new FastQueryMerger(), "group")
    .to("xquery:" + xqueryPath)
    .bean(new FileModifier(interval), "setFileName")
    .to(targetPath)
    .process(new Processor() {
        @Override
        public void process(Exchange exchange) throws Exception {
            new RouteTerminator(routeID, exchange.getContext()).start();
            new RouteTerminator(forwarderRouteID, exchange.getContext()).start();
        }
    });
I just quickly tested the following and it worked:
@Produce(uri = "direct:test")
protected ProducerTemplate testProducer;

@EndpointInject(uri = "mock:test-first")
protected MockEndpoint testFirst;

@EndpointInject(uri = "mock:test-therest")
protected MockEndpoint testTheRest;

@EndpointInject(uri = "mock:test-check")
protected MockEndpoint testCheck;

@Test
public void test() {
    final String first = "first";
    final String second = "second";
    testFirst.setExpectedMessageCount(1);
    testTheRest.setExpectedMessageCount(1);
    testCheck.setExpectedMessageCount(2);
    testProducer.sendBody(first);
    testProducer.sendBody(second);
    try {
        testFirst.assertIsSatisfied();
        testTheRest.assertIsSatisfied();
        testCheck.assertIsSatisfied();
    } catch (InterruptedException e) {
        e.printStackTrace();
    }
}

@Override
protected RouteBuilder createRouteBuilder() {
    return new RouteBuilder() {
        public void configure() {
            from("direct:test")
                .choice()
                    .when(body().isEqualTo("first")).to("mock:test-first")
                    .otherwise().to("mock:test-therest").end()
                .to("mock:test-check");
        }
    };
}
