The following sample code works fine with all versions of Camel except 2.18.x:
from("direct:process")
.process(new Processor() {
public void process(Exchange exchange) {
List<String> alist = new ArrayList<String>();
alist.add("1");
alist.add("99");
exchange.getIn().setHeader("ITEMS", alist);
exchange.getIn().setHeader("TOTAL_LOOPS", alist.size());
}
})
.loop(simple("${header.TOTAL_LOOPS}", Integer.class))
.setHeader("item", simple("${header.ITEMS[${property.CamelLoopIndex}]}", String.class))
.log(LoggingLevel.INFO, LOG_CLASS_NAME, simple("item = ${header.item} and TOTAL_MAPS = ${header.TOTAL_LOOPS}").getText())
.end()
.end();
However, with 2.18.x, the following exception is thrown:
2017-02-03 21:13:31 ERROR DefaultErrorHandler:204 - Failed delivery for (MessageId: ID-CATL0W10D4DG4R1-55822-1486174410756-0-1 on ExchangeId: ID-CATL0W10D4DG4R1-55822-1486174410756-0-2). Exhausted after delivery attempt: 1 caught: org.apache.camel.language.bean.RuntimeBeanExpressionException: Failed to invoke method: [${property.CamelLoopIndex}] on java.util.ArrayList due to: java.lang.IndexOutOfBoundsException: Key: ${property.CamelLoopIndex} not found in bean: [1, 99] of type: java.util.ArrayList using OGNL path [[${property.CamelLoopIndex}]]
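A likely explanation: Camel 2.18 removed the Simple language's deprecated property function in favour of exchangeProperty, so the nested ${property.CamelLoopIndex} is no longer evaluated and ends up passed to the OGNL index lookup as a literal key. A sketch of the adjusted loop body, assuming that is the change at play:

.loop(simple("${header.TOTAL_LOOPS}", Integer.class))
    // Camel 2.18+: use exchangeProperty instead of the removed property function
    .setHeader("item", simple("${header.ITEMS[${exchangeProperty.CamelLoopIndex}]}", String.class))
    .log(LoggingLevel.INFO, LOG_CLASS_NAME, simple("item = ${header.item} and TOTAL_MAPS = ${header.TOTAL_LOOPS}").getText())
.end()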
Hi, I have a complex Camel route, and in the middle of the route I am sending a message to MQ using a bean:
.bean("{{tp.mqservice}}")
application.yaml
mqservice: bean:mqService
application-test.yaml
mqservice: mock:result
Below is my PortfolioRouteTest:
@ActiveProfiles("test")
@RunWith(CamelSpringBootRunner.class)
@SpringBootTest(classes = MainApplication.class, webEnvironment = WebEnvironment.RANDOM_PORT)
@MockEndpoints
public class PortfolioTncRouteTest {

    @EndpointInject(value = "{{trade-publisher.portfolio-tnc.source-endpoint}}")
    private ProducerTemplate producerTemplate;

    @EndpointInject(value = "{{trade-publisher.mqservice}}")
    private MockEndpoint mock;
}
JUnit test:
@Test
public void portfolioTncRouteTest() throws InterruptedException {
    data = ...
    Mockito.when(service.search(Mockito.any(....class))).thenReturn(...);

    producerTemplate.sendBody(data);

    mock.expectedMessageCount(1);
    mock.assertIsSatisfied(30000);
}
However, when I run the test I get the error below. Am I missing something?
Stacktrace:
Caused by: org.apache.camel.NoSuchBeanException: No bean could be found in the registry for: mock:result
at org.apache.camel.component.bean.RegistryBean.getBean(RegistryBean.java:92)
at org.apache.camel.component.bean.RegistryBean.createCacheHolder(RegistryBean.java:67)
at org.apache.camel.reifier.BeanReifier.createProcessor(BeanReifier.java:57)
at org.apache.camel.reifier.ProcessorReifier.createProcessor(ProcessorReifier.java:485)
at org.apache.camel.reifier.ProcessorReifier.createOutputsProcessorImpl(ProcessorReifier.java:448)
at org.apache.camel.reifier.ProcessorReifier.createOutputsProcessor(ProcessorReifier.java:415)
at org.apache.camel.reifier.ProcessorReifier.createOutputsProcessor(ProcessorReifier.java:212)
at org.apache.camel.reifier.ExpressionReifier.createFilterProcessor(ExpressionReifier.java:39)
at org.apache.camel.reifier.WhenReifier.createProcessor(WhenReifier.java:32)
at org.apache.camel.reifier.WhenReifier.createProcessor(WhenReifier.java:24)
at org.apache.camel.reifier.ProcessorReifier.createProcessor(ProcessorReifier.java:485)
at org.apache.camel.reifier.ChoiceReifier.createProcessor(ChoiceReifier.java:54)
at org.apache.camel.reifier.ProcessorReifier.createProcessor(ProcessorReifier.java:485)
at org.apache.camel.reifier.ProcessorReifier.createOutputsProcessorImpl(ProcessorReifier.java:448)
at org.apache.camel.reifier.ProcessorReifier.createOutputsProcessor(ProcessorReifier.java:415)
at org.apache.camel.reifier.TryReifier.createProcessor(TryReifier.java:38)
at org.apache.camel.reifier.ProcessorReifier.createProcessor(ProcessorReifier.java:485)
at org.apache.camel.reifier.ProcessorReifier.createOutputsProcessorImpl(ProcessorReifier.java:448)
at org.apache.camel.reifier.ProcessorReifier.createOutputsProcessor(ProcessorReifier.java:415)
at org.apache.camel.reifier.ProcessorReifier.createOutputsProcessor(ProcessorReifier.java:212)
at org.apache.camel.reifier.ProcessorReifier.createChildProcessor(ProcessorReifier.java:231)
at org.apache.camel.reifier.SplitReifier.createProcessor(SplitReifier.java:42)
at org.apache.camel.reifier.ProcessorReifier.makeProcessorImpl(ProcessorReifier.java:536)
at org.apache.camel.reifier.ProcessorReifier.makeProcessor(ProcessorReifier.java:497)
at org.apache.camel.reifier.ProcessorReifier.addRoutes(ProcessorReifier.java:241)
at org.apache.camel.reifier.RouteReifier.addRoutes(RouteReifier.java:358)
... 56 more
Use .to instead of .bean so it sends to a Camel endpoint; then you can send to the mock endpoint. .bean is for calling a POJO Java bean only.
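For illustration, a minimal sketch of the suggested change, reusing the property keys from the question (the actual route is not shown there, so its shape here is an assumption):

// Sketch: target the MQ destination as an endpoint URI so the test
// profile can substitute mock:result via application-test.yaml.
from("{{trade-publisher.portfolio-tnc.source-endpoint}}")
    .to("{{tp.mqservice}}");

With .to(...), the mock:result value from the test profile is resolved as an endpoint URI rather than a registry bean lookup, so the injected MockEndpoint receives the message and the assertion can pass.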
I am reading data from Kafka using Flink 1.4.2 and parsing it to ObjectNode using JSONDeserializationSchema. If an incoming record is not valid JSON, my Flink job fails. I would like to skip the broken record instead of failing the job.
FlinkKafkaConsumer010<ObjectNode> kafkaConsumer =
new FlinkKafkaConsumer010<>(TOPIC, new JSONDeserializationSchema(), consumerProperties);
DataStream<ObjectNode> messageStream = env.addSource(kafkaConsumer);
messageStream.print();
I am getting the following exception if the data in Kafka is not valid JSON:
Job execution switched to status FAILING.
org.apache.flink.shaded.jackson2.com.fasterxml.jackson.core.JsonParseException: Unrecognized token 'This': was expecting ('true', 'false' or 'null')
at [Source: [B@4f522623; line: 1, column: 6]
Job execution switched to status FAILED.
Exception in thread "main" org.apache.flink.runtime.client.JobExecutionException: Job execution failed.
The easiest solution is to implement your own DeserializationSchema and wrap JSONDeserializationSchema. You can then catch the exception and either ignore it or perform a custom action.
As suggested by @twalthr, I implemented my own DeserializationSchema by copying JSONDeserializationSchema and adding exception handling.
import java.io.IOException;

import org.apache.flink.api.common.serialization.AbstractDeserializationSchema;
import org.apache.flink.shaded.jackson2.com.fasterxml.jackson.databind.ObjectMapper;
import org.apache.flink.shaded.jackson2.com.fasterxml.jackson.databind.node.ObjectNode;

public class CustomJSONDeserializationSchema extends AbstractDeserializationSchema<ObjectNode> {
    private ObjectMapper mapper;

    @Override
    public ObjectNode deserialize(byte[] message) throws IOException {
        if (mapper == null) {
            mapper = new ObjectMapper();
        }
        ObjectNode objectNode;
        try {
            objectNode = mapper.readValue(message, ObjectNode.class);
        } catch (Exception e) {
            // On parse failure, wrap the raw payload in an error node instead of failing.
            ObjectMapper errorMapper = new ObjectMapper();
            ObjectNode errorObjectNode = errorMapper.createObjectNode();
            errorObjectNode.put("jsonParseError", new String(message));
            objectNode = errorObjectNode;
        }
        return objectNode;
    }

    @Override
    public boolean isEndOfStream(ObjectNode nextElement) {
        return false;
    }
}
In my streaming job:
messageStream
        .filter((event) -> {
            if (event.has("jsonParseError")) {
                LOG.warn("JsonParseException was handled: " + event.get("jsonParseError").asText());
                return false;
            }
            return true;
        }).print();
Flink has improved null record handling for FlinkKafkaConsumer.
There are two possible design choices when the DeserializationSchema encounters a corrupted message: it can either throw an IOException, which causes the pipeline to be restarted, or it can return null, in which case the Flink Kafka consumer silently skips the corrupted message.
For more details, you can see this link.
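A minimal sketch of the second option, assuming your Flink version's Kafka consumer skips null values as described (the class name is illustrative):

import java.io.IOException;

import org.apache.flink.api.common.serialization.AbstractDeserializationSchema;
import org.apache.flink.shaded.jackson2.com.fasterxml.jackson.databind.ObjectMapper;
import org.apache.flink.shaded.jackson2.com.fasterxml.jackson.databind.node.ObjectNode;

public class NullOnErrorJSONDeserializationSchema extends AbstractDeserializationSchema<ObjectNode> {
    private transient ObjectMapper mapper;

    @Override
    public ObjectNode deserialize(byte[] message) throws IOException {
        if (mapper == null) {
            mapper = new ObjectMapper();
        }
        try {
            return mapper.readValue(message, ObjectNode.class);
        } catch (IOException e) {
            // Returning null lets the Flink Kafka consumer skip the corrupted record.
            return null;
        }
    }
}

With this variant, no "jsonParseError" marker records enter the stream, so the downstream filter is no longer needed.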
I want to print only the constraint message, but Apache Camel prints the complete message (shown below).
Bean code:
@NotNull(message="Validation Error name value Missing.")
private String name;
Route code:
onException(BeanValidationException.class)
    .handled(true)
    .process(new FailedResponseProcessor());
Processor code:
public void process(Exchange exchange) throws Exception {
    Exception e = exchange.getProperty(Exchange.EXCEPTION_CAUGHT, Exception.class);
    Response response = new Response();
    response.setRequestStatus("Failed");
    response.setRequestMessage(e.getMessage());
The following response is received:
<response>
    <requestStatus>Failed</requestStatus>
    <requestMessage>Validation failed for: org.my.Request@1b8a0be3 errors: [property: name; value: null; constraint: Validation Error name value Missing.; ]. Exchange[ID-WCB00073679-49595-1507251546181-0-1]</requestMessage>
</response>
The following works for me:
BeanValidationException bve = (BeanValidationException) exchange.getProperty(Exchange.EXCEPTION_CAUGHT);
Set<ConstraintViolation<Object>> constraintViolations = bve.getConstraintViolations();
ConstraintViolation<Object> constraintViolation = constraintViolations.iterator().next();
System.out.println( constraintViolation.getMessage() );
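If more than one constraint can fail at once, a small variation on the same idea joins all violation messages instead of taking only the first (assuming javax.validation.ConstraintViolation and java.util.stream.Collectors are imported):

BeanValidationException bve = (BeanValidationException) exchange.getProperty(Exchange.EXCEPTION_CAUGHT);
// Collect every violation message into one string, e.g. "msg1; msg2".
String allMessages = bve.getConstraintViolations().stream()
        .map(ConstraintViolation::getMessage)
        .collect(Collectors.joining("; "));
response.setRequestMessage(allMessages);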
I am new to Hazelcast and Camel.
I created a map loader using Camel, calling a Camel route within the "load" method. Although the "dataSyncLoad" route is present in the container, loading the data on application startup gives the error listed below.
Hazelcast MapLoader:
@Produce(uri = "direct:dataSyncLoad")
private ProducerTemplate dataSyncLoad;

@Override
public synchronized DataSyncServiceRequest load(CellDataSyncKey key) {
    DataSyncServiceRequest dataSynchTemplateVO = dataSyncLoad.requestBody("direct:dataSyncLoad", key, DataSyncServiceRequest.class);
    if (null != dataSynchTemplateVO) {
        LOGGER.info("Cache Loaded: {}", key);
    } else {
        LOGGER.info("No data found for the key : {}", key);
    }
    return dataSynchTemplateVO;
}
Map configuration:
<hz:map name="getsToolCcaDatasyncMap" backup-count="0" max-size="100" eviction-percentage="25"
        eviction-policy="LRU" read-backup-data="0">
    <hz:map-store enabled="true" initial-mode="EAGER" write-delay-seconds="0"
                  implementation="getsToolCcaDatasyncMapLoader"/>
</hz:map>
Could not load keys from map store
org.apache.camel.CamelExecutionException: Exception occurred during execution on the exchange: Exchange[Message: CellDataSyncKey [locoid=20695, dataTemplate=3, deviceName=CCA]]
    at org.apache.camel.util.ObjectHelper.wrapCamelExecutionException(ObjectHelper.java:1379)[142:org.apache.camel.camel-core:2.12.0.redhat-610379]
    at org.apache.camel.util.ExchangeHelper.extractResultBody(ExchangeHelper.java:622)[142:org.apache.camel.camel-core:2.12.0.redhat-610379]
    at org.apache.camel.impl.DefaultProducerTemplate.extractResultBody(DefaultProducerTemplate.java:467)[142:org.apache.camel.camel-core:2.12.0.redhat-610379]
    at org.apache.camel.impl.DefaultProducerTemplate.sendBody(DefaultProducerTemplate.java:133)[142:org.apache.camel.camel-core:2.12.0.redhat-610379]
    at org.apache.camel.impl.DefaultProducerTemplate.sendBody(DefaultProducerTemplate.java:149)[142:org.apache.camel.camel-core:2.12.0.redhat-610379]
    at org.apache.camel.impl.DefaultProducerTemplate.requestBody(DefaultProducerTemplate.java:297)[142:org.apache.camel.camel-core:2.12.0.redhat-610379]
    at org.apache.camel.impl.DefaultProducerTemplate.requestBody(DefaultProducerTemplate.java:327)[142:org.apache.camel.camel-core:2.12.0.redhat-610379]
    at com.ge.trans.loader.cell.cache.mapstore.GetsToolCcaDatasyncMap.load(GetsToolCcaDatasyncMap.java:40)[295:cell-cache-service:2.0.0]
    at com.ge.trans.loader.cell.cache.mapstore.GetsToolCcaDatasyncMap.loadAll(GetsToolCcaDatasyncMap.java:57)[295:cell-cache-service:2.0.0]
    at com.hazelcast.map.impl.MapStoreWrapper.loadAll(MapStoreWrapper.java:143)[276:com.hazelcast:3.6.5]
    at com.hazelcast.map.impl.mapstore.AbstractMapDataStore.loadAll(AbstractMapDataStore.java:56)[276:com.hazelcast:3.6.5]
    at com.hazelcast.map.impl.recordstore.BasicRecordStoreLoader.loadAndGet(BasicRecordStoreLoader.java:161)[276:com.hazelcast:3.6.5]
    at com.hazelcast.map.impl.recordstore.BasicRecordStoreLoader.doBatchLoad(BasicRecordStoreLoader.java:134)[276:com.hazelcast:3.6.5]
    at com.hazelcast.map.impl.recordstore.BasicRecordStoreLoader.loadValuesInternal(BasicRecordStoreLoader.java:120)[276:com.hazelcast:3.6.5]
    at com.hazelcast.map.impl.recordstore.BasicRecordStoreLoader.access$100(BasicRecordStoreLoader.java:54)[276:com.hazelcast:3.6.5]
    at com.hazelcast.map.impl.recordstore.BasicRecordStoreLoader$GivenKeysLoaderTask.call(BasicRecordStoreLoader.java:107)[276:com.hazelcast:3.6.5]
    at com.hazelcast.util.executor.CompletableFutureTask.run(CompletableFutureTask.java:67)[276:com.hazelcast:3.6.5]
    at com.hazelcast.util.executor.CachedExecutorServiceDelegate$Worker.run(CachedExecutorServiceDelegate.java:212)[276:com.hazelcast:3.6.5]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)[:1.7.0_67]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)[:1.7.0_67]
    at java.lang.Thread.run(Thread.java:745)[:1.7.0_67]
    at com.hazelcast.util.executor.HazelcastManagedThread.executeRun(HazelcastManagedThread.java:76)[276:com.hazelcast:3.6.5]
    at com.hazelcast.util.executor.HazelcastManagedThread.run(HazelcastManagedThread.java:92)[276:com.hazelcast:3.6.5]
Caused by: org.apache.camel.component.direct.DirectConsumerNotAvailableException: No consumers available on endpoint: Endpoint[direct://dataSyncLoad]. Exchange[Message: CellDataSyncKey [locoid=20695, dataTemplate=3, deviceName=CCA]]
    at org.apache.camel.component.direct.DirectProducer.process(DirectProducer.java:47)[142:org.apache.camel.camel-core:2.12.0.redhat-610379]
    at org.apache.camel.processor.CamelInternalProcessor.process(CamelInternalProcessor.java:191)[142:org.apache.camel.camel-core:2.12.0.redhat-610379]
    at org.apache.camel.processor.UnitOfWorkProducer.process(UnitOfWorkProducer.java:73)[142:org.apache.camel.camel-core:2.12.0.redhat-610379]
    at org.apache.camel.impl.ProducerCache$2.doInProducer(ProducerCache.java:378)[142:org.apache.camel.camel-core:2.12.0.redhat-610379]
    at org.apache.camel.impl.ProducerCache$2.doInProducer(ProducerCache.java:346)[142:org.apache.camel.camel-core:2.12.0.redhat-610379]
    at org.apache.camel.impl.ProducerCache.doInProducer(ProducerCache.java:242)[142:org.apache.camel.camel-core:2.12.0.redhat-610379]
    at org.apache.camel.impl.ProducerCache.sendExchange(ProducerCache.java:346)[142:org.apache.camel.camel-core:2.12.0.redhat-610379]
    at org.apache.camel.impl.ProducerCache.send(ProducerCache.java:201)[142:org.apache.camel.camel-core:2.12.0.redhat-610379]
    at org.apache.camel.impl.DefaultProducerTemplate.send(DefaultProducerTemplate.java:128)[142:org.apache.camel.camel-core:2.12.0.redhat-610379]
    at org.apache.camel.impl.DefaultProducerTemplate.sendBody(DefaultProducerTemplate.java:132)[142:org.apache.camel.camel-core:2.12.0.redhat-610379]
    ... 19 more
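The root cause ("No consumers available on endpoint: Endpoint[direct://dataSyncLoad]") indicates that at the moment the EAGER map store fires load(), no started route is consuming from direct:dataSyncLoad. A minimal sketch of such a route, which would have to be up before Hazelcast starts loading; the bean target is hypothetical. Alternatively, initial-mode="LAZY" on the map store may defer loading until the Camel context has started:

import org.apache.camel.builder.RouteBuilder;

public class DataSyncLoadRoute extends RouteBuilder {
    @Override
    public void configure() throws Exception {
        // This consumer must exist and be started before the EAGER map store
        // calls load(); otherwise direct:dataSyncLoad has no consumer.
        from("direct:dataSyncLoad")
            .to("bean:dataSyncService?method=fetch"); // hypothetical lookup target
    }
}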
I have a route that's set to run in batched mode, polling several thousand XML files. Each is timestamped inside the XML structure and this dateTime element is used to determine whether the XML should be included in the batch's further processing (an XQuery transform). As this is a batch route it self-terminates after execution.
Because the route needs to close itself I have to ensure that it also closes if every message is filtered out, which is why I don't use a filter but a .choice() statement instead and set a custom header on the exchange which is later used in a bean that groups matches and prepares a single source document for the XQuery.
However, my current approach requires a second route that both branches of the .choice() forward to. This is necessary because I can't seem to force both paths to simply continue. So my question is: how can I get rid of this second route? One approach is setting the filter header in a bean instead, but I'm worried about the overhead involved. I assume the XQuery filter inside Camel would greatly outperform a POJO that builds an XML document from a string and runs an XQuery against it.
from(sourcePath + "?noop=true" + "&include=.*.xml")
    .choice()
        .when()
            .xquery("[XQuery Filter]")
            .setHeader("Filtered", constant(false))
            .to("direct:continue")
        .otherwise()
            .setHeader("Filtered", constant(true))
            .to("direct:continue")
    .end();
from("direct:continue")
.routeId(forwarderRouteID)
.aggregate(aggregationExpression)
.completionFromBatchConsumer()
.completionTimeout(DEF_COMPLETION_TIMEOUT)
.groupExchanges()
.bean(new FastQueryMerger(), "group")
.to("xquery:" + xqueryPath)
.bean(new FileModifier(interval), "setFileName")
.to(targetPath)
.process(new Processor() {
#Override
public void process(Exchange exchange) throws Exception {
new RouteTerminator(routeID, exchange.getContext()).start();
new RouteTerminator(forwarderRouteID, exchange.getContext()).start();
}
})
.end();
Wouldn't .end() help here?
I mean the following:
from(sourcePath + "?noop=true" + "&include=.*.xml")
    .choice()
        .when()
            .xquery("[XQuery Filter]")
            .setHeader("Filtered", constant(false)).end()
        .otherwise()
            .setHeader("Filtered", constant(true)).end()
    .aggregate(aggregationExpression)
        .completionFromBatchConsumer()
        .completionTimeout(DEF_COMPLETION_TIMEOUT)
        .groupExchanges()
    .bean(new FastQueryMerger(), "group")
    .to("xquery:" + xqueryPath)
    .bean(new FileModifier(interval), "setFileName")
    .to(targetPath)
    .process(new Processor() {
        @Override
        public void process(Exchange exchange) throws Exception {
            new RouteTerminator(routeID, exchange.getContext()).start();
            new RouteTerminator(forwarderRouteID, exchange.getContext()).start();
        }
    });
I just quickly tested the following and it worked:
#Produce(uri = "direct:test")
protected ProducerTemplate testProducer;
#EndpointInject(uri = "mock:test-first")
protected MockEndpoint testFirst;
#EndpointInject(uri = "mock:test-therest")
protected MockEndpoint testTheRest;
#EndpointInject(uri = "mock:test-check")
protected MockEndpoint testCheck;
#Test
public void test() {
final String first = "first";
final String second = "second";
testFirst.setExpectedMessageCount(1);
testTheRest.setExpectedMessageCount(1);
testCheck.setExpectedMessageCount(2);
testProducer.sendBody(first);
testProducer.sendBody(second);
try {
testFirst.assertIsSatisfied();
testTheRest.assertIsSatisfied();
testCheck.assertIsSatisfied();
} catch (InterruptedException e) {
e.printStackTrace();
}
}
@Override
protected RouteBuilder createRouteBuilder() {
    return new RouteBuilder() {
        public void configure() {
            from("direct:test")
                .choice()
                    .when(body().isEqualTo("first")).to("mock:test-first")
                    .otherwise().to("mock:test-therest").end()
                .to("mock:test-check");
        }
    };
}