I've written a small Flink application. It has some input, and enriches it with data from an external source. It's a RichAsyncFunction, and within the open method I construct an HTTP client to be used for the enrichment.
Now I want to write an integration test for my job. But since the HTTP client is created within the open method, I have no means to provide it and mock it in my integration test. I've tried refactoring the job to provide the client via the constructor, but then I always get serialization errors.
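Roughly, the function looks like this (a simplified sketch rather than my actual code; the HttpClient usage and the enrichment URL are just placeholders):
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.Collections;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.functions.async.ResultFuture;
import org.apache.flink.streaming.api.functions.async.RichAsyncFunction;

public class EnrichmentFunction extends RichAsyncFunction<String, Tuple2<String, String>> {

    // not serializable, so it can't simply be passed in through the constructor
    private transient HttpClient client;

    @Override
    public void open(Configuration parameters) {
        // built inside open(), which is why I can't inject a mock from the test
        client = HttpClient.newHttpClient();
    }

    @Override
    public void asyncInvoke(String key, ResultFuture<Tuple2<String, String>> resultFuture) {
        // placeholder enrichment call against an external service
        HttpRequest request = HttpRequest.newBuilder(URI.create("http://enrichment-service/" + key)).build();
        client.sendAsync(request, HttpResponse.BodyHandlers.ofString())
              .thenAccept(response -> resultFuture.complete(Collections.singleton(Tuple2.of(key, response.body()))));
    }
}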
This is the example I'm working from:
https://ci.apache.org/projects/flink/flink-docs-release-1.10/dev/stream/operators/asyncio.html
Thanks in advance :)
This question was posted over a year ago, but I'll post the answer in case anyone stumbles upon this in the future.
The serialization exception you are seeing is likely this one:
Exception encountered when invoking run on a nested suite. *** ABORTED *** (610 milliseconds)
java.lang.NullPointerException:
at java.util.Objects.requireNonNull(Objects.java:203)
at org.apache.flink.streaming.runtime.streamrecord.StreamElementSerializer.<init>(StreamElementSerializer.java:64)
at org.apache.flink.streaming.api.operators.async.AsyncWaitOperator.setup(AsyncWaitOperator.java:136)
at org.apache.flink.streaming.api.operators.SimpleOperatorFactory.createStreamOperator(SimpleOperatorFactory.java:77)
at org.apache.flink.streaming.api.operators.StreamOperatorFactoryUtil.createOperator(StreamOperatorFactoryUtil.java:70)
at org.apache.flink.streaming.util.AbstractStreamOperatorTestHarness.setup(AbstractStreamOperatorTestHarness.java:366)
at org.apache.flink.streaming.util.OneInputStreamOperatorTestHarness.setup(OneInputStreamOperatorTestHarness.java:165)
...
The reason is that your test operator needs to know how to deserialize the DataStream input type. The only way to provide this is by supplying it directly while initializing the testHarness and then passing it to the setup() method call.
So, to test the example from the Flink docs you linked, you can do something like this (my implementation is in Scala, but you can adapt it to Java as well):
import org.apache.flink.api.common.ExecutionConfig
import org.apache.flink.api.java.typeutils.TypeExtractor
import org.apache.flink.configuration.Configuration
import org.apache.flink.streaming.api.datastream.AsyncDataStream.OutputMode
import org.apache.flink.streaming.api.operators.async.AsyncWaitOperator
import org.apache.flink.streaming.runtime.tasks.{StreamTaskActionExecutor, TestProcessingTimeService}
import org.apache.flink.streaming.runtime.tasks.mailbox.{MailboxExecutorImpl, TaskMailboxImpl}
import org.apache.flink.streaming.util.OneInputStreamOperatorTestHarness
import org.scalatest.{BeforeAndAfter, FunSuite, Matchers}
/**
This test case is written using Flink 1.11+.
Older versions likely have a simpler constructor definition for [[AsyncWaitOperator]] so you might have to remove the last two arguments (processingTimeService and mailboxExecutor)
*/
class AsyncDatabaseRequestSuite extends FunSuite with BeforeAndAfter with Matchers {
var testHarness: OneInputStreamOperatorTestHarness[String, (String, String)] = _
val TIMEOUT = 1000
val CAPACITY = 1000
val MAILBOX_PRIORITY = 0
def createTestHarness: Unit = {
val operator = new AsyncWaitOperator[String, (String, String)](
new AsyncDatabaseRequest {
override def open(configuration: Configuration): Unit = {
client = new MockDatabaseClient(host, port, credentials) // put your mock DatabaseClient object here
}
},
TIMEOUT,
CAPACITY,
OutputMode.UNORDERED,
new TestProcessingTimeService,
new MailboxExecutorImpl(
new TaskMailboxImpl,
MAILBOX_PRIORITY,
StreamTaskActionExecutor.IMMEDIATE
)
)
// supply the TypeSerializer for the "input" type of the operator
testHarness = new OneInputStreamOperatorTestHarness[String, (String, String)](
operator,
TypeExtractor.getForClass(classOf[String]).createSerializer(new ExecutionConfig)
)
// supply the TypeSerializer for the "output" type of the operator to the setup() call
testHarness.setup(
TypeExtractor.getForClass(classOf[(String, String)]).createSerializer(new ExecutionConfig)
)
testHarness.open()
}
before {
createTestHarness
}
after {
testHarness.close()
}
test("Your test case goes here") {
// fill in your test case here
}
}
Here is the equivalent solution in Java:
class TestingClass {
@InjectMocks
ClassUnderTest cut;
private static OneInputStreamOperatorTestHarness<IN, OUT> testHarness; // replace IN, OUT with your asyncFunction's
private static long TIMEOUT = 1000;
private static int CAPACITY = 1000;
private static int MAILBOX_PRIORITY = 0;
private long UNUSED_TIME = 0L;
Driver driverRef;
public void createTestHarness() throws Exception {
cut = new ClassUnderTest() {
@Override
public void open(Configuration parameters) throws Exception {
driver = mock(Driver.class); // mock your driver (external data source here).
driverRef = driver; // create external ref to driver to refer to in test
}
};
MailboxExecutorImpl mailboxExecutorImpl = new MailboxExecutorImpl(
new TaskMailboxImpl(), MAILBOX_PRIORITY, StreamTaskActionExecutor.IMMEDIATE
);
AsyncWaitOperator operator = new AsyncWaitOperator<>(
cut, // the ClassUnderTest instance created above, with open() overridden to install the mock
TIMEOUT,
CAPACITY,
OutputMode.ORDERED,
new TestProcessingTimeService(),
mailboxExecutorImpl
);
testHarness = new OneInputStreamOperatorTestHarness<IN, OUT>(
operator,
TypeExtractor.getForClass(IN.class).createSerializer(new ExecutionConfig())
);
testHarness.setup(TypeExtractor.getForClass(OUT.class).createSerializer(new ExecutionConfig()));
testHarness.open();
}
@BeforeEach
void setUp() throws Exception {
createTestHarness();
MockitoAnnotations.openMocks(this);
}
@AfterEach
void tearDown() throws Exception {
testHarness.close();
}
@Test
public void test_yourTestCase() throws Exception {
}
}
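To fill in a test case, a hypothetical body could look like this (the Mockito stubbing, the input element, and the assertion are placeholders rather than part of the original answer; imports for StreamRecord and your JUnit assertions are assumed):
@Test
public void test_yourTestCase() throws Exception {
    // stub the mock created in open(), e.g. when(driverRef.someLookup("key")).thenReturn("value");
    testHarness.processElement(new StreamRecord<>(someInputElement, 0L)); // someInputElement: a concrete IN value
    // flush any pending async results before inspecting the output
    testHarness.endInput();
    assertFalse(testHarness.getOutput().isEmpty());
}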
I would like to convert a parameter and then call a second method with the converted parameter.
The convention would be that there is always an overload of the same method with the specific type. The idea is to solve this with Spring AOP.
@Component
public class ExampleAspect {
@Around("@annotation(Example)")
public Object test( final ProceedingJoinPoint joinPoint ) throws Throwable {
final MethodSignature signature = (MethodSignature) joinPoint.getSignature();
final Method method = signature.getMethod();
final Example example = method.getAnnotation( Example.class );
final Object[] args = joinPoint.getArgs();
final String test = args[example.value()].toString();
final Bar bar = convertToBar(test);
args[example.value()] = bar;
//ReflectionUtils?
// call getBar(Bar bar)
//return joinPoint.proceed( args );
}
}
Here is the service
@Example(0)
public Object getBar(String test) {}
public Object getBar(Bar test) {}
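For reference, the Example annotation is assumed to be declared roughly like this, with value() holding the index of the argument to convert:
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
public @interface Example {
    int value(); // index of the argument that should be converted
}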
Are there any better options or ideas?
EDIT:
Cannot inject the target bean, because this AOP should be used by more than specific target bean.
One possible solution; I'm not sure if there is a smarter one:
@Around("@annotation(Example)")
public Object test(final ProceedingJoinPoint joinPoint) throws Throwable {
final MethodSignature signature = (MethodSignature) joinPoint.getSignature();
final Method method = signature.getMethod();
final Example example = method.getAnnotation(Example.class);
final Object[] args = joinPoint.getArgs();
final String bar = args[example.value()].toString();
final Bar aspectModelUrn = convertFromStringToBar(bar);
args[example.value()] = aspectModelUrn;
final Class<?>[] parameterTypes = method.getParameterTypes();
parameterTypes[example.value()] = Bar.class;
final Method newMethod = ReflectionUtils.findMethod(joinPoint.getTarget().getClass(), method.getName(), parameterTypes);
if (newMethod == null) {
throw new IllegalArgumentException("There is no matching method. Have you forgotten to create the delegate method?");
}
return newMethod.invoke(joinPoint.getTarget(), args);
}
The following code would provide a handle to the annotation and the target bean (for example, here TestComponent).
A call to TestComponent.getBar() annotated with @Example would be intercepted and advised.
@Aspect
@Component
public class ExampleAspect {
@Around("@annotation(example) && target(bean)")
public Object test(final ProceedingJoinPoint joinPoint,Example example,TestComponent bean) throws Throwable {
String value = String.valueOf(example.value());
Bar bar = convertToBar(value);
bean.getBar(bar);
return joinPoint.proceed();
}
}
Do go through the Spring AOP documentation, Passing Parameters to Advice, for more details.
Note: for better performance, it is a good idea to limit the scope of the expression, as follows.
@Around("@annotation(example) && within(com.xyz.service..*) && target(bean)")
where within(com.xyz.service..*) limits the expression scope to only the beans in the package com.xyz.service and its sub-packages.
I am implementing a circuit breaker using Hystrix in my Spring Boot application; my code is something like the below:
@Service
public class MyServiceHandler {
@HystrixCommand(fallbackMethod = "fallback")
public String callService() {
// if the remote service is not reachable,
// throw ServiceException
}
public String fallback() {
// return default response
}
}
// In application.properties, I have below properties defined:
hystrix.command.default.execution.isolation.thread.timeoutInMilliseconds=10000
hystrix.command.default.circuitBreaker.requestVolumeThreshold=3
hystrix.command.default.circuitBreaker.sleepWindowInMilliseconds=30000
hystrix.threadpool.default.coreSize=4
hystrix.threadpool.default.metrics.rollingStats.timeInMilliseconds=200000
I see that fallback() is getting called with each failure of callService(). However, the circuit is not opening after 3 failures. After 3 failures, I was expecting that it would call fallback() directly and skip callService(), but this is not happening. Can someone advise what I am doing wrong here?
Thanks,
B Jagan
Edited on 26th July to add more details below:
Below is the actual code. I played a bit further with this. I see that the circuit opens as expected on repeated failures when I call the remote service directly in the RegistrationHystrix.registerSeller() method. But when I wrap the remote service call within a Spring retry template, it keeps going into the fallback method, yet the circuit never opens.
#Service
public class RegistrationHystrix {
Logger logger = LoggerFactory.getLogger(RegistrationHystrix.class);
private RestTemplate restTemplate;
private RetryTemplate retryTemplate;
public RegistrationHystrix(RestTemplate restTemplate) {
this.restTemplate = restTemplate;
retryTemplate = new RetryTemplate();
FixedBackOffPolicy fixedBackOffPolicy = new FixedBackOffPolicy();
fixedBackOffPolicy.setBackOffPeriod(1000l);
retryTemplate.setBackOffPolicy(fixedBackOffPolicy);
SimpleRetryPolicy retryPolicy = new SimpleRetryPolicy();
retryPolicy.setMaxAttempts(3);
retryTemplate.setRetryPolicy(retryPolicy);
}
@HystrixCommand(fallbackMethod = "fallbackForRegisterSeller", commandKey = "ordermanagement")
public String registerSeller(SellerDto sellerDto) throws Exception {
String response = retryTemplate.execute(new RetryCallback<String, Exception>() {
@Override
public String doWithRetry(RetryContext context) {
logger.info(String.format("Retry count %d", context.getRetryCount()));
return restTemplate.postForObject("/addSeller", sellerDto, String.class);
}
});
return response;
}
public List<SellerDto> getSellersList() {
return restTemplate.getForObject("/sellersList", List.class);
}
public String fallbackForRegisterSeller(SellerDto sellerDto, Throwable t) {
logger.error("Inside fall back, cause - {}", t.toString());
return "Inside fallback method. Some error occured while calling service for seller registration";
}
}
Below is the service class that calls the above Hystrix-wrapped service. This class in turn is invoked by a controller.
#Service
public class RegistrationServiceImpl implements RegistrationService {
Logger logger = LoggerFactory.getLogger(RegistrationServiceImpl.class);
private RegistrationHystrix registrationHystrix;
public RegistrationServiceImpl(RegistrationHystrix registrationHystrix) {
this.registrationHystrix = registrationHystrix;
}
@Override
public String registerSeller(SellerDto sellerDto) throws Exception {
long start = System.currentTimeMillis();
String registerSeller = registrationHystrix.registerSeller(sellerDto);
logger.info("add seller call returned in - {}", System.currentTimeMillis() - start);
return registerSeller;
}
}
So, I am trying to understand why the circuit breaker is not working as expected when used along with the Spring RetryTemplate.
You should be tuning metrics.healthSnapshot.intervalInMilliseconds while testing. I guess you are executing all 3 requests within the default 500 ms interval, and hence the circuit isn't opening. You can either decrease this interval or put a sleep between the 3 requests.
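For example, in application.properties (the value of 10 ms is just an illustration):
hystrix.command.default.metrics.healthSnapshot.intervalInMilliseconds=10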
Earlier I asked about a simple hello world example for Flink. This gave me some good examples!
However, I would like to ask for a more ‘streaming’ example where we generate an input value every second. This would ideally be random, but even just the same value each time would be fine.
The objective is to get a stream that ‘moves’ with no/minimal external touch.
Hence my question:
How to show Flink actually streaming data without external dependencies?
I found how to show this by generating data externally and writing to Kafka, or by listening to a public source; however, I am trying to solve it with minimal dependencies (like starting with GenerateFlowFile in NiFi).
Here's an example. It was constructed to show how to make your sources and sinks pluggable, the idea being that in development you might use a random source and print the results, in tests you might use a hardwired list of input events and collect the results in a list, and in production you'd use the real sources and sinks.
Here's the job:
/*
* Example showing how to make sources and sinks pluggable in your application code so
* you can inject special test sources and test sinks in your tests.
*/
public class TestableStreamingJob {
private SourceFunction<Long> source;
private SinkFunction<Long> sink;
public TestableStreamingJob(SourceFunction<Long> source, SinkFunction<Long> sink) {
this.source = source;
this.sink = sink;
}
public void execute() throws Exception {
StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
DataStream<Long> LongStream =
env.addSource(source)
.returns(TypeInformation.of(Long.class));
LongStream
.map(new IncrementMapFunction())
.addSink(sink);
env.execute();
}
public static void main(String[] args) throws Exception {
TestableStreamingJob job = new TestableStreamingJob(new RandomLongSource(), new PrintSinkFunction<>());
job.execute();
}
// While it's tempting for something this simple, avoid using anonymous classes or lambdas
// for any business logic you might want to unit test.
public static class IncrementMapFunction implements MapFunction<Long, Long> {
@Override
public Long map(Long record) throws Exception {
return record + 1;
}
}
}
Here's the RandomLongSource:
public class RandomLongSource extends RichParallelSourceFunction<Long> {
private volatile boolean cancelled = false;
private Random random;
@Override
public void open(Configuration parameters) throws Exception {
super.open(parameters);
random = new Random();
}
@Override
public void run(SourceContext<Long> ctx) throws Exception {
while (!cancelled) {
Long nextLong = random.nextLong();
synchronized (ctx.getCheckpointLock()) {
ctx.collect(nextLong);
}
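// the question asked for roughly one value per second; adding a
// Thread.sleep(1000) here would throttle the source accordingly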
}
}
@Override
public void cancel() {
cancelled = true;
}
}
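Not part of the original answer, but to sketch what the "hardwired list of input events" test mentioned above could look like (class and helper names are my own; this assumes JUnit 4 and Flink's FromElementsFunction):
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;
import org.apache.flink.api.common.ExecutionConfig;
import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.streaming.api.functions.sink.SinkFunction;
import org.apache.flink.streaming.api.functions.source.FromElementsFunction;
import org.junit.Test;
import static org.junit.Assert.assertTrue;

public class TestableStreamingJobTest {

    // a sink that collects results into a static list, so every parallel instance writes to the same place
    public static class CollectSink implements SinkFunction<Long> {
        public static final List<Long> values = Collections.synchronizedList(new ArrayList<>());

        @Override
        public void invoke(Long value, Context context) {
            values.add(value);
        }
    }

    @Test
    public void testIncrement() throws Exception {
        CollectSink.values.clear();
        // hardwired list of input events instead of the random source
        FromElementsFunction<Long> source =
            new FromElementsFunction<>(Types.LONG.createSerializer(new ExecutionConfig()), 1L, 2L, 3L);
        new TestableStreamingJob(source, new CollectSink()).execute();
        assertTrue(CollectSink.values.containsAll(Arrays.asList(2L, 3L, 4L)));
    }
}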
I am trying to use event time in my Flink job, using BoundedOutOfOrdernessTimestampExtractor to extract timestamps and generate watermarks.
But some of my Kafka inputs are sparse streams that can have no data for a long time, which means getResult in my AggregateFunction is never called at all. I can see data going into the add function.
I have set getEnv().getConfig().setAutoWatermarkInterval(1000L);
I tried
eventsWithKey
.keyBy(entry -> (String) entry.get(key))
.window(TumblingEventTimeWindows.of(Time.minutes(windowInMinutes)))
.allowedLateness(WINDOW_LATENESS)
.aggregate(new CountTask(basicMetricTags, windowInMinutes))
and also a session window:
eventsWithKey
.keyBy(entry -> (String) entry.get(key))
.window(EventTimeSessionWindows.withGap(Time.seconds(30)))
.aggregate(new CountTask(basicMetricTags, windowInMinutes))
All the watermark metrics show "No Watermark".
How can I get Flink to ignore the inputs that have no watermark?
FYI, this is commonly referred to as the "idle source" problem. This occurs because whenever a Flink operator has two or more inputs, its watermark is the minimum of the watermarks from its inputs. If one of those inputs stalls, its watermark no longer advances.
Note that Flink does not have per-key watermarking -- a given operator is typically multiplexed across events for many keys. So long as some events are flowing through a given task's input streams, its watermark will advance, and event time timers for idle keys will still fire. For this "idle source" problem to occur, a task has to have an input stream that has become completely idle.
If you can arrange for it, the best solution is to have your data sources include keepalive events. This will allow you to advance your watermarks with confidence, knowing that the source is simply idle, rather than, for example, offline.
If that's not possible, and if you have some sources that aren't idle, then you could put a rebalance() in front of the BoundedOutOfOrdernessTimestampExtractor (and before the keyBy), so that every instance continues to receive some events and can advance its watermark. This comes at the expense of an extra network shuffle.
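For illustration, that rearrangement could look roughly like this (MyEvent, its timestamp field, and the ten-second bound are placeholders, not taken from the question):
// events is the DataStream<MyEvent> read from Kafka
DataStream<MyEvent> withTimestamps = events
    .rebalance() // every parallel extractor instance now receives some events, so its watermark keeps advancing
    .assignTimestampsAndWatermarks(
        new BoundedOutOfOrdernessTimestampExtractor<MyEvent>(Time.seconds(10)) {
            @Override
            public long extractTimestamp(MyEvent event) {
                return event.timestamp;
            }
        });
// then keyBy / window / aggregate as in the question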
Perhaps the most commonly used solution is to use a watermark generator that detects idleness and artificially advances the watermark based on a processing time timer. ProcessingTimeTrailingBoundedOutOfOrdernessTimestampExtractor is an example of that.
A new watermark generator with idleness support has been introduced (WatermarksWithIdleness, in Flink 1.11). Flink will mark such inputs as idle and ignore them while calculating the minimum watermark, so the single partition that does have data will be considered.
https://ci.apache.org/projects/flink/flink-docs-release-1.11/api/java/org/apache/flink/api/common/eventtime/WatermarksWithIdleness.html
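Roughly, following those docs (MyEvent and the durations are placeholders):
WatermarkStrategy
    .<MyEvent>forBoundedOutOfOrderness(Duration.ofSeconds(20))
    .withIdleness(Duration.ofMinutes(1));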
I have the same issue: a source that may be inactive for a long time.
The solution below is based on WatermarksWithIdleness.
It is a standalone Flink job that demonstrates the concept.
package com.demo.playground.flink.sleepysrc;
import org.apache.flink.api.common.eventtime.WatermarkStrategy;
import org.apache.flink.api.common.eventtime.WatermarksWithIdleness;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.datastream.KeyedStream;
import org.apache.flink.streaming.api.datastream.SingleOutputStreamOperator;
import org.apache.flink.streaming.api.datastream.WindowedStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.functions.source.SourceFunction;
import org.apache.flink.streaming.api.functions.windowing.ProcessWindowFunction;
import org.apache.flink.streaming.api.windowing.assigners.EventTimeSessionWindows;
import org.apache.flink.streaming.api.windowing.time.Time;
import org.apache.flink.streaming.api.windowing.windows.TimeWindow;
import org.apache.flink.util.Collector;
import java.time.Duration;
public class SleepyJob {
public static void main(String[] args) throws Exception {
final StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
final EventGenerator eventGenerator = new EventGenerator();
WatermarkStrategy<Event> strategy = WatermarkStrategy.
<Event>forBoundedOutOfOrderness(Duration.ofSeconds(5)).
withIdleness(Duration.ofSeconds(Constants.IDLE_TIME_SEC)).
withTimestampAssigner((event, timestamp) -> event.timestamp);
final DataStream<Event> events = env.addSource(eventGenerator).assignTimestampsAndWatermarks(strategy);
KeyedStream<Event, String> eventStringKeyedStream = events.keyBy((Event event) -> event.id);
WindowedStream<Event, String, TimeWindow> windowedStream = eventStringKeyedStream.window(EventTimeSessionWindows.withGap(Time.milliseconds(Constants.SESSION_WINDOW_GAP)));
windowedStream.allowedLateness(Time.milliseconds(1000));
SingleOutputStreamOperator<Object> result = windowedStream.process(new ProcessWindowFunction<Event, Object, String, TimeWindow>() {
@Override
public void process(String s, Context context, Iterable<Event> events, Collector<Object> collector) {
int counter = 0;
for (Event e : events) {
Utils.print(++counter + ") inside process: " + e);
}
Utils.print("--- Process Done ----");
}
});
result.print();
env.execute("Sleepy flink src demo");
}
private static class Event {
public Event(String id) {
this.timestamp = System.currentTimeMillis();
this.eventData = "not_important_" + this.timestamp;
this.id = id;
}
@Override
public String toString() {
return "Event{" +
"id=" + id +
", timestamp=" + timestamp +
", eventData='" + eventData + '\'' +
'}';
}
public String id;
public long timestamp;
public String eventData;
}
private static class EventGenerator implements SourceFunction<Event> {
@Override
public void run(SourceContext<Event> ctx) throws Exception {
/**
* Here is the sleepy source: after NUM_OF_EVENTS events are collected, the code sleeps for SHORT_SLEEP_TIME.
* We would like to detect this inactivity and FIRE the window.
*/
int counter = 0;
while (running) {
String id = Long.toString(System.currentTimeMillis());
Utils.print(String.format("Generating %d events with id %s", 2 * Constants.NUM_OF_EVENTS, id));
while (counter < Constants.NUM_OF_EVENTS) {
Event event = new Event(id);
ctx.collect(event);
counter++;
Thread.sleep(Constants.VERY_SHORT_SLEEP_TIME);
}
// here we create a delay:
// a time of inactivity where
// we would like to FIRE the window
Thread.sleep(Constants.SHORT_SLEEP_TIME);
counter = 0;
while (counter < Constants.NUM_OF_EVENTS) {
Event event = new Event(id);
ctx.collect(event);
counter++;
Thread.sleep(Constants.VERY_SHORT_SLEEP_TIME);
}
Thread.sleep(Constants.LONG_SLEEP_TIME);
}
}
@Override
public void cancel() {
this.running = false;
}
private volatile boolean running = true;
}
private static final class Constants {
public static final int VERY_SHORT_SLEEP_TIME = 300;
public static final int SHORT_SLEEP_TIME = 8000;
public static final int IDLE_TIME_SEC = 5;
public static final int LONG_SLEEP_TIME = SHORT_SLEEP_TIME * 5;
public static final long SESSION_WINDOW_GAP = 60 * 1000;
public static final int NUM_OF_EVENTS = 4;
}
private static final class Utils {
public static void print(Object obj) {
System.out.println(new java.util.Date() + " > " + obj);
}
}
}
For others, make sure there's data coming out of all your topics' partitions if you're using Kafka
I know it sounds dumb, but in my case I had a single source and the problem was still happening, because I was testing with very little data in a single Kafka topic (a single source) that had 10 partitions. The dataset was so small that some of the topic's partitions did not have anything to give, and although I had only one source (the one topic), Flink did not advance the watermark.
The moment I switched my source to a topic with a single partition, the watermark started to advance.
We build Camel routes dynamically based on configuration parameters stored in the database. We have a generic class that builds all Camel routes. Some of the parameters are just presented as raw Camel Spring XML. The class that builds the Camel routes extends RouteBuilder, and here is the portion of the code constructing the Camel route:
@Override
public void configure() throws Exception {
RouteDefinition route = from(inputFile);
configureSpringXMLActivity(0, route, convertBodyXml);
configureSpringXMLActivity(5, route, setHeaderXml);
}
void configureSpringXMLActivity(final Integer seq, final RouteDefinition route, final String xmlConfig)
throws Exception {
ActivityIdentifier identifier = new CamelActivityIdentifier(seq);
route.process(new ActivityHandoverProcessor(identifier));
final ChoiceDefinition choice = route.choice().when(new ActivityPredicate(identifier));
RouteContext routeContext = new DefaultRouteContext(camelContext, route, route.getInputs().get(0),
camelContext.getRoutes());
final StringReader reader = new StringReader(xmlConfig);
Object result = esb.getConfigUnmarshaller().unmarshal(reader);
if (result != null) {
ProcessorDefinition<?> processorDefinition = (ProcessorDefinition<?>) result;
Processor processor = processorDefinition.createProcessor(routeContext);
choice.process(processor);
}
}
where
/** setHeader xml. */
private final String setHeaderXml = "<setHeader headerName=\"extractFileName\" xmlns=\"http://camel.apache.org/schema/spring\"><simple>${body}</simple></setHeader>";
/** convertBody xml. */
private final String convertBodyXml = "<convertBodyTo xmlns=\"http://camel.apache.org/schema/spring\" type=\"java.lang.String\"/>";
When we start the Camel context, it produces an exception when creating the setHeader processor.
java.lang.ClassCastException: org.apache.camel.model.ProcessDefinition cannot be cast to org.apache.camel.model.SetHeaderDefinition
at org.apache.camel.management.DefaultManagementObjectStrategy.getManagedObjectForProcessor(DefaultManagementObjectStrategy.java:355)
at org.apache.camel.management.DefaultManagementLifecycleStrategy.getManagedObjectForProcessor(DefaultManagementLifecycleStrategy.java:515)
at org.apache.camel.management.DefaultManagementLifecycleStrategy.getManagedObjectForService(DefaultManagementLifecycleStrategy.java:467)
at org.apache.camel.management.DefaultManagementLifecycleStrategy.onServiceAdd(DefaultManagementLifecycleStrategy.java:378)
at org.apache.camel.impl.RouteService.startChildService(RouteService.java:338)
at org.apache.camel.impl.RouteService.warmUp(RouteService.java:182)
at org.apache.camel.impl.DefaultCamelContext.doWarmUpRoutes(DefaultCamelContext.java:3496)
at org.apache.camel.impl.DefaultCamelContext.safelyStartRouteServices(DefaultCamelContext.java:3426)
at org.apache.camel.impl.DefaultCamelContext.doStartOrResumeRoutes(DefaultCamelContext.java:3203)
at org.apache.camel.impl.DefaultCamelContext.doStartCamel(DefaultCamelContext.java:3059)
at org.apache.camel.impl.DefaultCamelContext.access$000(DefaultCamelContext.java:175)
at org.apache.camel.impl.DefaultCamelContext$2.call(DefaultCamelContext.java:2854)
at org.apache.camel.impl.DefaultCamelContext$2.call(DefaultCamelContext.java:2850)
at org.apache.camel.impl.DefaultCamelContext.doWithDefinedClassLoader(DefaultCamelContext.java:2873)
at org.apache.camel.impl.DefaultCamelContext.doStart(DefaultCamelContext.java:2850)
at org.apache.camel.support.ServiceSupport.start(ServiceSupport.java:61)
at org.apache.camel.impl.DefaultCamelContext.start(DefaultCamelContext.java:2819)
In Camel (version 2.17.0) source code - class DefaultManagementObjectStrategy:
..........
} else if(target1 instanceof SetHeaderProcessor) {
answer = new ManagedSetHeader(context, (SetHeaderProcessor)target1, (SetHeaderDefinition)definition);
.........
It fails when casting (SetHeaderDefinition) definition.
However, there is no issue with my first activity - convertBodyXml. In the same Camel class:
if(target1 instanceof ConvertBodyProcessor) {
answer = new ManagedConvertBody(context, (ConvertBodyProcessor)target1, definition);
In this case the Camel code did not need to cast definition to create the managed object; it passes definition as-is.
The DefaultManagementObjectStrategy class casts to the specific definition type when creating some managed objects, but not for others.
Could you please recommend how to get around the ClassCastException, but still build the route from generic ProcessorDefinition objects?
Thanks in advance.
Instead of constructing a Processor from the ProcessorDefinition and calling the process method on the ChoiceDefinition, it is possible to call the addOutput method on the ChoiceDefinition directly, passing the ProcessorDefinition.
Basically: choice.addOutput(processorDefinition);
Here is updated snippet:
@Override
public void configure() throws Exception {
RouteDefinition route = from(inputFile);
configureSpringXMLActivity(0, route, convertBodyXml);
configureSpringXMLActivity(5, route, setHeaderXml);
}
void configureSpringXMLActivity(final Integer seq, final RouteDefinition route, final String xmlConfig)
throws Exception {
ActivityIdentifier identifier = new CamelActivityIdentifier(seq);
route.process(new ActivityHandoverProcessor(identifier));
final ChoiceDefinition choice = route.choice().when(new ActivityPredicate(identifier));
final StringReader reader = new StringReader(xmlConfig);
Object result = esb.getConfigUnmarshaller().unmarshal(reader);
if (result != null) {
ProcessorDefinition<?> processorDefinition = (ProcessorDefinition<?>) result;
choice.addOutput(processorDefinition);
}
}
I hope this will help someone.