Apache Camel timeout synchronous route

I was trying to build a synchronous route with a timeout using Apache Camel, and I couldn't find anything in the framework that solves this.
So I decided to write a Processor that does it for me.
public class TimeOutProcessor implements Processor {
    private String route;
    private Integer timeout;

    public TimeOutProcessor(String route, Integer timeout) {
        this.route = route;
        this.timeout = timeout;
    }

    @Override
    public void process(Exchange exchange) throws Exception {
        ExecutorService executor = Executors.newSingleThreadExecutor();
        Future<Exchange> future = executor.submit(new Callable<Exchange>() {
            public Exchange call() {
                // send the exchange to the target route synchronously
                ProducerTemplate producerTemplate = exchange.getFromEndpoint().getCamelContext().createProducerTemplate();
                return producerTemplate.send(route, exchange);
            }
        });
        try {
            exchange.getIn().setBody(future.get(timeout, TimeUnit.SECONDS));
        } catch (TimeoutException e) {
            throw new TimeoutException("a timeout problem occurred");
        } finally {
            // shut the executor down on the success and the timeout path alike
            executor.shutdownNow();
        }
    }
}
And I call this processor like this:
.process(new TimeOutProcessor("direct:myRoute",
    Integer.valueOf(this.getContext().resolvePropertyPlaceholders("{{timeout}}"))))
I want to know whether this is the recommended way to do it; if it is not, what is the best way to build a synchronous route with a timeout?

I want to thank the people who answered me.
This is my final code:
public class TimeOutProcessor implements Processor {
    private String route;
    private Integer timeout;

    public TimeOutProcessor(String route, Integer timeout) {
        this.route = route;
        this.timeout = timeout;
    }

    @Override
    public void process(Exchange exchange) throws Exception {
        Future<Exchange> future = null;
        ProducerTemplate producerTemplate = exchange.getFromEndpoint().getCamelContext().createProducerTemplate();
        try {
            future = producerTemplate.asyncSend(route, exchange);
            exchange.getIn().setBody(future.get(timeout, TimeUnit.SECONDS));
            producerTemplate.stop();
            future.cancel(true);
        } catch (TimeoutException e) {
            producerTemplate.stop();
            future.cancel(true);
            throw new TimeoutException("a timeout problem occurred");
        }
    }
}
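For reference, a minimal sketch of how the processor plugs into a route. Only direct:myRoute and the {{timeout}} property come from the question; the direct:start entry point and the log: target are hypothetical stand-ins for the real route.

public class TimeoutRouteBuilder extends RouteBuilder {
    @Override
    public void configure() throws Exception {
        // guard the call to direct:myRoute with the timeout processor
        from("direct:start")
            .process(new TimeOutProcessor("direct:myRoute",
                Integer.valueOf(getContext().resolvePropertyPlaceholders("{{timeout}}"))));

        // hypothetical target route: the slow, synchronous work being guarded
        from("direct:myRoute")
            .to("log:myRoute");
    }
}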

Related

BroadcastProcessFunction Processing Delay

I'm fairly new to Flink and would be grateful for any advice on this issue.
I wrote a job that receives some input events and compares them with some rules before forwarding them on to Kafka topics, based on whichever rules match. I implemented this using a flatMap and found it worked well, with one downside: I was loading the rules just once, during application startup, by calling an API from my main() method and passing the result of this API call into the flatMap function. This worked, but it means that if there are any changes to the rules I have to restart the application, so I wanted to improve it.
I found this page in the documentation, which seems to be an appropriate solution to the problem. I wrote a custom source to poll my Rules API every few minutes, and then used a BroadcastProcessFunction, with the rules added to the broadcast state in processBroadcastElement and the events processed by processElement.
The solution is working, but with one problem. My first approach using a flatMap would process the events almost instantly. Now that I have changed to a BroadcastProcessFunction, each event takes 60 seconds to process, and it seems to be more or less exactly 60 seconds every time, with almost no variation. I made no changes to the rule-matching logic itself.
I've had a look through the documentation and I can't seem to find a reason for this, so I'd appreciate it if anyone more experienced in Flink could offer a suggestion as to what might cause this delay.
The job:
public static void main(String[] args) throws Exception {
    // set up the streaming execution environment
    final StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
    env.setStreamTimeCharacteristic(TimeCharacteristic.IngestionTime);

    // read the input from Kafka
    DataStream<KafkaEvent> documentStream = env.addSource(
        createKafkaSource(getSourceTopic(), getSourceProperties())).name("Kafka[" + getSourceTopic() + "]");

    // configure the Rules data stream
    DataStream<RulesEvent> ruleStream = env.addSource(
        new RulesApiHttpSource(
            getApiRulesSubdomain(),
            getApiBearerToken(),
            DataType.DataTypeName.LOGS,
            getRulesApiCacheDuration()) // currently set to 120000
    );

    MapStateDescriptor<String, RulesEvent> ruleStateDescriptor = new MapStateDescriptor<>(
        "RulesBroadcastState",
        BasicTypeInfo.STRING_TYPE_INFO,
        TypeInformation.of(new TypeHint<RulesEvent>() {
        }));

    // broadcast the rules and create the broadcast state
    BroadcastStream<RulesEvent> ruleBroadcastStream = ruleStream
        .broadcast(ruleStateDescriptor);

    // extract the resources and attributes
    documentStream
        .connect(ruleBroadcastStream)
        .process(new FanOutLogsRuleMapper()).name("FanOut Stream")
        .addSink(createKafkaSink(getDestinationProperties()))
        .name("FanOut Sink");

    // run the job
    env.execute(FanOutJob.class.getName());
}
The custom HTTP source that fetches the rules:
public class RulesApiHttpSource extends RichSourceFunction<RulesEvent> {
    private static final Logger LOGGER = LoggerFactory.getLogger(RulesApiHttpSource.class);

    private final long pollIntervalMillis;
    private final String endpoint;
    private final String bearerToken;
    private final DataType.DataTypeName dataType;
    private final RulesApiCaller caller;
    private volatile boolean running = true;

    public RulesApiHttpSource(String endpoint, String bearerToken, DataType.DataTypeName dataType, long pollIntervalMillis) {
        this.pollIntervalMillis = pollIntervalMillis;
        this.endpoint = endpoint;
        this.bearerToken = bearerToken;
        this.dataType = dataType;
        this.caller = new RulesApiCaller(this.endpoint, this.bearerToken);
    }

    @Override
    public void open(Configuration configuration) throws Exception {
        // do nothing
    }

    @Override
    public void close() throws IOException {
        // do nothing
    }

    @Override
    public void run(SourceContext<RulesEvent> ctx) throws IOException {
        while (running) {
            if (pollIntervalMillis > 0) {
                try {
                    RulesEvent event = new RulesEvent();
                    event.setRules(getCurrentRulesList());
                    event.setDataType(this.dataType);
                    event.setRetrievedAt(Instant.now());
                    ctx.collect(event);
                    Thread.sleep(pollIntervalMillis);
                } catch (InterruptedException e) {
                    running = false;
                }
            } else if (pollIntervalMillis <= 0) {
                cancel();
            }
        }
    }

    public List<Rule> getCurrentRulesList() throws IOException {
        // call the API and get the rules
    }

    @Override
    public void cancel() {
        running = false;
    }
}
The BroadcastProcessFunction:
public abstract class FanOutRuleMapper extends BroadcastProcessFunction<KafkaEvent, RulesEvent, KafkaEvent> {
    private static final Logger LOGGER = LoggerFactory.getLogger(FanOutRuleMapper.class);

    protected final String RULES_EVENT_NAME = "rulesEvent";

    protected final MapStateDescriptor<String, RulesEvent> ruleStateDescriptor = new MapStateDescriptor<>(
        "RulesBroadcastState",
        BasicTypeInfo.STRING_TYPE_INFO,
        TypeInformation.of(new TypeHint<RulesEvent>() {
        }));

    @Override
    public void processBroadcastElement(RulesEvent rulesEvent, BroadcastProcessFunction<KafkaEvent, RulesEvent, KafkaEvent>.Context ctx, Collector<KafkaEvent> out) throws Exception {
        ctx.getBroadcastState(ruleStateDescriptor).put(RULES_EVENT_NAME, rulesEvent);
        LOGGER.debug("Added to broadcast state {}", rulesEvent.toString());
    }

    // omitted rules matching logic
}

public class FanOutLogsRuleMapper extends FanOutRuleMapper {
    public FanOutLogsRuleMapper() {
        super();
    }

    @Override
    public void processElement(KafkaEvent in, BroadcastProcessFunction<KafkaEvent, RulesEvent, KafkaEvent>.ReadOnlyContext ctx, Collector<KafkaEvent> out) throws Exception {
        RulesEvent rulesEvent = ctx.getBroadcastState(ruleStateDescriptor).get(RULES_EVENT_NAME);
        ExportLogsServiceRequest otlpLog = extractOtlpMessageFromJsonPayload(in);
        for (Rule rule : rulesEvent.getRules()) {
            boolean match = false;
            // omitted rules matching logic
            if (match) {
                for (RuleDestination ruleDestination : rule.getRulesDestinations()) {
                    out.collect(fillInTheEvent(in, rule, ruleDestination, otlpLog));
                }
            }
        }
    }
}
Maybe you can share the complete code of the FanOutLogsRuleMapper class; as posted, the match variable is always false.

Can I write sync code in RichAsyncFunction

When I need to work with I/O (querying a DB, calling a third-party API, ...), I can use RichAsyncFunction. But I need to interact with Google Sheets via the Google Sheets API: https://developers.google.com/sheets/api/quickstart/java. This API is synchronous. I wrote the code snippet below:
public class SendGGSheetFunction extends RichAsyncFunction<Obj, String> {
    @Override
    public void asyncInvoke(Obj message, final ResultFuture<String> resultFuture) {
        CompletableFuture.supplyAsync(() -> {
            syncSendToGGSheet(message);
            return "";
        }).thenAccept((String result) -> {
            resultFuture.complete(Collections.singleton(result));
        });
    }
}
But I found that messages are sent to Google Sheets very slowly; it seems they are sent synchronously.
Most of the code that users execute in AsyncIO was originally synchronous. You just need to ensure it is actually executed in a separate thread; most commonly a (statically shared) ExecutorService is used.
private class SendGGSheetFunction extends RichAsyncFunction<Obj, String> {
    private transient ExecutorService executorService;

    @Override
    public void open(Configuration parameters) throws Exception {
        super.open(parameters);
        executorService = Executors.newFixedThreadPool(30);
    }

    @Override
    public void close() throws Exception {
        super.close();
        executorService.shutdownNow();
    }

    @Override
    public void asyncInvoke(final Obj message, final ResultFuture<String> resultFuture) {
        executorService.submit(() -> {
            try {
                // ResultFuture.complete expects a collection of results
                resultFuture.complete(Collections.singleton(syncSendToGGSheet(message)));
            } catch (SQLException e) {
                resultFuture.completeExceptionally(e);
            }
        });
    }
}
Here are some considerations on how to tune AsyncIO to increase throughput: http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Flink-Async-IO-operator-tuning-micro-benchmarks-td35858.html
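For completeness, a minimal wiring sketch showing how such a function is attached to a stream with AsyncDataStream; the messages stream and the timeout/capacity values are illustrative assumptions, not part of the original answer.

// attach the async function with a timeout and a cap on in-flight requests
DataStream<String> results = AsyncDataStream.unorderedWait(
    messages,                   // DataStream<Obj> of rows to send, assumed to exist
    new SendGGSheetFunction(),  // the function from the snippet above
    30, TimeUnit.SECONDS,       // how long a request may stay pending before it fails
    100);                       // maximum number of concurrent in-flight requests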

How to browse messages from a queue using Apache Camel?

I need to browse messages from an ActiveMQ queue using a Camel route without consuming the messages.
The messages in the JMS queue are to be read (only browsed, not consumed) and moved to a database, while ensuring that the original queue remains intact.
public class CamelStarter {
    private static CamelContext camelContext;

    public static void main(String[] args) throws Exception {
        camelContext = new DefaultCamelContext();
        ConnectionFactory connectionFactory = new ActiveMQConnectionFactory(ActiveMQConnectionFactory.DEFAULT_BROKER_URL);
        camelContext.addComponent("jms", JmsComponent.jmsComponent(connectionFactory));
        camelContext.addRoutes(new RouteBuilder() {
            @Override
            public void configure() throws Exception {
                from("jms:queue:testQueue").to("browse:orderReceived").to("jms:queue:testQueue1");
            }
        });
        camelContext.start();
        Thread.sleep(1000);
        inspectReceivedOrders();
        camelContext.stop();
    }

    public static void inspectReceivedOrders() {
        BrowsableEndpoint browse = camelContext.getEndpoint("browse:orderReceived", BrowsableEndpoint.class);
        List<Exchange> exchanges = browse.getExchanges();
        System.out.println("Browsing queue: " + browse.getEndpointUri() + " size: " + exchanges.size());
        for (Exchange exchange : exchanges) {
            String payload = exchange.getIn().getBody(String.class);
            String msgId = exchange.getIn().getHeader("JMSMessageID", String.class);
            System.out.println(msgId + "=" + payload);
        }
    }
}
As far as I know, it is not possible in Camel to read JMS messages without consuming them...
The only workaround I found (in a JEE app) was to define a startup EJB with a timer, holding a QueueBrowser and delegating the message processing to a Camel route:
@Singleton
@Startup
public class MyQueueBrowser {
    private TimerService timerService;

    @Resource(mappedName = "java:/jms/queue/com.company.myqueue")
    private Queue sourceQueue;

    @Inject
    @JMSConnectionFactory("java:/ConnectionFactory")
    private JMSContext jmsContext;

    @Inject
    @Uri("direct:readMessage")
    private ProducerTemplate camelEndpoint;

    @PostConstruct
    private void init() {
        TimerConfig timerConfig = new TimerConfig(null, false);
        ScheduleExpression se = new ScheduleExpression().hour("*").minute("*/" + frequencyInMin);
        timerService.createCalendarTimer(se, timerConfig);
    }

    @Timeout
    public void scheduledExecution(Timer timer) throws Exception {
        QueueBrowser browser = null;
        try {
            browser = jmsContext.createBrowser(sourceQueue);
            Enumeration<Message> msgs = browser.getEnumeration();
            while (msgs.hasMoreElements()) {
                Message jmsMsg = msgs.nextElement();
                // + here: read body and/or properties of jmsMsg
                camelEndpoint.sendBodyAndHeader(body, myHeaderName, myHeaderValue);
            }
        } catch (JMSRuntimeException jmsException) {
            ...
        } finally {
            browser.close();
        }
    }
}
The Apache Camel Browse component is designed exactly for that. Check here for the documentation.
Can't say more, since you have not provided any other information.
Let's assume you have a route like this:
from("activemq:somequeue").to("bean:someBean")
or
from("activemq:somequeue").process(exchange -> {})
All you have to do is put a browse endpoint in between, like this:
from("activemq:somequeue").to("browse:someHandler").to("bean:someBean")
Then write a class like this:
@Component
public class BrowseQueue {
    @Autowired
    CamelContext camelContext;

    public void inspect() {
        BrowsableEndpoint browse = camelContext.getEndpoint("browse:someHandler", BrowsableEndpoint.class);
        List<Exchange> exchanges = browse.getExchanges();
        for (Exchange exchange : exchanges) {
            ......
        }
    }
}
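One caveat worth noting: the Browse component only sees exchanges that have already been routed through it, so it cannot show messages still sitting untouched on the queue. If that is what you need, the camel-jms queue endpoint itself implements BrowsableEndpoint, so a sketch like the following may work (the "jms" component name and testQueue are taken from the question; treat this as an untested sketch):

// Browse the JMS queue directly without consuming, relying on the
// camel-jms queue endpoint implementing BrowsableEndpoint.
BrowsableEndpoint queue = camelContext.getEndpoint("jms:queue:testQueue", BrowsableEndpoint.class);
for (Exchange exchange : queue.getExchanges()) {
    System.out.println(exchange.getIn().getBody(String.class));
}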

Mock not found in the registry for Camel route test

I am trying to test a Camel route (polling messages from an SQS queue) containing
.bean("messageParserProcessor")
where messageParserProcessor is a Processor.
The test:
public class SomeTest extends CamelTestSupport {
    private final String queueName = ...;
    private final String producerTemplateUri = "aws-sqs://" + queueName + ...;
    private static final String MESSAGE_PARSER_PROCESSOR_MOCK_ENDPOINT = "mock:messageParserProcessor";

    @EndpointInject(uri = MESSAGE_PARSER_PROCESSOR_MOCK_ENDPOINT)
    protected MockEndpoint messageParserProcessor;

    @Override
    public boolean isUseAdviceWith() {
        return true;
    }

    @Before
    public void setUpContext() throws Exception {
        context.getRouteDefinitions().get(0).adviceWith(context, new AdviceWithRouteBuilder() {
            @Override
            public void configure() throws Exception {
                interceptSendToEndpoint("bean:messageParserProcessor")
                    .skipSendToOriginalEndpoint()
                    .process(MESSAGE_PARSER_PROCESSOR_MOCK_ENDPOINT);
            }
        });
    }

    @Test
    public void testParser() throws Exception {
        context.start();
        String expectedBody = "test";
        messageParserProcessor.expectedBodiesReceived(expectedBody);
        ProducerTemplate template = context.createProducerTemplate();
        template.sendBody(producerTemplateUri, expectedBody);
        messageParserProcessor.assertIsSatisfied();
        context.stop();
    }
}
When I run the test I get this error:
org.apache.camel.FailedToCreateRouteException:
Failed to create route route1 at:
>>> InterceptSendToEndpoint[bean:messageParserProcessor -> [process[ref:mock:messageParserProcessor]]] <<< in route: Route(route1)[[From[aws-sqs://xxx...
because of No bean could be found in the registry for: mock:messageParserProcessor of type: org.apache.camel.Processor
Same error if I replace interceptSendToEndpoint(...) with mockEndpointsAndSkip("bean:messageParserProcessor")
The test can be executed (but obviously doesn't pass) when I don't use a mock:
interceptSendToEndpoint("bean:messageParserProcessor")
    .skipSendToOriginalEndpoint()
    .process(new Processor() {
        @Override
        public void process(Exchange exchange) throws Exception {}
    });
So the problem is that the mock is not found. What is wrong with the way I create it?
So I found a workaround to retrieve mocks from the registry:
interceptSendToEndpoint("bean:messageParserProcessor")
    .skipSendToOriginalEndpoint()
    .bean(getMockEndpoint(MESSAGE_PARSER_PROCESSOR_MOCK_ENDPOINT));
// Instead of
// .process(MESSAGE_PARSER_PROCESSOR_MOCK_ENDPOINT);
But I still don't understand why using .process("mock:someBean") doesn't work...
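A likely explanation, judging from the error text: in the Java DSL, process(String) looks up an org.apache.camel.Processor bean by name in the registry, but "mock:messageParserProcessor" is an endpoint URI, and a MockEndpoint is an Endpoint rather than a Processor, hence "No bean could be found in the registry ... of type: org.apache.camel.Processor". If that is right, routing to the mock endpoint with to(...) should work; a sketch:

// Send to the mock *endpoint* instead of looking up a Processor bean
interceptSendToEndpoint("bean:messageParserProcessor")
    .skipSendToOriginalEndpoint()
    .to(MESSAGE_PARSER_PROCESSOR_MOCK_ENDPOINT);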

Stop/Remove Route Dynamically/Programmatically Doesn't Remove the Corresponding Thread

While processing an Exchange received from JMS, I dynamically create a route that fetches a file from FTP to the file system; when the batch is done, I need to remove that same route. The following code fragment shows how I do this:
public void execute() {
    try {
        context.addRoutes(createFetchIndexRoute(routeId()));
    } catch (Exception e) {
        throw Throwables.propagate(e);
    }
}

private RouteBuilder createFetchIndexRoute(final String routeId) {
    return new RouteBuilder() {
        @Override
        public void configure() throws Exception {
            from("ftp://" + getRemoteQuarterDirectory() +
                    "?fileName=" + location.getFileName() +
                    "&binary=true" +
                    "&localWorkDirectory=" + localWorkDirectory)
                .to("file://" + getLocalQuarterDirectory())
                .process(new Processor() {
                    RouteTerminator terminator;

                    @Override
                    public void process(Exchange exchange) throws Exception {
                        if (camelBatchComplete(exchange)) {
                            terminator = new RouteTerminator(routeId, exchange.getContext());
                            terminator.start();
                        }
                    }
                })
                .routeId(routeId);
        }
    };
}
I'm using a thread to stop a route from within the route itself, which is the approach recommended in the Camel documentation - How can I stop a route from a route.
public class RouteTerminator extends Thread {
    private String routeId;
    private CamelContext camelContext;

    public RouteTerminator(String routeId, CamelContext camelContext) {
        this.routeId = routeId;
        this.camelContext = camelContext;
    }

    @Override
    public void run() {
        try {
            camelContext.stopRoute(routeId);
            camelContext.removeRoute(routeId);
        } catch (Exception e) {
            throw Throwables.propagate(e);
        }
    }
}
As a result, the route does stop. But what I see in JConsole is that the thread corresponding to the route isn't removed, so over time these abandoned threads just keep accumulating.
Is there a way to properly stop/remove a route dynamically/programmatically and also release the route's thread, so that threads don't accumulate over time?
This is fixed in the next Camel releases, 2.9.2 and 2.10, by this ticket:
https://issues.apache.org/jira/browse/CAMEL-5072
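For later readers: on Camel 3.x the route-stopping API moved to the route controller, so the RouteTerminator body would look roughly like the sketch below (same routeId and camelContext as above; treat this as an untested sketch for 3.x).

// Camel 3.x equivalent: stopRoute lives on the RouteController,
// and removeRoute still requires the route to be stopped first.
camelContext.getRouteController().stopRoute(routeId);
camelContext.removeRoute(routeId);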
