Camel ActiveMQ -> AHC behaviour (blocking?) - apache-camel

I just started using Apache Camel and I'm curious about the seemingly counter-intuitive default behaviour of the asynchronous HTTP client (AHC) component. While consuming messages from ActiveMQ, I can't get it to act in a non-blocking fashion.
My route looks like this:
@Component
public class Broadcaster extends RouteBuilder {
    @Override
    public void configure() throws Exception {
        errorHandler(deadLetterChannel("activemq:failed.messages"));
        from("activemq:outbound.messages")
            .setExchangePattern(ExchangePattern.InOnly)
            .recipientList(simple("ahc:${in.header[PublishDestination]}"))
            .end();
    }
}
I enqueued several messages, half of them addressed to an artificially delayed web server and the other half to a normal one. I expected all the messages for the fast server to arrive immediately, and the slow ones to trickle in over time. However, this is what was observed on the fast web server:
00:24:02.585, <hello>World</hello>
00:24:03.622, <hello>World</hello>
00:24:04.640, <hello>World</hello>
00:24:05.658, <hello>World</hello>
As you can see, there is exactly one second between each logged request, which corresponds to the artificial 1-second delay on the slow server. Based on the route timings, it looks like the JMS consumer waits for AHC to complete before consuming the next message off the queue:
Processor Elapsed (ms)
[activemq://outbound.messages ] [ 1020]
[setExchangePattern[InOnly] ] [ 0]
[ahc:${in.header[PublishDestination]} ] [ 1018]
Am I supposed to explicitly use async producers and write callback handlers in these cases, or is there something else I'm missing? Thank you!

Well, a case of RTFM I guess, although the ActiveMQ page leaves a lot to be desired in terms of documenting the properties available for endpoint configuration. There should probably be a note saying that most (all?) JMS configuration options are also available on the ActiveMQ component. In any case, the solution is to define the consumer as follows:
from("activemq:outbound.messages?asyncConsumer=true")

Related

How to manually ack/nack a PubSub message in Camel Route

I am setting up a Camel route with ackMode=NONE, meaning acknowledgements are not done automatically. How do I explicitly acknowledge the message in the route?
In my Camel Route definition I've set ackMode to NONE. According to the documentation, I should be able to manually acknowledge the message downstream:
https://github.com/apache/camel/blob/master/components/camel-google-pubsub/src/main/docs/google-pubsub-component.adoc
"AUTO = exchange gets ack’ed/nack’ed on completion. NONE = downstream process has to ack/nack explicitly"
However I cannot figure out how to send the ack.
from("google-pubsub:<project>:<subscription>?concurrentConsumers=1&maxMessagesPerPoll=1&ackMode=NONE")
.bean("processingBean");
My PubSub subscription has an acknowledgement deadline of 10 seconds and so my message keeps getting re-sent every 10 seconds due to ackMode=NONE. This is as expected. However I cannot find a way to manually acknowledge the message once processing is complete and stop the re-deliveries.
I was able to dig through the Camel components and figure out how it is done. First I created a GooglePubsubConnectionFactory bean:
@Bean
public GooglePubsubConnectionFactory googlePubsubConnectionFactory() {
    GooglePubsubConnectionFactory connectionFactory = new GooglePubsubConnectionFactory();
    connectionFactory.setCredentialsFileLocation(pubsubKey);
    return connectionFactory;
}
Then I was able to reference the ack id of the message from the header:
@Header(GooglePubsubConstants.ACK_ID) String ackId
Then I used the following code to acknowledge the message:
List<String> ackIdList = new ArrayList<>();
ackIdList.add(ackId);
AcknowledgeRequest ackRequest = new AcknowledgeRequest().setAckIds(ackIdList);
Pubsub pubsub = googlePubsubConnectionFactory.getDefaultClient();
pubsub.projects().subscriptions().acknowledge("projects/<my project>/subscriptions/<my subscription>", ackRequest).execute();
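Putting those pieces together, a processing bean could look roughly like this (a sketch; the class name, the injected connection factory field and the @Handler wiring are my own assumptions, and the Pubsub, AcknowledgeRequest and GooglePubsubConstants classes are the same ones used in the snippets above):
// class name and bean wiring are illustrative, not from the original setup
@Component
public class ProcessingBean {

    @Autowired
    private GooglePubsubConnectionFactory googlePubsubConnectionFactory;

    @Handler
    public void process(@Body String body,
                        @Header(GooglePubsubConstants.ACK_ID) String ackId) throws Exception {
        // ... do the actual processing work first ...

        // then acknowledge explicitly, which stops the re-deliveries
        AcknowledgeRequest ackRequest = new AcknowledgeRequest()
                .setAckIds(Collections.singletonList(ackId));
        Pubsub pubsub = googlePubsubConnectionFactory.getDefaultClient();
        pubsub.projects().subscriptions()
                .acknowledge("projects/<my project>/subscriptions/<my subscription>", ackRequest)
                .execute();
    }
}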
I think it is best if you look at how the Camel component does it with ackMode=AUTO. Have a look at this class (method acknowledge).
But why do you want to do this extra work? Camel is your friend: it simplifies integration by abstracting away low-level code.
So when you use ackMode=AUTO, Camel automatically commits your successfully processed messages (once the message has successfully passed through the whole route) and rolls back the messages that could not be processed.

How to schedule JMS consuming in Apache Camel?

I need to consume JMS messages with Camel every day at 9pm (or from 9pm to 10pm, to give it time to consume all the messages).
I can't see any "scheduler" option for URIs such as "cMQConnectionFactory:queue:myQueue", while it exists for "file://" or "ftp://" URIs.
If I put a cTimer before it, it will just send an empty message to the queue rather than schedule the consumer.
You can use a route policy where you can set up, for example, a cron expression to tell when the route is started and when it is stopped.
http://camel.apache.org/scheduledroutepolicy.html
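A minimal sketch of that idea (assuming the camel-quartz2 or camel-quartz dependency is available; the queue name, target bean and cron expressions are examples, not taken from the question):
// queue name, bean endpoint and times are examples
CronScheduledRoutePolicy policy = new CronScheduledRoutePolicy();
policy.setRouteStartTime("0 0 21 * * ?");   // start the route at 21:00
policy.setRouteStopTime("0 0 22 * * ?");    // stop it again at 22:00

from("jms:queue:myQueue")
    .routePolicy(policy)
    .noAutoStartup()          // let the policy, not the context startup, start the route
    .to("bean:myMessageProcessor");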
Another alternative is to start/stop the route via the Java API or JMX etc. and have some other logic that knows when to do that according to the clock.
This is something that has caused me a significant amount of trouble. There are a number of ways of skinning this cat, and none of them are great as far as I can see.
One is to set the route not to start automatically, and use a schedule to start the route and then stop it again after a short time using the controlbus EIP: http://camel.apache.org/controlbus.html
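That variant would look roughly like this (a rough sketch; the route id, endpoints and times are my own examples):
// route id, queue name, bean endpoint and cron times are examples
from("jms:queue:myQueue").routeId("drainRoute").noAutoStartup()
    .to("bean:myMessageProcessor");

// start the consuming route at 21:00 and stop it again at 22:00 via the control bus
from("quartz2://scheduler/startDrain?cron=0+0+21+*+*+?")
    .to("controlbus:route?routeId=drainRoute&action=start");
from("quartz2://scheduler/stopDrain?cron=0+0+22+*+*+?")
    .to("controlbus:route?routeId=drainRoute&action=stop");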
I didn't like this approach because I didn't trust that it would drain the queue completely once and only once per trigger.
Another is to use pollEnrich to query the queue, but that only seems to pick up one item from the queue, whereas I wanted to drain it completely (exactly once).
I wrote a custom bean that uses consumer and producer templates to read all the entries in a queue with a specified time-out.
I found an example on the internet somewhere, but it took me a long time to find, and quickly searching again I can't find it now.
So what I have is:
from("timer:myTimer...")
.beanRef( "myConsumerBean", "pollConsumer" )
from("direct:myProcessingRoute")
.to("whatever");
And a simple pollConsumer method:
public void pollConsumer() throws Exception {
    // lazily resolve the endpoint to poll (endpointUri is set in the bean configuration)
    if (consumerEndpoint == null) {
        consumerEndpoint = consumer.getCamelContext().getEndpoint(endpointUri);
    }
    consumer.start();
    producer.start();
    // drain the queue: keep polling until nothing arrives within the 1-second timeout
    while (true) {
        Exchange exchange = consumer.receive(consumerEndpoint, 1000);
        if (exchange == null) break;
        // hand the message to the processing route and complete its unit of work
        producer.send(exchange);
        consumer.doneUoW(exchange);
    }
    producer.stop();
    consumer.stop();
}
where the producer is a DefaultProducerTemplate, consumer is a DefaultConsumerTemplate, and these are configured in the bean configuration.
This seems to work for me, but if anyone gives you a better answer I'll be very interested to see how it can be done better.

Camel RabbitMQ Acknowledgement

I am using Camel for my messaging application. In my use case I have a producer (which is RabbitMQ here), and the Consumer is a bean.
from("rabbitmq://127.0.0.1:5672/exDemo?queue=testQueue&username=guest&password=guest&autoAck=false&durable=true&exchangeType=direct&autoDelete=false")
.throttle(100).timePeriodMillis(10000)
.process(new Processor() {
#Override
public void process(Exchange exchange) throws Exception {
MyCustomConsumer.consume(exchange.getIn().getBody())
}
});
Apparently, when autoAck is false, acknowledgement is sent when the process() execution is finished (please correct me if I am wrong here)
Now I don't want to acknowledge when the process() execution is finished; I want to do it at a later stage. I have a BlockingQueue in my MyCustomConsumer where consume() puts the messages, and MyCustomConsumer has a separate mechanism to process them. I want to acknowledge a message only when MyCustomConsumer has finished processing it from the BlockingQueue. How can I achieve this?
You can consider using the Camel AsyncProcessor API and calling the callback's done method once you have processed the message from the BlockingQueue.
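A sketch of what that AsyncProcessor could look like (the class is my own, and consume() is given an extra completion callback here that the original MyCustomConsumer does not have; see also the next answer about whether the RabbitMQ consumer actually uses the async API):
public class DeferredAckProcessor implements AsyncProcessor {

    @Override
    public boolean process(final Exchange exchange, final AsyncCallback callback) {
        // hand the message and the callback to the custom consumer; it should call
        // callback.done(false) once it has finished processing the message
        // (the callback parameter of consume() is hypothetical, not in the original code)
        MyCustomConsumer.consume(exchange.getIn().getBody(), () -> callback.done(false));
        return false; // false = the exchange will be completed asynchronously
    }

    @Override
    public void process(Exchange exchange) throws Exception {
        // fall back to blocking behaviour when called via the plain Processor API
        AsyncProcessorHelper.process(this, exchange);
    }
}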
I bumped into the same issue.
The Camel RabbitMQConsumer.RabbitConsumer implementation does
consumer.getProcessor().process(exchange);
long deliveryTag = envelope.getDeliveryTag();
if (!consumer.endpoint.isAutoAck()) {
    log.trace("Acknowledging receipt [delivery_tag={}]", deliveryTag);
    channel.basicAck(deliveryTag, false);
}
So it's just expecting a synchronous processor.
If you bind this to a seda route for instance, the process method returns immediately and you're pretty much back to the autoAck situation.
My understanding is that we need to make our own RabbitMQ component to do something like
consumer.getAsyncProcessor().process(exchange, new AsyncCallback() {
    public void done(boolean doneSync) {
        if (!consumer.endpoint.isAutoAck()) {
            long deliveryTag = envelope.getDeliveryTag();
            log.trace("Acknowledging receipt [delivery_tag={}]", deliveryTag);
            channel.basicAck(deliveryTag, false);
        }
    }
});
Even then, the semantics of the "doneSync" parameter is not clear to me. I think it's merely a marker to identify whether we're dealing with a real async processor or a synchronous processor that was automatically wrapped into an async one.
Maybe someone can validate or invalidate this solution?
Is there a lighter/faster/stronger alternative?
Or could this be suggested as the default implementation for the RabbitMQConsumer?

How to broadcast a message within the same JVM process with Camel?

I am consuming a directory and I would like to broadcast a message to listeners within the same JVM process. I don't know who the interested parties are because they register themselves when they come up: the set of services within my JVM process depends on configuration.
Multicast does not seem to be what I want because I don't know at route build time where to send messages.
Besides using a queuing solution (ActiveMQ, RabbitMQ), are there other solutions?
Besides the queuing solutions (JMS/ActiveMQ and RabbitMQ), you could use the VM component for intra-JVM communication. VM is an extension of the SEDA component. In contrast to SEDA, which can only be used for communication between routes within a single Camel context, VM can be used for communication between routes running in different Camel contexts (as long as they are in the same JVM).
Sending a message:
final ProducerTemplate template = context.createProducerTemplate();
template.sendBody("vm:start", "World!");
With multipleConsumers=true it is possible to simulate publish-subscribe messaging, i.e. more than one consumer can be configured:
from("vm:start?multipleConsumers=true")
.log("********** Hello: 1 ************");
from("vm:start?multipleConsumers=true")
.log("********** Hello: 2 ************");
This prints:
route1 INFO ********** Hello: 1 ************
route2 INFO ********** Hello: 2 ************
However, in contrast to JMS/ActiveMQ and RabbitMQ, the messages cannot leave the JVM, and they are not persisted. That means messages are lost (a) if no consumer has been started when the message is sent, or (b) if the JVM crashes before the messages are consumed.
Use the recipient list pattern, as it resolves the destination endpoints at runtime.
For example, you could implement a method to dynamically determine the recipients:
from("direct:test").recipientList().method(MessageRouter.class, "routeTo");
public class MessageRouter {
public String[] routeTo() {
return new String[] {
"direct:a", "direct:b"
};
}
}
The Camel SEDA component can give you this. However, it's only valid inside the current Camel context. If that restriction works for you, it's the way to go. It handles both queue-style messaging and pub/sub.
Camel SEDA Component
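For completeness, a small sketch of the SEDA variant (endpoint names are examples; multipleConsumers=true gives the pub/sub behaviour, analogous to the VM example above):
// "file:inbox" and "seda:broadcast" are example endpoint names
from("file:inbox")
    .to("seda:broadcast?multipleConsumers=true");

from("seda:broadcast?multipleConsumers=true")
    .log("listener 1 received: ${body}");

from("seda:broadcast?multipleConsumers=true")
    .log("listener 2 received: ${body}");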

How can I improve the performance of a Seda queue?

Take this example:
from("seda:data").log("data added to queue")
.setHeader("CamelHttpMethod", constant("POST"))
.setHeader(Exchange.CONTENT_TYPE, constant("application/json"))
.process(new Processor() {
public void process(Exchange exchange) throws Exception {
exchange.setProperty(Exchange.CHARSET_NAME, "UTF-8");
}
})
.recipientList(header(RECIPIENT_LIST))
.ignoreInvalidEndpoints().parallelProcessing();
Assume the RECIPIENT_LIST header contains only one HTTP endpoint. For a given HTTP endpoint, messages should be processed in order, but two messages for different endpoints can be processed in parallel.
Basically, I want to know if there is anything that can be done to improve performance. For example, would using concurrentConsumers help?
SEDA with concurrentConsumers > 1 would absolutely help with throughput because it would allow multiple threads to run in parallel, but you'll need to implement your own locking mechanism to make sure only a single thread is hitting a given HTTP endpoint at any given time.
Otherwise, here is an overview of your options: http://camel.apache.org/parallel-processing-and-ordering.html
In short, if you can use JMS, then consider using ActiveMQ message groups, as they are trivial to use and designed for exactly this use case (parallel processing, but single-threaded per group of messages).
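A rough sketch of that idea (the intermediate queue name and consumer count are examples; JMSXGroupID is the header ActiveMQ uses to pin all messages of one group to a single consumer, preserving their order):
// queue name "ordered.posts" and concurrentConsumers=5 are examples
from("seda:data")
    .setHeader("JMSXGroupID", header(RECIPIENT_LIST))   // one group per target endpoint
    .to("activemq:queue:ordered.posts");

from("activemq:queue:ordered.posts?concurrentConsumers=5")
    .setHeader("CamelHttpMethod", constant("POST"))
    .setHeader(Exchange.CONTENT_TYPE, constant("application/json"))
    .recipientList(header(RECIPIENT_LIST))
    .ignoreInvalidEndpoints();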
