Please help me finalize an architecture for asynchronous delivery support using Apache Camel and ActiveMQ. Below I have explained my requirement point by point.
I have a Jetty server receiving incoming messages and ActiveMQ storing them on disk using KahaDB.
ActiveMQ sends an acknowledgement back to the client once the message is stored in KahaDB.
I have a Spring AbstractPollingMessageListenerContainer JMS message listener which picks up messages from the ActiveMQ queue every second and dispatches them to Camel HTTP endpoints, from where they are finally sent to the actual remote receivers.
Once the dispatcher thread gets a response from the remote receiver, it deletes the message from ActiveMQ.
Assume I have many slow remote receivers. In that case the dispatcher thread created by AbstractPollingMessageListenerContainer remains blocked until it gets a response from the remote receiver. This results in the creation of new dispatcher threads, since the already created dispatcher threads cannot dispatch new messages from the ActiveMQ queue.
Creating many dispatcher threads results in higher CPU usage, which hurts overall performance.
My requirement is that the dispatcher thread should only dispatch messages from the ActiveMQ queue to the HTTP endpoint, fire and forget, and should not acknowledge them, so that the messages stay in the queue.
I also do not want the dispatcher thread to wait for the response, so I have thought of handling the response in a separate thread; only this thread will delete the message from the ActiveMQ queue.
So my current architecture is like below:
Camel Jetty Server ----> ActiveMQ queue ----> Dispatcher Thread ---> Camel Direct endpoint ----> Camel HTTP endpoint ---> remote receivers sending response back ---> response ---> Dispatcher Thread (sends ack to delete messages from ActiveMQ queue) ----> ActiveMQ Queue.
Here, since the direct endpoint is synchronous, the dispatcher thread remains busy until it gets the response, so the same dispatcher thread cannot process further messages from the ActiveMQ queue.
Please suggest something else I can use here to avoid the direct endpoint.
I tried a SEDA endpoint, but its drawback is that it processes one message per thread, and that thread is likewise blocked until it gets the response from the receiver.
In this approach the dispatcher thread is no longer blocked, but now the SEDA consumer threads are, and they cannot dispatch new messages from SEDA's in-memory queue towards the remote receivers.
I feel I need a design that keeps sending messages to the remote receivers, and only when a response comes back is some daemon thread notified to handle the acknowledgement towards ActiveMQ. I also thought of using an NIO framework implementation such as the Camel netty/netty4-http component, but could not find exact usage or how to fit it into the current architecture.
Modified architecture should be like below:
Camel Jetty Server ----> ActiveMQ queue ----> Dispatcher Thread--->Unknown Stuff ----> Camel HTTP Endpoint ---> remote receivers sending response back--->Unknown Stuff (sends ack to delete messages from ActiveMQ queue) ----> ActiveMQ Queue
Please help me finalize the "Unknown Stuff"; I am posting my query after doing enough R&D.
New ideas are also welcome, with the restriction that I must persist the message and delete it only after getting a success response from the remote receiver. Also, I have to design the architecture using only Apache Camel routes.
Route Definitions:
1. Dispatcher Route:
from(fromUri).to(toUris);
fromUri:
[ActiveMQueue.http1270018081testEndpoint1:queue:ActiveMQueue?maxConcurrentConsumers=15&concurrentConsumers=3&maxMessagesPerTask=10&messageListenerContainerFactoryRef=AbstractPollingMessageListenerContainer ]
toUris:
[ActiveMQ.DLQ:queue:ActiveMQ.DLQ, direct:http1270018081testEndpoint1]
2. Remote Receiver Proxy Route:
fromUri: direct:http1270018081testEndpoint1
from(fromUri).to(toUri).process(responseProcessor)
toUri: http://127.0.0.1:8081/testEndpoint1?bridgeEndpoint=true
responseProcessor: processes the response received from the remote receiver.
Overall Route looks like below:
Dispatcher Route---> Remote Receiver Route---> Remote Server
JMS message acknowledgement is done under the covers, so your only way to really "send the acknowledgement back to the queue" is to use a JMS transaction (it doesn't need to be XA).
It sounds like an LLR-style transaction would be useful and would drastically simplify things for you. If you consume the message from the queue using a JMS local transaction, and only have one other endpoint, the message will only be acknowledged and removed from the queue when the HTTP send has completed, even though HTTP doesn't support transactions. You can then have a number of concurrent consumers running in parallel, combined with throttling to help with rate limiting.
from("amq:queue:INPUT.REQUESTS?concurrentConsumers=10&transacted=true") // example consumer count; transacted enabled
    .throttle(50).timePeriodMillis(1000)                                // optional rate limiting
    .to("http://url");
Related
I have a simple Camel route consuming messages from ActiveMQ, processing them, and forwarding them to REST web services:
from("activemq:MyQueue").process("MyProcessor").to("http4:uri");
I configure concurrentConsumers=100 on the connection factory of the activemq component.
In the documentation:
if asyncConsumer is disabled (default), then the Exchange is fully processed before the JmsConsumer will pick up the next message from the JMS queue
Question:
In my route, when is the exchange of each message fully processed? After the HTTP callee returns its HTTP response? If that is the case, I assume my route configuration means:
At the beginning, one message is consumed by each consumer and forwarded to the HTTP endpoint.
Each of these 100 consumers then waits and will only consume again once the HTTP call for its current message gets an HTTP response.
Another question:
I found out that the default value of the http4 component option connectionsPerRoute is 20. As I have 100 consumers, should I set connectionsPerRoute=100?
Thank you,
Hadi
Each JMS thread runs simultaneously, without knowing about the others. In your example, 100 messages are processed at the same time without getting blocked. You do not need to play with the number of threads of the HTTP component, as the processing is carried by the JMS threads from start to finish.
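As a rough sketch (the URI, processor class and consumer count below are only placeholders, not taken from your configuration), the parallelism is governed entirely by the JMS consumer settings on the route:

from("activemq:MyQueue?concurrentConsumers=100")   // 100 JMS threads, each carries its exchange end to end
    .process(new MyProcessor())                    // hypothetical Processor implementation
    .to("http4://remote-host/service");            // the blocking HTTP call runs on the consuming JMS thread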
Bit of a Camel newbie but here goes.
I have the following route:
from("activemq:queue:outputQueue").inputType(HelloWorld.class)
.to("log:stream")
.marshal().json(JsonLibrary.Jackson, HelloWorld.class)
.to("http:localhost:5000/messageForYouSir?bridgeEndpoint=true");
This retrieves messages from the queue and sends them to the HTTP endpoint as JSON. Fine.
But what if there is an error? Say an HTTP error code of 400? Then I want the message to stay on the queue. I have tried looking into not acknowledging the message, but have not been able to make it work.
I have also added an exception handler:
onException(HttpOperationFailedException.class)
    .handled(false)
    .setBody().constant("Vi fekk ein feil");
But the messages are still gone from the queue. Is there some magic spell that can make Camel not acknowledge the messages when an error occurs?
You have to consume transacted from the queue to be able to do a rollback. This is configured on the connection configuration to the broker.
Take a look at the Camel JMS docs (the ActiveMQ component extends the JMS component), especially the sections about cache levels and transacted consumption.
The simplest setup uses broker transactions: simply set transacted=true and lazyCreateTransactionManager=false on the JmsConfiguration. This way no Spring TX manager is required.
If transacted consumption is in place and the HTTP server returns an error (basically, whenever an exception occurs in the Camel route), Camel automatically performs a rollback (as long as you don't catch the error).
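In plain Java, a minimal sketch of that setup could look like the following (the broker URL and component name are placeholders; it assumes the ActiveMQ Camel component's ActiveMQComponent and JmsConfiguration classes):

ActiveMQConnectionFactory connectionFactory = new ActiveMQConnectionFactory("tcp://localhost:61616");

JmsConfiguration jmsConfiguration = new JmsConfiguration(connectionFactory);
jmsConfiguration.setTransacted(true);                    // consume inside a local JMS transaction
jmsConfiguration.setLazyCreateTransactionManager(false); // use broker-local transactions, no Spring TX manager

ActiveMQComponent activemq = new ActiveMQComponent();
activemq.setConfiguration(jmsConfiguration);
camelContext.addComponent("activemq", activemq);

With this in place, an uncaught exception in the route (for example a failed HTTP call) triggers a rollback and the message stays on the queue for redelivery.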
I am trying to set up a Camel context that polls a WebLogic JMS queue, consumes the message and sends it to a web service endpoint. In case any error occurs in the transaction or the target system is unavailable, I need to redeliver the same message without losing sequence/ordering.
I have set up a Camel JMS route with a single consumer, enabled the transacted attribute as per https://camel.apache.org/transactional-client.html, and set the redelivery to unlimited.
When the transaction fails with messageA, the JMS message consumption from the WebLogic queue is rolled back and messageA is marked for redelivery (its state string is marked "delayed") in WebLogic. But if another message reaches the WebLogic queue during this time, the Camel route picks up messageB and forwards it to the target endpoint although messageA is still in retrying mode. This distorts the whole ordering of the messages.
The transactional client is used to ensure that messages are not lost if the application is shut down during redelivery.
I expect no message loss, and that messages are always sent to the target endpoint in the order in which they were put into the WebLogic queue.
That a newly arrived message outpaces an existing message that has to be redelivered sounds like a broker (WebLogic) issue or feature.
I never saw this behavior with ActiveMQ. A failed message is redelivered immediately by the Apache broker.
It sounds like the message is "put aside" internally to redeliver it later. This can absolutely make sense to avoid blocking message processing.
Is there something like a "redelivery delay" setting on WebLogic that you can configure? I could imagine that a delayed redelivery works like an internal error queue with a scheduled consumer.
We have a Camel route where we read a message from an input queue, process it, set some JMS headers (using Exchange.getIn().setHeader(...)) and then route the message to an output queue. During an MQ failover scenario the message is redelivered. However, when the message is redelivered, the JMS headers I set earlier are lost.
Is there any way to preserve the JMS headers even after redelivery?
JMS redelivery
No, not if the message is redelivered from the input queue, simply because it is the same original message you received before. The JMS broker does not know anything about the modifications you made in the Camel route.
However, this is normally not a problem, because on a redelivery the same consumer consumes the message again and applies the same modifications to it.
As soon as you reach a "transaction boundary" in your route (meaning something has been done that cannot be repeated or would yield a different result), you should put the modified message on another queue to "save" its current state.
From there you can continue with another consumer, and so forth. If you build a processing chain like this, your system follows the Pipes and Filters EIP.
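A rough sketch of such a staging step (the queue names and the header are placeholders, not from your route):

from("jms:queue:input?transacted=true")
    .setHeader("MY_HEADER", constant("enriched"))  // modification that must survive a redelivery
    .to("jms:queue:stage2");                       // persist the modified state at the boundary

from("jms:queue:stage2?transacted=true")
    .to("http://remote-system/endpoint");          // next filter continues from the saved state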
Camel redelivery
Another possibility is to use the Camel error handler. It handles errors at the level of a single route step. It can also do retries, but then you have to make sure the message is handled correctly if all Camel retries fail (for example, by sending it to an error queue).
As long as the broker redelivery is the last resort for your message, you should build your system with potential redelivery in mind.
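For example, a dead letter channel with a few retries could look roughly like this (the retry counts and the error queue name are only illustrative):

errorHandler(deadLetterChannel("jms:queue:error")  // park the message if all Camel retries fail
    .maximumRedeliveries(3)
    .redeliveryDelay(1000));

from("jms:queue:input")
    .to("http://remote-system/endpoint");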
I have this problem, too. I am forced to convert my objects to JSON and save them in JMS headers, and afterwards I convert them back to objects. It worked for me!
In my Apache Camel application, I have a very simple route:
from("aws-sqs://...")
.aggregate(constant(true), new AggregationStrategy())
.completionSize(100)
.to("SEND_AGGREGATE_VIA_HTTP");
That is, it takes messages from AWS SQS, groups them in batches of 100, and sends them via HTTP somewhere.
Exchanges with messages from SQS are completed successfully as soon as they reach the aggregate stage, and the SqsConsumer deletes them from the queue at that point.
The problem is that something might go wrong with an aggregated exchange (its delivery might fail), and the messages will be lost. I would really like the original exchanges to be completed successfully (the messages deleted from the queue) only when the aggregated exchange they belong to is also completed successfully (the batch of messages is delivered). Is there a way to do this?
Thank you.
You could set deleteAfterRead to false and manually delete the messages after you've sent them to your HTTP endpoint; you could use a bean or a processor and send the proper SQS delete requests through the AWS SDK library. It's a workaround, granted, but I don't see a better way of doing it.
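A rough sketch of that workaround (the queue name, queue URL, header name and aggregation strategy are assumptions for illustration; it relies on keeping each message's receipt handle, e.g. the CamelAwsSqsReceiptHandle header, inside your aggregation strategy):

from("aws-sqs://my-queue?deleteAfterRead=false")
    .aggregate(constant(true), new CollectReceiptHandlesStrategy())  // hypothetical strategy that also gathers receipt handles into a header
    .completionSize(100)
    .to("SEND_AGGREGATE_VIA_HTTP")
    .process(exchange -> {
        // delete the originals only after the HTTP send succeeded
        List<String> handles = exchange.getIn().getHeader("receiptHandles", List.class);
        AmazonSQS sqs = AmazonSQSClientBuilder.defaultClient();
        for (String handle : handles) {
            sqs.deleteMessage("https://sqs.<region>.amazonaws.com/<account>/my-queue", handle);
        }
    });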