JMS, how to implement request-reply pattern using two message brokers - apache-camel

Is it possible to implement the asynchronous request-reply pattern using two JMS message broker instances, such that service A sends a message to queue A and gets the response from queue B (a different broker instance)?
Does the JMS API (or Apache Camel) provide a complete mechanism to achieve this transparently with correlation identifiers? What configuration is necessary?
Bonus question:
To shuffle the deck even more, I would like to cluster the queues. Could this be achieved transparently as per the specification? Basically I have multiple Spring Boot applications (services) with an ActiveMQ broker embedded in the Spring context. Each broker acts as a one-way channel for the service, and each service expects the response to a specific message sent to another service to arrive in its own broker. Now I would like to run multiple instances of each service and retain the correlation between messages.

Since your request sender (A) and response receiver (B) are two different services, it becomes your responsibility to store the context of the original request somewhere.
Probably the easiest approach is to use a dedicated queue for this purpose (let's call it echo).
(A) could store the original request message with some specific correlation ID, for example taken from the message ID of the request sent. The same correlation ID is supposed to be set on the response message, which will be picked up by (B). So once the response is received by (B), it is able to get the original request from the echo queue, using a selector with the same correlation ID.
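A minimal Camel sketch of that idea (the component names brokerA/brokerB, the queue names, the timeout, and the direct: endpoints are assumptions, not something the JMS spec prescribes):

import org.apache.camel.builder.RouteBuilder;

public class RequestReplyAcrossBrokers extends RouteBuilder {
    @Override
    public void configure() {
        // Service A: send the request to queue A on broker 1 and park a copy of the
        // original message on the echo queue, keyed by JMSCorrelationID.
        from("direct:sendRequest")
            .setHeader("JMSCorrelationID", simple("${exchangeId}"))
            .multicast()
                .to("brokerA:queue:requests")   // the request the other service consumes
                .to("brokerA:queue:echo");      // parked copy for later correlation

        // Service B side: the responder is expected to copy JMSCorrelationID onto its
        // reply and send it to queue B on broker 2.
        from("brokerB:queue:responses")
            .process(exchange -> {
                String correlationId = exchange.getIn().getHeader("JMSCorrelationID", String.class);
                // Fetch the parked request with a JMS selector on the same correlation ID.
                Object original = exchange.getContext().createConsumerTemplate()
                        .receiveBody("brokerA:queue:echo?selector=JMSCorrelationID='" + correlationId + "'", 5000);
                exchange.getIn().setHeader("originalRequest", original);
            })
            .to("direct:handleCorrelatedReply");
    }
}

The two components would be registered against different connection factories (for example two ActiveMQConnectionFactory instances pointing at different broker URLs) under the names brokerA and brokerB.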

Related

Apache Camel: complete exchanges when an aggregated exchange is completed

In my Apache Camel application, I have a very simple route:
from("aws-sqs://...")
.aggregate(constant(true), new AggregationStrategy())
.completionSize(100)
.to("SEND_AGGREGATE_VIA_HTTP");
That is, it takes messages from AWS SQS, groups them in batches of 100, and sends them via HTTP somewhere.
Exchanges with messages from SQS are completed successfully on getting into the aggregate stage, and SqsConsumer deletes them from the queue at this point.
The problem is that something might happen with an aggregated exchange (it might be delivered with an error), and messages will be lost. I would really like these original exchanges to be completed successfully (messages to be deleted from a queue) only when an aggregated exchange they're in is also completed successfully (a batch of messages is delivered). Is there a way to do this?
Thank you.
You could set deleteAfterRead to false and manually delete the messages after you've sent them to your HTTP endpoint; you could use a bean or a processor and send the proper SQS delete requests through the AWS SDK library. It's a workaround, granted, but I don't see a better way of doing it.
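A rough sketch of what that could look like (the queue name, queue URL, the receiptHandles header, and the body-grouping aggregation strategy are assumptions):

import java.util.ArrayList;
import java.util.List;

import com.amazonaws.services.sqs.AmazonSQS;
import com.amazonaws.services.sqs.AmazonSQSClientBuilder;
import org.apache.camel.Exchange;
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.processor.aggregate.AggregationStrategy;

public class SqsManualDeleteRoute extends RouteBuilder {

    private final AmazonSQS sqs = AmazonSQSClientBuilder.defaultClient();
    private static final String QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/my-queue"; // assumption

    @Override
    public void configure() {
        from("aws-sqs://my-queue?deleteAfterRead=false")
            .aggregate(constant(true), new CollectingStrategy())
            .completionSize(100)
            .to("SEND_AGGREGATE_VIA_HTTP")
            // Only delete from SQS once the HTTP call above has succeeded.
            .process(exchange -> {
                List<String> handles = exchange.getIn().getHeader("receiptHandles", List.class);
                if (handles != null) {
                    for (String handle : handles) {
                        sqs.deleteMessage(QUEUE_URL, handle);
                    }
                }
            });
    }

    // Groups bodies into a List and remembers every SQS receipt handle so the
    // original messages can be deleted after the aggregate has been delivered.
    private static class CollectingStrategy implements AggregationStrategy {
        @Override
        public Exchange aggregate(Exchange oldExchange, Exchange newExchange) {
            String handle = newExchange.getIn().getHeader("CamelAwsSqsReceiptHandle", String.class);
            if (oldExchange == null) {
                List<Object> bodies = new ArrayList<>();
                bodies.add(newExchange.getIn().getBody());
                List<String> handles = new ArrayList<>();
                handles.add(handle);
                newExchange.getIn().setBody(bodies);
                newExchange.getIn().setHeader("receiptHandles", handles);
                return newExchange;
            }
            oldExchange.getIn().getBody(List.class).add(newExchange.getIn().getBody());
            oldExchange.getIn().getHeader("receiptHandles", List.class).add(handle);
            return oldExchange;
        }
    }
}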

User Destinations with Stomp, Spring Websockets, an External Broker with an External Consumer

My question centers on this slide from one of Rossen Stoyanchev's webinars.
When using a simpleBroker I can send messages to individual users with the /user/** destination format that is picked up in UserDestination and converted. I can also use it to send to a specific session, or all sessions of a specific user.
This is also possible when using an external broker like ActiveMQ or RabbitMQ, as long as the sender is also able to use /user/** or its helper annotations such as @SendToUser.
But if I am not processing these messages locally and I have another consumer connected to the external message broker (Apache Camel, for example), how do I handle user-specific messages and also reply at a user and session level?
If the other consumer is in the same JVM you can have the "brokerMessagingTemplate" bean injected and use it to send messages to user-prefixed destinations.
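A minimal sketch of that idea (the destination /queue/replies and the payload type are assumptions):

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.messaging.simp.SimpMessagingTemplate;
import org.springframework.stereotype.Service;

@Service
public class UserNotifier {

    private final SimpMessagingTemplate template;

    @Autowired
    public UserNotifier(@Qualifier("brokerMessagingTemplate") SimpMessagingTemplate template) {
        this.template = template;
    }

    public void notifyUser(String username, String payload) {
        // Resolves to /user/{username}/queue/replies and is routed to that
        // user's session(s) by the user-destination machinery.
        template.convertAndSendToUser(username, "/queue/replies", payload);
    }
}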
For 4.2 we plan to support user destinations in a deployment with multiple web application servers connected to an external broker (see https://jira.spring.io/browse/SPR-11620). So if the other consumer is in a different JVM, then you could declare the @EnableWebSocketMessageBroker setup in that JVM as well, or you could simply extend AbstractMessageBrokerConfiguration if you don't need the WebSocket client bits.
HTH

Pushing data across App Engine instances

Let's say we have several clients connected to App Engine using the Channel API. Each client sends messages, which should be propagated to other connected clients according to some rules. The tricky part is that clients may not be connected to the same App Engine instance.
Is there any way to push data from one instance to the others?
(Yes, I know about Memcache, but this would require some kind of polling.)
You're asking two questions here.
a. Can you push data from one instance to another without the use of polling? The answer is generally no.
b. Can one client send messages to the server that can be propagated to other clients? Yes, and this does not require propagating messages to other server-side instances.
Consider the Channel API as a service. Clients are connected to the Channel API service; they are not connected to any particular instance. Therefore any instance can send messages to any client.
1. You'll need to store the Channel tokens of your clients in the datastore, in some way that's queryable to match your rules.
2. Your client makes an HTTP request to send a message to your server.
3. The handler on the server queries for channel tokens that it needs to propagate the message to (either from memcache or datastore).
4. The handler on the server sends messages to all the clients.
If the list of destination clients is extremely large, you might want to do steps 3/4 in a task queue where the operation can run longer.
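A bare-bones sketch of steps 3 and 4 in a servlet (the ClientToken entity kind, its clientId property, and the msg request parameter are assumptions about how you might store and receive the data):

import java.io.IOException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import com.google.appengine.api.channel.ChannelMessage;
import com.google.appengine.api.channel.ChannelService;
import com.google.appengine.api.channel.ChannelServiceFactory;
import com.google.appengine.api.datastore.DatastoreService;
import com.google.appengine.api.datastore.DatastoreServiceFactory;
import com.google.appengine.api.datastore.Entity;
import com.google.appengine.api.datastore.PreparedQuery;
import com.google.appengine.api.datastore.Query;

public class BroadcastServlet extends HttpServlet {

    @Override
    protected void doPost(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        String message = req.getParameter("msg");

        // Step 3: look up the client IDs that should receive this message.
        DatastoreService datastore = DatastoreServiceFactory.getDatastoreService();
        PreparedQuery pq = datastore.prepare(new Query("ClientToken"));

        // Step 4: push the message to each client; any instance can do this.
        ChannelService channelService = ChannelServiceFactory.getChannelService();
        for (Entity client : pq.asIterable()) {
            String clientId = (String) client.getProperty("clientId");
            channelService.sendMessage(new ChannelMessage(clientId, message));
        }
        resp.setStatus(HttpServletResponse.SC_NO_CONTENT);
    }
}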
It does not matter what instance a client is connected to, that's hidden from you by the API.
Clients can only "reply" to message via standard HTTP commands, they don't actually have any way to respond via the channel API directly.
So Client A on server A1 wants to send a message to client B on server B1.
Client A posts to a handler. That might be instance A1 or B1; it does not matter which, as the server now passes the message on to client B via the Channel API, whatever server client B is connected to.
The real point is that no App Engine instance has any data at all, in general. So it does not matter which instance you connect to, it might be the 99th instance or the very first to start up. So you have to design your application so that it's irrelevant what instance is in use.
Client sends message to server via HTTP.
Server sends message to N clients via the channel API.
The Channel API does not make a fixed frontend-instance-to-client connection. Any frontend instance can push a message to a channel if it knows the channel ID.
What you need to do is pass messages cross-channel.
User one sends message normally to server (e.g. via GET)
Server looks up channel ID of second user and pushes the message
Repeat procedure in other direction: second user to first user.
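A small sketch of the "look up the channel ID and push" step, assuming each user's channel ID was stored in the datastore under the user's name when the channel was opened (the ClientToken kind and clientId property are assumptions):

import com.google.appengine.api.channel.ChannelMessage;
import com.google.appengine.api.channel.ChannelServiceFactory;
import com.google.appengine.api.datastore.DatastoreServiceFactory;
import com.google.appengine.api.datastore.Entity;
import com.google.appengine.api.datastore.EntityNotFoundException;
import com.google.appengine.api.datastore.KeyFactory;

public class DirectMessageSender {

    // Pushes a message to one named user, whatever instance their channel was created on.
    public void sendToUser(String toUser, String message) {
        try {
            Entity token = DatastoreServiceFactory.getDatastoreService()
                    .get(KeyFactory.createKey("ClientToken", toUser));
            String clientId = (String) token.getProperty("clientId");
            ChannelServiceFactory.getChannelService()
                    .sendMessage(new ChannelMessage(clientId, message));
        } catch (EntityNotFoundException e) {
            // The user has no open channel; ignore or handle as appropriate.
        }
    }
}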

Google Channel API sends a message to all clients

I created a working Google Channel API application and now I would like to send a message to all clients.
I have two servlets. The first creates the channel and tells the clients the user ID and token. The second one is called by an HTTP POST and should send the message.
To send a message to a client, I use:
channelService.sendMessage(new ChannelMessage(channelUserId, "This is a server message!"));
This sends the message just to one client. How could I send this to all?
Do I have to store every ID that I use to create a channel and send the message for every ID? How can I pass the IDs to the second servlet?
Using the Channel API it is not possible to create one channel and then have many subscribers to it. The server creates a unique channel for each individual JavaScript client, so if two clients use the same client ID, the messages will be received by only one of them.
If you want to send the same message to multiple clients, in short, you will have to keep track of active clients and send the same message to all of them.
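One rough way to do that bookkeeping is to persist each client ID in the first servlet, when the channel is created, so the second servlet can query and loop over them (the ClientToken entity kind, the clientId property, and the userId request parameter are assumptions):

import java.io.IOException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import com.google.appengine.api.channel.ChannelService;
import com.google.appengine.api.channel.ChannelServiceFactory;
import com.google.appengine.api.datastore.DatastoreServiceFactory;
import com.google.appengine.api.datastore.Entity;

public class OpenChannelServlet extends HttpServlet {

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        String clientId = req.getParameter("userId");

        ChannelService channelService = ChannelServiceFactory.getChannelService();
        String token = channelService.createChannel(clientId);

        // Remember this client so the sending servlet can find every active channel later.
        Entity entity = new Entity("ClientToken", clientId);
        entity.setProperty("clientId", clientId);
        DatastoreServiceFactory.getDatastoreService().put(entity);

        resp.setContentType("text/plain");
        resp.getWriter().print(token);
    }
}

The second servlet then runs a query over the ClientToken entities and calls channelService.sendMessage(...) for each stored ID.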
If that approach sounds scary and messy, consider using PubNub for your push notification messages, where you can easily create one channel and have many subscribers. To make it run on Google App Engine is not that hard, since they support almost any platform or device.
I know this is an old question, but I just finished an open source project that uses the Channel API to implement a publish/subscribe model, i.e. you can have multiple users subscribe to a single topic, and then all those subscribers will be notified when anyone publishes a message to the topic. It also has some nice features like automatic message persistence if desired, and "return receipts", where a subscriber can be notified whenever OTHER subscribers receive that message. See https://github.com/adevine/gaewebpubsub#gae-web-pubsub. Licensed under Apache 2.0 license.

Distributed ActiveMQ with Camel

I am in the process of learning ActiveMQ and Camel, with the goal to create a little prototype system that works something like this:
(diagram of the prototype system; source: paulstovell.com)
When an order is placed in the Orders system, a message is sent out to any subscribers (a pub/sub system), and they can play their part in processing the order. The Orders, Shipping and Invoicing applications have their own ActiveMQ installations, so that if any of the three systems are offline, the others can continue to function. Something takes care of moving messages between the ActiveMQ installs.
Getting Apache Camel to move messages from one queue to another via routes is quite easy, if they are on the same ActiveMQ instance. So this works for managing the subscription queues.
The next challenge is pushing messages from one ActiveMQ instance to another, and it's the bit where I am not sure what to look at next.
Can Camel route between different ActiveMQ installations? (I can't figure out what the JMS endpoint URI would be if they are on different machines.)
I understand ActiveMQ has store and forward capabilities. Is this what I would use to move messages between Orders and Shipping/Invoicing?
Or is this what Apache ServiceMix is meant to solve?
This is a pretty straightforward asynchronous, event-driven application that is well-suited for ActiveMQ and Camel.
Actually you do not move messages explicitly from one ActiveMQ instance to another. The way it works is using what's known as a network of brokers. In your case, you'd have three brokers: ActiveMQ-purple, ActiveMQ-green and ActiveMQ-blue. ActiveMQ-purple creates a uni-directional broker network with ActiveMQ-green and ActiveMQ-blue. This allows ActiveMQ-purple to store-and-forward messages to ActiveMQ-green and ActiveMQ-blue based on consumer demand.
The Orders app accepts orders on the orders queue on ActiveMQ-purple. The Orders app uses Camel to consume and process a message to determine if it is an invoicing message or a shipping message. Camel routes the messages to either the invoicing queue or the shipping queue on ActiveMQ-purple.
Consumer demand comes from the Invoicing app and the Shipping app. The Invoicing app uses Camel to consume messages from the invoicing queue on ActiveMQ-green. The Shipping app uses Camel to consume messages from the shipping queue on ActiveMQ-blue. Because of the broker network and because of the consumer demand on the ActiveMQ-green.invoicing queue and the ActiveMQ-blue.shipping queue, messages will be forwarded from ActiveMQ-purple to the appropriate broker and queue. There is no need to explicitly route messages to a specific broker.
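A minimal sketch of how ActiveMQ-purple could establish those uni-directional network connectors with the embedded-broker Java API (the broker name, hostnames, and ports are assumptions; the same thing is commonly done with networkConnector elements in activemq.xml):

import org.apache.activemq.broker.BrokerService;

public class PurpleBroker {
    public static void main(String[] args) throws Exception {
        BrokerService broker = new BrokerService();
        broker.setBrokerName("activemq-purple");
        broker.addConnector("tcp://0.0.0.0:61616");

        // Uni-directional network connectors: purple stores and forwards messages
        // to green and blue whenever those brokers have interested consumers.
        broker.addNetworkConnector("static:(tcp://green-host:61616)");
        broker.addNetworkConnector("static:(tcp://blue-host:61616)");

        broker.start();
        broker.waitUntilStopped();
    }
}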
I hope this answers your questions. Let me know if you have any more.
Bruce
Hmmmm, I've only dabbled at best, and not for a fair while, but I'll try and offer something.
ActiveMQ can route between different installations and just uses standard URIs to my knowledge so I'm not sure what the problem is here. I would think that using TCP you'd be fine. Using ServiceMix (you mention it later) you'd just specify a connectionFactory & then provide the URI in that. This link shows some examples http://servicemix.apache.org/servicemix-jms-new-endpoints.html.
Camel has support for the Durable Subscriber pattern, if that's what you were after (http://camel.apache.org/durable-subscriber.html). This pattern will ensure that if the subscriber is offline when the message is ready, it will be held until the subscriber is back online. This is also supported by ServiceMix (see the link given above and look for 'subscriptionDurable').
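For reference, a durable topic subscription in the Camel ActiveMQ component looks roughly like this (the topic name, clientId, subscription name, and the bean endpoint are assumptions):

import org.apache.camel.builder.RouteBuilder;

public class DurableOrderSubscriber extends RouteBuilder {
    @Override
    public void configure() {
        // A durable subscription: if the Shipping app is offline, the broker keeps
        // the order events until this subscriber reconnects under the same clientId
        // and subscription name.
        from("activemq:topic:orders?clientId=shipping-app&durableSubscriptionName=shipping-orders")
            .to("bean:shippingService?method=processOrder");
    }
}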
