User Destinations with STOMP, Spring WebSockets, an External Broker and an External Consumer - apache-camel

My question centers around this slide from one of Rossen Stoyanchev's webinars.
When using the simple broker I can send messages to individual users with the /user/** destination format, which is picked up by the user destination handling and resolved. I can also use it to send to a specific session, or to all sessions of a specific user.
This is also possible when using an external broker like ActiveMQ or RabbitMQ, as long as the sender is also able to use /user/** or its helper annotations (@SendToUser etc.).
But if I am not processing these messages locally and I have another consumer connected to the external message broker (Apache Camel, for example), how do I handle user-specific messages and also reply at a user and session level?

If the other consumer is in the same JVM you can have the "brokerMessagingTemplate" bean injected and use it to send messages to user-prefixed destinations.
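For illustration, sending to a user-prefixed destination with that bean could look roughly like this (a minimal sketch; the class name, destination and payload are placeholders, not part of the original answer):
    import org.springframework.beans.factory.annotation.Autowired;
    import org.springframework.beans.factory.annotation.Qualifier;
    import org.springframework.messaging.simp.SimpMessagingTemplate;
    import org.springframework.stereotype.Component;

    @Component
    public class OrderNotifier {

        private final SimpMessagingTemplate template;

        @Autowired
        public OrderNotifier(@Qualifier("brokerMessagingTemplate") SimpMessagingTemplate template) {
            this.template = template;
        }

        public void notifyUser(String username, Object payload) {
            // Equivalent to sending to "/user/{username}/queue/replies"; the user
            // destination support resolves it to the user's active session(s).
            template.convertAndSendToUser(username, "/queue/replies", payload);
        }
    }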
For 4.2 we plan to support user destinations in a deployment with multiple web application servers connected to an external broker (see https://jira.spring.io/browse/SPR-11620). So if the other consumer is in a different JVM, you could declare the @EnableWebSocketMessageBroker setup in that JVM as well, or you could simply extend AbstractMessageBrokerConfiguration if you don't need the WebSocket client bits.
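For illustration, the setup in the other JVM could look roughly like this (a sketch only; the broker host/port and the /ws endpoint are assumptions, and you would drop the endpoint registration if you extend AbstractMessageBrokerConfiguration instead):
    import org.springframework.context.annotation.Configuration;
    import org.springframework.messaging.simp.config.MessageBrokerRegistry;
    import org.springframework.web.socket.config.annotation.AbstractWebSocketMessageBrokerConfigurer;
    import org.springframework.web.socket.config.annotation.EnableWebSocketMessageBroker;
    import org.springframework.web.socket.config.annotation.StompEndpointRegistry;

    @Configuration
    @EnableWebSocketMessageBroker
    public class WebSocketConfig extends AbstractWebSocketMessageBrokerConfigurer {

        @Override
        public void configureMessageBroker(MessageBrokerRegistry registry) {
            // Relay /topic and /queue destinations to the external STOMP broker.
            registry.enableStompBrokerRelay("/topic", "/queue")
                    .setRelayHost("localhost")
                    .setRelayPort(61613);
            registry.setApplicationDestinationPrefixes("/app");
        }

        @Override
        public void registerStompEndpoints(StompEndpointRegistry registry) {
            registry.addEndpoint("/ws");
        }
    }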
HTH

Related

JMS, how to implement request-reply pattern using two message brokers

Is it possible to implement an asynchronous request-reply pattern using two JMS message broker instances, such that service A sends a message to queue A and gets the response from queue B (a different broker instance)?
Does the JMS API (or Apache Camel) provide some complete mechanism to achieve this transparently with correlation identifiers? What is the necessary configuration?
Bonus question:
To shuffle the deck even more, I would like to cluster the queues. Could this be achieved transparently as per the specification? Basically I have multiple Spring Boot applications (services) with an ActiveMQ broker embedded in the Spring context. Each broker acts as a one-way channel for its service, and each service expects a response to a specific message from the other service in its own broker. Now I would like to run multiple instances of each service and retain the correlation between messages.
Since your request sender (A) and response receiver (B) are two different services, it becomes your responsibility to store the context of the original request somewhere.
Probably the easiest way is to use a dedicated queue for this purpose (let's call it echo).
(A) could store the original request message with some specific correlation id, for example taken from the message id of the request it sent. The same correlation id is supposed to be set on the response message, which will be picked up by (B). So once the response is received by (B), it is able to get the original request from the echo queue, using a selector with the same correlation id.
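A rough sketch of that flow in plain JMS, using ActiveMQ connection factories since the bonus question mentions embedded ActiveMQ (broker URLs, queue names and where the echo queue lives are purely illustrative, and the responder that copies the correlation id onto the response is outside this snippet):
    import javax.jms.*;
    import org.apache.activemq.ActiveMQConnectionFactory;

    public class EchoQueueSketch {

        public static void main(String[] args) throws JMSException {
            // Two separate broker instances (URLs are placeholders).
            ConnectionFactory brokerOne = new ActiveMQConnectionFactory("tcp://broker-one:61616");
            ConnectionFactory brokerTwo = new ActiveMQConnectionFactory("tcp://broker-two:61616");

            // --- Service A: send the request to queue A on broker one and park a
            //     copy on "echo" (here hosted on broker two), keyed by the message id.
            Connection conA = brokerOne.createConnection();
            Session sesA = conA.createSession(false, Session.AUTO_ACKNOWLEDGE);
            TextMessage request = sesA.createTextMessage("order-123");
            sesA.createProducer(sesA.createQueue("queue.A")).send(request);

            Connection conEcho = brokerTwo.createConnection();
            Session sesEcho = conEcho.createSession(false, Session.AUTO_ACKNOWLEDGE);
            TextMessage copy = sesEcho.createTextMessage(request.getText());
            copy.setJMSCorrelationID(request.getJMSMessageID());
            sesEcho.createProducer(sesEcho.createQueue("echo")).send(copy);

            // --- Service B: receive the response from queue B (the responder is
            //     expected to set the same correlation id on it), then fetch the
            //     original request from "echo" with a selector on that id.
            Connection conB = brokerTwo.createConnection();
            conB.start();
            Session sesB = conB.createSession(false, Session.AUTO_ACKNOWLEDGE);
            Message response = sesB.createConsumer(sesB.createQueue("queue.B")).receive();

            String selector = "JMSCorrelationID = '" + response.getJMSCorrelationID() + "'";
            Message original = sesB.createConsumer(sesB.createQueue("echo"), selector).receive(5000);
            System.out.println("matched original request: " + original);

            conA.close();
            conEcho.close();
            conB.close();
        }
    }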

Websphere MQ - Topic Subscription with multiple consumers

I have a micro-service which subscribes to a topic in WebSphere MQ. The subscription is managed and durable. I explicitly set the subscription name, so that it can be used to connect back to the subscription's queue after recovering from any micro-service failure. The subscription works as expected.
But I might have to scale up the micro-service and run multiple instances. In this case I will end up with multiple consumers on the same topic, and it fails with error 2429: MQRC_SUBSCRIPTION_IN_USE. I am not able to run more than one consumer against the topic subscription. Note: a message should be delivered to only one of the consumers.
Any thoughts?
IBM WebSphere MQ version: 7.5
I use the C client API to connect to MQ.
What you describe is only supported for a subscriber via the IBM MQ classes for JMS API. In v7.0 and later you can use cloned subscriptions (an IBM extension to the JMS spec); in addition, in MQ v8.0 and later you can alternatively use shared subscriptions, which are part of the JMS 2.0 spec. With either option, multiple subscribers can be connected to the same subscription and only one of them will receive each published message.
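For the JMS 2.0 route (so MQ v8.0 or later), a shared durable subscription looks roughly like this; the JNDI names, topic and subscription name below are placeholders:
    import javax.jms.ConnectionFactory;
    import javax.jms.JMSConsumer;
    import javax.jms.JMSContext;
    import javax.jms.Topic;
    import javax.naming.InitialContext;

    public class SharedSubscriberSketch {

        public static void main(String[] args) throws Exception {
            // Obtain the IBM MQ connection factory and topic, e.g. from JNDI.
            InitialContext jndi = new InitialContext();
            ConnectionFactory cf = (ConnectionFactory) jndi.lookup("jms/mqConnectionFactory");
            Topic topic = (Topic) jndi.lookup("jms/someTopic");

            try (JMSContext context = cf.createContext()) {
                // Every instance using the same subscription name joins the same shared
                // durable subscription; each published message goes to only one of them.
                JMSConsumer consumer = context.createSharedDurableConsumer(topic, "MICROSVC.SUB");
                while (true) {
                    // Assumes text message bodies.
                    String body = consumer.receiveBody(String.class);
                    System.out.println("processed: " + body);
                }
            }
        }
    }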
UPDATE 20170710
According to APAR IV96489 (XMS.NET DOESN'T ALLOW SHARED SUBSCRIPTIONS EVEN WHEN CLONESUP PROPERTY IS ENABLED), XMS.NET is also supposed to support cloned subscriptions, but due to a defect this will not be supported until 8.0.0.8 or 9.0.0.2, or if you request the IFIX for the APAR above.
You can accomplish something similar with other APIs like C by converting your micro-service to get from a queue instead of subscribing to a topic.
To get the published messages to the queue you have two options:
Set up an administrative subscription on the queue manager. You can do this a few different ways; the example below uses an MQSC command.
DEFINE SUB('XYZ') TOPICSTR('SOME/TOPIC') DEST(SOME.QUEUE)
Create a utility app that can open a queue and create a durable subscription with that provided queue. The only purpose of this app would be to subscribe and unsubscribe a provided queue; it would not be used to consume any of the published messages.
Using the above method, each published message can only be read (GET) from the queue by one process or thread.
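As a sketch of that queue-based alternative (shown here in JMS for brevity; an MQGET loop in the C API behaves the same way), every instance simply consumes from the subscription's destination queue and each message is delivered to only one instance:
    import javax.jms.ConnectionFactory;
    import javax.jms.JMSConsumer;
    import javax.jms.JMSContext;
    import javax.jms.Message;
    import javax.jms.Queue;
    import javax.naming.InitialContext;

    public class QueueWorkerSketch {

        public static void main(String[] args) throws Exception {
            // Connection factory via JNDI; the name is a placeholder.
            InitialContext jndi = new InitialContext();
            ConnectionFactory cf = (ConnectionFactory) jndi.lookup("jms/mqConnectionFactory");

            try (JMSContext context = cf.createContext()) {
                Queue queue = context.createQueue("SOME.QUEUE");
                JMSConsumer consumer = context.createConsumer(queue);
                // Several instances can run this loop concurrently; the queue manager
                // hands each published message to exactly one of them.
                while (true) {
                    Message m = consumer.receive();
                    System.out.println("got: " + m.getJMSMessageID());
                }
            }
        }
    }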

Client Server program using messages queues

I am trying to design a client-server style application in which my server is a daemon that accepts client requests and sends the client's data over a serial channel to the other side (which is an MCU whose firmware will reply to the server's request over the same serial channel). My client can be a CLI application or any other system program.
My idea of the design is:
Use message queues for communication between the client and the server, since this is a local application and message queues are bidirectional and fast.
Implement a library that acts as an interface between multiple clients and the server. It packetizes client data into a message (own defined protocol), creates message queues, connects to the server, sends/receives data and then passes it to the respective client (using callbacks). This library also exposes an API that can be used by clients, which gives me the flexibility to add support for new clients while keeping the server program unchanged.
The server gets the data over serial from the other side and passes it to the library over a message queue. The library uses callbacks to send the data to the client.
EDIT:
I am thinking of creating message queues on the fly when client requests arrive. If I do this, how does the server daemon (which has already started at Linux boot-up) get to know about this message queue? Does the message queue have a name that is persistent and can be used by other programs? I want to implement clients that will block until they get a response from the server.
Could you please review this design and tell me whether my approach is correct? Please reply if you have any other recommendations.
Thanks in advance.

Apache Camel: Test if endpoints are up

Does Camel provide anything out of the box that tells whether it is able to connect to all endpoints?
These endpoints could be MQ, a web service, etc.
If not, then I will have to write a servlet which sends a test request to all the endpoints. I will be using multicast or splitter for this implementation.
From my experience, Camel will only provide warning logs if a from() endpoint is not available, since it is constantly trying to read from it. Every other endpoint won't be accessed until an exchange tries to use that endpoint. If your goal is to test whether various resources are alive, I believe you would need to create your own testing program. I don't think this will be implemented as a feature, because applications typically build in error handling for when a resource is down and define appropriate behaviors.
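One possible shape for such a testing program is a small Camel route that periodically pings each endpoint and logs failures; a sketch only, with made-up endpoints and timing:
    import org.apache.camel.builder.RouteBuilder;

    public class EndpointHealthCheckRoute extends RouteBuilder {

        @Override
        public void configure() {
            // If the ping fails, log it instead of letting the exception kill the route.
            onException(Exception.class)
                .handled(true)
                .log("Health check failed: ${exception.message}");

            // Fire every 60 seconds and try to reach the endpoint under test.
            from("timer:healthCheck?period=60000")
                .setBody(constant("ping"))
                .to("http4://example.com/health")
                .log("Endpoint is reachable");
        }
    }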
If we're talking about producers, then no. If your route is sending messages to an amq or http4 endpoint, for instance, Camel will not automatically send TCP packets on these connections for monitoring purposes. A common way to handle failure of external endpoints is by using "circuit breakers"; take a look at https://camel.apache.org/load-balancer.html. A more robust alternative, imho, is Netflix's Hystrix.
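The circuit breaker load balancer from that page looks roughly like this in the Java DSL (the failure threshold, half-open delay, exception type and endpoint are all illustrative):
    import org.apache.camel.builder.RouteBuilder;

    public class CircuitBreakerRoute extends RouteBuilder {

        @Override
        public void configure() {
            from("direct:start")
                // Open the circuit after 2 failures; try again 1000 ms later (half-open).
                .loadBalance().circuitBreaker(2, 1000L, IllegalStateException.class)
                    .to("http4://example.com/service");
        }
    }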
If you have a polling consumer, say a from:ftp://.. endpoint, then it will poll for messages every n milliseconds, and you'll get an error if the connection is broken.

Distributed ActiveMQ with Camel

I am in the process of learning ActiveMQ and Camel, with the goal to create a little prototype system that works something like this:
(architecture diagram of the Orders, Shipping and Invoicing systems; source: paulstovell.com)
When an order is placed in the Orders system, a message is sent out to any subscribers (a pub/sub system), and they can play their part in processing the order. The Orders, Shipping and Invoicing applications have their own ActiveMQ installations, so that if any of the three systems are offline, the others can continue to function. Something takes care of moving messages between the ActiveMQ installs.
Getting Apache Camel to move messages from one queue to another via routes is quite easy, if they are on the same ActiveMQ instance. So this works for managing the subscription queues.
The next challenge is pushing messages from one ActiveMQ instance to another, and it's the bit where I am not sure what to look at next.
Can Camel route between different ActiveMQ installations? (I can't figure out what the JMS endpoint URI would be if they are on different machines.)
I understand ActiveMQ has store and forward capabilities. Is this what I would use to move messages between Orders and Shipping/Invoicing?
Or is this what Apache ServiceMix is meant to solve?
This is a pretty straightforward asynchronous, event-driven application that is well-suited for ActiveMQ and Camel.
Actually you do not move messages explicitly from one ActiveMQ instance to another. The way it works is using what's known as a network of brokers. In your case, you'd have three brokers: ActiveMQ-purple, ActiveMQ-green and ActiveMQ-blue. ActiveMQ-purple creates a uni-directional broker network with ActiveMQ-green and ActiveMQ-blue. This allows ActiveMQ-purple to store-and-forward messages to ActiveMQ-green and ActiveMQ-blue based on consumer demand.
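A minimal sketch of that uni-directional network, with the brokers embedded in Java (broker names, hosts and ports are made up; the equivalent networkConnector settings can also live in activemq.xml):
    import org.apache.activemq.broker.BrokerService;

    public class PurpleBroker {

        public static void main(String[] args) throws Exception {
            BrokerService purple = new BrokerService();
            purple.setBrokerName("activemq-purple");
            purple.addConnector("tcp://0.0.0.0:61616");

            // Uni-directional network connectors: purple forwards messages to green
            // and blue when those brokers have demand (active consumers).
            purple.addNetworkConnector("static:(tcp://green-host:61616)");
            purple.addNetworkConnector("static:(tcp://blue-host:61616)");

            purple.start();
        }
    }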
The Orders app accepts orders on the orders queue on ActiveMQ-purple. The Orders app uses Camel to consume and process each message to determine whether it is an invoicing message or a shipping message, and routes it to either the invoicing queue or the shipping queue on ActiveMQ-purple.
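That consume-and-route step could be a simple content-based router in the Camel Java DSL; the orderType header and its values are made up for illustration:
    import org.apache.camel.builder.RouteBuilder;

    public class OrderRouting extends RouteBuilder {

        @Override
        public void configure() {
            // Consume from the orders queue on ActiveMQ-purple and route each message
            // to the invoicing or shipping queue on the same broker.
            from("activemq:queue:orders")
                .choice()
                    .when(header("orderType").isEqualTo("invoice"))
                        .to("activemq:queue:invoicing")
                    .otherwise()
                        .to("activemq:queue:shipping")
                .end();
        }
    }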
Consumer demand comes from the Invoicing app and the Shipping app. The Invoicing app uses Camel to consume messages from the invoicing queue on ActiveMQ-green. The Shipping app uses Camel to consume messages from the shipping queue on ActiveMQ-blue. Because of the broker network, and because of the consumer demand on the ActiveMQ-green invoicing queue and the ActiveMQ-blue shipping queue, messages will be forwarded from ActiveMQ-purple to the appropriate broker and queue. There is no need to explicitly route messages to a specific broker.
I hope this answers your questions. Let me know if you have any more.
Bruce
Hmmmm, I've only dabbled at best, and not for a fair while, but I'll try and offer something.
ActiveMQ can route between different installations and just uses standard URIs to my knowledge, so I'm not sure what the problem is here. I would think that using TCP you'd be fine. Using ServiceMix (you mention it later) you'd just specify a connectionFactory and then provide the URI in that. This link shows some examples: http://servicemix.apache.org/servicemix-jms-new-endpoints.html.
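On the Camel side the endpoint URI itself does not change; what points at a different machine is the broker URL behind the component, e.g. (component names and hosts are placeholders):
    import org.apache.activemq.camel.component.ActiveMQComponent;
    import org.apache.camel.CamelContext;
    import org.apache.camel.impl.DefaultCamelContext;

    public class RemoteBrokerSetup {

        public static void main(String[] args) throws Exception {
            CamelContext context = new DefaultCamelContext();

            // One component per broker; each route picks a broker by component name.
            context.addComponent("activemq-purple",
                    ActiveMQComponent.activeMQComponent("tcp://purple-host:61616"));
            context.addComponent("activemq-green",
                    ActiveMQComponent.activeMQComponent("tcp://green-host:61616"));

            // Routes can then use e.g. "activemq-green:queue:invoicing" as an endpoint URI.
            context.start();
        }
    }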
Camel has support for the Durable Subscriber pattern if that's what you were after (http://camel.apache.org/durable-subscriber.html). This pattern will ensure that if the subscriber is offline when the message is ready, the message will be held until the subscriber is back online. This is also supported by ServiceMix (see the link given above and look for 'subscriptionDurable').
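With the Camel ActiveMQ/JMS component a durable subscriber is mostly a matter of endpoint options, roughly like this (the client id, subscription name and queues are illustrative):
    import org.apache.camel.builder.RouteBuilder;

    public class DurableSubscriberRoute extends RouteBuilder {

        @Override
        public void configure() {
            // clientId + durableSubscriptionName make this a durable topic subscription,
            // so messages published while this app is offline are kept for it.
            from("activemq:topic:orders?clientId=shipping-app&durableSubscriptionName=shippingSub")
                .to("activemq:queue:shipping.inbox");
        }
    }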
