Akka Camel RabbitMQ Creating New Connections Every Time - apache-camel

I'm using akka-camel to subscribe to a RabbitMQ exchange. There will be several of these actors created, one per requested routing key. The exchange and queue don't change. Each time a new routing key is requested I create a new actor and, instead of a new channel being created, a brand new connection is created, which is undesirable. I don't quite understand why a new connection is created each time a Consumer actor is created.
Here's the actor code:
class CommandConsumer(routingKey: String)
  extends Consumer with ActorLogging {

  override def endpointUri =
    s"rabbitmq://localhost/hub_commands?exchangeType=topic&queue=test&autoDelete=false&routingKey=$routingKey"

  override def receive: Receive = {
    case msg: CamelMessage => {
      log.debug(s"received {}", msg.bodyAs[String])
      sender ! msg.bodyAs[String]
    }
  }
}
I'm creating the actor like this:
context.actorOf(CommandConsumer.props("my.routing.key", sender))
UPDATE
Here's exactly what I need to accomplish:
I'm writing a TCP/IP server that, when a client connection is accepted, needs to receive messages from other components in the back-end architecture. To do this, I'd like to use RabbitMQ. After a successful connection to my server, the client will send an id, which will be used as part of a routing key (e.g. command.<id>). The RabbitMQ connection and queue are created when the first client connects, and the routing key would be something like command.first_id. When the next client connects I would like to add the command.second_id routing key to the list of routing keys that are already accepted, without creating a new connection to RabbitMQ.

I believe that is expected. Each akka-camel actor has its own CamelContext, which is independent from the others. This means that for each new actor you create, you create a new CamelContext with a new RabbitMQ endpoint that holds a new RabbitMQ connection and a new channel.
If in your scenario the queue and exchange do not change but only the routing key does, why not have one akka-camel actor consuming from the queue and another actor managing the bindings? Every time a new routing key needs to be consumed, this actor would create a RabbitMQ connection and channel and call queueBind() with the new routing key. The same actor can also unbind the routing keys you no longer want, as sketched below.
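A minimal sketch of such a binding manager, using the plain RabbitMQ Java client (the exchange and queue names are taken from the endpoint URI above; the helper class itself and its names are hypothetical, and the same calls could just as well live inside an actor):

import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;

// Hypothetical helper: one connection and one channel, reused for every
// bind/unbind call. Assumes the exchange and queue were already declared
// by the Camel consumer.
public class BindingManager implements AutoCloseable {

    private static final String EXCHANGE = "hub_commands"; // from the endpoint URI above
    private static final String QUEUE = "test";

    private final Connection connection;
    private final Channel channel;

    public BindingManager(String host) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost(host);
        connection = factory.newConnection();
        channel = connection.createChannel();
    }

    // Called when a new client id arrives, e.g. bind("command.first_id").
    public void bind(String routingKey) throws Exception {
        channel.queueBind(QUEUE, EXCHANGE, routingKey);
    }

    // Called when a routing key is no longer needed.
    public void unbind(String routingKey) throws Exception {
        channel.queueUnbind(QUEUE, EXCHANGE, routingKey);
    }

    @Override
    public void close() throws Exception {
        channel.close();
        connection.close();
    }
}

The single Camel consumer keeps draining the queue; only the bindings change as clients come and go, so no extra connections are created.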

Related

Full duplex TCP connections using Netty4 and Apache Camel

For an IoT project I am working on, I am researching the next, enhanced version of our "Socket Handler", which is over 5 years old and has evolved into a big beast that, apart from handling socket connections with IoT devices, also does in-thread processing that has become a real pain to manage.
For my total rewrite I am looking into Apache Camel as a routing and transformation toolkit and understand how this can help us split processing steps into micro-services, loosely coupled through message queues.
One thing that I have trouble understanding however is how I can implement the following logic “the Apache Camel way”:
An IoT device sends an initial message which contains its id, some extra headers and a message payload.
Apart from extracting the message payload and routing it to a channel, I also need to use the device Id to check a message queue, named after the device id, for any commands that have to go to the device over the same socket connection that received the initial message.
Although it seems that Netty4, which is included in Camel, can deal with synchronous duplex communication, I cannot see how the above logic can be implemented with the Camel Netty4 component. Camel routing seems to be one-way only.
Is there a correct way to do this, or should I forget about using Camel for this and just use Netty4 bare?
It is possible to reproduce full duplex communication by using two Camel routes, thanks to the reuseChannel property.
As this full duplex communication is realised by two different Camel routes, the sync property must be set to false.
Here is the first route:
from("netty4:tcp://{{tcpAddress}}:{{tcpPort}}?decoders=#length-decoder,#string-decoder&encoders=#length-encoder,#bytearray-encoder&sync=false&reuseChannel=true")
    .bean("myMessageService", "receiveFromTCP")
    .to("jms:queue:<name>");
This first route will create a TCP/IP consumer backed by a server socket (it is also possible to use a client socket via the clientMode property).
As we want to reuse the just-created connection, it is important to configure not only the decoders but also the encoders that will be used later by a bean (see further). This bean is responsible for sending data over the Channel created in this first route (a Netty Channel contains a pipeline for decoding/encoding messages before receiving from / sending to TCP/IP).
Now, we want to send some data back to the partner connected to the consumer (from) endpoint of the first route. Since we cannot do that through a classical producer endpoint (to), we use a bean:
from("jsm:queue:<name>").bean("myMessageService", "sendToTCP");
Here is the bean code:
import io.netty.channel.Channel;
import org.apache.camel.Exchange;
import org.apache.camel.component.netty4.NettyConstants;

public class MessageService {

    private Channel openedChannel;

    public void sendToTCP(final Exchange exchange) {
        // The opened channel will use the encoders before writing on the socket
        // already created in the first route.
        openedChannel.writeAndFlush(exchange.getIn().getBody());
    }

    public void receiveFromTCP(final Exchange exchange) {
        // Record the channel created in the first route.
        this.openedChannel = exchange.getProperty(NettyConstants.NETTY_CHANNEL, Channel.class);
    }
}
Of course, the same bean instance must be used in the two routes; you need to use a registry to do so:
SimpleRegistry simpleRegistry = new SimpleRegistry();
simpleRegistry.put("myMessageService", new MessageService());
Since the bean is used in two different asynchronous routes, you will have to deal with some unexpected situations, for example by protecting access to the openedChannel member, or by handling unexpected disconnections.
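For reference, a minimal sketch of how the registry, the bean, and the two routes could be wired together in a standalone main class, assuming Camel 2.x with the camel-netty4 component (the codec beans referenced as #length-decoder etc., the property placeholders, and the JMS component setup are assumed to be configured elsewhere):

import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.impl.DefaultCamelContext;
import org.apache.camel.impl.SimpleRegistry;

public class FullDuplexMain {

    public static void main(String[] args) throws Exception {
        SimpleRegistry simpleRegistry = new SimpleRegistry();
        simpleRegistry.put("myMessageService", new MessageService());
        // The length/string/bytearray codecs must also be registered here
        // under the names used in the endpoint URI (#length-decoder, ...).

        DefaultCamelContext context = new DefaultCamelContext(simpleRegistry);
        context.addRoutes(new RouteBuilder() {
            @Override
            public void configure() {
                from("netty4:tcp://{{tcpAddress}}:{{tcpPort}}?decoders=#length-decoder,#string-decoder"
                        + "&encoders=#length-encoder,#bytearray-encoder&sync=false&reuseChannel=true")
                    .bean("myMessageService", "receiveFromTCP")
                    .to("jms:queue:<name>");

                from("jms:queue:<name>")
                    .bean("myMessageService", "sendToTCP");
            }
        });
        context.start();
        Thread.sleep(Long.MAX_VALUE); // keep the routes running
    }
}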
This post helped me a lot to find this solution:
how-to-send-a-response-back-over-a-established-tcp-connection-in-async-mode-usin
reuseChannel property documentation: after the end of the Camel route, the exchange's body and headers will be sent back to the requester as the response.

JMS, how to implement request-reply pattern using two message brokers

Is it possible to implement an asynchronous request-reply pattern using two JMS message broker instances? Such that service A sends a message to queue A and gets the response from queue B (a different broker instance)?
Does JMS API (or Apache Camel) provide some complete mechanism to achieve this transparently with correlation identifiers? What is the necessary configuration?
Bonus question:
To shuffle the deck even more, I would like to cluster the queues. Could this be achieved transparently, as per the specification? Basically I have multiple Spring Boot applications (services) with an ActiveMQ broker embedded in the Spring context. Each broker acts as a one-way channel for its service, and each service expects the response to a specific message sent to another service to arrive in its own broker. Now I would like to run multiple instances of each service and retain the correlation between messages.
Since your request sender (A) and response receiver (B) are two different services, it becomes your responsibility to store the context of the original request somewhere.
Probably the easiest option is to use a dedicated queue for this purpose (let's call it echo).
(A) could store the original request message with some specific correlation id, for example taken from the message id of the request that was sent. The same correlation id is supposed to be set on the response message, which will be picked up by (B). So once the response is received by (B), it is able to get the original request from the echo queue, using a selector with the same correlation id, as sketched below.
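A rough sketch of that flow with plain JMS (queue names and the timeout are made up; connections, sessions, and error handling are omitted, and the responding service is assumed to copy the correlation id onto its reply):

import javax.jms.*;

public class EchoQueueExample {

    // Service A: send the request to broker 1 and park a copy on the echo queue.
    static void sendRequest(Session sessionA, String payload) throws JMSException {
        MessageProducer toQueueA = sessionA.createProducer(sessionA.createQueue("queue.A"));
        MessageProducer toEcho = sessionA.createProducer(sessionA.createQueue("echo"));

        TextMessage request = sessionA.createTextMessage(payload);
        toQueueA.send(request);

        // Use the broker-assigned message id of the request as the correlation id.
        TextMessage copy = sessionA.createTextMessage(payload);
        copy.setJMSCorrelationID(request.getJMSMessageID());
        toEcho.send(copy);
    }

    // Service B: on receiving a response from broker 2, look up the original
    // request on the echo queue with a selector on the correlation id.
    static Message lookupOriginal(Session sessionA, Message response) throws JMSException {
        String selector = "JMSCorrelationID = '" + response.getJMSCorrelationID() + "'";
        MessageConsumer echoConsumer =
                sessionA.createConsumer(sessionA.createQueue("echo"), selector);
        return echoConsumer.receive(5000); // null if nothing matched in time
    }
}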

Distinguish sender from receiver with Socket.io

I have built a web application using a JavaScript stack (MongoDB, ExpressJS, AngularJS, NodeJS). The registration works, the authentication works, and the chat, which uses Socket.io, works, but I need a way of distinguishing which client is sending and which client is receiving a message in order to perform further functions with the user's data.
P.S. Since this is a project that I cannot publish, there are no code snippets in my post; hopefully that is alright.
The ultimate design will depend on what you are trying to achieve. Is it a "one-to-one chat" service or maybe a "one-to-many broadcast"? Is the service anonymous? How do you want users to find each other? How secure does it need to be?
As a starting point I would assign a unique identifier (UID) to each connection (client). This will allow the server to direct traffic by creating "conversation" pairings or perhaps a list of listeners (subscribers) and writers (publishers).
A connected user could then enter the UID of a second connected user and your service can post messages back and forth using the uid pairing.
conversation(user123,user0987)
user123 send to user0987
user0987 send to user123
or go bulletin board/chat room style:
create a "board" - just a destination that is a list of all text sent
user123 "joins" board "MiscTalk"
user0987 "joins board "MiscTalk"
each sends text to the server, server adds that text to the board and each client polls the board for changes.
Every socket can send or receive; your program must track "who" is connected on a socket and direct traffic between them.
I think a fine way to handle the clients is to create a Handler class and a Client object, and keep a clientList in the handler; this makes it easier to distinguish the clients. Some months ago I built a simple open source one-to-one random chat using socket.io, and here are the handler and the client classes.
I hope this example can help you.
1.) Create a global server variable and bind a connections property to it. Whenever authentication succeeds, store the socket_id against the id (user_id, etc.) that you get after decoding your token.
global.server=http.createServer(app);
server.connections={};
If server.connections hasOwnProperty(id), then use socket emit to send your message;
else store the socket_id against your id and then send your message.
In this way you just need to know the unique token of the target user to send the message.
2.) You can also use the concept of rooms.
If authentication is true use
socket.room=id ; socket.join(id)
When sending a message, use client.in(id).emit("YOUR-EVENT-NAME", message).
Note: Make your own flow; this is just an overview of what I have implemented in the past. You should consider using Redis for storing socket_ids.

Set-up queue policy not to send expired message to Dead-Letter Queue on Camel endpoint

I have a small Camel route which just forwards messages to another queue with an expiration time, like this:
@Override
public void configure() throws Exception {
    defaultOnException();

    // Route all messages generated by system A (in OUTBOUND_A) to system B (INBOUND_B)
    // @formatter:off
    from("activemq:queue:OUTBOUND_A")
        // ASpecificProcessor transforms the incoming message into another one.
        .process(new ASpecificProcessor())
        .to("activemq:INBOUND_B?explicitQosEnabled=true&timeToLive={{b.inbound.message.ttl}}");
    // @formatter:on
}
I need the messages posted in INBOUND_B to be persistent, and by default expired messages go to the ActiveMQ.DLQ queue once they expire.
I know I can modify the ActiveMQ configuration in the conf/activemq.xml with
<policyEntry queue="INBOUND_B">
    <!--
        Tell the dead letter strategy not to process expired messages
        so that they will just be discarded instead of being sent to
        the DLQ.
    -->
    <deadLetterStrategy>
        <sharedDeadLetterStrategy processExpired="false" />
    </deadLetterStrategy>
</policyEntry>
But I would prefer not to change the ActiveMQ configuration (because it requires a restart), and I am wondering if it is possible to set such a policy through the Camel endpoint configuration?
No, ActiveMQ broker-side configuration cannot be updated via the client; that would lead to all sorts of security problems. You would need to update the broker configuration, and you might avoid a restart if you use the runtime configuration plugin on the broker.
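As an illustration only: if the broker were embedded in a Java application instead of configured via conf/activemq.xml, the same processExpired=false policy could be applied programmatically when the broker is built. A sketch using the ActiveMQ broker API (the broker name and connector URI are made up):

import java.util.Arrays;
import org.apache.activemq.broker.BrokerService;
import org.apache.activemq.broker.region.policy.PolicyEntry;
import org.apache.activemq.broker.region.policy.PolicyMap;
import org.apache.activemq.broker.region.policy.SharedDeadLetterStrategy;

public class EmbeddedBrokerWithPolicy {

    public static void main(String[] args) throws Exception {
        // Discard expired messages on INBOUND_B instead of sending them to the DLQ.
        SharedDeadLetterStrategy dlq = new SharedDeadLetterStrategy();
        dlq.setProcessExpired(false);

        PolicyEntry entry = new PolicyEntry();
        entry.setQueue("INBOUND_B");
        entry.setDeadLetterStrategy(dlq);

        PolicyMap policyMap = new PolicyMap();
        policyMap.setPolicyEntries(Arrays.asList(entry));

        BrokerService broker = new BrokerService();
        broker.setBrokerName("embedded");              // made-up broker name
        broker.addConnector("tcp://localhost:61616");  // made-up connector URI
        broker.setDestinationPolicy(policyMap);
        broker.start();
    }
}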

Google Channel API sends a message to all clients

I created a working Google Channel API and now I would like to send a message to all clients.
I have two servlets. The first creates the channel and tells the clients the userid and token. The second one is called by an http post and should send the message.
To send a message to a client, I use:
channelService.sendMessage(new ChannelMessage(channelUserId, "This is a server message!"));
This sends the message to just one client. How can I send it to all of them?
Do I have to store every id that I use to create a channel and send the message for every id? How can I pass the ids to the second servlet?
Using the Channel API it is not possible to create one channel and then have many subscribers to it. The server creates a unique channel for each individual JavaScript client, so if two clients use the same client ID the messages will be received by only one of them.
If you want to send the same message to multiple clients, in short, you will have to keep track of the active clients and send the same message to each of them, for example as sketched below.
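A minimal sketch of that bookkeeping using the App Engine Channel API for Java (the helper class and its in-memory storage are made up; on App Engine the ids would normally be kept in the datastore or memcache so that both servlets and all instances can see them):

import com.google.appengine.api.channel.ChannelMessage;
import com.google.appengine.api.channel.ChannelService;
import com.google.appengine.api.channel.ChannelServiceFactory;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical helper: register each channel id when it is created,
// then loop over the known ids to broadcast.
public class Broadcaster {

    private static final Set<String> CLIENT_IDS = ConcurrentHashMap.newKeySet();

    // Call this from the first servlet, where the channel is created.
    public static void register(String channelUserId) {
        CLIENT_IDS.add(channelUserId);
    }

    // Call this from the second servlet to reach every known client.
    public static void broadcast(String text) {
        ChannelService channelService = ChannelServiceFactory.getChannelService();
        for (String id : CLIENT_IDS) {
            channelService.sendMessage(new ChannelMessage(id, text));
        }
    }
}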
If that approach sounds scary and messy, consider using PubNub for your push notification messages, where you can easily create one channel and have many subscribers. To make it run on Google App Engine is not that hard, since they support almost any platform or device.
I know this is an old question, but I just finished an open source project that uses the Channel API to implement a publish/subscribe model, i.e. you can have multiple users subscribe to a single topic, and then all those subscribers will be notified when anyone publishes a message to the topic. It also has some nice features like automatic message persistence if desired, and "return receipts", where a subscriber can be notified whenever OTHER subscribers receive that message. See https://github.com/adevine/gaewebpubsub#gae-web-pubsub. Licensed under Apache 2.0 license.
