Akka Camel multiple consumers - apache-camel

I'm using Akka + Camel to consume messages from ActiveMQ, and I'm trying to figure out how to deploy this consumer on multiple machines without duplicating messages. In this case I'm consuming messages from a topic, and ActiveMQ should treat my one Akka system spread across various machines as a single logical consumer, rather than as various independent systems.
I tried to accomplish that using Akka Cluster, but the example with a frontend that subscribes to a cluster of backends does not help, since my "backend" actor is the ActiveMQ consumer itself and I can't tell ActiveMQ to subscribe to my cluster.
Any ideas?

JMS versions before 2.0 do not allow multiple nodes to share a topic subscription (that is, without duplicating the message to each consumer). To cope with that, ActiveMQ provides Virtual Topics: you can consume messages published to a topic from a queue, which allows for multiple consumers with load balancing.
It's all naming conventions. You simply publish to the topic VirtualTopic.Orders and then consume from the queue Consumer.ClusterX.VirtualTopic.Orders. The naming conventions can be changed - see the docs.
http://activemq.apache.org/virtual-destinations.html
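For illustration, a minimal Camel route sketch (Java DSL), assuming an "activemq" component is registered on the context; the destination names follow ActiveMQ's default virtual topic conventions, and "ClusterX" and "orderHandler" are placeholders:

```java
import org.apache.camel.builder.RouteBuilder;

public class VirtualTopicRoutes extends RouteBuilder {
    @Override
    public void configure() {
        // Publisher side: send to the virtual topic as usual.
        from("direct:orders")
            .to("activemq:topic:VirtualTopic.Orders");

        // Consumer side: every node in the cluster consumes from the same
        // queue, so ActiveMQ load-balances messages across nodes instead
        // of duplicating them to each one.
        from("activemq:queue:Consumer.ClusterX.VirtualTopic.Orders")
            .to("bean:orderHandler");
    }
}
```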

Related

Understanding the difference between Apache Camel and Kafka Streams

Being quite familiar with Apache Camel, I am a newbie in Kafka Streams. I am learning Kafka Streams, but could not find any relevant answer to the query below.
As libraries, both Camel and Kafka Streams can create pipelines to extract data, polish/transform it, and load it into some sink using a processor. Camel also supports stream processing. I want to understand:
the difference between these two, since I feel the Camel library is more generic than Kafka Streams, which is not relevant for systems where there is no Kafka broker (not sure if this is wrong);
which library is recommended for which type of use case.
Thanks in advance.
Kafka Streams is a stream processing framework that consumes messages from Kafka topics and writes them back to other Kafka topics. It brings support for stateful transformations such as aggregations to tables and similar, leveraging RocksDB when necessary. You can provide REST endpoints to such tables/stores, but that is already extending the Kafka Streams feature set.
Another possible extension is to send messages somewhere other than Kafka, though you will have to provide the client to do so yourself. In that regard, Kafka Streams' scope is much less versatile than Apache Camel's. Because of that specialisation, it supports various Kafka-specific features, such as parallel processing based on Kafka consumer groups, predefined message envelopes, and exactly-once semantics. One of its most important features is the support of "stream time", which allows reprocessing of messages by their Kafka timestamps regardless of the wall-clock time.
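As an illustration of such a stateful transformation, here is a minimal sketch of a Kafka Streams topology that counts records per key; the topic names ("orders", "order-counts") and the broker address are placeholders:

```java
import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;

public class OrderCountApp {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "order-count-app");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        StreamsBuilder builder = new StreamsBuilder();
        KStream<String, String> orders = builder.stream("orders");

        // Stateful aggregation: count records per key. The state lives in a
        // local store backed by RocksDB, as described above.
        orders.groupByKey()
              .count()
              .toStream()
              .mapValues(Object::toString)
              .to("order-counts");

        // Parallelism comes from the Kafka consumer group protocol: start
        // more instances with the same application.id to scale out.
        new KafkaStreams(builder.build(), props).start();
    }
}
```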
You can have a look at KSQL, which is built on top of Kafka Streams, to get an idea of what is possible to build with Kafka Streams.
In short, if you have data in Kafka that you want to process and write back to Kafka for other programs to consume, Kafka Streams is a very helpful framework. It even has a deployment model similar to Apache Camel's. However, if you need to integrate different technologies with Kafka, you should stay with Apache Camel. Note that there is also Kafka Connect in the Apache Kafka family, which is geared towards the integration of data from other systems with Apache Kafka.

spring boot different instances of a microservice and data integrity

If several instances of the same microservice each have their own database (for scalability), how do you update all the databases when a create, update, or delete operation is made? Which tool, compatible with Eureka and Zuul, does Spring propose for that?
I would suggest you use RabbitMQ.
The basic architecture of a message queue is simple: there are client applications called producers that create messages and deliver them to the broker (the message queue). Other applications, called consumers, connect to the queue and subscribe to the messages to be processed. A piece of software can be a producer, a consumer, or both. Messages placed onto the queue are stored until a consumer retrieves them.
Why use RabbitMQ?
https://www.cloudamqp.com/blog/2015-05-18-part1-rabbitmq-for-beginners-what-is-rabbitmq.html
Official documentation for RabbitMQ:
https://www.rabbitmq.com/
How to install RabbitMQ:
https://www.journaldev.com/11655/spring-rabbitmq
Configuration in a Spring Boot application:
https://spring.io/guides/gs/messaging-rabbitmq/
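To make the producer/consumer pattern above concrete, here is a hedged Spring Boot sketch, assuming spring-boot-starter-amqp is on the classpath; the exchange name "data.changes" is a placeholder. A fanout exchange is used so that every instance (each with its own database) receives every change:

```java
import org.springframework.amqp.core.AnonymousQueue;
import org.springframework.amqp.core.Binding;
import org.springframework.amqp.core.BindingBuilder;
import org.springframework.amqp.core.FanoutExchange;
import org.springframework.amqp.core.Queue;
import org.springframework.amqp.rabbit.annotation.RabbitListener;
import org.springframework.amqp.rabbit.core.RabbitTemplate;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class DataChangeMessaging {

    // A fanout exchange broadcasts every message to all bound queues.
    @Bean
    public FanoutExchange changesExchange() {
        return new FanoutExchange("data.changes");
    }

    // Each service instance gets its own auto-named, auto-deleting queue.
    @Bean
    public Queue instanceQueue() {
        return new AnonymousQueue();
    }

    @Bean
    public Binding binding(FanoutExchange changesExchange, Queue instanceQueue) {
        return BindingBuilder.bind(instanceQueue).to(changesExchange);
    }

    // Producer side: publish the change after a create/update/delete.
    public void publish(RabbitTemplate template, String change) {
        template.convertAndSend("data.changes", "", change);
    }

    // Consumer side: every instance applies the change to its own database.
    @RabbitListener(queues = "#{instanceQueue.name}")
    public void onChange(String change) {
        System.out.println("Applying change: " + change);
    }
}
```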
I would suggest you use an event-based architecture: whenever a service has done its work, it produces an event, and the other services that subscribe to that event then start their own work.
You can use a Kafka queue for this. Also, read about Distributed Sagas for Microservices.
One more thing: for inter-service communication, consider using UDP instead of TCP.
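For illustration, a minimal sketch of publishing such an event with the plain kafka-clients producer; the topic name, key, and broker address are placeholders:

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class OrderEventPublisher {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer",
                  "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                  "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Publish the event; any service subscribed to "order-events"
            // picks it up and starts its own work.
            producer.send(new ProducerRecord<>("order-events", "order-42", "CREATED"));
        }
    }
}
```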
Most databases offer replication these days with near-zero latency. Unless you are mixing different databases, you can let the database do the synchronization for you.

How to run multiple JMS clients supporting multiple vendors and versions within a single service?

Looking to create a service composed of JMS clients that support multiple JMS vendors and JMS versions. Ideally running in a single service instance/JVM.
Mule, Apache Camel, IBM WebSphere MQ, and the like have done something similar. It's a feature available in ESBs, but I'm not clear on how they have architected multiple JMS clients running on a single platform. They may or may not have built it on a single service instance.
What sort of knowledge and tooling is out there that can help guide me in this quest? Prefer JVM solutions.
For the three who closed this as "not about coding": JMS, the Java Message Service, is a Java messaging framework (https://docs.oracle.com/javaee/6/tutorial/doc/bncdq.html), and the solution will certainly be code-based.
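One possible approach, sketched here under assumptions rather than as a definitive design: Camel lets you register several JMS components, each wrapping a different vendor's ConnectionFactory, in one CamelContext and therefore one JVM. The vendor2Factory parameter is a placeholder for a second vendor's factory, and supporting genuinely different JMS spec versions may additionally require classloader isolation:

```java
import javax.jms.ConnectionFactory;
import org.apache.activemq.ActiveMQConnectionFactory;
import org.apache.camel.CamelContext;
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.component.jms.JmsComponent;
import org.apache.camel.impl.DefaultCamelContext;

public class MultiVendorJms {
    // vendor2Factory stands in for another vendor's ConnectionFactory
    // (e.g. obtained from that vendor's client library or from JNDI).
    public static CamelContext build(ConnectionFactory vendor2Factory) throws Exception {
        CamelContext context = new DefaultCamelContext();

        // One JMS component per vendor, each with its own ConnectionFactory.
        context.addComponent("amq", JmsComponent.jmsComponent(
                new ActiveMQConnectionFactory("tcp://localhost:61616")));
        context.addComponent("vendor2", JmsComponent.jmsComponent(vendor2Factory));

        // A route bridging the two brokers within a single JVM.
        context.addRoutes(new RouteBuilder() {
            @Override
            public void configure() {
                from("amq:queue:inbound").to("vendor2:queue:outbound");
            }
        });
        return context;
    }
}
```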

How can I integrate my own XA transaction manager with Apache Camel?

I'm trying to create a router to integrate a number of JMS topics and queues. I am constrained by the fact that the client I am working for can't change the JMS implementation (TIBCO EMS with some custom client libraries) and the fact that they have written their own XA transaction manager, which doesn't quite conform to the JTA spec. It is very important that message delivery is guaranteed.
I've done a lot of reading and experimenting with Camel and I've realised that I probably need to write my own JMS component, as the standard JMS component is not going to integrate with the JMS client libraries or TM I have.
I need to be able to put hooks into the route lifecycle at the following points:
During the route startup, I need to identify all JMS connections and enlist them as XA resources with the TM implementation
When a message is received at the consumer, I need to start a transaction including all the JMS connections in the route
When a routing decision is made, I need to send the message to the producer and commit the transaction
Given the above, I think I can implement a very simplified version of the camel-jms component which strips out all the Spring parts and only contains the bare minimum required to interact with my JMS libraries.
Where would be the best place to initialise the transaction manager? I've been looking at DefaultCamelContext, RoutePolicy and RouteContext but I can't find a place where all the endpoints are resolved and initialised.
I solved this problem by implementing the UserTransaction and TransactionManager interfaces and creating a PlatformTransactionManager which the Camel JMS component uses to create the DefaultMessageListenerContainer.
One important point to note is that the transacted property on the Camel JmsComponent refers to local transactions, not XA transactions. If you set this property to true after passing a PlatformTransactionManager to the component, the DMLC will effectively try to commit your transaction twice, which won't work.
This leaves me with a nice working example consuming from one JMS broker and producing to another, but it is very slow: roughly 5 messages per second. Unfortunately, Spring JMS does not support batching, so it seems the best solution here is to adjust the JMS topic configurations so that routing only takes place between topics on the same broker.
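A hedged sketch of the wiring described above, assuming your own implementations of the JTA UserTransaction and TransactionManager interfaces are passed in; the Camel JMS component hands the resulting PlatformTransactionManager to its DefaultMessageListenerContainer:

```java
import javax.jms.ConnectionFactory;
import javax.transaction.TransactionManager;
import javax.transaction.UserTransaction;
import org.apache.camel.component.jms.JmsComponent;
import org.springframework.transaction.jta.JtaTransactionManager;

public class XaJmsSetup {
    // ut and tm are your own (non-standard) JTA implementations,
    // as described in the answer above.
    public static JmsComponent buildComponent(ConnectionFactory cf,
                                              UserTransaction ut,
                                              TransactionManager tm) {
        // Wrap the custom JTA objects in Spring's PlatformTransactionManager facade.
        JtaTransactionManager ptm = new JtaTransactionManager(ut, tm);
        ptm.afterPropertiesSet();

        JmsComponent jms = new JmsComponent();
        jms.setConnectionFactory(cf);
        jms.setTransactionManager(ptm);
        // Leave "transacted" false: it controls local JMS transactions, and
        // enabling it alongside a PlatformTransactionManager would make the
        // DMLC try to commit the transaction twice.
        jms.setTransacted(false);
        return jms;
    }
}
```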

Database Trigger And JMS

Could I create a trigger to send a record as a message to JMS? If yes, how can I do it?
Thanks in advance!
I would summarize your options as follows:
Databases Supporting JMS
Oracle is the only database that I am aware of that supports JMS natively, in the form of Oracle Advanced Queuing. If your message receiver is not too keen on that JMS implementation, it is usually possible to find some sort of messaging bridge that will transform and forward messages from one JMS implementation to another. For example:
Apache ActiveMQ
JBoss Messaging
Databases Supporting Java
Some databases, such as Oracle and DB2, have a built-in Java virtual machine and support the loading of third-party libraries (JARs) and custom classes that can be invoked by trigger code. Depending on the requirements of your JMS client, this may be an issue on account of the version of Java supported (say, if you need Java 5+ but the DB only supports Java 1.3). Also keep in mind that threading in some of these embedded JVMs is not what you might expect it to be, but one might also expect the sending of JMS messages to be more forgiving of this than the receiving of them.
Databases Supporting External Invocations (but not in Java)
Several databases support different means of triggering asynchronous events out to connected clients which can in turn forward JMS messages built from the payload of the event:
Oracle: DBMS_ALERT (synchronous), DBMS_PIPE, DCN
Postgres: NOTIFY / LISTEN
Some databases (all of the above, plus SQL Server) allow you to send SMTP messages from database procedural code (which can be invoked by triggers). While it is not JMS, a mail listener could potentially listen for mail messages (which might conveniently have a JSON or XML message body) and forward the content as a JMS message.
A basic variant of this is database packages that allow HTTP POSTs to call out to external endpoints, where you might have a servlet listening and forwarding the submitted content as a JMS message.
Other databases such as Postgres support non-Java languages such as Perl, Python, and Tcl, where you might employ some clever scripting to send a message to an external message transformer that will forward it as JMS. ActiveMQ (and therefore its message bridge) supports multi-language clients, including Python and Perl (and many others).
Lowest Common Denominator
Short of all that, your trigger can write an event to a table, and an external client can poll the contents of the table, looking for new data and forwarding JMS messages when it finds them. The JMS message can either include the content, or simply indicate that content exists and what the PK is, so the consumer can come and get it.
This technique is widely supported in Apache Camel, which has adapters (technically called components) specifically for polling databases:
Hibernate
JDBC
SQL
iBatis
Events read from a database table by Camel can then be transformed and routed to a variety of destinations, including a JMS server (in the form of a JMS message). Implementing this in Camel is fairly straightforward and well documented, so this is not a bad way to go.
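As a minimal sketch of that polling approach, assuming a DataSource is configured for the SQL component and a JMS component named "jms" is registered; the table "event_outbox" and its columns are placeholders:

```java
import org.apache.camel.builder.RouteBuilder;

public class OutboxPollingRoute extends RouteBuilder {
    @Override
    public void configure() {
        // Poll unprocessed rows every 5 seconds; onConsume marks each row
        // as processed (binding :#id from the selected row), and the payload
        // is forwarded to a JMS queue.
        from("sql:select id, payload from event_outbox where processed = 0"
                + "?delay=5000"
                + "&onConsume=update event_outbox set processed = 1 where id = :#id")
            .to("jms:queue:dbEvents");
    }
}
```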
I hope this was helpful.
IBM z and i series servers have DB2 with native SQL functions such as MQSEND to send to IBM MQ and MQREAD to read from IBM MQ. On these servers you can also create triggers that call a program.