I'm trying to configure Camel to read from a single JMS queue and publish to two different Kafka clusters.
My configuration is something like:
from("jms:queueName?concurrentConsumers=1")
.transacted()
.to("kafka:host1:port1?topic=kafkaTopic")
.to("kafka:host2:port2?topic=kafkaTopic")
My problem is how to make the process transactional, meaning that if the write to one of the Kafka clusters fails, the write to the other Kafka cluster should be rolled back and the message should stay in the JMS queue.
In my integration test, when I shut down the second Kafka cluster and publish a message to the JMS queue, the message still arrives at the first Kafka cluster.
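For context, here is a minimal sketch of how the JMS side is usually wired so that .transacted() actually has a transaction manager to work with (the Spring JmsTransactionManager, the ActiveMQ connection factory, the broker URL and the component name are my assumptions, not part of the original setup). Note that this only makes the JMS consumption transactional: the Camel Kafka producers do not enlist in the JMS transaction, so a Kafka send that has already succeeded is not undone by a rollback.

import javax.jms.ConnectionFactory;
import org.apache.activemq.ActiveMQConnectionFactory;
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.component.jms.JmsComponent;
import org.springframework.jms.connection.JmsTransactionManager;

public class JmsToKafkaRoute extends RouteBuilder {
    @Override
    public void configure() {
        // Assumed broker URL; replace with the real JMS provider settings.
        ConnectionFactory cf = new ActiveMQConnectionFactory("tcp://localhost:61616");
        JmsTransactionManager txManager = new JmsTransactionManager(cf);

        // Register a transacted JMS component so .transacted() picks up the transaction manager.
        getContext().addComponent("jms", JmsComponent.jmsComponentTransacted(cf, txManager));

        from("jms:queueName?concurrentConsumers=1")
            .transacted()
            // Both Kafka sends happen inside the route, but they are not part of the JMS transaction.
            .to("kafka:host1:port1?topic=kafkaTopic")
            .to("kafka:host2:port2?topic=kafkaTopic");
    }
}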
I have a reliable acquisition flow where an inbound SFTP connector polls and reads files and publishes them to a JMS ActiveMQ queue. I have an issue when ActiveMQ is down: the JMS connector goes into reconnection mode (retrying, say, every minute), but the SFTP connector keeps reading files and trying to publish them to ActiveMQ. This causes message loss, and the dead letter queue concept does not work either, because JMS is down.
Is there a way to stop reading files until JMS successfully reconnects? Also, what if a message is in flight and JMS goes down before the message ends up in the queue? Will the message roll back?
I am using Mule 3.9
If you set the flow processing strategy to synchronous and use only one receiver thread in the connector configuration, then it should not try to read a new file while the previous one is still being processed. Without those changes it is not a correct implementation of the reliability pattern.
More information:
Documentation of reliability patterns: https://docs.mulesoft.com/mule-runtime/3.9/reliability-patterns
KB Article: https://help.mulesoft.com/s/article/How-to-fetch-files-in-order-using-the-FTP-connector
Example:
<sftp:connector name="sftpConn">
<receiver-threading-profile maxThreadsActive="1" />
</sftp:connector>
<flow name="mainFlow" processingStrategy="synchronous">
...
I am trying to set up a Camel context that polls a WebLogic JMS queue, consumes the messages and sends them to a web service endpoint. In case any error occurs in the transaction or the target system is unavailable, I need to redeliver the same message without losing sequence/ordering.
I have set up a Camel JMS route with a single consumer, enabled the transacted attribute as described in https://camel.apache.org/transactional-client.html, and set the redelivery to unlimited.
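A minimal sketch of that setup for reference (the queue name, the connection factory bean name "weblogicCF" and the target URL are placeholders of my own; the unlimited redelivery itself is configured on the WebLogic queue):

import org.apache.camel.builder.RouteBuilder;

public class OrderedForwardingRoute extends RouteBuilder {
    @Override
    public void configure() {
        // A single transacted consumer so messages are processed one at a time;
        // a failed exchange rolls the message back onto the WebLogic queue.
        from("jms:queue:inboundQueue"
                + "?connectionFactory=#weblogicCF"
                + "&transacted=true"
                + "&concurrentConsumers=1&maxConcurrentConsumers=1")
            .transacted()
            .to("http://target-host/service"); // hypothetical web service endpoint
    }
}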
When the transaction fails for messageA, the JMS message consumption from the WebLogic queue is rolled back and messageA is marked for redelivery (its state string shows "delayed") in WebLogic. But if another message reaches the WebLogic queue during this time, the Camel route picks up messageB and forwards it to the target endpoint even though messageA is still being retried. This distorts the whole ordering of the messages.
The transactional client is used to ensure that messages are not lost if the application is shut down during redelivery.
I expect no message loss, and that messages are always delivered to the target endpoint in the same order in which they arrived in the WebLogic queue.
That a newly arrived message outpaces an existing message that has to be redelivered sounds like a broker (WebLogic) issue or feature.
I have never seen this behavior with ActiveMQ: a failed message is redelivered immediately by the Apache broker.
It sounds like the message is "put aside" internally to be redelivered later. This can absolutely make sense to avoid blocking message processing.
Is there something like a "redelivery delay" set on WebLogic that you can configure? I could imagine that a delayed redelivery works like an internal error queue with a scheduled consumer.
Does the Apache Camel ActiveMQ component guarantee message delivery to the broker?
If I understand correctly (reading this doc), Camel has the deliveryPersistent option enabled by default for JMS, and that guarantees consuming messages from the broker.
But I don't understand how this works when producing from the application to the broker (and if it is guaranteed, what kind of storage does it use?). If this kind of guarantee is not supported by default, does Camel provide a simple way to implement it?
Thanks in advance
No. Only when the message is acknowledged by the broker is it safely handed over to the broker, where it is then guaranteed. The persistent option just tells the broker to store the message on disk instead of keeping it in memory only.
So if Camel cannot send the message to the broker due to networking issues etc., the operation will fail, and you need to deal with this error in Camel.
What you can do is run a local ActiveMQ broker alongside your Camel apps and then connect these brokers in a network of brokers, where the brokers will route the messages safely among each other.
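As an illustration of "dealing with this error in Camel", here is one possible sketch (the endpoint URIs, queue name and redelivery numbers are assumptions, not defaults): retry the send a few times and, if the broker still cannot be reached, park the message locally instead of losing it.

import org.apache.camel.builder.RouteBuilder;

public class GuaranteedSendRoute extends RouteBuilder {
    @Override
    public void configure() {
        // Retry failed sends, then park the message on disk so it is not lost.
        errorHandler(deadLetterChannel("file:failed-messages")
            .maximumRedeliveries(5)
            .redeliveryDelay(2000));

        from("direct:produce")
            // deliveryPersistent=true is already the default; shown here for clarity.
            .to("activemq:queue:orders?deliveryPersistent=true");
    }
}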
Please help me finalize an architecture for asynchronous delivery support using Apache Camel and ActiveMQ; below I have explained my requirements point by point.
I have a Jetty server receiving incoming messages and ActiveMQ storing them on disk using KahaDB.
ActiveMQ sends an ack back to the client once it has stored the message in KahaDB.
I have a Spring AbstractPollingMessageListenerContainer JMS message listener which picks up messages from the ActiveMQ queue every second and dispatches them to Camel HTTP endpoints, which finally send them to the actual remote receivers.
Once the dispatcher thread gets a response from the remote receiver, it deletes the message from ActiveMQ.
Assume that I have many slow remote receivers. In that case a dispatcher thread created by AbstractPollingMessageListenerContainer remains blocked until it gets a response from the remote receiver. This results in the creation of new dispatcher threads, since the already created dispatcher threads are not able to dispatch new messages from the ActiveMQ queue.
The creation of many dispatcher threads results in more CPU usage, which impacts overall performance.
My requirement is that the dispatcher thread should only dispatch messages from the ActiveMQ queue to the HTTP endpoint, fire-and-forget style, and not acknowledge them, so that the messages stay in the queue.
I also do not want the dispatcher thread to wait until the response arrives; I plan to handle the response in a separate thread, and only that thread will delete the message from the ActiveMQ queue.
So my current architecture is like below:
Camel Jetty Server ----> ActiveMQ queue ----> Dispatcher Thread ---> Camel Direct endpoint ----> Camel HTTP endpoint ---> remote receivers sending response back ---> response ---> Dispatcher Thread (sends ack to delete messages from ActiveMQ queue) ----> ActiveMQ Queue.
Here, since we are using a direct endpoint, which is synchronous, the dispatcher thread stays busy until it gets the response, so the same dispatcher thread is not able to process further messages from the ActiveMQ queue.
Please suggest something else I can use here to avoid the direct endpoint.
I tried a SEDA endpoint, but the drawback is that it processes one message per thread and also blocks until it gets the response from the receivers.
In this approach, where previously the dispatcher thread was blocked, now the SEDA consumer threads get blocked and cannot dispatch new messages from SEDA's in-memory queue towards the remote receivers.
I am looking for a design that lets me keep sending messages to the remote receivers, and only when a response comes back does some daemon thread get notified to handle the acknowledgement towards ActiveMQ. I also thought of using an NIO framework implementation like the Camel netty/netty4-http component, but could not find the exact usage or how to fit it into the current architecture.
The modified architecture should look like this:
Camel Jetty Server ----> ActiveMQ queue ----> Dispatcher Thread--->Unknown Stuff ----> Camel HTTP Endpoint ---> remote receivers sending response back--->Unknown Stuff (sends ack to delete messages from ActiveMQ queue) ----> ActiveMQ Queue
Please help me finalize the "unknown stuff"; I am posting my query after doing enough R&D.
New ideas are also welcome, with the restriction that I must persist the message and delete it only after getting a success response from the remote receiver. Also, I have to design the architecture using only Apache Camel routes.
Route definitions:
1. Dispatcher route:
from(fromUri).to(toUris);
fromUri: [ActiveMQueue.http1270018081testEndpoint1:queue:ActiveMQueue?maxConcurrentConsumers=15&concurrentConsumers=3&maxMessagesPerTask=10&messageListenerContainerFactoryRef=AbstractPollingMessageListenerContainer]
toUris: [ActiveMQ.DLQ:queue:ActiveMQ.DLQ, direct:http1270018081testEndpoint1]
2. Remote receiver proxy route:
from(fromUri).to(toUri).process(responseProcessor);
fromUri: direct:http1270018081testEndpoint1
toUri: http://127.0.0.1:8081/testEndpoint1?bridgeEndpoint=true
responseProcessor: processes the response received from the remote receiver.
The overall route looks like this:
Dispatcher Route ---> Remote Receiver Proxy Route ---> Remote Server
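Rendered directly in the Java DSL, the two routes above would look roughly like this (a sketch based on the fragments above; the DLQ entry from the toUris list is omitted and the response processor is only outlined):

import org.apache.camel.Exchange;
import org.apache.camel.Processor;
import org.apache.camel.builder.RouteBuilder;

public class DispatcherRoutes extends RouteBuilder {
    @Override
    public void configure() {
        // Placeholder for the processor that handles the remote receiver's response.
        Processor responseProcessor = (Exchange exchange) -> {
            // inspect the HTTP response on the exchange here
        };

        // 1. Dispatcher route: pick up messages from the ActiveMQ queue and hand them to the proxy route.
        from("activemq:queue:ActiveMQueue"
                + "?maxConcurrentConsumers=15&concurrentConsumers=3&maxMessagesPerTask=10"
                + "&messageListenerContainerFactoryRef=AbstractPollingMessageListenerContainer")
            .to("direct:http1270018081testEndpoint1");

        // 2. Remote receiver proxy route: forward to the remote receiver and process its response.
        from("direct:http1270018081testEndpoint1")
            .to("http://127.0.0.1:8081/testEndpoint1?bridgeEndpoint=true")
            .process(responseProcessor);
    }
}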
JMS message acknowledgement is done under the covers, so your only way to really "send the acknowledgement back to the queue" is to use a JMS transaction (it doesn't need to be XA).
It sounds like an LLR-style transaction would be useful and drastically simplify things for you. If you consume the message from the queue using a JMS local transaction, and only have one other endpoint, the message will only be acknowledged and removed from the queue when the HTTP send has completed, even though HTTP doesn't support transactions. You can then have a number of concurrent consumers running in parallel and combine that with throttling to help with rate limiting.
from: amq:queue:INPUT.REQUESTS?.. concurrentConsumers.. and transacted enabled
throttle
to: http://url
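A sketch of that route in the Java DSL, assuming the ActiveMQ component is registered as "amq" and configured with transacted consumption and a JMS transaction manager; the consumer count and throttle rate are illustrative only:

import org.apache.camel.builder.RouteBuilder;

public class LlrStyleDispatchRoute extends RouteBuilder {
    @Override
    public void configure() {
        // Because consumption is transacted, the message is only acknowledged (removed
        // from the queue) when the whole route, including the HTTP call, completes
        // without an exception.
        from("amq:queue:INPUT.REQUESTS?concurrentConsumers=10&transacted=true")
            .transacted()
            .throttle(50).timePeriodMillis(1000) // at most 50 requests per second (illustrative)
            .to("http://127.0.0.1:8081/testEndpoint1?bridgeEndpoint=true");
    }
}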
Trying to set up a route with transaction handling in Camel, I see the connection to ActiveMQ drop and reconnect every few milliseconds. Is this expected, and is there a workaround?
Logs showing repeated reconnects to the ActiveMQ server:
ActiveMQ FailoverTransport Successfully connected to ssl://serveraddress:61617
ActiveMQ FailoverTransport Successfully connected to ssl://serveraddress:61617
ActiveMQ FailoverTransport Successfully connected to ssl://serveraddress:61617
Changed the connection factory to a CachingConnectionFactory and tweaked the configuration to use the caching connection.
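For reference, a minimal sketch of that change (the broker URL from the logs, the session cache size and the use of the generic JMS component are my assumptions): the CachingConnectionFactory reuses connections and sessions instead of opening a new connection for every transacted poll, which is the usual cause of the constant connect/disconnect cycle.

import org.apache.activemq.ActiveMQConnectionFactory;
import org.apache.camel.component.jms.JmsComponent;
import org.springframework.jms.connection.CachingConnectionFactory;
import org.springframework.jms.connection.JmsTransactionManager;

public class ActiveMqConfig {

    public JmsComponent activeMqComponent() {
        // Underlying factory with the failover/ssl URL seen in the logs.
        ActiveMQConnectionFactory amqCf =
            new ActiveMQConnectionFactory("failover:(ssl://serveraddress:61617)");

        // Cache sessions so transacted polling does not open a fresh connection each time.
        CachingConnectionFactory cachingCf = new CachingConnectionFactory(amqCf);
        cachingCf.setSessionCacheSize(10); // assumed cache size

        JmsTransactionManager txManager = new JmsTransactionManager(cachingCf);

        return JmsComponent.jmsComponentTransacted(cachingCf, txManager);
    }
}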