We have a Camel route where we read a message from an input queue, process it, set some JMS headers (using Exchange.getIn().setHeader(...)) and then route the message to an output queue. During an MQ failover scenario the message is redelivered. However, when the message is redelivered, the JMS headers that I set earlier are lost.
Is there any way to preserve the JMS headers even after redelivery?
JMS redelivery
No, not if the message is redelivered from the input queue. Simply because it is the same original message you received before. The JMS broker does not know anything about the modifications you did in the Camel route.
However, this is normally not a problem. Because on a redelivery, the same consumer consumes the message again and does the same modifications again on the message.
As soon as you reach a "transaction boundary" in your route (that means, something has been done that cannot be repeated or that would yield a different result), you should put the modified message on another queue to "save" its current state.
From there you can continue with another consumer, and so forth. If you build a processing chain like this, your system follows the Pipes and Filters EIP.
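For example, a minimal Camel sketch of that staging step (inside a RouteBuilder; the queue names, header and value are placeholders):

from("activemq:queue:input")
    .setHeader("myHeader", constant("myValue"))  // modification that must survive a redelivery
    .to("activemq:queue:staged");                // persist the modified state on an intermediate queue

from("activemq:queue:staged")                    // the next filter continues from the saved state
    .to("activemq:queue:output");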
Camel redelivery
Another possibility is to use the Camel ErrorHandler. It handles errors at the level of a single route step. It can also do retries, but then you have to make sure the message is correctly handled if all Camel retries fail (for example, send the message to an error queue).
As long as the broker redelivery is the last resort for your message, you should build your system with potential redelivery in mind.
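For example, a minimal sketch of such an error handler inside a RouteBuilder (the retry count, delay and queue/endpoint names are placeholders):

errorHandler(deadLetterChannel("activemq:queue:error")
    .maximumRedeliveries(3)        // Camel-level retries before giving up
    .redeliveryDelay(2000));       // wait 2 seconds between attempts

from("activemq:queue:input")
    .to("http://remote/service");  // if all retries fail, the message ends up on the error queue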
I have this problem too. I was forced to convert my objects to JSON and save them in the JMS headers, and afterwards I convert them back to objects. It worked for me!
Please help me finalize an architecture for asynchronous delivery support using Apache Camel and ActiveMQ; below I have explained my requirement point by point.
I have a Jetty server receiving incoming messages and ActiveMQ storing them on disk using KahaDB.
ActiveMQ sends an acknowledgement back to the client once the message is stored in KahaDB.
I have a Spring AbstractPollingMessageListenerContainer JMS message listener which picks up messages from the ActiveMQ queue every second and dispatches them to Camel HTTP endpoints, which finally send them to the actual remote receivers.
Once the dispatcher thread gets a response from the remote receiver, it deletes the message from ActiveMQ.
Assume that I have many slow remote receivers. In that case the dispatcher thread created by AbstractPollingMessageListenerContainer remains blocked until I get a response from the remote receiver. This results in the creation of new dispatcher threads, since the already created dispatcher threads are not able to dispatch new messages from the ActiveMQ queue.
The creation of many dispatcher threads results in more CPU usage, which impacts overall performance.
My requirement is that the dispatcher thread should only dispatch messages from the ActiveMQ queue to the HTTP endpoint (fire and forget) and not send an acknowledgement, so that the message stays in the queue.
Also, I will not let the dispatcher thread wait for the response; instead I plan to handle the response using a separate thread, and only this thread will delete the message from the ActiveMQ queue.
So my current architecture is like below:
Camel Jetty Server ----> ActiveMQ queue ----> Dispatcher Thread ---> Camel Direct endpoint ----> Camel HTTP endpoint ---> remote receivers sending response back ---> response ---> Dispatcher Thread (sends ack to delete messages from ActiveMQ queue) ----> ActiveMQ Queue.
Here I feel that since we are using a direct endpoint, which is synchronous, the dispatcher thread remains busy until it gets the response, and so the same dispatcher thread is not able to process further new messages from the ActiveMQ queue.
Please suggest whether there is something else I can use here instead of the direct endpoint.
I used a SEDA endpoint, but its drawback is that it processes one message per thread and also blocks until it gets the response from the receivers.
With this approach the dispatcher thread no longer gets blocked, but the SEDA consumer threads get blocked instead and cannot dispatch new messages from SEDA's in-memory queue towards the remote receivers.
I am looking for a design that lets me keep sending messages to the remote receivers, where some daemon thread is notified only when a response comes back and then handles the acknowledgement towards ActiveMQ. I also thought of using an NIO framework implementation such as the Camel netty/netty4-http component, but could not find its exact usage or how to fit it into the current architecture.
Modified architecture should be like below:
Camel Jetty Server ----> ActiveMQ queue ----> Dispatcher Thread--->Unknown Stuff ----> Camel HTTP Endpoint ---> remote receivers sending response back--->Unknown Stuff (sends ack to delete messages from ActiveMQ queue) ----> ActiveMQ Queue
Please help me finalize the "Unknown Stuff"; I am posting my query after doing enough R&D.
New ideas are also welcome, with the restriction that I must persist the message and delete it only after getting a success response from the remote receiver, and that I have to design the architecture using only Apache Camel routes.
Route Definitions:
1. Dispatcher Route:
from(fromUri).to(toUris);
fromUri:
[ActiveMQueue.http1270018081testEndpoint1:queue:ActiveMQueue?maxConcurrentConsumers=15&concurrentConsumers=3&maxMessagesPerTask=10&messageListenerContainerFactoryRef=AbstractPollingMessageListenerContainer ]
ToUris:
[ActiveMQ.DLQ:queue:ActiveMQ.DLQ, direct:http1270018081testEndpoint1]
2. Remote Receiver Proxy Route:
fromUri:direct:http1270018081testEndpoint1
from(fromUri).to(toUri).process(responseProcessor)
toUri:http://127.0.0.1:8081/testEndpoint1?bridgeEndpoint=true
responseProcessor: processes the response received from the remote receiver.
The overall route looks like this:
Dispatcher Route ---> Remote Receiver Proxy Route ---> Remote Server
JMS message acknowledgement is done under the covers, so your only way to really "send the acknowledgement back to the queue" is to use a JMS transaction (it doesn't need to be XA).
It sounds like an LLR-style transaction would be useful and would drastically simplify things for you. If you consume the message from the queue using a local JMS transaction, and there is only one other endpoint, the message will only be acknowledged and removed from the queue when the HTTP send completes, even though HTTP doesn't support transactions. You can then have a number of concurrent consumers running in parallel and combine them with throttling to help with rate limiting.
from("amq:queue:INPUT.REQUESTS?transacted=true&concurrentConsumers=5")  // example consumer count
    .throttle(10)        // example rate limit per second
    .to("http://url");   // the ack only happens after the HTTP call succeeds
We are exposing a SOAP/HTTP-based camel-cxf web service where, on receiving a request from 'Client-A', the route execution starts and involves calling one or more external web services, let's say 'Server1', 'Server2' and 'Server3', in a sequential manner. In this case, we need to understand what happens to the execution of the route when the original TCP connection with 'Client-A' is closed unexpectedly.
Will the route get executed successfully and an error is logged when it tries to send the final response?
Or will the route execution be stopped immediately as soon as the TCP connection is closed?
You can capture any errors during the route execution with Camel's error handling mechanism and then define a policy for how you want to handle the exception. For example, you can set up rules that state how many times to attempt redelivery, the delay between attempts, and so forth.
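A minimal sketch of such a rule inside a RouteBuilder (the exception type, retry count, delay and error queue are placeholders):

onException(Exception.class)
    .maximumRedeliveries(3)         // how many times to retry the failed step
    .redeliveryDelay(1000)          // delay in milliseconds between attempts
    .handled(true)                  // do not propagate the exception back to the caller
    .to("activemq:queue:errors");   // park the message if all retries fail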
For more information you can refer to this link:
http://camel.apache.org/redeliverypolicy.html
I use nopoll (http://www.aspl.es/nopoll/) for my C application to communicate with Meteor.
Meteor periodically sends ping messages.
When my application polls the websocket, it replies with a pong message: everything is fine.
Next, to avoid polling, I replaced the polling with a callback initialized with sigaction(SIGIO, ...).
Then, when a ping is received, I send a pong, but sometimes the server stops sending pings and no other messages can be exchanged.
Is there any timeout between a ping and the associated pong message?
Is there any mechanism to notify me of a connection loss? nopoll_conn_is_ok() and nopoll_conn_is_ready() always return nopoll_true.
It is difficult to say why Meteor stops sending content. However, two points are worth considering in your case:
You don't have to send a PONG every time you receive a PING when using noPoll, because that is done automatically by noPoll's engine (see the nopoll_conn_get_msg() implementation at nopoll_conn.c:2453). Maybe this is causing Meteor to fail.
About getting a connection close notification, use nopoll_conn_set_on_close (conn,handler,ptr) to get a notification when the connection is closed. See working examples here: https://dolphin.aspl.es/svn/publico/nopoll/trunk/test/nopoll-regression-client.c
Best Regards,
Today I tried to simulate a scenario where, in the Camel "to" tag, I supplied a misspelt queue name (one that did not exist). Instead of throwing an exception back, Camel or RabbitMQ simply continued and finished the route flow.
Intrigued, I wrote a sample program to send a message using channel.basicPublish with a wrong queue name. I never got any exception thrown back from the RabbitMQ client.
However, if the exchange name was wrong, I did get an exception back. Is this expected behaviour?
I tried adding a return listener, confirm listener, exception handler, etc., but none of them got invoked.
Any clues?
Messages are published to exchanges, so the exchange must be there when publishing messages. At publish time RabbitMQ doesn't care about queues, unless the mandatory flag is provided, or the channel is in confirm mode.
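For illustration, a minimal sketch with the RabbitMQ Java client (broker host, routing key and payload are placeholders): publishing to the default exchange with a routing key that matches no queue is silently dropped unless the mandatory flag is set, in which case the broker returns the message to the ReturnListener.

import com.rabbitmq.client.*;

public class MandatoryPublishExample {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost");  // assumed local broker
        try (Connection conn = factory.newConnection();
             Channel channel = conn.createChannel()) {

            // Unroutable messages are only reported back when mandatory=true
            channel.addReturnListener(new ReturnListener() {
                @Override
                public void handleReturn(int replyCode, String replyText, String exchange,
                                         String routingKey, AMQP.BasicProperties properties, byte[] body) {
                    System.out.println("Returned as unroutable: " + replyCode + " " + replyText);
                }
            });

            channel.confirmSelect();  // optional: publisher confirms

            // Default exchange, routing key matching no queue: without mandatory=true
            // this publish is silently dropped and no exception is raised.
            channel.basicPublish("", "no.such.queue", true, null, "hello".getBytes());

            channel.waitForConfirms(5000);  // the return/confirm arrives asynchronously
        }
    }
}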