If you place a message onto a SQL Server Service Broker queue, does the message have any kind of time-to-live value, or will it just hang around forever?
I cannot seem to find anything about this. There is a post on the SQL Server forums where it seems to be suggested that there is an effective timeout of 30 minutes. However, that is with message forwarding, which I don't know is applicable to the above scenario.
It will remain on the queue. If you define a LIFETIME in the BEGIN DIALOG CONVERSATION call, then Service Broker will refuse to accept messages on that conversation once that time is exceeded. I'm not sure what it does with messages already on the queue that it hasn't yet successfully sent once the lifetime has passed, though.
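For illustration, here is a minimal sketch of starting a dialog with an explicit LIFETIME; the service, contract, and message type names are hypothetical placeholders:

```sql
-- Begin a dialog that both endpoints must complete within 10 minutes.
-- Once the lifetime expires, further SENDs on the dialog raise an error.
DECLARE @dialog UNIQUEIDENTIFIER;

BEGIN DIALOG CONVERSATION @dialog
    FROM SERVICE [//Example/InitiatorService]
    TO SERVICE '//Example/TargetService'
    ON CONTRACT [//Example/Contract]
    WITH LIFETIME = 600,        -- seconds
         ENCRYPTION = OFF;

SEND ON CONVERSATION @dialog
    MESSAGE TYPE [//Example/Request] (N'<payload/>');
```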
Summary
I am looking for a statement or short T-SQL script that will block waiting for the next message NOT in a known conversation group, so that a worker can be allocated to that conversation group and begin processing the messages in that group. In other words, I want to sleep until a new conversation is started. I am looking for a better method than polling, as polling is inefficient and causes unnecessary latency.
Background
I am designing a service that is available to database applications via SQL Server Service Broker. For reasons that are not important here, each instance of the target-side service has a lot of internal state; each message needs to be routed to the correct process. The system needs to handle requests from multiple clients concurrently. Multiple requests on one conversation from any given client are related to each other and processed serially.
One conversation per worker process
Currently, each conversation initiated to the Target SB Service starts a dedicated worker process; the lifetime of that process and conversation are linked together. Most requests consist of the Initiator sending a message and immediately waiting for a response before continuing.
Each Target worker gets its own dedicated conversation_group_id to identify its interactions on the queue.
I use RECEIVE with a WHERE conversation_group_id = #group clause in each worker process's message pump so that any given service instance only receives its own messages and does not conflict with other instances running on the same Service/Queue (a sketch of this pump follows at the end of this section).
When a worker initiates its own conversation, it will assign the new dialog to the existing conversation group with the RELATED_CONVERSATION_GROUP clause.
If a worker encounters a fatal error, it uses END CONVERSATION ... WITH ERROR ... to terminate the session (and the process) with a system error message to the Initiator. The Initiator will propagate that error and close its end of the conversation.
If the Initiator ends the conversation (including by error) the Target worker closes its end and exits.
The dialog LIFETIME option is used to prevent a Target worker process from being kept around for an excessive amount of time.
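To make the pattern above concrete, here is a minimal sketch of a worker's message pump; the queue name and procedure are hypothetical stand-ins for the real ones:

```sql
-- Message pump for one worker: receives only messages belonging to the
-- conversation group this worker was spawned for.
CREATE PROCEDURE dbo.WorkerMessagePump
    @group UNIQUEIDENTIFIER            -- this worker's conversation_group_id
AS
BEGIN
    DECLARE @handle   UNIQUEIDENTIFIER,
            @msg_type SYSNAME,
            @body     VARBINARY(MAX);

    WHILE 1 = 1
    BEGIN
        BEGIN TRANSACTION;

        WAITFOR (
            RECEIVE TOP (1)
                @handle   = conversation_handle,
                @msg_type = message_type_name,
                @body     = message_body
            FROM dbo.TargetQueue
            WHERE conversation_group_id = @group
        ), TIMEOUT 5000;

        IF @@ROWCOUNT = 0              -- timed out: nothing for this group yet
        BEGIN
            ROLLBACK TRANSACTION;
            CONTINUE;                  -- or exit under an idle policy
        END;

        -- ... dispatch on @msg_type and process @body here ...
        -- Dialogs the worker initiates itself would be opened with
        -- BEGIN DIALOG ... WITH RELATED_CONVERSATION_GROUP = @group
        -- so their replies arrive in this same group.

        COMMIT TRANSACTION;
    END;
END;
```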
Better notification method needed to trigger new worker process creation
The system works great for the individual worker processes. The problem I am having is in synchronizing the parent process that launches new workers in response to new conversations.
Wrapped in a WAITFOR clause, RECEIVE is the only statement I know of that will block until a new message is available for processing on a queue. This is important to avoid polling.
RECEIVE appears to support exactly two WHERE conditions to limit the messages selected: filtering to exactly one known conversation handle or to one known conversation group.
The simplest solution would be to pass a NOT IN or NOT EXISTS clause in the WHERE condition of RECEIVE to exclude conversation groups that already have a worker assigned, but this does not appear to be supported by the syntax.
Using a direct RECEIVE without a WHERE clause can and will pick up messages meant for a specific worker that already exists; messages which cannot be handled by the parent process. Receiving a message into the wrong process presents new problems: the message has to be put back with a ROLLBACK to undo the queue removal (potentially many times, if it takes the worker a while to get to that message), and that method also increments the poison-message counter unnecessarily.
The only solution I have at present is to repeatedly SELECT from sys.conversation_endpoints (or from the queue directly), once per second, looking for conversations in a new group that has no worker. This introduces up to one extra second of latency and presumably has higher overhead than a simple blocking RECEIVE.
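For reference, a sketch of that polling workaround; dbo.Workers is a hypothetical table tracking which conversation groups already have a worker assigned:

```sql
DECLARE @new_group UNIQUEIDENTIFIER;

WHILE 1 = 1
BEGIN
    SET @new_group = NULL;

    -- Look for a conversation group that no worker owns yet.
    SELECT TOP (1) @new_group = ce.conversation_group_id
    FROM sys.conversation_endpoints AS ce
    WHERE NOT EXISTS (SELECT 1
                      FROM dbo.Workers AS w
                      WHERE w.conversation_group_id = ce.conversation_group_id);

    IF @new_group IS NOT NULL
    BEGIN
        -- ... launch a worker for @new_group and record it in dbo.Workers ...
        CONTINUE;                     -- immediately check for more new groups
    END;

    WAITFOR DELAY '00:00:01';         -- the up-to-one-second latency noted above
END;
```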
Is there a better way to be notified of only NEW conversations to the Service or Queue?
We're sending messages to Apache Camel using RabbitMQ.
We have a "sender" and a Camel route that processes a RabbitMQ message sent by the sender.
We're having deployment issues regarding which end of the system comes up first.
Our system is low-volume. I am sending perhaps 100 messages at a time. The point of the message is to reduce 'temporal cohesion' between a thing happening in our primary database, and logging of same to a different database. We don't want our front-end to have to wait.
The "sender" will create an exchange if it does not exist.
This startup-order dependency is what breaks our deployments.
Here's what I see:
If I take down the sender, take down Camel, delete the exchange (a clean slate), start the sender, then start Camel, and send 100 messages, the system works. (Since the sender has to be run manually for testing, I think the Exchange is actually being created by the Camel route...)
If, from a clean slate, I send a message and only then bring Camel up, I can see the messages land in RabbitMQ (using the web tool), but no queues are bound. Once I start Camel, I can see its bound queue attached to the Exchange. But the earlier messages have been lost to time and fate; they have apparently been dropped.
If, from the current state, I send more messages, they flow properly.
I think that if the messages that got dropped were persisted, I'd be ok. What am I missing?
For me it's hard to say what exactly is wrong, but I'll try and provide some pointers.
You should set up all exchanges and queues to be durable, and the messages persistent. You should never delete any of these entities (unless they are empty and you no longer use them); think of them as tables in a database. They are your infrastructure of sorts, and as with a database, you wouldn't want the first DB client that needs a table to be the one that creates it (this applies to your use case, at least as it seems to me).
In the comments I mentioned flow state of the queue, but with 100 messages this will probably never happen.
Regarding message delivery: persistent or not, the broker (server) keeps messages until they are consumed and acknowledged by the consumer (in lots of APIs the acknowledgment is sent automatically, but it's actually one of the most important concepts).
If the exchange to which the messages were published is deleted, they are gone. If the server gets killed or restarted and the messages are not persistent, again, they're gone. There may well be more scenarios in which messages get dropped (if I think of some I'll edit the answer).
If you don't have control over creating (usually "declaring" in the APIs) exchanges and queues, then (aside from the fact that it's not the best arrangement, IMHO) it can be tricky, since a declaration only succeeds if its parameters match the existing entity, i.e. you can't declare a durable queue q1 if a non-durable queue with the same name already exists. This could also be the problem in your case, since you mention the question of which part of the system comes up first: maybe something is not declared with the same parameters on both sides...
I use PushSharp to send notifications for a few Apps.
PushSharp is great; it really simplifies working with push services. But I wonder: what is the right way to work with it?
I haven't found examples or explanations about that.
Now, when I have a message to send, I:
create a PushSharp object
do a PushService.QueueNotification() for all devices
do a PushService.StopAllServices to send all queued messages
exit the method (and kill the PushService object).
Should I work this way, or keep this PushService object alive and call its methods when needed?
How should I use a PushService object to get the unregistered device IDs? With a dedicated instance?
Any suggestion would be appreciated.
This is a question which frequently comes up.
The answer isn't necessarily one way or the other, but it depends on your situation. In most cases it would be absolutely fine to just create a PushBroker instance whenever you need it, since most platforms use HTTP based protocols for sending notifications. In the case of Apple, they state in their documentation that you should keep your connection to APNS open in order to minimize overhead of opening and closing secure connections.
However, in practice I think this means that they don't want you connecting and disconnecting VERY frequently (eg: they don't want you creating a new connection for every message you send). In reality, if you're sending batches of notifications every so often (let's say every 15 minutes or every hour) they probably won't have a problem with you opening a new connection for each batch and then closing it when done.
I've never heard of anyone being blocked from Apple's APNS servers for doing this. In fact, in the very early days of working with push notifications, I had a bug that caused a new APNS connection to be created for each notification. I sent thousands of notifications a day like this and never heard anything about it from Apple (eventually I identified it as a bug and fixed it, of course).
As for collecting feedback, by default the ApplePushService will poll the feedback servers 10 seconds after starting, and then every 10 minutes thereafter. If you want to disable this you can simply set ApplePushChannelSettings.FeedbackIntervalMinutes to a value <= 0. You can then use the FeedbackService class to poll for feedback manually, whenever you need to.
I'm going to do async auditing on my SQL Server 2008 as shown here: http://auoracle.blogspot.com/2010/02/service-broker-master-audit-database.html
What it does is:
a trigger sends a message to a queue in the service broker
a stored procedure in another database receives the messages and processes them
The possible problem I see is that it's using a single conversation to send all the messages in order, which is a requirement.
I'm just a little concerned about the fact it's using a single conversation, I guess it's not the common usage. Do you know if there's any problem on doing so?
Thanks!
There's nothing wrong with using a single conversation. Some people use conversation pooling with several pre-created conversations, but unless you're hitting a performance bottleneck, I wouldn't worry about it.
One thing that you should get right is error handling: closing the conversation and opening a new one in case of error.
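A rough sketch of that send-side error handling, assuming a hypothetical one-row table dbo.AuditDialogCache that holds the single cached conversation handle (all service names are placeholders too):

```sql
DECLARE @dialog  UNIQUEIDENTIFIER,
        @payload XML = N'<audit/>';   -- stand-in for the real audit record

SELECT @dialog = conversation_handle FROM dbo.AuditDialogCache;

BEGIN TRY
    SEND ON CONVERSATION @dialog
        MESSAGE TYPE [//Audit/Record] (@payload);
END TRY
BEGIN CATCH
    -- The cached dialog is broken: close it with an error, open a fresh
    -- one, remember it, and retry the send once. (END CONVERSATION can
    -- itself fail if the dialog is already gone; production code would
    -- guard against that.)
    END CONVERSATION @dialog WITH ERROR = 1 DESCRIPTION = 'send failed';

    BEGIN DIALOG CONVERSATION @dialog
        FROM SERVICE [//Audit/InitiatorService]
        TO SERVICE '//Audit/TargetService'
        ON CONTRACT [//Audit/Contract]
        WITH ENCRYPTION = OFF;

    UPDATE dbo.AuditDialogCache SET conversation_handle = @dialog;

    SEND ON CONVERSATION @dialog
        MESSAGE TYPE [//Audit/Record] (@payload);
END CATCH;
```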
I'm new to using the SQL Server 2005 Service Broker. I've created queues and successfully got conversations going, etc. However, I want to sort of "throttle" messages, and I'm not sure how to go about that.
Messages are sent by a stored proc which is called by a multi-user application. Say 20 users each cause this proc to be called once within a 30-second period; the message only needs to be sent once. So I think I need some way, from my proc, to see if a message was sent within the last 30 seconds. Is there a way to do that?
One idea I had was to send a message to a "response" queue that indicates if the request queue activation proc has been called. Then in my stored proc (called by user app) see if that particular message has been called recently. Problem is I don't want this to mess up the response queue. Can one peek at a queue (not receive) to see if a message exists in it?
Or is there a more simple way to accomplish what I'm after?
Yes, you can peek at a queue to see if a message is in it beforehand. Simply query the queue using SELECT instead of RECEIVE and you can look at the data.
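For example (queue name hypothetical), a plain SELECT reads pending messages without removing them:

```sql
-- Peek at the queue: unlike RECEIVE, SELECT does not dequeue anything.
SELECT TOP (10)
       conversation_handle,
       message_type_name,
       CAST(message_body AS XML) AS body   -- assumes XML payloads
FROM dbo.ResponseQueue WITH (NOLOCK);
```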
A better bet would be to send the messages and have the stored procedure which receives the messages decide if the message should be tossed out or not.
I send hundreds of thousands of messages to service broker at a time without any sort of performance issue.
If you are seeing performance issues then try sending more than one message per conversation as that is the quickest and easiest way to improve Service Broker performance.
Not sure if you could do this in SB somehow, but could you just have a table with a timestamp field in it that is updated when a message is sent? The proc would check for a time difference of > 30 seconds before sending.
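A sketch of that idea with hypothetical table and column names; making the check part of the UPDATE turns it into an atomic test-and-set, so two concurrent callers can't both decide to send:

```sql
-- One-row gate table, seeded once.
CREATE TABLE dbo.SendGate (id INT PRIMARY KEY, sent_at DATETIME NOT NULL);
INSERT INTO dbo.SendGate (id, sent_at) VALUES (1, '20000101');

-- Inside the stored proc the application calls:
UPDATE dbo.SendGate
SET sent_at = GETDATE()
WHERE id = 1
  AND DATEDIFF(SECOND, sent_at, GETDATE()) > 30;

IF @@ROWCOUNT = 1
BEGIN
    -- We won the 30-second window: send the Service Broker message here.
    PRINT 'sending';
END;
```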