How to reduce flooding a Service Broker queue?

I'm new to using the SQL Server 2005 Service Broker. I've created queues and successfully got conversations going, etc. However, I want to sort of "throttle" messages, and I'm not sure how to go about that.
Messages are sent by a stored proc which is called by a multi-user application. If 20 users cause this proc to be called once each within a 30-second period, the message only needs to be sent once. So I think I need some way for my proc to see whether a message was sent within the last 30 seconds. Is there a way to do that?
One idea I had was to send a message to a "response" queue indicating that the request queue's activation proc has been called. Then my stored proc (called by the user app) could check whether that particular message was sent recently. Problem is, I don't want this check to mess up the response queue. Can one peek at a queue (not receive) to see if a message exists in it?
Or is there a simpler way to accomplish what I'm after?

Yes, you can peek at a queue to see if a message is in it beforehand. Simply query the queue using SELECT instead of RECEIVE and you can look at the data without consuming it.
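For example, a minimal sketch, assuming a queue named ResponseQueue and a message type //MyApp/ThrottleCheck (both names invented here):

```sql
-- Peek without consuming: SELECT reads the queue like a table, while
-- RECEIVE would remove the message. Queue and message type names are
-- placeholders; the CAST assumes XML payloads.
SELECT TOP (1)
       conversation_handle,
       message_type_name,
       CAST(message_body AS xml) AS body
FROM   dbo.ResponseQueue WITH (NOLOCK)
WHERE  message_type_name = N'//MyApp/ThrottleCheck';
```

The NOLOCK hint keeps the peek from blocking concurrent RECEIVE statements against the same queue.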
A better bet would be to send the messages anyway and have the stored procedure that receives them decide whether each message should be tossed out or not.
I send hundreds of thousands of messages to Service Broker at a time without any sort of performance issue.
If you are seeing performance issues, try sending more than one message per conversation, as that is the quickest and easiest way to improve Service Broker performance.
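Something along these lines, as a sketch (service, contract, and message type names are made up; the syntax is kept SQL Server 2005-compatible):

```sql
DECLARE @h uniqueidentifier;
DECLARE @i int;
SET @i = 0;

-- One dialog, many SENDs: the conversation setup cost is paid once
-- instead of once per message.
BEGIN DIALOG CONVERSATION @h
    FROM SERVICE [//MyApp/InitiatorService]
    TO SERVICE   '//MyApp/TargetService'
    ON CONTRACT  [//MyApp/Contract]
    WITH ENCRYPTION = OFF;

WHILE @i < 100
BEGIN
    SEND ON CONVERSATION @h
        MESSAGE TYPE [//MyApp/RequestMessage] (N'<payload/>');
    SET @i = @i + 1;
END;
```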

Not sure if you could do this in SB somehow, but could you just have a table with a timestamp field that is updated whenever a message is sent? The proc would check for a time difference of more than 30 seconds and send.
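A rough sketch of that idea (table and key names are hypothetical). Doing the check and the timestamp update as a single UPDATE avoids a check-then-send race between concurrent callers:

```sql
-- Claim the 30-second window atomically: only one caller's UPDATE
-- matches a row, so only that caller sends.
UPDATE dbo.MessageThrottle
SET    LastSentAt = GETDATE()
WHERE  MessageKey = 'OrderNotice'
  AND  LastSentAt <= DATEADD(SECOND, -30, GETDATE());

IF @@ROWCOUNT = 1
BEGIN
    -- Won the window: BEGIN DIALOG / SEND ON CONVERSATION goes here.
    PRINT 'send the Service Broker message';
END;
```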

Related

Not persisting messages when the system comes up in the wrong order

We're sending messages to Apache Camel using RabbitMQ.
We have a "sender" and a Camel route that processes a RabbitMQ message sent by the sender.
We're having deployment issues regarding which end of the system comes up first.
Our system is low-volume. I am sending perhaps 100 messages at a time. The point of the message is to reduce 'temporal cohesion' between a thing happening in our primary database, and logging of same to a different database. We don't want our front-end to have to wait.
The "sender" will create an exchange if it does not exist.
The issue is causing deployment issues.
Here's what I see:
If I start from a clean slate (sender down, Camel down, exchange deleted), then start the sender, then start Camel, and send 100 messages, the system works. (I think the exchange is actually being created by the Camel route, since the sender has to be run manually for testing...)
If I start from a clean slate, send a message, and only then bring Camel up, I can see the messages land in RabbitMQ (using the web tool). No queues are bound. Once I start Camel, I can see its bound queue attached to the exchange. But the earlier messages have been lost to time and fate; they have apparently been dropped.
If, from the current state, I send more messages, they flow properly.
I think that if the messages that got dropped were persisted, I'd be ok. What am I missing?
For me it's hard to say what exactly is wrong, but I'll try and provide some pointers.
You should set up all exchanges and queues to be durable, and the messages persistent. You should never delete any of these entities (unless they are empty and you no longer use them); think of them as you would tables in a database. They are your infrastructure of sorts, and as with a database, you wouldn't want the first DB client to be the one that creates the tables it needs (this applies to your use case, at least as it appears to me).
In the comments I mentioned the flow state of the queue, but with 100 messages that will probably never happen.
Regarding message delivery: persistent or not, the broker (server) keeps messages until they are consumed with an acknowledgment sent back by the consumer (in lots of APIs this is done automatically, but it's actually one of the most important concepts).
If the exchange to which the messages were published is deleted, they are gone. If the server gets killed or restarted and the messages are not persistent, again, they're gone. There may well be more scenarios in which messages get dropped (if I think of some I'll edit the answer).
If you don't have control over creating (usually "declaring" in the APIs) the exchanges and queues, then (aside from the fact that it's not the best arrangement, IMHO) it can be tricky, since declaring those entities is idempotent only for identical parameters: you can't declare a durable queue q1 if a non-durable queue with the same name already exists. This could also be a problem in your case, since you mention that which part of the system comes up first matters; maybe something is not declared with the same parameters on both sides...

Service bus queue in e-commerce application

We have an e-commerce application running on MS SQL.
Every now and then we have a flash sale, and once we start inserting all the orders into the database, our site's performance drops. We have it at the point where we can insert about 1,500 orders in a minute, but the site hangs for a few minutes after that. The site only hangs once the inserts start happening.
I have been looking into using Azure Service Bus queues mixed with SignalR to manage the order process, as this was suggested to me a while back. The way I see it happening is (broad overview):
Client calls a procedure on the server which inserts an order into a queue.
Client gets notified that they are in a queue.
We have a worker process which processes the order from the queue and inserts it into the database.
Server then notifies the client that the order is processed and moves them onto the payment page.
I am new to SignalR and queues in general so my questions are:
Will queues actually have a performance benefit? If so, why?
Are queues even the correct thing to use in this instance?
The overview you mention makes sense. It seems like you should be able to do it without SignalR, since Service Bus will let you know once it has successfully inserted the message into the queue.
It is not that queues give you better performance for a single request. Messages placed onto the queue are stored until you are ready to process them. By doing this you will not suffer "peak" issues, and you will be able to receive from the queue at a speed you know your system can sustain (maybe 500 orders/minute, or whatever number works for you).
So they will give you a much more stable latency per request without bringing down your system.

Service Broker message time to live

If you place a message onto a SQL service broker queue does the message have any kind of time to live value or will it just hang around forever?
I cannot seem to find anything about this. There is a post on the SQL Server forums suggesting an effective timeout of 30 minutes; however, that involves message forwarding, which I don't know is applicable to the scenario above.
It will remain on the queue. If you define a LIFETIME in the BEGIN DIALOG CONVERSATION call, then the broker will refuse to accept messages on that conversation once the lifetime is exceeded. I'm not sure what it does with messages still on the queue that it hasn't yet successfully sent when the lifetime passes, though.
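For reference, LIFETIME is set when the dialog is created and is measured in seconds; a minimal sketch with made-up service names:

```sql
DECLARE @h uniqueidentifier;

-- After 600 seconds, both endpoints get a dialog-lifetime error and no
-- further messages are accepted on this conversation.
BEGIN DIALOG CONVERSATION @h
    FROM SERVICE [//MyApp/InitiatorService]
    TO SERVICE   '//MyApp/TargetService'
    ON CONTRACT  [//MyApp/Contract]
    WITH LIFETIME = 600, ENCRYPTION = OFF;

SEND ON CONVERSATION @h
    MESSAGE TYPE [//MyApp/RequestMessage] (N'<payload/>');
```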

Message queue or database insert and select

I am designing an application and I have two ideas in mind (below). I have a process that collects roughly 30 KB of data every 5 minutes, and this data needs to be pushed to the client (web side; about 100 users at any given time). The information collected does not need to be stored for future use.
Options:
I can get the data and insert it into a database every 5 minutes. A client call is then made to the DB to retrieve the data and update the UI.
Collect the data and put it onto a topic or queue. Multiple clients (consumers) can then go to the queue and obtain the data.
I am leaning toward option 2 as the better solution because it is faster (no DB calls) and involves no redundant storage.
Can anyone suggest which would be the ideal solution, and why?
I don't really understand the difference. The data has to be temporarily stored somewhere until the next update, right?
But all users can see it, not just the first one to get there, right? So a queue is not really an appropriate data structure, from my reading of your system.
Whether the data is written to something persistent like a database or something less persistent like part of the web server or application server may be relevant here.
Also, you have tagged this as real-time, but I don't see how the web clients get updates in real time without some kind of push/long-poll mechanism.
Seems to me that you need to use a queue and publisher/subscriber pattern.
This is an article about RabbitMQ and the Publish/Subscribe pattern.
I can get the data and insert it into a database every 5 minutes. A client call is then made to the DB to retrieve the data and update the UI.
You can program your application to be event oriented. For instance, raise domain events and publish a message for your subscribers.
When you use a queue, each subscriber dequeues the messages addressed to it, in order (FIFO). In addition, you get a delivery guarantee, unlike a database, where a record can be deleted before every 'subscriber' has seen it.
The pitfalls of using the database to accomplish this are (a purge sketch for the last point follows this list):
Creation of indexes makes querying faster, but inserts slower;
You will have to track the delivery guarantee for every subscriber yourself;
You'll need a TTL (time-to-live) strategy to purge old records (while still honoring the delivery guarantee).
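As an illustration of that last point, a hypothetical purge for the database-as-queue approach (table and column names are invented): old rows are deleted only once every subscriber has acknowledged them.

```sql
-- TTL purge: drop rows past the 5-minute refresh window, but keep any
-- row that some subscriber has not yet acknowledged.
DELETE d
FROM   dbo.BroadcastData AS d
WHERE  d.CreatedAt < DATEADD(MINUTE, -5, GETDATE())
  AND NOT EXISTS (SELECT 1
                  FROM   dbo.SubscriberReads AS sr
                  WHERE  sr.DataId = d.DataId
                    AND  sr.AckedAt IS NULL);
```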

Single conversation in Service Broker

I'm going to do async auditing on my SQL Server 2008 as shown here: http://auoracle.blogspot.com/2010/02/service-broker-master-audit-database.html
What it does is:
a trigger sends a message to a queue in the Service Broker
another SP in another database receives the messages and processes them
The possible problem I see is that it uses a single conversation to send all the messages in order; the ordering is a requirement.
I'm just a little concerned about the fact it's using a single conversation, I guess it's not the common usage. Do you know if there's any problem on doing so?
Thanks!
There's nothing wrong with using a single conversation. Some people use conversation pooling with several pre-created conversations, but unless you're hitting a performance bottleneck, I wouldn't worry about it.
One thing that you should get right is error handling: closing the conversation and opening a new one in case of error.
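A sketch of that error-handling shape, assuming the shared dialog handle is kept in a one-row table (all names here are illustrative):

```sql
DECLARE @h uniqueidentifier, @auditXml xml;
SELECT @h = Handle FROM dbo.AuditDialog;      -- hypothetical handle store
SET @auditXml = N'<audit/>';

BEGIN TRY
    SEND ON CONVERSATION @h
        MESSAGE TYPE [//Audit/Record] (@auditXml);
END TRY
BEGIN CATCH
    -- Tear down the broken dialog and open a replacement so ordered
    -- delivery can resume on a healthy conversation.
    END CONVERSATION @h;

    BEGIN DIALOG CONVERSATION @h
        FROM SERVICE [//Audit/Source]
        TO SERVICE   '//Audit/Target'
        ON CONTRACT  [//Audit/Contract]
        WITH ENCRYPTION = OFF;

    UPDATE dbo.AuditDialog SET Handle = @h;   -- persist the new handle
END CATCH;
```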
