Service bus queue in e-commerce application - sql-server

We have an e-commerce application running on MS SQL.
Every now and then we have a flash sale, and once we start inserting all the orders into the database, our site's performance drops. We have it at the point where we can insert about 1,500 orders in a minute, but the site hangs for a few minutes after that. The site only hangs once the inserts start happening.
I have been looking into using Azure Service Bus queues mixed with SignalR to manage the order process, as this was suggested to me a while back. The way I see it happening is (broad overview):
Client calls a procedure on the server which inserts an order into a queue.
Client gets notified that they are in a queue.
We have a worker process which processes the order from the queue and inserts it into the database.
Server then notifies the client that the order is processed and moves them onto the payment page.
I am new to SignalR and queues in general so my questions are:
Will queues actually provide a performance benefit? If so, why?
Are queues even the correct thing to use in this instance?

The overview you mention makes sense. It seems like you should be able to do it without SignalR, since Service Bus will let you know once it has successfully inserted the message into the queue.
It is not that queues give you better performance for a single request. Messages placed onto the queue are stored until you are ready to process them. By doing this you will not suffer "peak" issues, and you will be able to receive from the queue at a rate that you know your system can sustain (maybe 500 orders/minute or whatever number works for you).
So they will give you a much more stable latency per request without bringing down your system.
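For illustration, here is a minimal sketch of both ends using the Azure.Messaging.ServiceBus client. The connection string, the "orders" queue name, and the payload shape are assumptions, not something from the question:

using System.Text.Json;
using System.Threading.Tasks;
using Azure.Messaging.ServiceBus;

var connectionString = "<Service Bus connection string>"; // assumed
var client = new ServiceBusClient(connectionString);

// Web server: enqueue the order. SendMessageAsync returns as soon as the
// broker has durably stored the message - the moment you can tell the
// client "you are in the queue".
ServiceBusSender sender = client.CreateSender("orders");
string orderJson = JsonSerializer.Serialize(new { ProductId = 42, Quantity = 1 }); // stand-in payload
await sender.SendMessageAsync(new ServiceBusMessage(orderJson));

// Worker process: drain the queue at a rate the database can sustain.
ServiceBusProcessor processor = client.CreateProcessor("orders");
processor.ProcessMessageAsync += async args =>
{
    string json = args.Message.Body.ToString();
    // parse json and INSERT the order into SQL here, then acknowledge:
    await args.CompleteMessageAsync(args.Message);
};
processor.ProcessErrorAsync += args => Task.CompletedTask; // log errors in real code
await processor.StartProcessingAsync();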

Related

Frequent Database Query for Instant Message

I am creating an Instant Messaging application for our department. The features of this application are:
The messages will be stored in a database
The messages may be sent to one, multiple, or all users/locations
The logged in user will be able to see a history of the messages they are included in.
My question: is it appropriate to constantly query the database from each client - there should be fewer than 20 clients running - say every 15-30 seconds or so? I have seen examples of a server/client messaging app using TcpClient but am not familiar with that subject, so I thought querying the database might be the approach I could go with. What are the ramifications of performing these queries so often? I'm also looking at SqlDependency. Should I really go back and try to learn TCP technology?
Thanks
If you know that you will always have on the order of tens of clients, not on the order of thousands of clients, then polling will work just fine, and you do not have to poll every 15 seconds (it would be unusable if you did); you can poll every 100 or 200 milliseconds, so chatting will appear instantaneous.
Just make sure that each polling operation is as simple as possible. The simplest operation you can do is this:
SELECT * FROM chat_log WHERE chat_log.id > ?
where id is your IDENTITY primary key, and ? is the last id that your client has seen so far from the server. Therefore, if there are no new chat messages, no rows are retrieved. With every row retrieved by a client, update the largest id that the client has seen so far, and you are good to go.
I have done it and it works like a charm.
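A minimal C# sketch of that polling step, assuming an INT IDENTITY id column; the table and column names are illustrative:

using System.Data.SqlClient;

int lastSeenId = 0; // highest id this client has seen so far

void Poll(string connectionString)
{
    using (var conn = new SqlConnection(connectionString))
    using (var cmd = new SqlCommand(
        "SELECT id, sender, body FROM chat_log WHERE id > @lastId ORDER BY id",
        conn))
    {
        cmd.Parameters.AddWithValue("@lastId", lastSeenId);
        conn.Open();
        using (var reader = cmd.ExecuteReader())
        {
            while (reader.Read())
            {
                // render reader.GetString(1) / reader.GetString(2) in the UI,
                // then remember how far we have read
                lastSeenId = reader.GetInt32(0);
            }
        }
    }
}

Run Poll on a 100-200 ms timer, per the answer above; when there is nothing new, the query returns zero rows and costs almost nothing.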
From a technical point of view polling is a very ignoble technique, but in many situations it can be a practical compromise which may yield good enough results with very little development. (The alternative would be to create a proper chat server which sends push notifications to the clients, good luck with that.)
If it's fewer than 20 clients (20 SELECT queries every 20 seconds plus some writes), SQL Server will have no issues processing these messages.
Selection of tools and technology depends on your actual requirements (message size, whether you allow file transfers, deleting/editing messages, and so on).
I can suggest a few options to improve performance:
Reading messages - You can use caching (e.g. Azure Redis Cache) for recent messages (last 30 days). You can come up with a background cache-update strategy to make sure it is continuously updated with new messages. Reads hit the cache first and go to the database only on a cache miss (see the read-through sketch below).
Also you can create a local message cache (client side) which will dramatically improve performance for the end user. You can use a SQLite database for this (like Skype does: Win + R -> %appdata%\skype -> folder -> main.db).
Or else you can simply have an Archive table in your database, where a scheduled (every 24 hours) background process archives messages older than 14/30 days, so the main table only holds recent messages.
Writing - Writing messages will be chatty; rather than updating the database directly, you can use a message queue (an Azure queue, RabbitMQ, etc.). Then another process can write the messages to the database.
Each technology selection has its own cost, pros and cons, and learning time. Therefore start simple and leave room to scale later.
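To make the read-through idea concrete, here is a rough sketch using an in-process IMemoryCache as a stand-in for Azure Redis Cache; the cache key, the expiry, and LoadRecentMessagesFromDb are all hypothetical:

using System;
using System.Collections.Generic;
using Microsoft.Extensions.Caching.Memory;

public sealed class MessageReader
{
    private readonly IMemoryCache _cache;

    public MessageReader(IMemoryCache cache) => _cache = cache;

    public IReadOnlyList<string> GetRecentMessages()
    {
        // Hit the cache first; only fall through to the database on a miss.
        return _cache.GetOrCreate("RecentMessages", entry =>
        {
            entry.AbsoluteExpirationRelativeToNow = TimeSpan.FromMinutes(1);
            return LoadRecentMessagesFromDb(); // hypothetical query for the last 30 days
        });
    }

    private IReadOnlyList<string> LoadRecentMessagesFromDb()
    {
        // ... SELECT against the messages table goes here ...
        return new List<string>();
    }
}

The same shape works against Redis; only the cache client changes.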

In AWS or Azure cloud architecture, why would I choose message queues over polling a database?

In the past I have used message queues to handle spikes in demand. This system works fine, except for logging purposes. I write successfully processed messages to a database for reporting and logging. This makes me wonder why I don't just write the message into a database from the beginning, and have my "worker roles" poll the database, rather than the message queue.
I'm guessing this is not the best design because as the database grows, polling a huge database just to look for one "unchecked" record to process will become very slow, whereas a message queue just gives me one if I ask for it instantaneously.
Am I missing something? Are there other reasons to choose a message queue over polling a database? I would love to offer users the ability to see what has yet to be processed (floating in the queue) but that operation takes much longer than running a query on the database, so it seems to be a tradeoff.
Thanks for any input.
One other reason that springs to mind is blocking/locking. Typically, if you just poll a database looking for work, it will work reasonably well as long as you have only one worker digesting the messages. However, if you want to scale out horizontally and throw more workers at the problem, you'll typically end up causing lock escalations as you change the work messages in your database-based "queue" from "needs to get run" to "ran successfully" or whatever.
Using the message queue takes care of this trickiness for you, as all the thread safety and locking/blocking is out of the way.
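To see the trickiness, here is roughly what a safe multi-worker dequeue against a SQL Server table-as-queue looks like. The WorkQueue table is hypothetical, and the hints and the lack of ordering are exactly the kind of detail a real queue service handles for you:

using System.Data.SqlClient;

// READPAST lets a worker skip rows another worker has locked, so competing
// workers don't block each other; ROWLOCK keeps the lock narrow. Without an
// ORDER BY, strict FIFO is not guaranteed.
const string DequeueSql =
    "DELETE TOP (1) FROM WorkQueue WITH (ROWLOCK, READPAST) " +
    "OUTPUT deleted.Id, deleted.Payload;";

(int Id, string Payload)? TryDequeue(string connectionString)
{
    using (var conn = new SqlConnection(connectionString))
    using (var cmd = new SqlCommand(DequeueSql, conn))
    {
        conn.Open();
        using (var reader = cmd.ExecuteReader())
        {
            if (!reader.Read()) return null; // queue is empty
            return (reader.GetInt32(0), reader.GetString(1));
        }
    }
}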

Impressions/Clicks counting design for heavy load

We have an affiliate system which counts millions of banner Impressions/Clicks per day.
Currently it writes to SQL every Impression/Click that occurs in real time on each request.
Web application serves these requests.
We are facing two problems:
1. If we have a lot of concurrent requests per second, SQL starts to work very hard to insert the Impressions/Clicks data, which leads to problem #2.
2. If SQL is slow at that moment, the requests accumulate and wait in a queue on the web server. As a result the web application becomes slow and requests are not processed.
The design we thought of, at a high level:
We are now considering changing the design by taking the write-to-SQL logic out of the web application (writing to some local storage instead) and adding a standalone service which will read from local storage and eventually write the aggregated Impressions/Clicks data (not in real time) to SQL in the background.
Our constraints:
10 web servers (load balanced)
1 SQL server
What do you think of suggested design?
Would you use NoSQL as local storage for each web server?
Suggest your alternative.
Your problem seems to be that your front-end code blocks synchronously while waiting for the back-end code to update the database.
Decouple the front end and the back end, e.g. by putting a queue in between, where the front end can write with low latency and high throughput. The back end can then take its time to process the queued data into its destination (a minimal in-memory sketch follows the list below).
It may or may not be necessary to make the queue restartable (i.e. not losing data after a crash). Depending on this, you have various options:
In-memory queue, speedy but not crash-proof.
Database queue, makes sense if writing the raw request data to a simple data structure is faster than writing the final data into its target data structures.
Redundant queues, to cover for crashes.
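A rough sketch of the in-memory variant, with a bounded capacity so that a stalled writer pushes back on producers instead of exhausting memory; ClickEvent and WriteBatchToSql are hypothetical:

using System.Collections.Concurrent;
using System.Collections.Generic;

public sealed class ClickEvent { /* banner id, timestamp, ... */ }

public static class ClickPipeline
{
    // Not crash-proof: a process crash loses whatever is still queued.
    private static readonly BlockingCollection<ClickEvent> Queue =
        new BlockingCollection<ClickEvent>(boundedCapacity: 100000);

    // Front end: low-latency, high-throughput enqueue on the request path.
    public static void Record(ClickEvent e) => Queue.Add(e);

    // Back end: a single consumer drains the queue and writes in batches.
    public static void ConsumeLoop()
    {
        var batch = new List<ClickEvent>(500);
        foreach (var e in Queue.GetConsumingEnumerable())
        {
            batch.Add(e);
            if (batch.Count == 500)
            {
                WriteBatchToSql(batch); // hypothetical aggregated write
                batch.Clear();
            }
        }
    }

    private static void WriteBatchToSql(List<ClickEvent> batch) { /* ... */ }
}

Start ConsumeLoop on a dedicated thread when the service boots; swapping in a durable queue instead of BlockingCollection gives you the restartable option.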
I'm with Bernd, but I'm not sure about using a queue specifically.
All you need is something asynchronous that you can call; that way the act of logging the impression no longer holds up the request.

Message Queue or DataBase insert and select

I am designing an application and I have two ideas in mind (below). I have a process that collects about 30 KB of data every 5 minutes, which needs to be pushed to the client (web side - 100 users at any given time). The information collected does not need to be stored for future usage.
Options:
1. I can get the data and insert it into the database every 5 minutes. Then a client call will be made to the DB to retrieve the data and update the UI.
2. Collect the data and put it into a Topic or Queue. Now multiple clients (consumers) can go to the Queue and obtain the data.
I am looking at option 2 as the better solution because it is faster (no DB calls) and there is no redundant storage.
Can anyone suggest which would be the ideal solution, and why?
I don't really understand the difference. The data has to be temporarily stored somewhere until the next update, right?
But all users can see it, not just the first person to get there, right? So a queue is not really an appropriate data structure, from my interpretation of your system.
Whether the data is written to something persistent like a database or something less persistent like part of the web server or application server may be relevant here.
Also, you have tagged this as real-time, but I don't see how the web clients are getting updates in real time without some kind of push/long-poll or whatever.
Seems to me that you need to use a queue and the publisher/subscriber pattern.
This is an article about RabbitMQ and the Publish/Subscribe pattern (a minimal sketch follows at the end of this answer).
"I can get the data and insert it into the database every 5 minutes. Then a client call will be made to the DB to retrieve the data and update the UI."
You can program your application to be event-oriented. For instance, raise domain events and publish your messages to your subscribers.
When you use a queue, each subscriber dequeues the messages addressed to it, obeying the order (FIFO). In addition, there is a guarantee of delivery, unlike a database, where a record can be deleted before every 'subscriber' has gotten the message.
The pitfalls of using the database to accomplish this are:
Creation of indexes makes querying faster, but inserts slower;
You will have to manage the delivery guarantee for every subscriber;
You'll need a TTL (Time to Live) strategy to purge old records (while honoring the delivery guarantee).
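A minimal RabbitMQ publish/subscribe sketch in C# (RabbitMQ.Client, fanout exchange); the data_updates exchange name is an assumption, and each subscriber binds its own temporary queue so everyone gets a copy:

using System;
using System.Text;
using RabbitMQ.Client;
using RabbitMQ.Client.Events;

var factory = new ConnectionFactory { HostName = "localhost" };
using var connection = factory.CreateConnection();
using var channel = connection.CreateModel();

// Fanout: every bound queue receives every message.
channel.ExchangeDeclare(exchange: "data_updates", type: ExchangeType.Fanout);

// Publisher: run every 5 minutes with the ~30 KB payload.
byte[] body = Encoding.UTF8.GetBytes("...collected data...");
channel.BasicPublish(exchange: "data_updates", routingKey: "",
    basicProperties: null, body: body);

// Subscriber: each client gets its own auto-deleted queue.
string queueName = channel.QueueDeclare().QueueName;
channel.QueueBind(queue: queueName, exchange: "data_updates", routingKey: "");
var consumer = new EventingBasicConsumer(channel);
consumer.Received += (_, ea) =>
    Console.WriteLine(Encoding.UTF8.GetString(ea.Body.ToArray()));
channel.BasicConsume(queue: queueName, autoAck: true, consumer: consumer);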

How do you best offload a database insert, so a web response is returned quicker?

Setup
I have web service that takes its inputs through a REST interface. The REST call does not return any meaningful data, so whatever is passed in to the web service is just recorded in the database and that is it. It is an analytics service which my company is using internally to do some special processing on web requests that are received on their web page. So it is very important the response take as little time to return as possible.
I have pretty much optimized the code as much as possible to make the response fast. However, the database insert still keeps the request open for longer than I want before a response is sent back to the web client.
The code looks basically like this; by the way, it is ASP.NET MVC, using Entity Framework, running on IIS 7, if that matters.
public ActionResult Add(/*..bunch of parameters..*/)
{
    using (var db = new Entities())
    {
        var log = new Log
        {
            // populate Log from parameters
        };
        db.AddToLogs(log);
        db.SaveChanges();
    }
    // pixelImage is a cached 1x1 GIF defined elsewhere
    return File(pixelImage, "image/gif");
}
Question
Is there a way to offload the database insert to another process, so the response to the client is returned almost instantly?
I was thinking about wrapping everything in the using block in another thread, to make the database insert asynchronous, but I didn't know if that was the best way to free up the response back to the client.
What would you recommend if you were trying to accomplish this goal?
If the request has to be reliable then you need to write it into the database. E.g. if your return means "I have paid the merchant" then you can't return before you actually commit in the database. If the processing is long then there are database-based asynchronous patterns, such as using a table as a queue or using built-in queuing like Asynchronous procedure execution. But these apply when heavy and lengthy processing is needed, not for a simple log insert.
When you want just to insert a log record (visitor/URL tracking stuff) then the simplest solution is to use the CLR's thread pool and just queue the work, something like:
...
var log = new Log
{
    // populate Log from parameters
};
ThreadPool.QueueUserWorkItem(stateInfo =>
{
    // runs on a thread-pool thread, off the request path
    var queuedLog = (Log)stateInfo;
    using (var db = new Entities())
    {
        db.AddToLogs(queuedLog);
        db.SaveChanges();
    }
}, log);
...
This is quick and easy and it frees the ASP handler thread to return the response as soon as possible. But it has some drawbacks:
If the incoming rate of requests exceeds the thread pool's processing rate then the in-memory queue will grow until it triggers an app pool 'recycle', thus losing all items 'in progress' (as well as warm caches and other goodies).
The order of requests is not preserved (may or may not be important)
It consumes a CLR pool thread that does nothing but wait for a response from the DB
The last concern can be addressed by using a true asynchronous database call, via SqlCommand.BeginExecuteXXX and setting AsynchronousProcessing to true on the connection. Unfortunately, AFAIK EF doesn't yet have true asynchronous execution, so you would have to drop down to the SqlClient layer (SqlConnection, SqlCommand). But this solution would not address the first concern, when the rate of page hits is so high that this logging (= writes on every page hit) becomes a critical bottleneck.
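A sketch of that SqlClient-level asynchronous insert on .NET Framework; the Logs table and connection string are assumptions, and note Asynchronous Processing=true, which the BeginExecute* methods required before .NET 4.5:

using System;
using System.Data.SqlClient;

void LogAsync(string url)
{
    var conn = new SqlConnection(
        "Data Source=.;Initial Catalog=Analytics;Integrated Security=true;" +
        "Asynchronous Processing=true");
    var cmd = new SqlCommand(
        "INSERT INTO Logs (Url, CreatedAt) VALUES (@url, @at)", conn);
    cmd.Parameters.AddWithValue("@url", url);
    cmd.Parameters.AddWithValue("@at", DateTime.UtcNow);

    conn.Open();
    // Returns immediately; no pool thread sits blocked on the database.
    // The callback fires on an I/O completion thread when the insert is done.
    cmd.BeginExecuteNonQuery(ar =>
    {
        try { cmd.EndExecuteNonQuery(ar); }
        finally { cmd.Dispose(); conn.Dispose(); }
    }, null);
}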
If the first concern is real, then no threading and/or producer/consumer wizardry can alleviate it. If you truly have an incoming rate vs. write rate scalability concern (the 'pending' queue grows in memory), you have to either make the writes faster in the DB layer (faster IO, special log-flush IO) and/or aggregate the writes. Instead of logging every request, just increment in-memory counters and write them out periodically as aggregates.
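The aggregation idea in rough form: in-memory counters swapped out and flushed on a timer (the counter key and the flush statement are illustrative):

using System.Collections.Concurrent;
using System.Threading;

public static class HitCounters
{
    private static ConcurrentDictionary<string, int> _counts =
        new ConcurrentDictionary<string, int>();

    // Every page hit: a cheap in-memory increment, no database write.
    public static void Increment(string page) =>
        _counts.AddOrUpdate(page, 1, (_, n) => n + 1);

    // Timer callback, e.g. every 10 seconds: swap the dictionary out and
    // write one aggregated row per page instead of one row per hit.
    public static void Flush()
    {
        var snapshot = Interlocked.Exchange(ref _counts,
            new ConcurrentDictionary<string, int>());
        foreach (var kv in snapshot)
        {
            // illustrative: UPDATE PageHits SET Hits = Hits + @n WHERE Page = @p
        }
    }
}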
I've been working on multi-tier solutions mostly for the last year or so that require this sort of functionality, and that's exactly how I've been doing it.
I have a singleton that takes care of running tasks in the background based on an ITask interface. Then I just register a new ITask with my singleton and pass control from my main thread back to the client.
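That could look roughly like this; only the ITask name comes from the answer, the rest of the shape is a guess:

using System.Collections.Concurrent;
using System.Threading.Tasks;

public interface ITask
{
    void Execute();
}

public sealed class BackgroundRunner
{
    public static readonly BackgroundRunner Instance = new BackgroundRunner();

    private readonly BlockingCollection<ITask> _tasks =
        new BlockingCollection<ITask>();

    private BackgroundRunner()
    {
        // One long-running consumer executes registered tasks in order.
        Task.Factory.StartNew(() =>
        {
            foreach (var task in _tasks.GetConsumingEnumerable())
                task.Execute();
        }, TaskCreationOptions.LongRunning);
    }

    // Called from the request thread; returns immediately.
    public void Register(ITask task) => _tasks.Add(task);
}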
Create a separate thread that monitors a global, in-memory queue. Have your request put its information on the queue and return; the thread then takes items off the queue and posts them to the DB.
Under heavy load, if the thread lags the requests, your queue will grow.
Also, if you lose the machine, you will lose any unprocessed queue entries.
You'd need to decide whether these limitations are acceptable to you.
A more formal mechanism is using some actual middleware messaging system (JMS in Java land, dunno the equivalent in .NET, but there's certainly something).
It depends: When you return to the client do you need to be 100% sure that the data is stored in the database?
Take this scenario:
Request comes in
A thread is started to save to the database
Response is sent to the client
Server crashes
Data was not saved to the database
You also need to check how many milliseconds you save by starting a new thread instead of saving to the database.
The added complexity and maintenance cost is probably too high compared with the savings in response time. And the savings in response time are probably so low that they will not be noticed.
Before I spend a lot of time on the optimization, I'd make sure of where the time is going. Connections like these have significant latency overhead (check this out). Just for grins, make your service a NOP and see how it performs.
It seems to me that the 'async-ness' needs to be on the client - it should fire off the call to your service and move on, especially since it doesn't care about the result?
I also suspect that if the NOP performance is good-to-tolerable on your LAN it will be a different story in the wild.
