The best way to design a Reservation based table - sql-server

One of my clients has a reservation-based system, similar to airlines, running on MS SQL 2005.
The way the previous company designed it is to create an allocation as a set of rows.
A simple example being:
AllocationId | SeatNumber | IsSold
1234 | A01 | 0
1234 | A02 | 0
In the process of selling a seat the system will establish an update lock on the table.
We have a problem at the moment where the locking process is running slow and we are looking at ways to speed it up.
The table is already efficiently indexed, so we are looking at a hardware solution to speed up the process. The table has about 5 million active rows and sits on a RAID 50 SAS array.
I am assuming hard disk seek time is going to be the limiting factor in speeding up update locks when you have 5 million rows and are updating 2-5 rows at a time (I could be wrong).
I've heard about people partitioning indexes over several disk arrays; has anyone had similar experiences with trying to speed up locking? Can anyone give me some advice on what hardware might be upgraded, or what technology we can take advantage of, in order to speed up the update locks (without moving to a cluster)?

One last try…
It is clear that there are too many locks held for too long.
Once the system starts slowing down due to too many locks, there is no point in starting more transactions.
Therefore you should benchmark the system to find out the optimal number of concurrent transactions, then use some queue system (or otherwise) to limit the number of concurrent transactions. SQL Server may have some setting (number of active connections etc.) to help; otherwise you will have to write this in your application code.
Oracle is good at allowing reads to bypass writes; however, SQL Server does not do this by default...
Therefore I would split the stored proc to use two transactions. The first transaction should just:
be a SNAPSHOT (or READ UNCOMMITTED) transaction
find the “Id” of the rows for the seats you wish to sell.
You should then commit (or abort) this transaction, and use a second (hopefully very short) transaction that:
most likely is READ COMMITTED (or maybe SERIALIZABLE)
selects each row for update (use a locking hint)
checks it has not been sold in the meantime (abort and start again if it has)
sets the “IsSold” flag on the row.
(You may be able to do the above in a single UPDATE statement using “IN”, and then check that the expected number of rows were updated; see the sketch below.)
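A minimal T-SQL sketch of that second, short transaction, assuming an AllocationSeat table with Id and IsSold columns (all names here are illustrative, and the Ids come from the first read-only transaction):

DECLARE @Seat1 INT = 101, @Seat2 INT = 102;  -- Ids found by the first (SNAPSHOT) transaction

BEGIN TRANSACTION;

-- Lock only the candidate rows and re-check they are still unsold
UPDATE dbo.AllocationSeat WITH (ROWLOCK, UPDLOCK)
SET    IsSold = 1
WHERE  Id IN (@Seat1, @Seat2)
  AND  IsSold = 0;

IF @@ROWCOUNT <> 2
    ROLLBACK TRANSACTION;   -- a seat was sold in the meantime: abort and retry
ELSE
    COMMIT TRANSACTION;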
Sorry, but sometimes you do need to understand what each type of transaction does and how locking works in detail.
If the table is smaller, then the update is shorter and the locks are held for less time.
Therefore consider splitting the table:
so you have a table that JUST contains “AllocationId” and “IsSold”.
This table could be stored as a single btree (index organized table on AllocationId)
As all the other indexes will be on the table that contains the details of the seat, no indexes should be locked by the update.
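A rough sketch of that split, reusing the names from the example at the top (SeatNumber is kept so a row still identifies a single seat; the exact types are assumptions):

CREATE TABLE dbo.SeatSale
(
    AllocationId INT     NOT NULL,
    SeatNumber   CHAR(3) NOT NULL,
    IsSold       BIT     NOT NULL CONSTRAINT DF_SeatSale_IsSold DEFAULT (0),
    CONSTRAINT PK_SeatSale PRIMARY KEY CLUSTERED (AllocationId, SeatNumber)
);
-- All descriptive seat columns (class, price, etc.) stay in a separate detail
-- table with the same key, so its indexes are never touched by the IsSold update.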

I don't think you'd get anything out of table partitioning -- the only improvement would be fewer disk reads from a smaller (shorter) index tree (each read will hit each level of the index at least once, so the fewer levels the quicker the read). However, I've got a table with a 4M+ row partition, indexed on 4 columns, net 10 byte key length. It fits in three index levels, with the topmost level 42.6% full. Assuming you had something similar, it seems reasonable that partitioning might only remove one level from the tree, and I doubt that's much of an improvement.
Some off-the-cuff hardware ideas:
RAID 5 (and 50) can be slower on writes, because of the parity calculation. Not an issue (or so I'm told) if the disk I/O cache is large enough to handle the workload, but if that's flooded you might want to look at RAID 10.
Partition the table across multiple drive arrays. Take two (or more) RAID arrays, distribute the table across the volumes [files/filegroups, with or without table partitioning or partitioned views], and you've got twice the disk I/O speed, depending on where the data lies relative to the queries retrieving it. (If everything's on array #1 and array #2 is idle, you've gained nothing.)
Worst case, there's probably leading edge or bleeding edge technology out there that will blow your socks off. If it's critical to your business and you've got the budget, might be worth some serious research.

How long is the update lock held for?
Why is the lock on the “table” not just the “rows” being sold?
If the lock is held for more than a fraction of a second, that is likely to be your problem. SQL Server does not like you holding locks while users fill in web forms etc.
With SQL Server, you have to implement a “shopping cart” yourself, by temporarily reserving the seat until the user pays for it. E.g. add “IsReserved” and “ReservedAt” columns, then automatically unreserve any seat that has been reserved for more than n minutes.
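A rough sketch of those columns and the expiry sweep, reusing the illustrative SeatSale table name from earlier (the 10-minute window is just an example):

ALTER TABLE dbo.SeatSale ADD
    IsReserved BIT      NOT NULL CONSTRAINT DF_SeatSale_IsReserved DEFAULT (0),
    ReservedAt DATETIME NULL;

-- Run from a scheduled job every minute or so: release stale, unpaid reservations
UPDATE dbo.SeatSale
SET    IsReserved = 0, ReservedAt = NULL
WHERE  IsReserved = 1
  AND  IsSold = 0
  AND  ReservedAt < DATEADD(MINUTE, -10, GETUTCDATE());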
This is a hard problem, as a shopper does not expect a seat that is in stock to be sold to someone else while he is checking out. However, you don’t know if the shopper will ever complete the checkout, so how do you show it in the UI? Think about having a look at what other booking websites do, then copy one that your users already know how to use.
(Oracle can sometimes cope with locks being kept for a long time, but even Oracle is a lot faster and happier if you keep your locking short.)

I would first try to figure out why you are locking the table rather than just a row.
One thing to check is the execution plan of the UPDATE statement to see which indexes it causes to be updated, and then make sure that row-level and page-level locking are enabled on those indexes.
You can do so with the following statement:
SELECT allow_row_locks, allow_page_locks FROM sys.indexes WHERE name = 'IndexNameHere';
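If either of those turns out to be off for a hot index, it can be switched back on; a minimal sketch with placeholder index and table names:

-- IndexNameHere / dbo.Allocation are placeholders for the real index and table
ALTER INDEX IndexNameHere ON dbo.Allocation
SET (ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON);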

Here are a few ideas:
Make sure your data and logs are on separate spindles, to maximize write performance.
Configure your drives to only use the first 30% or so for data, and have the remainder be for backups (minimize seek / random access times).
Use RAID 10 for the log volume; add more spindles as needed for performance (write performance is driven by the speed of the log)
Make sure your server has enough RAM. Ideally, everything needed for a transaction should be in memory before the transaction starts, to minimize lock times (consider pre-caching). There are a bunch of performance counters you can check for this; a sample query is sketched after this list.
Partitioning may help, but it depends a lot on the details of your app and data...
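As a starting point, here is a quick look at the standard Buffer Manager counters via sys.dm_os_performance_counters (Page life expectancy is the simplest one to watch):

SELECT counter_name, cntr_value
FROM   sys.dm_os_performance_counters
WHERE  object_name LIKE '%Buffer Manager%'
  AND  counter_name IN ('Page life expectancy', 'Buffer cache hit ratio');
-- Note: Buffer cache hit ratio must be divided by its base counter to be meaningful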
I'm assuming that the T-SQL, indexes, transaction size, etc, have already been optimized.
In case it helps, I talk about this subject in detail in my book (including SSDs, disk array optimization, etc) -- Ultra-Fast ASP.NET.

Related

How can a very large table with a single integer primary key index be tuned for massive volume of inserts?

Environment:
SQL Server 2019 on Windows Server 2019, on KVM backed by TrueNAS, 16 cores, 32 GB RAM.
Application runs 50 parallel threads all inserting into the same massive table.
This combination appears to work against the SQL Server architecture.
Additional details
The problem table is both deep and wide - 20,000,000 rows with over 300 columns and 40-50 indexes.
The application uses the JDBC Batch APIs. This particular table, due to row size, is inserting in batches of 1,000 rows.
Tables with more reasonable row sizes are inserting in batches of 10,000 rows
I can't share the actual DDL, but it's pretty mundane apart from the row simply being massive (a surrogate key BIGINT ID column, two natural key VARCHAR columns, 300 or so cargo columns, 0 BLOB/CLOB columns, then 40-50 indexes)
The primary key index DDL is "create unique index mytable_pk on dbo.mytable (keycolumn);"
The only other unique index DDL is "create unique index mytable_ndx1 on dbo.mytable (division, itemnum)";
The product that owns the database is used by hundreds of Fortune 2000 customers, so changing the data model is not an option for me or the product vendor.
Restrictions
Since the database is ultimately a third party's, any changes I make to it must be in-place. Once the data is inserted into it, I no longer have any access to it.
The database is owned by a third-party off-the-shelf application.
The primary key is a sequential integer.
Observations and metrics
Early in the process, we were bottlenecked on CPU resources.
Once we hit about 1,000,000 rows, we were single threading on latches, sometimes spending over two seconds in a latch, and rarely spending less than 500ms in a latch. Latching and IO buffer waits were both excessive. CPU dropped to about 12% usage.
In a second test, I dropped all of the indexes and re-ran the job. The job completed 8 times as quickly, showing zero load on the SQL server and bottlenecking on CPU on the application which is very good from the SQL Server perspective.
After reading Microsoft's literature, I came to the conclusion that the data model is working against SQL Server's indexing architecture for tuning for massive inserts.
I will not always have the option of dropping and recreating the indexes, so is there a way to tune the table to distribute the I/O?
Now to the real question:
Is there a way to tune SQL Server, under the covers, to distribute the I/O so that sequential numbers in an index do not land in the same buffer page when doing massive inserts of sequential data?
There are several well-known approaches to addressing last page insert contention in SQL Server.
Many of these are covered in the documentation at Resolve last-page insert PAGELATCH_EX contention in SQL Server. Summarising the options from that link:
Use OPTIMIZE_FOR_SEQUENTIAL_KEY (a syntax sketch follows below)
Move primary key off identity column
Make the leading key a non-sequential column
Add a non-sequential value as a leading key
Use a GUID as a leading key
Use table partitioning and a computed column with a hash value
Switch to In-Memory OLTP
Method 7 can also be implemented as an in-memory OLTP table to handle a high rate of ingestion with regular batch moves to the final destination table. For the very highest concurrency, use natively compiled code with the in-memory table as much as possible (including for the inserts). The frequency and size of moves is dictated by your requirements.
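For reference, option 1 is just an index setting; a minimal sketch using the PK index name given in the question (OPTIMIZE_FOR_SEQUENTIAL_KEY requires SQL Server 2019 or later):

ALTER INDEX mytable_pk ON dbo.mytable
SET (OPTIMIZE_FOR_SEQUENTIAL_KEY = ON);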
As mentioned in another answer, delayed durability can also improve insert performance in many cases.
Related Q & A: Solving periodic high PAGELATCH_EX Waits. Last page contention?
All that said, you haven't shown evidence of a last-page contention issue at all. More likely, you're encountering problems related to updating all those secondary indexes, plus a lack of memory on the instance, meaning index maintenance often has to wait for pages to be brought in from storage for modification. You don't mention the type of latch you see waits on, but I imagine they'd be PAGEIOLATCH_*.
The primary solution would be to dramatically increase the memory available to SQL Server for its buffer pool so fewer IOs are necessary. Failing that, a faster storage subsystem would be required.
Have you tried using Delayed Durability?
When to use delayed transaction durability
Some of the cases in which you could benefit from using delayed transaction durability are:
You can tolerate some data loss.
If you can tolerate some data loss, for example, where individual records are not critical as long as you have most of the data, then delayed durability may be worth considering. If you cannot tolerate any data loss, do not use delayed transaction durability.
You are experiencing a bottleneck on transaction log writes.
If your performance issues are due to latency in transaction log writes, your application will likely benefit from using delayed transaction durability.
Your workloads have a high contention rate.
If your system has workloads with a high contention level, much time is lost waiting for locks to be released. Delayed transaction durability reduces commit time and thus releases locks faster, which results in higher throughput.
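A minimal sketch of opting in (the database name is a placeholder; remember the data-loss trade-off quoted above):

ALTER DATABASE MyDb SET DELAYED_DURABILITY = ALLOWED;

BEGIN TRANSACTION;
-- ... batched inserts here ...
COMMIT TRANSACTION WITH (DELAYED_DURABILITY = ON);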
The short answer to your "real question" is no because contiguous keys of a disk-based b-tree index must be stored in the same page.
I've never used SQL server, but your problem isn't specific to one database, so maybe this can still help.
When inserting a large number of rows per second the bottlenecks are either going to be parsing overhead (which can be parallelized), index updates (which may be parallelizable or not), primary key sequence generation, or other stuff like postgres' large object support, but that depends on your column types and database quirks. Then at some point any transactional database must generate sequential transaction log entries which is also a concurrency bottleneck.
First thing you should do is check if the inserts are grouped into transactions (not one insert per transaction). Then make sure the IO is fast, look for bottlenecks there, iowait, etc.
In a second test, I dropped all of the indexes and re-ran the job. The job completed 8 times as quickly, showing zero load on the SQL server
So that eliminates some of the candidates and hints that the problem is indices.
For example if 50 threads each insert a row at the same time, and...
You have a high cardinality index with each row hitting a different page in the index, then these can be parallelized
You have a low cardinality index, most of the inserted rows have the same value in the same column, and all these threads are fighting for control of the same index page.
This can compound with index/table page splits if your fillfactor is too high, in this case all the threads will want to insert in the same index page, and it's already full, so one thread is splitting the page while all others are waiting.
Unfortunately you didn't post the table info in the question, which you should really do. But you probably know if your indices are low cardinality or high. The first thing you could do is run the same tests again, adding the indices one by one, try to see which one causes trouble.
You can also lower fillfactor so there is less chance the inserts end up in a page that is already full.
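For example, a rebuild of one of the secondary indexes named in the question with some headroom left on each page (the fill factor value is just an illustration):

ALTER INDEX mytable_ndx1 ON dbo.mytable REBUILD WITH (FILLFACTOR = 80);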
If you find a problematic low cardinality index then you should first wonder if it's actually useful for queries; maybe you can drop it. If you want to keep it, you can hack it into a high cardinality index by adding a dummy column at the end. For example, if you have an index on (category) which has few different values and causes problems for inserts, you can turn it into (category, other_column), which will work just as well for selecting based on category and might provide some extra features like sorting on other_column while selecting on category. However, other_column should not be the PK or a date or any other column whose values end up in the same page in all your concurrent inserts, because that would be back to square one.
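A hypothetical example of that rewrite, assuming an existing index named mytable_category_ix and columns named category and other_column (none of these names come from the real schema):

DROP INDEX mytable_category_ix ON dbo.mytable;
CREATE INDEX mytable_category_ix ON dbo.mytable (category, other_column);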
Next, you can try single-threading, or a low number of threads. Back to this:
In a second test, I dropped all of the indexes and re-ran the job. The job completed 8 times as quickly, showing zero load on the SQL server and bottlenecking on CPU on the application which is very good from the SQL Server perspective.
This may look nice at first glance but there's a problem here. Basically your application is doing the easy things (processing rows) and delegating the hard things (ie, concurrency) to the database. That's fine until it exceeds the database's capabilities, then it breaks down. Databases are excellent at handling concurrency correctly, but doing it fast is a very hard problem: coordinating several cores on a lock has a hard performance limit, caused by latency of communication between the cores, which is the speed of information propagation, in other words the speed of light, which cannot be negotiated with.
Locks are just memory held as cache lines in CPU caches. So a side effect of the way multicore systems work is, it's much faster for the same core to reacquire a lock it just released, because the line is still in its cache, so there is no slow inter-core communication involved. Likewise, several cores attempting to modify different parts of the same index page will result in cache line exchanges between them and lots of communication to determine what core owns what byte in that page. And that is surprisingly slow, it can take microseconds instead of nanoseconds.
In addition you have 50 client threads, so 50 server threads, and only 16 cores, so on the database server the OS will multitask the 50 threads between the 16 cores. This means the OS will end up putting one thread to sleep while it's holding a lock, and when that happens, performance is destroyed.
So the next test you can do is to compare insertion time with all your indices between these two scenarios:
Your current one with 50 threads
Then stop it, copy the inserted data from your main table into a temp table, truncate the main table, and insert the exact same data again with:
INSERT INTO yourtable SELECT * FROM temptable
In the second case you're inserting the same data. For the test to be valid it should be in the same order, so you might want to add an ORDER BY primary key while copying the rows into the temp table, so they come out in the proper order. I don't know if the tables are clustered, but you'll find a way to get the order correct.
You can also try various orders, one of the indices may be faster if data is inserted in an order that it likes.
If the second insert is much faster than the multi-threaded one, then that will give you a clue about what you need to do. In this case that's probably a funnel, i.e. a process that gathers rows generated by the many threads and inserts them using a low number of threads, maybe just one.
This can simply be all the threads inserting into a non-indexed table, and a separate task flushing this table into the main one every X milliseconds.
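One possible shape of that flush step, with made-up staging table names (the lock hint keeps writers out for the brief moment between the copy and the truncate):

BEGIN TRANSACTION;

INSERT INTO dbo.mytable
SELECT * FROM dbo.mytable_staging WITH (TABLOCKX, HOLDLOCK);

TRUNCATE TABLE dbo.mytable_staging;

COMMIT TRANSACTION;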

What operations are O(n) on the number of tables in PostgreSQL?

Let's say theoretically, I have database with an absurd number of tables (100,000+). Would that lead to any sort of performance issues? Provided most queries (99%+) will only run on 2-3 tables at a time.
Therefore, my question is this:
What operations are O(n) on the number of tables in PostgreSQL?
Please note, no answers about how this is bad design, or how I need to plan out more about what I am designing. Just assume that for my situation, having a huge number of tables is the best design.
pg_dump, pg_restore, and pg_upgrade are actually worse than that, being O(N^2). That used to be a huge problem, although in recent versions the constant on that N^2 has been reduced so low that for 100,000 tables it is probably not enough to be your biggest problem. However, there are worse cases; for example, dumping tables can be O(M^2) (maybe M^3, I don't recall the exact details anymore) for each table, where M is the number of columns in the table. This only applies when the columns have check constraints or defaults or other additional info beyond a name and type. All of these problems are particularly nasty when you have no operational problems to warn you, but then suddenly discover you can't upgrade within a reasonable time frame.
Some physical backup methods, like barman using rsync, are also O(N^2) in the number of files, which is at least as great as the number of tables.
During normal operations, the stats collector can be a big bottleneck. Every time someone requests updated stats on some table, it has to write out a file covering all tables in that database. Writing this out is O(N) for the tables in that database. (It used to be worse, writing out one file for the whole instance, not just the database.) This can be made even worse on some filesystems which, when renaming one file over the top of an existing one, implicitly fsync the file; putting the stats file on a RAM disc can at least ameliorate that.
The autovacuum workers loop over every table (roughly once per autovacuum_naptime) to decide if they need to be vacuumed, so a huge number of tables can slow this down. This can also be worse than O(N), because for each table there is some possibility it will request updated stats on it. Worse, it could block all concurrent autovacuum workers while doing so (this last part fixed in a backpatch for all supported versions).
Another problem you might run into is that each database backend maintains a cache of metadata on each table (or other object) it has accessed during its lifetime. There is no mechanism for expiring this cache, so if each connection touches a huge number of tables it will start consuming a lot of memory, with one copy for each backend, as it is not shared. If you have a connection pooler which holds connections open indefinitely, this can really add up, as each connection lives long enough to touch many tables.
pg_dump with some options (probably -s) is also O(N) in the number of tables. Some other options make it depend more on the size of the data.

Optimum number of rows in a table for creating indexes

My understanding is that creating indexes on small tables could be more cost than benefit.
For example, there is no point creating indexes on a table with less than 100 rows (or even 1000 rows?)
Is there any specific number of rows as a threshold for creating indexes?
Update 1
The more I investigate, the more conflicting information I get. I might be too concerned about preserving IO write operations, since my SQL Server database is in HA synchronous-commit mode.
Point #1:
This question is very much concerned with IO write performance. With scenarios like SQL Server HA synchronous-commit mode, the cost of an IO write is high when the database servers reside in data centers on different subnets. Adding indexes adds to that expensive IO write cost.
Point #2:
Books Online suggests:
Indexing small tables may not be optimal because it can take the query optimizer longer to traverse the index searching for data than to perform a simple table scan. Therefore, indexes on small tables might never be used, but must still be maintained as data in the table changes.
I am not sure adding an index to a table with only one row will ever have any benefit - or am I wrong?
Your understanding is wrong. Small tables also benefit from indexes, especially when they are used to join with bigger tables.
The cost of an index has two parts: storage space and processing time during inserts/updates. The first is very cheap these days, so it can almost be disregarded. So your only consideration should be tables with lots of updates and inserts; for those, apply the proper configuration.

Time to retrieve a single record via a SQL Server index in a large table

Short version of the question:
If you have a table with a large number of small rows and you want to retrieve a single record from this table via an index (probably consisting of two columns), is this likely to be something that will be low cost and fast, or high cost and slow?
Longer version of question and background:
I am a consultant working with a software development company and I have an argument with them about the performance implications of a piece of functionality that I want to add to the application they are building (and I am designing).
At the moment, we write out a log record every time somebody retrieves a client record. I want to put the name and time of the last person to have previously accessed that record onto the client page each time the record is retrieved.
They are saying that the performance implications of this will be high but based on my reasonable but not expert knowledge of how B trees work, this doesn't seem right even if the table is very large.
If you create an index on the GUID of the client record and the date/time of access (descending), then you ought to be able to retrieve the required record via an index seek, which would just need to find the first entry for that GUID and then stop? And with a B-tree index, most of the index would be cached, so the number of physical disk accesses needed would be very small and the query time therefore significantly less than 1s.
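In rough T-SQL, the idea would look something like this (table and column names are made up):

CREATE INDEX IX_AccessLog_Client_Time
    ON dbo.AccessLog (ClientGuid, AccessedAt DESC);

DECLARE @ClientGuid UNIQUEIDENTIFIER = NEWID();  -- the client record being displayed

SELECT TOP (1) AccessedBy, AccessedAt
FROM   dbo.AccessLog
WHERE  ClientGuid = @ClientGuid
ORDER  BY AccessedAt DESC;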
Or have I got this completely wrong?
You will have problems with GUID index fragmentation but because your rows do not increase in size (as you said in the comments) you will not have page-splitting problems. The random insert issue is fixable by doing reorganizing and rebuilding.
Besides that, there is nothing wrong with your approach. If the table is larger than RAM you will likely have a single disk IO per access (the intermediate index levels will be cached). If your data fits in RAM you will pay about 0.2 to 0.5ms per query. If your data is on a magnetic disk a seek will likely require 8-12ms. On an SSD you are back to 0.2ms to 0.5ms (maybe 0.05ms more).
Why don't you just create some test data (by selecting a cross product from sys.objects of 1M rows) and measure it? It takes little time and you will find out for sure.
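A quick way to build such a test set, with made-up names (cross joining sys.objects with itself easily yields a million rows):

SELECT TOP (1000000)
       NEWID()                                                            AS ClientGuid,
       DATEADD(SECOND, -ABS(CHECKSUM(NEWID()) % 31536000), GETUTCDATE())  AS AccessedAt,
       a.name                                                             AS AccessedBy
INTO   dbo.AccessLogTest
FROM   sys.objects AS a
CROSS JOIN sys.objects AS b
CROSS JOIN sys.objects AS c;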
Should be low cost and fast since the columns are indexed; a B-tree lookup is O(log n), I think.
You say last person to access? You mean that for every read you will have a write?
And that write is going to change an indexed date time column?
Then I would be worried too.
Writing on each record read will cause you lots of extra disk writes. This will block reads and it might be bad for your caching too. You also need to update your index a lot, and since you change the indexed data, your index will become very fragmented.
It depends.
A single retrieval will be low cost and fast
on a decent indexed table
running on decent hardware
over a decent network
On the other hand, it takes time nonetheless.
If we are talking about one retrieval per hour, don't sweat over it. If we are talking about thousands of retrievals per second (as opposed to currently none), it will start to add up to the point where it would be noticeable.
Some questions you need to address:
Is my hardware up to spec
Does adding two fields result in a page split (unlikely)
How many extra pages need to be read for your regular result sets
How many retrievals/sec will be made
How many inserts/sec (triggering an index update) will be made
After you've addressed these questions, you should be able to make the determination yourself. As far as my gut feeling goes, I would be surprised if you noticed the performance difference.

Database scalability - performance vs. database size

I'm creating an app that will have to put at max 32 GB of data into my database. I am using B-tree indexing because the reads will have range queries (like from 0 < time < 1hr).
At the beginning (database size = 0 GB), I get 60 to 70 writes per millisecond. After, say, 5 GB, the three databases I've tested (H2, Berkeley DB, Sybase SQL Anywhere) have REALLY slowed down, to under 5 writes per millisecond.
Questions:
Is this typical?
Would I still see this scalability issue if I REMOVED indexing?
What are the causes of this problem?
Notes:
Each record consists of a few ints
Yes; indexing improves fetch times at the cost of insert times. Your numbers sound reasonable - without knowing more.
You can benchmark it. You'll need to have a reasonable amount of data stored. Consider whether or not to index based upon the queries - heavy fetch and light insert? index everywhere a where clause might use it. Light fetch, heavy inserts? Probably avoid indexes. Mixed workload; benchmark it!
When benchmarking, you want as real or realistic data as possible, both in volume and on data domain (distribution of data, not just all "henry smith" but all manner of names, for example).
It is typical for indexes to sacrifice insert speed for access speed. You can find that out from a database table (and I've seen these in the wild) that indexes every single column. There's nothing inherently wrong with that if the number of updates is small compared to the number of queries.
However, given that:
1/ You seem to be concerned that your writes slow down to 5/ms (that's still 5000/second),
2/ You're only writing a few integers per record; and
3/ Your queries are only based on time ranges,
you may want to consider bypassing a regular database and rolling your own sort-of-database (my thoughts are that you're collecting real-time data such as device readings).
If you're only ever writing sequentially-timed data, you can just use a flat file and periodically write the 'index' information separately (say at the start of every minute).
This will greatly speed up your writes but still allow a relatively efficient read process - worst case is you'll have to find the start of the relevant period and do a scan from there.
This of course depends on my assumption of your storage being correct:
1/ You're writing records sequentially based on time.
2/ You only need to query on time ranges.
Yes, indexes will generally slow inserts down, while significantly speeding up selects (queries).
Do keep in mind that not all inserts into a B-tree are equal. It's a tree; if all you do is insert into it, it has to keep growing. The data structure allows for some padding, but if you keep inserting into it numbers that are growing sequentially, it has to keep adding new pages and/or shuffle things around to stay balanced. Make sure that your tests are inserting numbers that are well distributed (assuming that's how they will come in real life), and see if you can do anything to tell the B-tree how many items to expect from the beginning.
Totally agree with @Richard-t - it is quite common in offline/batch scenarios to remove indexes completely before bulk updates to a corpus, only to reapply them when the update is complete.
The type of indices applied also influences insertion performance - for example, with a SQL Server clustered index, update I/O is used for data distribution as well as index update, whereas nonclustered indexes are updated in separate (and therefore more expensive) I/O operations.
As with any engineering project, the best advice is to measure with real datasets (skew, page distribution, tearing, etc.).
I think somewhere in the BDB docs they mention that page size greatly affects this behavior in B-trees. Assuming you aren't doing much in the way of concurrency and you have fixed record sizes, you should try increasing your page size.
