PageLatch_EX and PageLatch_SH on non-clustered index - sql-server

We have two tables which receive 1 million+ insertions per minute. These tables are heavily indexed, and the indexes can't be removed because they support business requirements. Due to this high volume of insertions we are seeing PAGELATCH_EX and PAGELATCH_SH waits, which further slow down insertions.
I could solve that using the approach described in Removing PAGELATCH with randomized ID instead of GUID.
My real question is:
One of the nonclustered indexes on the table is based on the record insertion date, so it also receives sequential inserts. We see PAGELATCH_EX and PAGELATCH_SH because the page receiving these sequential inserts becomes a hotspot. But why am I not seeing PAGELATCH_EX and PAGELATCH_SH on the nonclustered index with sequential keys?
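Not part of the original question, but if you want to watch these waits as they happen, sys.dm_os_waiting_tasks shows which sessions are latch-waiting and on which page; a minimal sketch:

-- List sessions currently waiting on page latches; resource_description
-- identifies the page as <database_id>:<file_id>:<page_in_file>.
SELECT wt.session_id,
       wt.wait_type,
       wt.wait_duration_ms,
       wt.resource_description
FROM sys.dm_os_waiting_tasks AS wt
WHERE wt.wait_type IN (N'PAGELATCH_EX', N'PAGELATCH_SH');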

Related

Using SQL Server indexes for better performance

I have 2 tables (UserLog and UserInfo), each with a nonclustered index on a User_UID column, which is a uniqueidentifier.
I have a lot of select queries that join these 2 tables on the User_UID column.
There is no clustered index on these tables, so to improve read performance I decided to create a new column, User_ID, and then create a clustered index on that column in each table.
I then tested the new architecture and obtained great results: logical reads decreased on both tables, because the query optimizer no longer uses a RID lookup to retrieve the remaining information; instead it uses only a clustered index seek.
I obtained these good results only when the pages are already in the memory cache, i.e. after two executions. However, if I clear the cache (DBCC DROPCLEANBUFFERS), the first execution of the select query still shows fewer logical reads, but the elapsed time is greater than it was when I executed the same query with the old architecture (without the clustered index) just after clearing the cache.
So my question is: why does the elapsed time with the new architecture increase after clearing the cache? Is it because in the first execution all the data has to be read into the memory cache, and since the clustered index holds more data than the nonclustered index it takes more time?
Thanks in advance
Regardless, you should have a clustered index on your table. If you don't, you have a heap and that requires scans through the leaf level of the table. With a clustered index, your table is now sorted into a b-tree that is used for navigation to the leaf level and is more efficient.
By blowing out the buffer, whether you have a seek on a clustered index or a scan on a heap, the pages will be pulled from disk and that takes time.
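A rough sketch of the change and the cold-cache test described in the question (the IDENTITY definition and index name are assumptions, not the poster's actual DDL):

-- Assumed: add an ever-increasing surrogate key and cluster on it.
ALTER TABLE dbo.UserInfo ADD User_ID INT IDENTITY(1,1) NOT NULL;
CREATE UNIQUE CLUSTERED INDEX CIX_UserInfo_User_ID ON dbo.UserInfo (User_ID);

-- Cold-cache comparison (test servers only: DROPCLEANBUFFERS empties the buffer pool).
DBCC DROPCLEANBUFFERS;
SET STATISTICS IO ON;
SET STATISTICS TIME ON;
SELECT ui.User_ID, ul.*
FROM dbo.UserInfo AS ui
JOIN dbo.UserLog AS ul ON ul.User_UID = ui.User_UID;
SET STATISTICS IO OFF;
SET STATISTICS TIME OFF;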

How long should a Primary Key delete take?

Picture a simple table structure:
Table1                Table2
----------            ----------
ID   <---------+      ID
Name           +----- Table1ID
                      Name
Table1 has a few million rows (say 3.5 million for example). I issue a delete by Primary Key:
DELETE FROM Table1 WHERE ID = 100;
There is no row in Table2 that references Table1 with ID = 100, so the delete works without violating any Foreign Key constraints.
How long would you expect the delete to take? On the order of a few milliseconds? A few hundred milliseconds? A second or more? A few seconds? Etc., assuming the machine is not bogged down and readily handles the request.
Now, I have this situation where a delete like this is taking around 700ms. To me, this seems too slow. I'm curious whether I'm off base or others agree this is too slow, and I'd welcome recommendations to help make it faster!
Here is the actual execution plan:
(XML Execution plan here: http://pastebin.com/q9hSMLi3)
The Clustered Index Delete (81%) hits the Clustered PK, a Non-Clustered Unique Index, and a Non-Clustered Non-Unique Index.
The issue is the clustered index scan to validate the foreign key.
When the delete succeeds and there are no matching records that would cause a violation, all of Table2 needs to be scanned. This table has 1,117,190 rows, so this is an expensive operation that could definitely benefit from an index.
The 10% figure shown in the execution plan is just an estimate based on certain modelling assumptions.
The entire plan is costed at 0.0369164, with the scan on Table2 costed at 0.0036199 and everything else accounting for the remaining 0.0332965. However, notice that for the clustered index scan operator the Estimated CPU Cost is 1.22907 and the Estimated IO Cost is 10.7142 (totaling 11.94327, not 0.0036199).
The reason for this discrepancy is that the scan is under an anti semi join operator and the scan can stop as soon as a matching row is found. The estimated subtree cost is scaled down under the modelling assumption that this will happen after only a very small proportion of the table has been scanned.
In the case that there are no FK violations and the delete succeeds, the entire table needs to be scanned, so it would be more informative to use the unscaled figure.
If the percentages are reworked out using the 11.94327 cost for that operator that represents the full scan that happened in practice then this scan operator shows up as being 99.7% of the plan cost (11.94327 / (11.94327 + 0.0332965)).
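The missing index is straightforward; a sketch using the table and column names from the diagram above:

-- Lets the FK validation seek on Table2.Table1ID instead of scanning the whole table.
CREATE NONCLUSTERED INDEX IX_Table2_Table1ID
    ON dbo.Table2 (Table1ID);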
If all pages being touched are in cache you can expect about 1ms or less for the CPU cost and the log write. The client library overhead might actually be more in terms of CPU than the server load.
For each page not in cache you can expect a disk seek of 5-10 ms on a magnetic disk. Roughly, you can expect one such access per index being touched in Table1 plus one access in Table2 to validate the FK; with the three indexes on Table1 mentioned above plus the FK check, that is about four cold-page reads, or roughly 20-40 ms.
The execution plan tells you for sure which physical ops are to be performed.
700ms seems like a lot (70 indexes?!). Please post the actual execution plan. Is the server unloaded, and is there no blocking due to locks?

Clustered indexes SQL Server

I have an Oracle background, and using "index-organized tables" (IOTs) for every table sounds unreasonable in Oracle; I have never actually seen it done. In SQL Server, every database I have worked on has a clustered index on every table, which is conceptually the same as an IOT.
Why is that? Is there any reason for using clustered indexes everywhere? It seems to me like they would be good only for a handful of cases.
Thanks
A clustered index is not quite the same thing as an index-organised table. With an IOT, every field must participate in the IOT key. A clustered index on SQL Server does not have to be unique, and does not have to be the primary key.
Clustered indexes are widely used on SQL Server, as there is almost always some natural ordering that makes a commonly used query more efficient. IOTs in Oracle carry more baggage, so they aren't quite as useful, although they may well be more useful than they're commonly given credit for.
Historically, really old versions of SQL Server (pre 6.5 or 7.0, IIRC) did not support row-level locking and could only lock at a table or page level. Often a clustered index would be used to ensure that writes were scattered around the table's physical storage to minimise contention on page locks. However, SQL Server 6 went out of support some years ago, so applications with this issue will be restricted to rare legacy systems.
Without a clustered index, your table is organized as a heap. This means that every row that is inserted is added to the data page at the end of the table. Also, as rows get updated, they get moved to the data page at the end of the table if the updated data is larger than before.
When it is good to not have a clustered index
If you have a table that needs the fastest possible inserts, but can sacrifice update and read speed, then not having a clustered index may work for you. One example would be a table that is being used as a queue: lots of inserts that later just get read and moved to a different table.
Clustered Indexes
Clustered indexes organize the data in your table based on the columns in the clustered index. If you cluster on the wrong thing, for instance a uniqueidentifier, this can slow things down (see below).
As long as your clustered index is on the value that is most commonly used for searching, and it is unique and increasing, then you get some amazing performance benefits out of the clustered index. For instance, if you have a table called USERS where you are commonly looking up user data based on USER_ID, then clustering on USER_ID would speed up all of those lookups. This simply reduces the number of data pages that need to be read to get at your data.
If you have too many keys in your clustered index this can slow things down also.
General rules for clustered indexes:
Don't cluster on any varchar columns.
Clustering on INT IDENTITY columns is usually best (see the sketch after this list).
Cluster on what you commonly search on.
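To make the second rule concrete, a minimal sketch (the table and column names are made up for illustration, echoing the USERS example above):

-- Narrow, unique and ever-increasing clustering key: new rows always append to
-- the end of the clustered index, so inserts avoid mid-table page splits.
CREATE TABLE dbo.Users
(
    USER_ID  INT IDENTITY(1,1) NOT NULL,
    UserName NVARCHAR(100)     NOT NULL,
    CONSTRAINT PK_Users PRIMARY KEY CLUSTERED (USER_ID)
);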
Clustering on UniqueIdentifiers
Uniqueidentifiers in an index are extremely inefficient because they have no natural sort order. Because of the b-tree structure of the index, you end up with extremely fragmented indexes when using uniqueidentifiers; even after rebuilding or reorganizing, they quickly become fragmented again. So you end up with a slower index that ends up being really huge in memory and on disk due to the fragmentation. Also, on inserts of a uniqueidentifier you are more likely to cause a page split on the index, thus slowing your insert. Generally, uniqueidentifiers are bad news for indexes.
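Not from the original answer, but you can measure this fragmentation yourself with sys.dm_db_index_physical_stats; a sketch (substitute your own table name for dbo.Users):

-- Fragmentation and size per index on one table.
SELECT i.name AS index_name,
       ps.index_type_desc,
       ps.avg_fragmentation_in_percent,
       ps.page_count
FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID(N'dbo.Users'), NULL, NULL, 'LIMITED') AS ps
JOIN sys.indexes AS i
  ON i.object_id = ps.object_id AND i.index_id = ps.index_id;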
Summary
My recommendation is that every table should have a clustered index on it unless there is a really good reason not to (ie table functioning as a queue).
I wouldn't know why you would prefer a heap over a clustered index most of the time. Using clustering, you get one index of your choice for free. Most of the time this is the primary key (which you probably want to enforce anyway!).
Heaps are mostly for special situations.
We use primary keys in relational databases, and in general relationships are established via these primary keys. Most people name the first field TableID and make it the primary key. When you join two or more tables in your query, you will get the fastest result if you use clustered indexes.

Cluster the index on ever-increasing datetime column on logging table?

I'm not a DBA ("Good!", you'll be thinking in a moment.)
I have a table of logging data with these characteristics and usage patterns:
A datetime column for storing log timestamps whose value is ever-increasing and mostly (but only mostly) unique
Frequent-ish inserts (say, a dozen a minute), only at the end of the timestamp range (new data being logged)
Infrequent deletes, in bulk, from the beginning of the timestamp range (old data being cleared)
No updates at all
Frequent-ish selects using the timestamp column as the primary criterion, along with secondary criteria on other columns
Infrequent selects using other columns as the criteria (and not including the timestamp column)
A good amount of data, but nowhere near enough that I'm worried much about storage space
Additionally, there is currently a daily maintenance window during which I could do table optimization.
I frankly don't expect this table to challenge the server it's going to be on even if I mis-index it a bit, but nevertheless it seemed like a good opportunity to ask for some input on SQL Server clustered indexes.
I know that clustered indexes determine the storage of the actual table data (the data is stored in the leaf nodes of the index itself), and that non-clustered indexes are separate pointers into the data. So in query terms, a clustered index is going to be faster than a non-clustered index -- once we've found the index value, the data is right there. There are costs on insert and delete (and of course an update changing the clustered index column's value would be particularly costly).
But I read in this answer that deletes leave gaps that don't get cleaned up until/unless the index is rebuilt.
All of this suggests to me that I should:
Put a clustered index on the timestamp column with a 100% fill-factor
Put non-clustered indexes on any other column that may be used as a criterion in a query that doesn't also involve the clustered column (which may be any of them in my case)
Schedule the bulk deletes to occur during the daily maintenance interval
Schedule a rebuild of the clustered index to occur immediately after the bulk delete
Relax and get out more
Am I wildly off base there? Do I need to frequently rebuild the index like that to avoid lots of wasted space? Are there other obvious (to a DBA) things I should be doing?
Thanks in advance.
Contrary to what a lot of people believe, having a good clustered index on a table can actually make operations like INSERTs faster - yes, faster!
Check out the seminal blog post The Clustered Index Debate Continues.... by Kimberly Tripp - the ultimate indexing queen.
She mentions (about in the middle of the article):
Inserts are faster in a clustered table (but only in the "right" clustered table) than compared to a heap. The primary problem here is that lookups in the IAM/PFS to determine the insert location in a heap are slower than in a clustered table (where insert location is known, defined by the clustered key). Inserts are faster when inserted into a table where order is defined (CL) and where that order is ever-increasing.
The crucial point is: only with the right clustered index will you be able to reap the benefits - when a clustered index is unique, narrow, stable and optimally ever-increasing. This is best served with an INT IDENTITY column.
Kimberly Tripp also has a great article on how to pick the best possible clustering key for your tables, and what criteria it should fulfil - see her post entitled Ever-increasing clustering key - the Clustered Index Debate..........again!
If you have such a column - e.g. a surrogate primary key - use that for your clustering key and you should see very nice performance on your table - even on lots of INSERTs.
I agree with putting the clustered index on the timestamp column. My query would be on the fill factor: 100% gives the best read performance at the expense of write performance, and you may be hurt by page splits. Choosing a lower fill factor will delay page splitting at the expense of read performance, so it's a fine balancing act to get the best for your situation.
After the bulk deletes it's worth rebuilding the indexes and updating statistics. This not only keeps performance up but also resets the indexes to the specified fill factor.
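A minimal sketch of that maintenance step, assuming a placeholder table name of dbo.LogEntries (not from the question):

-- Rebuild reclaims space from the bulk delete and reapplies the fill factor;
-- updating statistics lets the optimizer see the new row counts.
ALTER INDEX ALL ON dbo.LogEntries REBUILD WITH (FILLFACTOR = 100);
UPDATE STATISTICS dbo.LogEntries WITH FULLSCAN;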
Finally, yes, put nonclustered indexes on other appropriate columns, but only ones that are very selective, e.g. not bit fields. But remember: the more indexes, the more this affects write performance.
There are two "best practice" ways to index a high-traffic logging table:
an integer identity column as a primary clustered key
a uniqueidentifier column as primary key, with DEFAULT NEWSEQUENTIALID()
Both methods allow SQL Server to grow the table efficiently, because it knows that the index tree will grow in a particular direction.
I would not put any other indexes on the table, or schedule rebuilds of the index, unless there is a specific performance issue.
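For illustration only, a sketch of the two patterns with made-up table and column names:

-- Pattern 1: integer identity as the clustered primary key.
CREATE TABLE dbo.AppLog
(
    LogID    INT IDENTITY(1,1) NOT NULL PRIMARY KEY CLUSTERED,
    LoggedAt DATETIME          NOT NULL,
    Message  NVARCHAR(MAX)     NULL
);

-- Pattern 2: uniqueidentifier primary key defaulting to NEWSEQUENTIALID(),
-- which still grows in one direction and so avoids random page splits.
CREATE TABLE dbo.AppLog2
(
    LogID    UNIQUEIDENTIFIER NOT NULL DEFAULT NEWSEQUENTIALID() PRIMARY KEY CLUSTERED,
    LoggedAt DATETIME         NOT NULL,
    Message  NVARCHAR(MAX)    NULL
);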
The obvious answer is: it depends on how you will query it. The point of the index is to lessen the quantity of compares when selecting data. The clustered index helps when you consider what data you will load together and the blocking factor of the storage (you can load a bunch of data in a 64K block with one read). If you include an ID and a datetime as the primary key, but do not use them in your selection criteria, they will do nothing but hinder your performance. This is why people usually drop indexes before bulk-loading data.

Is this a bad indexing strategy for a table?

The table in question is part of a database that a vendor's software uses on our network. The table contains metadata about files. The schema of the table is as follows
Metadata
ResultID (PK, int, not null)
MappedFieldname (char(50), not null)
Fieldname (PK, char(50), not null)
Fieldvalue (text, null)
There is a clustered index on ResultID and Fieldname. This table typically contains millions of rows (in one case, it contains 500 million). The table is populated by 24 workers running 4 threads each when data is being "processed". This results in many non-sequential inserts. Later after processing, more data is inserted into this table by some of our in-house software. The fragmentation for a given table is at least 50%. In the case of the largest table, it is at 90%. We do not have a DBA. I am aware we desperately need a DB maintenance strategy. As far as my background, I'm a college student working part time at this company.
My question is this: is a clustered index the best way to go about this? Should another index be considered? Are there any good references for these and similar ad-hoc DBA tasks?
The indexing strategy entirely depends on how you query the table and how much performance you need to get out of the respective queries.
A clustered index can force re-sorting rows physically (on disk) when out-of-sequence inserts are made (this is called "page split"). In a large table with no free space on the index pages, this can take some time.
If you are not absolutely required to have a clustered index spanning two fields, then don't. If it is more like a kind of a UNIQUE constraint, then by all means make it a UNIQUE constraint. No re-sorting is required for those.
Determine what the typical query against the table is, and place indexes accordingly. The more indexes you have, the slower data changes (INSERTs/UPDATEs/DELETEs) will go. Don't create too many indexes, e.g. on fields that are unlikely to be filtered/sorted on.
Create combined indexes only on fields that are filtered/sorted on together, typically.
Look hard at your queries - the ones that hit the table for data. Will the index serve? If you have an index on (ResultID, FieldName) in that order, but you are querying for the possible ResultID values for a given Fieldname, it is likely that the DBMS will ignore the index. By contrast, if you have an index on (FieldName, ResultID), it will probably use the index - certainly for simple value lookups (WHERE FieldName = 'abc'). In terms of uniqueness, either index works well; in terms of query optimization, there is (at least potentially) a huge difference.
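For example, a sketch of the second ordering described above (the index name is made up):

-- Supports WHERE Fieldname = 'abc' lookups that an index led by ResultID would not.
CREATE NONCLUSTERED INDEX IX_Metadata_Fieldname_ResultID
    ON dbo.Metadata (Fieldname, ResultID);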
Use EXPLAIN (or, in SQL Server, the estimated or actual execution plan) to see how your queries are being handled by your DBMS.
Clustered vs non-clustered indexing is usually a second-order optimization effect in the DBMS. If you have the index correct, there is a small difference between clustered and non-clustered index (with a bigger update penalty for a clustered index as compensation for slightly smaller select times). Make sure everything else is optimized before worrying about the second-order effects.
The clustered index is OK as far as I see. Regarding other indexes you will need to provide typical SQL queries that operate on this table. Just creating an index out of the blue is never a good idea.
You're talking about fragmentation and indexing, does it mean that you suspect that query execution slows down? Or do you simply want to shrink/defragment the database/index?
It is a good idea to have a task to defragment indexes from time to time during off-hours, though you have to consider that with frequent/random inserts it does not hurt to have some spare space in the table to prevent page splits (which do affect performance).
I am aware we desperately need a DB maintenance strategy.
+1 for identifying that need
As far as my background, I'm a college student working part time at this company
Keep studying, gain experience, but get an experienced consultant in in the meantime.
The table is populated by 24 workers running 4 threads each
I presume this is pretty mission critical during the working day, and downtime is bad news? If so don't clutz with it.
There is a clustered index on ResultID and Fieldname
Is ResultID the first column in the PK, as you indicate?
If so I'll bet that it is insufficiently selective and, depending on what the needs are of the queries, the order of the PK fields should be swapped (notwithstanding that this compound key looks to be a poor choice for the clustered PK)
What's the result of:
SELECT COUNT(*), COUNT(DISTINCT ResultID) FROM MyTable
If the first count is, say, 4x as big as the second, or more, you will most likely be getting scans in preference to seeks because of the low selectivity of ResultID, and some simple changes will give huge performance improvements.
Also, Fieldname is quite wide (50 chars) so any secondary indexes will have 50 + 4 bytes added to every index entry. Are the fields really CHAR rather than VARCHAR?
Personally I would consider increasing the density of the leaf pages. At 90% you will only leave a few gaps - maybe one per page. But with a large table of 500 million rows, the higher packing density may mean fewer levels in the tree, and thus fewer seeks for retrieval. Against that, almost every insert into a given page will require a page split. This would favour inserts that are clustered, so may not be appropriate (given that your insert data is probably not clustered). Like many things, you'd need to run a test to establish what index key density works best. SQL Server has tools to help analyse how queries are being parsed, whether they are being cached, how many scans of the table they cause, which queries are "slow running", and so on.
Get a consultant in to take a look and give you some advice. This isn't a question where answers here are going to give you a safe solution to implement.
You really REALLY need to have some carefully thought-through maintenance policies for tables that have 500 million rows and shed-loads of inserts daily. Sorry, but I have enormous frustration with companies that get into this state.
The table needs defragmenting (your options will become fewer if you don't have a clustered index, so keep that until you decide that there is a better candidate). "Online" defragmentation methods will have modest impact on performance, and can chug away - and can safely be aborted if they overrun time / CPU constraints [although that will most likely take some programming]. If you have a "quiet" slot then use it for table defragmentation and updating the statistics on indexes. Don't wait until the weekend to try to do all tables in one go - do as much/many as you can during any quiet time daily (during the night presumably).
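A minimal sketch of the "online" defragmentation described above (dbo.Metadata stands in for whichever table is being maintained):

-- REORGANIZE is always online, works in small batches, and can safely be
-- stopped part-way through without losing the compaction already done.
ALTER INDEX ALL ON dbo.Metadata REORGANIZE;
UPDATE STATISTICS dbo.Metadata;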
Defragmenting the tables is likely to lead to a huge increase in transaction log usage, so make sure that any TLogs are backed up frequently (we have a 10-minute TLog backup policy, which we increase to every minute during table defragging so that the defragging process doesn't become the definition of required TLog space!).

Resources