I am trying to figure out the best way to handle the indexes on a table in SQL Server.
I have a table that only needs to be read from. No real writing to the table (after the initial setup).
I have about 5-6 columns in the table that need to be indexed. Does it make more sense to set up one nonclustered index for the entire table that includes all the columns I need indexed, or should I set up multiple nonclustered indexes, each with one column?
I am wondering which setup would have better read performance.
Any help on this would be great.
UPDATE:
There are some good answers already but I wanted to elaborate on my needs a little more.
There is one main table with automobile records. I need to be able to perform very quick counts over 100MM+ records. The WHERE clauses will vary, but I am trying to index all of the columns that could appear in them. So I will have queries like:
SELECT COUNT(recordID)
FROM tableName
WHERE zip IN (32801, 32802, 32803, 32809)
AND makeID = '32'
AND modelID IN (22, 332, 402, 504, 620)
or something like this:
SELECT COUNT(recordID)
FROM tableName
WHERE stateID = '9'
AND classCode IN (3,5,9)
AND makeID NOT IN (55, 56, 60, 80, 99)
So there are about 5-6 columns that could appear in the WHERE clause, but which ones appear varies a lot.
The fewer indexes you have - the better. Each index might speed up some queries - but it also incurs overhead and needs to be maintained. Not so bad if you don't write much to the table.
If you can combine multiple columns into a single index - perfect! But if you have a compound index on multiple columns, that index can only be used if you use/need the n left-most columns.
So if you have an index on (City, LastName, FirstName) like in a phone book - this works if you're looking for:
everyone in a given city
every "Smith" in "Boston"
every "Paul Smith" in "New York"
but it cannot be used to find all entries with first name "Paul", or all people with a last name of "Brown", across your entire table; the index can only be used if you also specify the City column.
So compound indexes are beneficial and desirable - but only if you can really use them! Having just a single index with all 6 of your columns does not help at all if you need to filter on the columns individually.
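A minimal sketch of that phone book example (table and index names are illustrative):

-- Hypothetical phone book table
CREATE TABLE PhoneBook (
    PersonID  int IDENTITY(1,1) PRIMARY KEY,
    City      nvarchar(50),
    LastName  nvarchar(50),
    FirstName nvarchar(50)
);

CREATE NONCLUSTERED INDEX IX_PhoneBook ON PhoneBook (City, LastName, FirstName);

-- These can seek on the index (the left-most column(s) are supplied):
SELECT * FROM PhoneBook WHERE City = 'Boston';
SELECT * FROM PhoneBook WHERE City = 'Boston' AND LastName = 'Smith';
SELECT * FROM PhoneBook WHERE City = 'New York' AND LastName = 'Smith' AND FirstName = 'Paul';

-- This cannot seek (City is missing); at best the index is scanned:
SELECT * FROM PhoneBook WHERE FirstName = 'Paul';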
Update: with your concrete queries, you can now start to design what indexes would help:
SELECT COUNT(recordID)
FROM tableName
WHERE zip IN (32801, 32802, 32803, 32809)
AND makeID = '32'
AND modelID IN (22, 332, 402, 504, 620)
Here, an index on (zip, makeID, modelID) would probably be a good idea - all three columns are used in the WHERE clause (together), and having recordID in the index as well (via an INCLUDE(recordID) clause) should help, too.
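A sketch of such an index (the name is illustrative):

CREATE NONCLUSTERED INDEX IX_tableName_zip_make_model
    ON tableName (zip, makeID, modelID)
    INCLUDE (recordID);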
SELECT COUNT(recordID)
FROM tableName
WHERE stateID = '9'
AND classCode IN (3,5,9)
AND makeID NOT IN (55, 56, 60, 80, 99)
Again: based on the WHERE clause - create an index on (stateID, classCode, makeID) and possibly add INCLUDE(recordID) so that the nonclustered index becomes covering (i.e. all the info needed for your query is in the nonclustered index itself - no need to go back to the "base" table).
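Again as a sketch (the name is illustrative):

CREATE NONCLUSTERED INDEX IX_tableName_state_class_make
    ON tableName (stateID, classCode, makeID)
    INCLUDE (recordID);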
It depends on your access pattern
For a read only table, I'd most likely create multiple non-clustered indexes, each having multiple key columns to match WHERE clauses, and INCLUDEd columns for non-key columns
I would have neither one non-clustered index for everything nor one per column: neither will be useful for your actual queries.
For sync purposes, I am trying to get a subset of the existing objects in a table.
The table has two fields, [Group] and Member, which are both stringified Guids.
All rows together may be too large to fit into a DataTable; I already encountered an OutOfMemoryException. But I have to check that everything I need right now is in the DataTable. So I take the GUIDs I want to check (they come in chunks of 1000) and query only for the related objects.
So, instead of filling my datatable once with all
SELECT * FROM Group_Membership
I am running the following SQL query against my SQL database to get related objects for one thousand Guids at a time:
SELECT *
FROM Group_Membership
WHERE
[Group] IN (@Guid0, @Guid1, @Guid2, @Guid3, @Guid4, @Guid5, ..., @Guid999)
The table in question now contains a total of 142 entries, and the query already times out (CommandTimeout = 30 seconds). On other tables, which are not as sparsely populated, similar queries don't time out.
Could someone shed some light on the logic of SQL Server and whether/how I could hint it into the right direction?
I already tried to add a nonclustered index on the column Group, but it didn't help.
I'm not sure that WHERE ... IN will be able to make full use of an index on [Group], if it can use one at all. However, if you had a second table containing the GUID values, and furthermore if that column had an index, then a join might perform very fast.
Create a temporary table for the GUIDs and populate it:
CREATE TABLE #Guids (
    Guid varchar(255)
);

INSERT INTO #Guids (Guid)
VALUES
    (@Guid0), (@Guid1), (@Guid2), (@Guid3), (@Guid4), ...;

CREATE INDEX Idx_Guid ON #Guids (Guid);
Now try rephrasing your current query using a join instead of a WHERE IN (...):
SELECT *
FROM Group_Membership t1
INNER JOIN #Guids t2
ON t1.[Group] = t2.Guid;
As a disclaimer, if this doesn't improve the performance, it could be because your table has low cardinality. In such a case, an index might not be very effective.
I have a table [Documents] with the following columns:
Name (string)
Status (string)
DateCreated (datetime)
This table has around 1 million records. All three of these columns have an index (a single index for each one).
When I run this query:
select top 50 *
from [Documents]
where (Name = 'None' OR Name is null OR Name = '')
and Status = 'New';
Execution is really fast (300 ms).
If I run the same query but with an ORDER BY clause, it's really slow (3000 ms):
select top 50 *
from [Documents]
where (Name = 'None' OR Name is null OR Name = '')
and Status = 'New'
order by DateCreated;
I understand that it's searching in another index (DateCreated), but should it really be that much slower? If so, why? Anything I can do to speed this query up (a composite index)?
Thanks
BTW: All indexes, including DateCreated, have really low fragmentation; in fact, I ran a reorganize and it didn't change a thing.
As far as why the query is slower: the query is required to return the rows "in order", so it either needs to do a sort, or it needs to use an index.
Using an index with a leading column of DateCreated, SQL Server can avoid a sort. But SQL Server would also have to visit the pages in the underlying table to evaluate whether each row is to be returned, looking at the values in the Status and Name columns.
If the optimizer chooses not to use the index with DateCreated as the leading column, then it needs to first locate all of the rows that satisfy the predicates, and then perform a sort operation to get those rows in order. Then it can return the first fifty rows from the sorted set. (SQL Server wouldn't necessarily need to sort the entire set, but it would need to go through that whole set and do sufficient sorting to guarantee that it's got the "first fifty" that need to be returned.)
NOTE: I suspect you already know this, but to clarify: SQL Server honors the ORDER BY before the TOP 50. If you wanted any 50 rows that satisfied the predicates, but not necessarily the 50 rows with the lowest values of DateCreated, you could restructure/rewrite your query to get (at most) 50 rows, and then perform the sort of just those.
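A minimal sketch of that rewrite, using the Documents table from the question (note the inner TOP 50 is unordered, so which 50 rows you get is arbitrary):

SELECT q.*
FROM (
    SELECT TOP 50 *
    FROM Documents
    WHERE (Name = 'None' OR Name IS NULL OR Name = '')
      AND Status = 'New'
) AS q
ORDER BY q.DateCreated;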
A couple of ideas to improve performance
Adding a composite index (as other answers have suggested) may offer some improvement, for example (the index name here is illustrative):
CREATE NONCLUSTERED INDEX IX_Documents_Status_Date_Name
    ON Documents (Status, DateCreated, Name);
SQL Server might be able to use that index to satisfy the equality predicate on Status and also return the rows in DateCreated order without a sort operation. SQL Server may also be able to satisfy the predicate on Name from the index, limiting the number of lookups to pages in the underlying table (which it still needs, for the rows being returned, to get "all" of the columns for each row).
For SQL Server 2008 or later, I'd consider a filtered index... depending on the cardinality of Status='New' (that is, if the rows that satisfy the predicate Status='New' are a relatively small subset of the table):
CREATE NONCLUSTERED INDEX Documents_FIX
ON Documents (Status, DateCreated, Name)
WHERE Status = 'New'
I would also modify the query to specify ORDER BY Status, DateCreated, Name so that the ORDER BY clause matches the index; given the equality predicate on Status, it doesn't really change the order the rows are returned in.
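The reworked query might look like this:

SELECT TOP 50 *
FROM Documents
WHERE (Name = 'None' OR Name IS NULL OR Name = '')
  AND Status = 'New'
ORDER BY Status, DateCreated, Name;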
As a more complicated alternative, I would consider adding a persisted computed column and a filtered index on that:
ALTER TABLE Documents
ADD new_none_date_created AS
CASE
WHEN Status = 'New' AND COALESCE(Name,'') IN ('','None') THEN DateCreated
ELSE NULL
END
PERSISTED
;
CREATE NONCLUSTERED INDEX Documents_FIXP
ON Documents (new_none_date_created)
WHERE new_none_date_created IS NOT NULL
;
Then the query could be re-written:
SELECT TOP 50 *
FROM Documents
WHERE new_none_date_created IS NOT NULL
ORDER BY new_none_date_created
;
If the DateCreated field reflects insertion time into the table, you can create an integer identity column and order by that integer field instead.
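A minimal sketch, assuming insertion order really does track DateCreated (the column and index names are illustrative):

ALTER TABLE Documents ADD DocumentSeq int IDENTITY(1,1) NOT NULL;  -- hypothetical column
CREATE NONCLUSTERED INDEX IX_Documents_Seq ON Documents (DocumentSeq);
-- then: ORDER BY DocumentSeq instead of ORDER BY DateCreated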
You need an index on two columns: (Name, DateCreated). The order of the fields in the index is important, so replace your index on just Name with a new index on the two columns (Name, DateCreated).
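For example (the name of the existing index on Name is an assumption):

DROP INDEX IX_Documents_Name ON Documents;  -- hypothetical name of the old single-column index
CREATE NONCLUSTERED INDEX IX_Documents_Name_DateCreated
    ON Documents (Name, DateCreated);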
I'm a beginner. I know indexes are necessary for performance boosts, but I want to know how they actually work behind the scenes. Beforehand, I used to think that we should make indexes on those columns which are included in the WHERE clause (which I realized is wrong).
For example: SELECT * FROM MARKS WHERE marks_obtained > 50
Consider that there's a clustered index on the primary key of this table, and I created a nonclustered index on the marks_obtained column since it's in my WHERE clause.
My perception: the leaf nodes of the nonclustered index will contain pointers to the clustered index, and as the clustered index points to the actual rows, it will select entire rows (due to the asterisk in my query).
Scenario
I came across the following query (against the AdventureWorks DB, on which a nonclustered index had been created); it worked fine and took less than a second to run over 3,200,000 rows, until a new column was added to the table:
Query
SELECT x.*
INTO #X
FROM dbo.bigProduct AS p
CROSS APPLY
(
    SELECT TOP 1000 *
    FROM dbo.bigTransactionHistory AS bth
    WHERE bth.ProductId = p.ProductId
    ORDER BY TransactionDate DESC
) AS x
WHERE p.ProductId BETWEEN 1000 AND 7500
GO
NEWLY INSERTED COLUMN
ALTER TABLE dbo.bigTransactionHistory
ADD CustomerId INT NULL
After the above column was added, the query took 17 seconds - 17 times slower! The nonclustered index was now missing the CustomerId column. Just after including CustomerId in the index, the problem was gone.
Question: CustomerId seemed to be the culprit until it was added to the index. BUT HOW???
The execution plan would answer this, but I'll make a guess: the nonclustered index was no longer enough to satisfy the query after the additional column had been added. That can cause the index to not be used anymore. It can also cause one clustered index seek per row.
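A sketch of the fix the question describes - recreating the index with the new column as an included column so the index covers the query again (the index name and key columns are assumptions based on the query):

CREATE NONCLUSTERED INDEX IX_bigTransactionHistory_Product_Date  -- hypothetical name
    ON dbo.bigTransactionHistory (ProductId, TransactionDate DESC)
    INCLUDE (CustomerId);  -- add the newly inserted column so the index can satisfy the query again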
Learn to read execution plans. Turn on the "actual execution plan" feature routinely for each query that you test.
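For example, in SQL Server Management Studio you can toggle "Include Actual Execution Plan" (Ctrl+M) before running the query, or request the plan from T-SQL:

SET STATISTICS XML ON;   -- returns the actual execution plan XML alongside the results
-- ... run the query under test here ...
SET STATISTICS XML OFF;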
I have a query that looks like this:
-- Updated to remove DISTINCT per Aaron Bertrand's suggestion in the comments
SELECT TOP 100 ord.OrderId
FROM Customer cust
JOIN CustomerOrder ord
ON ord.CustomerId = cust.CustomerId
WHERE cust.FirstName LIKE (#firstName + '%')
ORDER BY ord.CreatedWhen DESC
And I have an index like this:
CREATE NONCLUSTERED INDEX [IX_MyIndex] ON CustomerOrder
(
OrderId DESC,
CustomerId DESC,
CreatedWhen DESC
)
GO
When I run my query, the index gets used, but it is an index scan. And it gives this message:
PROBE([Bitmap1011],[MyDatabase].[order].[CustomerOrder].[OrderId] as [ord].[OrderId],N'[IN ROW]')
The output list consists of the OrderId and CreatedWhen.
What is this PROBE doing, and why don't I get an Index Seek?
UPDATE:
The FirstName column on the Customer table does have an index that is being used in an Index Seek.
CREATE NONCLUSTERED INDEX [IX_Customer_FirstName] ON Customer
(
[FirstName] ASC
)
GO
The reason an Index Scan gets used is that your join predicate is based on CustomerId, but CustomerId appears as the SECOND column in the list of key columns of your nonclustered index [IX_MyIndex].
If you want an Index Seek to be performed, you would need to create a new nonclustered index on just the CustomerId column.
And that would essentially be good practice - have two separate NC indexes, one for OrderId and one for CustomerId. Then when you join Customer and CustomerOrder, it will use the NC index on CustomerId, and when you join Order and CustomerOrder, it will use the NC index on OrderId.
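A sketch of those two separate indexes (names are illustrative):

CREATE NONCLUSTERED INDEX IX_CustomerOrder_CustomerId
    ON CustomerOrder (CustomerId);

CREATE NONCLUSTERED INDEX IX_CustomerOrder_OrderId
    ON CustomerOrder (OrderId);

For the query above you might also add INCLUDE (OrderId, CreatedWhen) to the CustomerId index so it covers the select list and the sort without extra lookups.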
Refer to this article to read more about the difference between a multi-column non-clustered index (which you currently have) and multiple non-clustered indexes (which I proposed using).
[UPDATE]
But creating separate non-clustered indexes is not sufficient to get an Index Seek every time. That will depend on the columns being selected in the query and the size of the data being read; based on that, the query optimizer will decide whether to use an Index Seek or an Index Scan. See this answer for more information.
[UPDATE Feb 8, 2021]
At a high level, the PROBE function in question essentially verifies whether the CustomerOrder.OrderId column value is present in the Customer table. This is achieved internally through the use of bitmaps and hash keys, and you can read about it in detail here.
Note that a PROBE is not specific to an Index Scan or an Index Seek. It is simply a function used to verify matches (based on certain hash-keyed column(s)) between two tables in a join.
Simple reason: your FirstName column isn't in the index, so SQL Server must scan every row to see whether it matches the pattern you want.
I have a fairly simple query:
SELECT
col1,
col2…
FROM
dbo.My_Table
WHERE
col1 = @col1 AND
col2 = @col2 AND
col3 <= @col3
It was performing horribly, so I added an index on col1, col2, col3 (int, bit, and datetime). When I checked the query plan it was ignoring my index. I tried reordering the columns in the index in every possible configuration and it always ignored the index. When I run the query it does a clustered index scan (table size is between 700K and 800K rows) and takes 10-12 seconds. When I force it to use my index it returns instantly. I was careful to clear the cache and buffers between tests.
Other things I’ve tried:
UPDATE STATISTICS dbo.My_Table
CREATE STATISTICS tmp_stats ON dbo.My_Table (col1, col2, col3) WITH FULLSCAN
Am I missing anything here? I hate to put an index hint in a stored procedure, but SQL Server just can’t seem to get a clue on this one. Anyone know any other things that might prevent SQL Server from recognizing that using the index is a good idea?
EDIT: One of the columns being returned is a TEXT column, so using a covering index or an INCLUDE won't work :(
You have 800k rows indexed by col1, col2, col3. Col2 is a bit, so its selectivity is 50%. Col3 is checked with a range predicate (<=), so its selectivity will be roughly about 50% too. Which leaves col1. The query is compiled for the generic, parameterized plan, so it has to account for the general case.

If you have 10 distinct values of col1, then your index will return approximately 800k / 10 * 25%, that is, about ~20k keys to look up in the clustered index to retrieve the '...' part. If you have 10k distinct col1 values, then the index will return just 20 keys to look up. As you can see, what matters in this case is not how you build your index, but the actual data. Based on the selectivity of col1, the optimizer will choose a plan based on a clustered index scan (as better than 20k key lookups, each lookup costing at least 3-5 page reads) or one based on the nonclustered index (if col1 is selective enough). In real life the distribution of col1 also plays a role, but going into that would complicate the explanation too much.
You can come back with the benefit of hindsight and claim the plan is wrong, but the plan is the best cost estimate based on the data available at compile time. You can influence it with hints (an index hint as you suggest, or OPTIMIZE FOR hints as Quassnoi suggests), but then your query may perform better for your test set and far worse for a different set of data, say for the case when @col1 = <the value that matches 500k records>. You can also make the index covering, thus eliminating the '...' in the projection list that requires the clustered index lookup; in that case the nonclustered index is always a better cost match than the clustered scan.
Kimberly Tripp has a blog article covering this subject; she calls it the index 'tipping point'. It explains how an apparently perfect candidate index can be ignored: a nonclustered index that does not cover the projection list and has poor selectivity will be seen as more costly than a clustered scan.
The SQL Server optimizer is not good at optimizing queries that use variables.
If you are sure that you will always benefit from using the index, just put in a hint.
If you put literal values into the query instead of variables, it will pick the correct statistics and use the index.
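A sketch of such an index hint (the index name is an assumption):

SELECT col1, col2
FROM dbo.My_Table WITH (INDEX (IX_My_Table_col1_col2_col3))  -- hypothetical index name
WHERE col1 = @col1 AND col2 = @col2 AND col3 <= @col3;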
You may also try a lighter hint:
OPTION (OPTIMIZE FOR (@col1 = 1, @col2 = 0, @col3 = '2009-07-09'))
, which will calculate the best execution plan for these values of the variables, using statistics, and won't stick to using the index no matter what.
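Attached to the query from the question, it would look like this:

SELECT col1, col2
FROM dbo.My_Table
WHERE col1 = @col1 AND col2 = @col2 AND col3 <= @col3
OPTION (OPTIMIZE FOR (@col1 = 1, @col2 = 0, @col3 = '2009-07-09'));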
The order of the index is important for this query:
CREATE INDEX MyIndex ON MyTable (col3 DESC, col2 ASC, col1 ASC)
It's not so much the ASC/DESC as that when SQL Server goes to match that WHERE clause, it can match on col3 first and walk the index along that value.
Have you tried tossing out the bit from the index?
CREATE INDEX ix1 ON My_Table (Col3, Col1) INCLUDE (Col2)
-- include other columns from the select list if needed
Also, you've left out the rest of the columns from the select list. You might want to consider including those, if there aren't many, either in the index keys or in an INCLUDE clause to create a covering index for the query.
Try masking your parameters to prevent parameter sniffing:
CREATE PROCEDURE MyProc
    @Col1 INT
    -- etc...
AS
DECLARE @MaskedCol1 INT
SET @MaskedCol1 = @Col1
-- etc...
SELECT
    col1,
    col2…
FROM
    dbo.My_Table
WHERE
    col1 = @MaskedCol1 AND
    -- etc...
Sounds stupid, but I've seen SQL Server do some weird things because of parameter sniffing.
I bet SQL Server thinks the price of getting the rest of the columns (designated by ... in your example) from the clustered index outweighs the benefit of the index so it just scans the clustered key. If so, see if you can make this a covering index.
Or does it use another index instead?
Are the columns nullable? Sometimes SQL Server thinks it has to scan the table to find NULL values.
Try adding "AND col1 IS NOT NULL" to the query; it might make SQL Server use the index without the hint.
Also, check if the statistics are really up to date:
SELECT
    object_name = OBJECT_NAME(ind.object_id),
    IndexName = ind.name,
    StatisticsDate = STATS_DATE(ind.object_id, ind.index_id)
FROM sys.indexes ind
ORDER BY STATS_DATE(ind.object_id, ind.index_id) DESC
If your SELECT is returning columns that aren't in your index, SQL Server may find that it's more efficient to scan the clustered index instead of doing a key lookup to find the other values you are requesting.
If you have a TEXT column, try switching the data type to VARCHAR(MAX) and then including the values in the nonclustered index.
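A sketch of that change, assuming the TEXT column is called notes (a hypothetical name standing in for the real column):

ALTER TABLE dbo.My_Table ALTER COLUMN notes VARCHAR(MAX);  -- 'notes' stands in for the TEXT column

CREATE NONCLUSTERED INDEX IX_My_Table_covering
    ON dbo.My_Table (col1, col2, col3)
    INCLUDE (notes);  -- VARCHAR(MAX) can be an included column; TEXT cannot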