What columns generally make good indexes? - sql-server

As a follow-up to "What are indexes and how can I use them to optimise queries in my database?", where I am attempting to learn about indexes, what columns make good index candidates? Specifically for an MS SQL database?
After some googling, everything I have read suggests that columns that are generally increasing and unique make a good index (things like MySQL's auto_increment). I understand this, but I am using MS SQL and I am using GUIDs for primary keys, so it seems that indexes would not benefit GUID columns...

Indexes can play an important role in query optimization and in retrieving results quickly from tables. The most important step is selecting which columns to index. There are two major candidates: columns referenced in the WHERE clause and columns used in JOIN clauses. In short, you should index the columns against which you need to search for particular records. Suppose we have a table named buyers, where a SELECT query touches columns as below:
SELECT
buyer_id /* no need to index */
FROM buyers
WHERE first_name='Tariq' /* consider indexing */
AND last_name='Iqbal' /* consider indexing */
Since "buyer_id" is referenced only in the SELECT portion, MySQL will not use it to limit the chosen rows, so there is no great need to index it. Below is another example, slightly different from the one above:
SELECT
buyers.buyer_id, /* no need to index */
country.name /* no need to index */
FROM buyers LEFT JOIN country
ON buyers.country_id=country.country_id /* consider indexing */
WHERE
first_name='Tariq' /* consider indexing */
AND
last_name='Iqbal' /* consider indexing */
According to the above queries, the first_name and last_name columns can be indexed because they are located in the WHERE clause. An additional field, country_id from the country table, can also be considered for indexing because it is in a JOIN clause. So indexing can be considered for every field in a WHERE clause or a JOIN clause.
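As a rough sketch (the index names here are illustrative, and the table and column names follow the example above), those candidates could be created like this in SQL Server:
CREATE NONCLUSTERED INDEX IX_buyers_name
    ON buyers (last_name, first_name);   -- supports the WHERE clause
CREATE NONCLUSTERED INDEX IX_country_id
    ON country (country_id);             -- supports the JOIN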
The following list also offers a few tips that you should keep in mind when you intend to create indexes on your tables:
Only index columns that are required in WHERE and ORDER BY clauses. Indexing columns in abundance brings disadvantages (extra storage and slower writes).
Try to take advantage of MySQL's "index prefix" or "multi-column index" features. If you create a composite index such as INDEX(first_name, last_name), don't also create INDEX(first_name); the composite index already serves searches on its leading column (see the sketch after this list). However, index prefixes and multi-column indexes are not suitable for all search cases.
Use the NOT NULL attribute on columns you intend to index, so that NULL values are never stored.
Use the --log-long-format option to log queries that aren't using indexes. You can then examine this log file and adjust your queries accordingly.
The EXPLAIN statement helps you see how MySQL will execute a query: it shows how, and in what order, tables are joined. This is very useful for determining how to write optimized queries and whether columns need to be indexed.
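To illustrate the leftmost-prefix point from the tips above (a minimal sketch; table and column names are taken from the earlier example), a composite index already serves queries that filter on its leading column, so a separate single-column index on first_name would be redundant:
CREATE INDEX idx_name ON buyers (first_name, last_name);

-- Served by idx_name (leading column only):
SELECT buyer_id FROM buyers WHERE first_name = 'Tariq';

-- Served by idx_name (both columns):
SELECT buyer_id FROM buyers WHERE first_name = 'Tariq' AND last_name = 'Iqbal';

-- NOT served by the prefix (last_name is not the leading column):
SELECT buyer_id FROM buyers WHERE last_name = 'Iqbal';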
Update (23 Feb'15):
Any index (good or bad) increases insert and update time.
Search time depends on your indexes (how many there are and of what type). If an index makes your search time go up, it is a bad index.
Think of the index page of a book: it can list where each chapter starts, where each topic starts, and even where each sub-topic starts. Some detail in an index page helps, but an overly detailed one confuses or scares you off. Indexes also consume memory.
Select your indexes wisely; keep in mind that not every column requires an index.

Some folks answered a similar question here: How do you know what a good index is?
Basically, it really depends on how you will be querying your data. You want an index that quickly identifies a small subset of your dataset that is relevant to a query. If you never query by datestamp, you don't need an index on it, even if it's mostly unique. If all you do is get events that happened in a certain date range, you definitely want one. In most cases, an index on gender is pointless -- but if all you do is get stats about all males, and separately, about all females, it might be worth your while to create one. Figure out what your query patterns will be, and which parameter narrows the search space the most; that's your best index.
Also consider the kind of index you make -- B-trees are good for most things and allow range queries, but hash indexes get you straight to the point (but don't allow ranges). Other types of indexes have other pros and cons.
Good luck!

It all depends on what queries you expect to ask about the tables. If you ask for all rows with a certain value for column X, you will have to do a full table scan if an index can't be used.
Indexes will be useful if:
The column or columns have a high degree of uniqueness
You frequently need to look for a certain value or range of values for the column.
They will not be useful if:
You are selecting a large % (>10-20%) of the rows in the table
The additional space usage is an issue
You want to maximize insert performance. Every index on a table reduces insert and update performance because they must be updated each time the data changes.
Primary key columns are typically great for indexing because they are unique and are often used to lookup rows.

Any column that is going to be regularly used to extract data from the table should be indexed.
This includes:
foreign keys -
select * from tblOrder where status_id=:v_outstanding
descriptive fields -
select * from tblCust where Surname like "O'Brian%"
The columns do not need to be unique. In fact, you can get really good performance from an index on a yes/no (bit) column when searching for the exceptional value.
select * from tblOrder where paidYN='N'
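As an illustrative sketch (the index names are mine; the table and column names come from the queries above), those lookups could be supported with:
CREATE INDEX IX_tblOrder_status  ON tblOrder (status_id);
CREATE INDEX IX_tblCust_Surname  ON tblCust (Surname);
CREATE INDEX IX_tblOrder_paidYN  ON tblOrder (paidYN);
Note that the Surname search has a literal prefix before the % wildcard, so it can still use the Surname index.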

In general (I don't use mssql so can't comment specifically), primary keys make good indexes. They are unique and must have a value specified. (Also, primary keys make such good indexes that they normally have an index created automatically.)
An index is effectively a copy of the column which has been sorted to allow binary search (which is much faster than linear search). Database systems may use various tricks to speed up search even more, particularly if the data is more complex than a simple number.
My suggestion would be to not use any indexes initially and to profile your queries. If a particular query (such as searching for people by surname, for example) is run very often, try creating an index over the relevant attributes and profile again. If there is a noticeable speed-up on queries and a negligible slow-down on insertions and updates, keep the index.
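One minimal way to do that profiling in SQL Server (the table, column, and index names here are assumed for illustration) is to compare timings and I/O before and after creating the index:
SET STATISTICS TIME ON;
SET STATISTICS IO ON;

-- Baseline: expect a full table scan
SELECT * FROM People WHERE Surname = 'Smith';

CREATE NONCLUSTERED INDEX IX_People_Surname ON People (Surname);

-- Re-run and compare elapsed time and logical reads
SELECT * FROM People WHERE Surname = 'Smith';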
(Apologies if I'm repeating stuff mentioned in your other question, I hadn't come across it previously.)

It really depends on your queries. For example, if you almost only write to a table, then it is best not to have any indexes; they just slow down the writes and never get used. Any column you are using to join with another table is a good candidate for an index.
Also, read about the Missing Indexes feature. It monitors the actual queries being run against your database and can tell you which indexes would have improved their performance.
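For example, the missing-index information is exposed through dynamic management views that you can query directly; a hedged sketch (the DMV and column names are standard SQL Server 2005+, the ordering heuristic is mine):
SELECT  d.statement AS table_name,
        d.equality_columns,
        d.inequality_columns,
        d.included_columns,
        s.user_seeks,
        s.avg_user_impact
FROM    sys.dm_db_missing_index_details AS d
JOIN    sys.dm_db_missing_index_groups AS g
        ON d.index_handle = g.index_handle
JOIN    sys.dm_db_missing_index_group_stats AS s
        ON g.index_group_handle = s.group_handle
ORDER BY s.avg_user_impact * s.user_seeks DESC;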

Your primary key should always be an index. (I'd be surprised if it weren't automatically indexed by MS SQL, in fact.) You should also index columns you SELECT or ORDER by frequently; their purpose is both quick lookup of a single value and faster sorting.
The only real danger in indexing too many columns is slowing down changes to rows in large tables, as the indexes all need updating too. If you're really not sure what to index, just time your slowest queries, look at what columns are being used most often, and index them. Then see how much faster they are.

Numeric data types which are ordered in ascending or descending order are good indexes for multiple reasons. First, numbers are generally faster to evaluate than strings (varchar, char, nvarchar, etc). Second, if your values aren't ordered, rows and/or pages may need to be shuffled about to update your index. That's additional overhead.
If you're using SQL Server 2005 and are set on using uniqueidentifiers (GUIDs), and do NOT need them to be of a random nature, check out the NEWSEQUENTIALID() function, which generates sequential uniqueidentifier values.
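A minimal sketch (the table name is assumed); note that NEWSEQUENTIALID() can only be used in a DEFAULT constraint:
CREATE TABLE Orders
( OrderID UNIQUEIDENTIFIER NOT NULL
      CONSTRAINT DF_Orders_OrderID DEFAULT NEWSEQUENTIALID()
      CONSTRAINT PK_Orders PRIMARY KEY CLUSTERED
, OrderDate DATETIME NOT NULL
);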
Lastly, if you're talking about clustered indexes, you're talking about the sort of the physical data. If you have a string as your clustered index, that could get ugly.

A GUID column is not the best candidate for indexing. Indexes are best suited to columns with a data type that can be given some meaningful order, i.e. sorted (integer, date, etc).
It does not matter if the data in a column is generally increasing. If you create an index on the column, the index will create its own data structure that simply references the actual items in your table without concern for stored order (a non-clustered index). A binary search, for example, can then be performed over your index data structure to provide fast retrieval.
It is also possible to create a "clustered index" that will physically reorder your data. However you can only have one of these per table, whereas you can have multiple non-clustered indexes.
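To make the distinction concrete, a small sketch (all names here are assumed for illustration):
-- Only one clustered index per table: it defines the physical row order
CREATE CLUSTERED INDEX CIX_Events_EventDate ON Events (EventDate);

-- Any number of non-clustered indexes: separate structures referencing the rows
CREATE NONCLUSTERED INDEX IX_Events_UserId ON Events (UserId);
CREATE NONCLUSTERED INDEX IX_Events_Type   ON Events (EventType);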

The ol' rule of thumb was columns that are used a lot in WHERE, ORDER BY, and GROUP BY clauses, or any that seem to be used in joins frequently. Keep in mind I'm referring to indexes, NOT the primary key.
Not to give a 'vanilla-ish' answer, but it truly depends on how you are accessing the data.

It should be even faster if you are using a GUID.
Suppose you have the records
100
200
3000
....
If you have an index (binary search), you can find the physical location of the record you are looking for in O(lg n) time, instead of searching sequentially in O(n) time. This is because you don't know what records you have in your table.

Best index depends on the contents of the table and what you are trying to accomplish.
Take as an example a member database with a primary key of the member's Social Security Number. We choose the SSN because the application primarily refers to the individual in this way, but you also want to create a search function that will use the member's first and last name. I would then suggest creating an index over those two fields.
You should first find out what data you will be querying and then make the determination of which data you need indexed.

Related

Create more than one non clustered index on same column in SQL Server

What is the index creating strategy?
Is it possible to create more than one non-clustered index on the same column in SQL Server?
How about creating clustered and non-clustered on same column?
Very sorry, but indexing is very confusing to me.
Is there any way to find out the estimated query execution time in SQL Server?
The words are rather logical and you'll learn them quite quickly. :)
In layman's terms, SEEK implies seeking out precise locations for records, which is what SQL Server does when the column you're searching in is indexed and your filter (the WHERE condition) is accurate enough.
SCAN means a larger range of rows where the query execution planner estimates it's faster to fetch a whole range as opposed to individually seeking each value.
And yes, you can have multiple indexes on the same field, and sometimes it can be a very good idea. Play around with the indexes and use the query execution planner to determine what happens (shortcut in SSMS: Ctrl + M). You can even run two versions of the same query, and the execution planner will easily show you how many resources and how much time each one takes, making optimization quite easy.
But to expand on these a bit, say you have an address table like so, and it has over 1 billion records:
CREATE TABLE ADDRESS
( ADDRESS_ID INT          -- CLUSTERED primary key ADDRESS_PK_IDX
, PERSON_ID INT           -- FOREIGN KEY, NONCLUSTERED INDEX ADDRESS_PERSON_IDX
, CITY VARCHAR(256)
, MARKED_FOR_CHECKUP BIT
, ...                     /* + n^10 different other columns */
);
Now, if you want to find all the address information for person 12345, the index on PERSON_ID is perfect. Since the table has loads of other data on the same row, it would be inefficient and space-consuming to create a nonclustered index to cover all other columns as well as PERSON_ID. In this case, SQL Server will execute an index SEEK on the index in PERSON_ID, then use that to do a Key Lookup on the clustered index in ADDRESS_ID, and from there return all the data in all other columns on that same row.
However, say you want to search for all the persons in a city, but you don't need other address information. This time, the most effective way would be to create an index on CITY and use the INCLUDE option to cover PERSON_ID as well. That way, a single index seek / scan would return all the information you need without resorting to checking the CLUSTERED index for the PERSON_ID data on the same row.
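A sketch of that covering index (the index name is mine; the table comes from the example above):
CREATE NONCLUSTERED INDEX ADDRESS_CITY_IDX
    ON ADDRESS (CITY)
    INCLUDE (PERSON_ID);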
Now, let's say both of those queries are required but still rather heavy because of the 1 billion records. But there's one special query that needs to be really, really fast. That query wants all the persons at addresses that have been MARKED_FOR_CHECKUP, and who must live in New York (ignore whatever checkup means, that doesn't matter). Now you might want to create a third, filtered index on MARKED_FOR_CHECKUP and CITY, with INCLUDE covering PERSON_ID, and with a filter saying CITY = 'New York' and MARKED_FOR_CHECKUP = 1. This index would be insanely fast, as it only ever covers queries that satisfy those exact conditions, and therefore has a fraction of the data to go through compared to the other indexes.
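A hedged sketch of that filtered index (the name is again mine; note that filtered indexes require SQL Server 2008 or later):
CREATE NONCLUSTERED INDEX ADDRESS_NY_CHECKUP_IDX
    ON ADDRESS (CITY, MARKED_FOR_CHECKUP)
    INCLUDE (PERSON_ID)
    WHERE CITY = 'New York' AND MARKED_FOR_CHECKUP = 1;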
(Disclaimer here, bear in mind that the query execution planner is not stupid, it can use multiple nonclustered indexes together to produce the correct results, so the examples above may not be the best ones available as it's very hard to imagine when you would need 3 different indexes covering the same column, but I'm sure you get the idea.)
The types of index, their columns, included columns, sorting orders, filters etc. depend entirely on the situation. You will need to make covering indexes to satisfy several different types of queries, as well as customized indexes created specifically for singular, important queries. Each index takes up space on the HDD, so making useless indexes is wasteful: it requires extra maintenance whenever the data model changes, and wastes time in defragmentation and statistics-update operations... so you don't want to just slap an index on everything either.
Experiment, learn and work out which works best for your needs.
I'm not the expert on indexing either, but here is what I know.
You can have only ONE Clustered Index per table.
You can have up to a certain limit of non clustered indexes per table. Refer to http://social.msdn.microsoft.com/Forums/en-US/63ba3877-e0bd-4417-a04b-19c3bfb02ac9/maximum-number-of-index-per-table-max-no-of-columns-in-noncluster-index-in-sql-server?forum=transactsql
Indexes just need to have different names, but it's better not to use the same column(s) in a lot of different indexes, as you will run into performance problems.
A very important point to remember is that although indexes make your SELECTs faster, they influence your INSERT/UPDATE/DELETE speed, because the information needs to be added to the index as well. The more indexes you have on a column that gets updated a lot, the more drastically you reduce the speed of the update.
You can include columns that are used in a CLUSTERED index in one or more NON-CLUSTERED indexes.
Here is some more reading material
http://www.sqlteam.com/article/sql-server-indexes-the-basics
http://www.programmerinterview.com/index.php/database-sql/what-is-an-index/
EDIT
Another point to remember is that an index takes up space just like the table. The more indexes you create, the more space they use, so try not to use char/varchar (or nchar/nvarchar) columns in an index: they use too much space in the index, and on huge columns they give basically no benefit. When your indexes start to become bigger than your table, it is also a sign that you have to revisit your index strategy.

Will a covering index help if fields already indexed individually

In a SQL Server 2005 database, I have lots of tables like this Products table
ProductID (PK)
ProductCategoryID (IX)
Description
Price
ExpiryDate
BreakableYN
...where there is a primary key, a foreign key and then a bunch of other fields. Another characteristic of this type of table is that lots of queries only use the 2 ID fields (ProductID, ProductCategoryID), e.g. Employees JOIN EmployeeProductJoin JOIN Products JOIN ProductCategories JOIN ProductDepartments.
If ProductID and ProductCategoryID are already indexed, is it worth adding another index for ProductID, ProductCategoryID?
I know it seems that I'm asking if adding a covering index will help, but what I'm really asking is whether a covering index will help if the fields in that covering index are already indexed individually.
These are definition tables that are not huge, so I'm not worried about adding extra time to INSERTs etc.
Is the primary key clustered? If it is, then adding a new index will accomplish nothing, because the ProductCategoryID index will already contain the ProductID values, so it effectively "covers" both columns.
Yes it might. The point of a covering index is that a query can be served by the index alone, without having to access the table. So you include not only the fields on which you are searching but also the fields you want to return, and the query optimizer can avoid accessing the table at all.
You might not really mean "covering index" though...
Only the query plans (with and without the extra indices, and with tables containing realistic amounts and kinds of data) can tell you for sure if the extra indices will help; it's all about helping the query optimizer find a smarter plan, but you can only help so far, and it is conceivable that it may fail to find the plan you'd like (it's but a heuristic "let me try to optimize" engine, after all). That's why looking at query plans is so important (and you need to have realistic data, because that usually does influence the heuristics!).
In short Yes, it will improve query performance.
Using a covering index, all of the columns required in your query are present in the Index data structure. This means that SQL server need only query a single index in order to provide the results for your query.
Whereas when you have multiple columns that are indexed separately, SQL Server will more than likely have to perform a seek/scan of numerous indexes, as opposed to just the one, in order to serve the query. This of course potentially creates more I/O activity.
Make sense?
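For the Products example in the question, a hedged sketch of such a covering index (the index name is mine; if ProductID is the clustered key it is carried in every non-clustered index automatically, so the INCLUDE is mostly for clarity):
CREATE NONCLUSTERED INDEX IX_Products_Category_Covering
    ON Products (ProductCategoryID)
    INCLUDE (ProductID);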
It definitely may help, especially if your descriptions are large. It would be easy to benchmark and see for yourself. This new index may be much smaller than the clustered one.
But you only want to have this narrow index if you have highly important queries which you need to speed up no matter what.
Yes, it can help in one specific way. The idea of a covering index is that it has some redundant fields that you are using in queries. If the index can satisfy the data requirements of a query without the query having to hit the underlying table you can save on I/O by getting the data from the index.
Where you have two indexes as you show above, the DBMS would have to hit the table as well as resolve two index seeks.
If your query results are widely scattered on the table but belong together on the index you could potentially save quite a lot of I/O on a large query. In this way, covering indexes can also be used as a sort of 'second clustered index' on a table.

What are the best practices for creating indexes on multiple bit columns?

Good day,
In SQL Server 2005, I have a table with numerous columns, including a few boolean (bit) columns. For example, table 'Person' has column ID and columns HasItem1, HasItem2, HasItem3, HasItem4. This table is kinda large, so I would like to create indexes to get faster search results.
I know that it is not a good idea to create an index on a bit column, so I thought about using an index with all of the bit columns. However, the thing is, any of these bit columns may or may not be in the query. Since the order of the indexed columns is important in an index, and I don't know which ones will be used in the query, how should I handle this?
BTW, there is already a clustered index that I can't remove.
I would suggest that this is probably not a good idea. Trying to index fields with very low cardinality will generally not make queries faster and you have the overhead of maintaining the index as well.
If you generally search for one of your bit fields with another field then a composite index on the two fields would probably benefit you.
If you were to create a composite index on the bit fields then this would help but only if the composite fields at the beginning of the index were provided. If you do not include the 1st value within the composite index then the index will probably not be used at all.
If, as an example, bita was used in 90% of your queries, bitd in 70%, and bitb and bitc in 20%, then a composite index on (bita, bitd, bitb, bitc) would probably yield some benefit, but for at least 10% of your queries, and possibly up to 40%, the index would most likely not be used.
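A sketch of that composite index (the bita..bitd column names are the hypothetical ones above, and the index name is mine):
CREATE NONCLUSTERED INDEX IX_Person_Bits
    ON Person (bita, bitd, bitb, bitc);

-- Can seek: the leading column bita is present
SELECT ID FROM Person WHERE bita = 1 AND bitd = 0;

-- Probably cannot seek: the leading column is absent
SELECT ID FROM Person WHERE bitd = 0;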
The best advice is probably to try it with the same data volumes and data cardinality and see what the Execution plan says.
I don't know a lot of specifics on sql server, but in general indexing a column that has non-unique data is not very effective. In some RDBMS systems, the optimizer will ignore indexes that are less than a certain percent unique anyway, so the index may as well not even exist.
Using a composite, or multi-column, index can help, but only in particular cases where the filter constraints are in the same order that the index was built in. If your index includes 'field1, field2' and you are searching for 'field2, field1' or some other combination, the index may not be used. You could add an index for each of the particular search cases that you want to optimize; that is really all I can think of that you could do. And if your data is not very unique, even after considering all of the bit fields, the index may be ignored anyway.
For example, if you have 3 bit fields, you are only segmenting your data into 8 distinct groups. If you have a reasonable number of rows in the table, segmenting it by 8 isn't going to be very effective.
Odds are it will be easier for SQL to query the large table with person_id, item_id and a BitValue than it will be to search a single table with Item1, Item2, ... ItemN.
I don't know about 2005 but in SQL Server 2000 (From Books Online):
"Columns of type bit cannot have indexes on them."
How about using checksum?
Add an int field named mysum to your table and execute this:
UPDATE checksumtest SET mysum = CHECKSUM(hasitem1,hasitem2,hasitem3,hasitem4)
Now you have a value that represents the combination of bits.
Do the same checksum calc in your search query and match on mysum.
This may speed things up.
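A sketch of that matching search (the checksumtest name comes from the snippet above; mysum is assumed to be indexed, and since CHECKSUM values can collide, the actual bit values are re-checked; bit variables are used so the hash matches the stored bit columns):
DECLARE @b1 bit, @b2 bit, @b3 bit, @b4 bit;
SELECT @b1 = 1, @b2 = 0, @b3 = 1, @b4 = 1;

SELECT *
FROM checksumtest
WHERE mysum = CHECKSUM(@b1, @b2, @b3, @b4)   -- fast match on the indexed int
  AND hasitem1 = @b1 AND hasitem2 = @b2      -- re-verify in case of collisions
  AND hasitem3 = @b3 AND hasitem4 = @b4;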
You should revisit the design of your database. Instead of having a table with fields HasItem1 to HasItem#, you should create a bridge entity, and a master Items table if you don't have one. The bridge entity (table), person_items, would have (a minimum of) two fields: person_id and item_id.
Designing the database this way doesn't lock you in to a database that only handles N number of items based on column definitions. You can add as many items as you want to a master Items table, and associate as many of them as you need with as many people as you need.
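A minimal sketch of that bridge design (names follow the answer; the master Items table is assumed):
CREATE TABLE person_items
( person_id INT NOT NULL REFERENCES Person (ID)
, item_id   INT NOT NULL REFERENCES Items (item_id)
, PRIMARY KEY (person_id, item_id)
);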

Database indexes: Only selects!

Good day,
I have about 4GB of data, separated in about 10 different tables. Each table has a lot of columns, and each column can be a search criteria in a query. I'm not a DBA at all, and I don't know much about indexes, but I want to speed up the search as much as possible. The important point is, there won't be any update, insert or delete at any moment (the tables are populated once every 4 months). Is it appropriate to create an index on each and every column? Remember: no insert, update or delete, only selects!
Also, if I can make all of these columns integer instead of varchar, would it make a difference in speed?
Thank you very much!
Answer: No. Indexing every column separately is not good design. Indexes need to comprise multiple columns in many cases, and there are different types of indexes for different requirements.
The tuning wizard mentioned in other answers is a good first cut (esp. for a learner).
Don't try to guess your way through it, or hope you understand complex analyses - get advice specific to your situation. We seem to have several threads going here that are quite active for specific situations and query optimization.
Have you looked at running the Index Tuning Wizard? It will give you suggestions for indexes based on a workload.
Absolutely not.
You have to understand how indexes work. Say you have a table of 1,000 records and you index a BIT column that can hold only one of two values: an index on that column and that column alone will be worthless, because it is not selective enough. When you create an index on a column, be very cognizant of what types of selects are going to be done on the table: will that index be selective enough for the optimizer to use effectively?
To that point, you may very well find that a few carefully selected composite indexes will vastly outperform the solution of many single indexes on each column. The golden rule: how the database is queried will determine how you should make your indexes.
Two pieces of missing information: how many distinct values are in each column, and which DBMS you're using. If you're using Oracle and have less than a few thousand distinct values per column, you can create bitmap indexes. These are very space- and execution-efficient for exact matches.
Otherwise, it's a tradeoff: each index will add roughly the same amount of space as a one-column table containing the same data, so you'll essentially double (probably 2.5x) your space requirements. So maybe 10G, which isn't a whole lot of data.
Then there's the question of whether your DBMS will efficiently merge multiple index-based selects. It's quite possible that it won't, unless you do self-joins for every column that you're selecting against.
Best answer: try it on a smaller dataset (so that you're not spending all your time building the indexes) and see how it works.
If you are selecting a set of columns from the table greater than those covered by the columns in the selected indexes, then you will inevitably incur a bookmark lookup in the query plan, which is where the query processor has to retrieve the non-covered columns from the clustered index using the reference ID from leaf rows in the associated non-clustered index.
In my experience, bookmark lookups can really kill query performance, due to the volume of extra reads required and the fact that each row in the clustered index has to be resolved individually. This is why I try to make NC indexes covering wherever possible, which is easier on smaller tables where the required query plans are well-known, but if you have large tables with lots of columns with arbitrary queries expected then this probably won't be feasible.
This means you only get bang for your buck with an NC index of any kind, if the index is covering, or selects a small-enough data set that the cost of a bookmark lookup is mitigated - indeed, you may find that the query optimizer won't even look at your indexes if the cost is prohibitive compared to a clustered index scan, where all the columns are already available.
So there is no point in creating an index unless you know that index will optimize the result of a given query. The value of an index is therefore proportional to the percentage of queries that it can optimize for a given table, and this can only be determined by analyzing the queries that are being executed, which is exactly what the Index Tuning Wizard does for you.
so in summary:
1) Don't index every column. This is classic premature optimization. You cannot optimize a large table with indexes for all possible query plans in advance.
2) Don't index any column, until you have captured and run a base workload through the Index Tuning Wizard. This workload needs to be representative of the usage patterns of your application, so that the wizard can determine what indexes would actually help the performance of your queries.

How do you know what a good index is?

When working with tables in Oracle, how do you know when you are setting up a good index versus a bad index?
This depends on what you mean by 'good' and 'bad'. Basically you need to realise that every index you add will increase performance on any search by that column (so adding an index to the 'lastname' column of a person table will increase performance on queries that have "where lastname = " in them) but decrease write performance across the whole table.
The reason for this is when you add or update a row, it must add-to or update both the table itself and every index that row is a member of. So if you have five indexes on a table, each addition must write to six places - five indexes and the table - and an update may be touching up to six places in the worst case.
Index creation is a balancing act then between query speed and write speed. In some cases, such as a datamart that is only loaded with data once a week in an overnight job but queried thousands of times daily, it makes a great deal of sense to overload with indexes and speed the queries up as much as possible. In the case of online transaction processing systems however, you want to try and find a balance between them.
So in short, add indexes to columns that are used a lot in select queries, but try to avoid adding too many and so add the most-used columns first.
After that it's a matter of load testing to see how the performance reacts under production conditions, and a lot of tweaking to find an acceptable balance.
Fields that are diverse, highly specific, or unique make good indexes. Such as dates and timestamps, unique incrementing numbers (commonly used as primary keys), person's names, license plate numbers, etc...
A counterexample would be gender - there are only two common values, so the index doesn't really help reduce the number of rows that must be scanned.
Full-length descriptive free-form strings make poor indexes, as whoever is performing the query rarely knows the exact value of the string.
Linearly-ordered data (such as timestamps or dates) are commonly used as a clustered index, which forces the rows to be stored in index order, and allows in-order access, greatly speeding range queries (e.g. 'give me all the sales orders between October and December'). In such a case the DB engine can simply seek to the first record specified by the range and start reading sequentially until it hits the last one.
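A hedged sketch of that clustered range-query pattern (the table and column names are assumed):
-- Physically orders rows by date, so range scans become sequential reads
CREATE CLUSTERED INDEX CIX_SalesOrders_OrderDate
    ON SalesOrders (OrderDate);

-- 'Give me all the sales orders between October and December'
SELECT *
FROM SalesOrders
WHERE OrderDate >= '2008-10-01'
  AND OrderDate <  '2009-01-01';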
#Infamous Cow -- you must be thinking of primary keys, not indexes.
#Xenph Yan --
Something others have not touched on is choosing what kind of index to create. Some databases don't really give you much of a choice, but some have a large variety of possible indexes. B-trees are the default but not always the best kind of index. Choosing the right structure depends on the kind of usage you expect to have. What kind of queries do you need to support most? Are you in a read-mostly or write-mostly environment? Are your writes dominated by updates or appends? Etc, etc.
A description of the different types of indexes and their pros and cons is available here: http://20bits.com/2008/05/13/interview-questions-database-indexes/ .
Here's a great SQL Server article:
http://www.sql-server-performance.com/tips/optimizing_indexes_general_p1.aspx
Although the mechanics won't work on Oracle, the tips are very apropos (minus the thing on clustered indexes, which don't quite work the same way in Oracle).
Some rules of thumb if you are trying to improve a particular query.
For a particular table (where you think Oracle should start) try indexing each of the columns used in the WHERE clause. Put columns with equality first, followed by columns with a range or like.
For example:
WHERE CompanyCode = ? AND Amount BETWEEN 100 AND 200
If columns are very large in size (e.g. you are storing some XML or something) you may be better off leaving them out of the index. This will make the index smaller to scan, assuming you have to go to the table row to satisfy the select list anyway.
Alternatively, if all the values in the SELECT and WHERE clauses are in the index Oracle will not need to access the table row. So sometimes it is a good idea to put the selected values last in the index and avoid a table access all together.
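A sketch of that equality-first column ordering (the table and index names are assumed; the syntax is generic SQL and works in Oracle):
-- CompanyCode (equality predicate) leads; Amount (range predicate) follows
CREATE INDEX IX_ORDERS_COMPANY_AMOUNT
    ON ORDERS (CompanyCode, Amount);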
You could write a book about the best ways to index - look for author Jonathan Lewis.
A good index is something that you can rely on to be unique for a specific table row.
One commonly used index scheme is the use of numbers which increment by 1 for each row in the table. Every row will end up having a different number index.
