In a SQL Server 2008 database we have a table with 10 columns. In a web application the user interface is designed to allow the user to specify search criteria on some or all of the columns. The web application calls a stored procedure which dynamically builds a SQL statement with only the specified criteria in the WHERE clause, then executes the query using sp_executesql.
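For illustration, a minimal sketch of that pattern might look like this (the table, column and parameter names are invented and only two criteria are shown); each condition is appended only when its parameter is supplied, and the values are passed to sp_executesql as parameters rather than concatenated into the string:

CREATE PROCEDURE dbo.SearchProducts
    @Name     nvarchar(100) = NULL,
    @Category int           = NULL
AS
BEGIN
    DECLARE @sql nvarchar(max);
    SET @sql = N'SELECT * FROM dbo.Products WHERE 1 = 1';

    -- Append only the criteria the caller actually supplied.
    IF @Name IS NOT NULL
        SET @sql = @sql + N' AND Name = @Name';
    IF @Category IS NOT NULL
        SET @sql = @sql + N' AND Category = @Category';

    -- Values travel as parameters, not as concatenated literals.
    EXEC sp_executesql @sql,
         N'@Name nvarchar(100), @Category int',
         @Name = @Name, @Category = @Category;
END;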
What is the best way to index these columns? We currently have 10 indexes, each on a different column. Should we have 1 index with all 10 columns, or some other combination?
The bible on optimizing dynamic search queries was written by SQL Server MVP Erland Sommarskog:
http://www.sommarskog.se/dyn-search.html
For SQL Server 2008 specifically:
http://www.sommarskog.se/dyn-search-2008.html
There is a lot of information to digest here, and what you ultimately decide will depend on how the queries are formed. Are there certain parameters that are always searched on? Are there certain combinations of parameters that are usually requested together? Can you really afford to create an index on every column (remember that not all of them will necessarily be used even if multiple columns are mentioned in the WHERE clause, and additional indexes are not "free" - you pay for them in maintenance)?
A compound index can only be used when the leftmost key is specified in the search condition. If you have an index on (A, B, C), it can be used to search WHERE A = @a, WHERE A = @a AND B = @b, WHERE A = @a AND C = @c, or WHERE A = @a AND B = @b AND C = @c. But it cannot be used if the leftmost key is not specified: WHERE B = @b or WHERE C = @c cannot use this index. Therefore 10 indexes, each on a single column, can each serve one specific user criterion, but 1 index on all 10 columns will only be useful if the user includes a criterion on the first column, and useless in all other cases. At least this is the 10,000 ft answer; there are more details if you start to dig into it.
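To make the leftmost-key rule concrete (table and column names are made up):

DECLARE @a int, @b int, @c int;
CREATE INDEX IX_T_ABC ON dbo.T (A, B, C);

-- These can seek on the index, because the leftmost key A is specified:
SELECT * FROM dbo.T WHERE A = @a;
SELECT * FROM dbo.T WHERE A = @a AND B = @b;
SELECT * FROM dbo.T WHERE A = @a AND C = @c;  -- seek on A, residual filter on C

-- These cannot use the index, because A is missing:
SELECT * FROM dbo.T WHERE B = @b;
SELECT * FROM dbo.T WHERE C = @c;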
For a comprehensive discussion of your problem and possible solutions, see Dynamic Search Conditions in T-SQL.
It completely depends on what the data are: how well they index (e.g. an index on a column with only two values isn't going to help you much), how likely they are to be searched on, and how likely they are to be searched on together.
In particular, if column A is queried a lot, and column B tends only to be queried when also querying column A, a compound index over (A, B) will make queries that search for particular values of both columns very fast, and also give you the benefits of a single index on A (but not B) for free.
It is possible that one index per column makes sense for your data, but more likely not. There will probably be a better trade-off taking into account the nature of your data and schema.
Personally I would not bother using a stored procedure to create dynamic SQL. There are no performance benefits compared to doing it in whatever server-side scripting language you're using in the webapp itself, and the language you're writing the webapp in will almost always have more flexible, readable and secure string handling functions than SQL does. Generating SQL strings in SQL itself is an exercise in pain; you will almost certainly get some escaping wrong somewhere and give yourself an SQL-injection security hole.
One index per column. The problem is that you have no clue about the queries, and this is the most generic approach.
In my experience, combined indexes do make queries faster, but in this case you can't cover every possible combination.
I would suggest doing some usage testing to determine which combinations are used most frequently. Then focus on indexes that combine those columns. If the most frequent combinations are:
C1, C2, C3
C1, C2, C5
... then make a combined index on C1 and C2.
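For example (the table name here is assumed):

CREATE INDEX IX_MyTable_C1_C2 ON dbo.MyTable (C1, C2);
-- or, if C3 is also included often enough to be worth covering:
CREATE INDEX IX_MyTable_C1_C2_C3 ON dbo.MyTable (C1, C2, C3);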
Firstly, I should point out I don't have much knowledge of SQL Server indexes.
My situation is that I have an SQL Server 2008 database table that has a varchar(max) column usually filled with a lot of text.
My ASP.NET web application has a search facility which queries this column for keyword searches, and depending on the number of keywords searched for there may be one or many LIKE '%keyword%' conditions in the SQL query.
My web application also allows searching by various other columns in this table, not just that one column. There are also a few joins from other tables.
My question is, is it worthwhile creating an index on this column to improve performance of these search queries? And if so, what type of index, and will just indexing the one column be enough or do I need to include other columns such as the primary key and other searchable columns?
The best analogy I've ever seen for why an index won't help '%wildcard%' searches:
Take two people. Hand each one the same phone book. Say to the person on your left:
Tell me how many people are in this phone book with the last name "Smith."
Now say to the person on your right:
Tell me how many people are in this phone book with the first name "Simon."
An index is like a phone book. Very easy to seek for the thing that is at the beginning. Very difficult to scan for the thing that is in the middle or at the end.
Every time I've repeated this in a session, I see light bulbs go on, so I thought it might be useful to share here.
You cannot create an index on a varchar(max) column. The maximum size of an index key is 900 bytes. If the column is defined as larger than 900 bytes (but not max), you can still create the index, but any insert with more than 900 bytes in that column will fail.
I suggest you read about full-text search. It should suit your needs in this case.
It's not worthwhile creating a regular index if you're doing LIKE '%keyword%' searches. The reason is that indexing works like searching a dictionary, where you start in the middle and then split the difference until you find the word. That wildcard query is like being asked to look up a word that contains the text "to" somewhere: the only way to find matches is to scan the whole dictionary.
You might consider a full-text search, however, which is meant for this kind of scenario (see here).
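As a rough sketch of what the full-text route involves (catalog, table and column names are placeholders, and note that full-text matching is word-based, so its results are not identical to a substring LIKE):

-- One-time setup: a full-text catalog and an index on the searched column.
-- Assumes dbo.Articles already has a unique index named PK_Articles on its key column.
CREATE FULLTEXT CATALOG SearchCatalog AS DEFAULT;

CREATE FULLTEXT INDEX ON dbo.Articles (Body)
    KEY INDEX PK_Articles
    WITH CHANGE_TRACKING AUTO;

-- The keyword search then becomes a CONTAINS predicate instead of LIKE '%keyword%':
SELECT ArticleId, Title
FROM dbo.Articles
WHERE CONTAINS(Body, N'keyword');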
The best way to find out is to create a bunch of test queries that resemble what would happen in real life and try to run them against your DB with and without the index. However, in general, if you are doing many SELECT queries and few UPDATE/DELETE queries, an index might make your queries faster.
However, if you do a lot of updates, the index might hurt your performance, so you have to know what kind of queries your DB will have to deal with before you make this decision.
I am new to SQL Server.
My manager has given me a job where I have to find out the performance of a query in SQL Server 2008.
The query is very complex, with joins between views and tables. I read on the internet that joins between views and tables hurt performance - is that true?
Can any expert help me with this? Is there a good link where I can learn about this? How do I measure query performance in SQL Server?
Look at the query plan - it could be anything. Missing indexes on one of the underlying view tables, missing indexes on the join table or something else.
Take a look at this article by Gail Shaw about finding performance issues in SQL Server - part 1, part 2.
A view (that isn't indexed/materialised) is just a macro: no more, no less
That is, it expands into the outer query. So a join with 3 views (with 4, 5, and 6 joins respectively) becomes a single query with 15 JOINs.
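A simplified illustration of that expansion (all names invented):

-- The view...
CREATE VIEW dbo.vOrderDetails AS
SELECT o.OrderId, o.CustomerId, l.ProductId, l.Quantity
FROM dbo.Orders AS o
JOIN dbo.OrderLines AS l ON l.OrderId = o.OrderId;
GO

-- ...means that this query...
SELECT v.OrderId, c.Name
FROM dbo.vOrderDetails AS v
JOIN dbo.Customers AS c ON c.CustomerId = v.CustomerId;

-- ...is compiled as if you had written the underlying joins yourself:
SELECT o.OrderId, c.Name
FROM dbo.Orders AS o
JOIN dbo.OrderLines AS l ON l.OrderId = o.OrderId
JOIN dbo.Customers AS c ON c.CustomerId = o.CustomerId;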
This is a common "I can reuse it" mistake: DRY doesn't usually apply to SQL code.
Otherwise, see Oded's answer
Oded is correct, you should definitely start with the query plan. That said, there are a couple of things that you can already see at a high level, for example:
CONVERT(VARCHAR(8000), CN.Note) LIKE '%T B9997%'
LIKE searches with a wildcard at the front are bad news in terms of performance, because of the way indexes work. If you think about it, it's easy to find all people in the phone book whose name starts with "Smi". If you try to find all people who have "mit" anywhere in their name, you will find that you need to read the entire phone book. SQL Server does the same thing - this is called a full table scan, and is typically quite slow. The other issue is that the left side of the condition uses a function to modify the column (specifically, converting it to a varchar). Essentially, this again means that SQL Server cannot use an index, even if there was one for the column CN.Note.
My guess is that the column is a text column, and that you will not be allowed to change the filter logic to remove the wildcard at the beginning of the search. In this case, I would recommend looking into Full-Text Search / Indexing functionality. By enabling full text indexing, and using specific keywords such as CONTAINS, you should get better performance.
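For example, once a full-text index exists on CN.Note, the predicate could be rewritten along these lines (the table and key column names behind the CN alias are assumptions, and a phrase match is word-based rather than a raw substring match):

-- Instead of: CONVERT(VARCHAR(8000), CN.Note) LIKE '%T B9997%'
SELECT CN.NoteId, CN.Note
FROM dbo.CustomerNotes AS CN
WHERE CONTAINS(CN.Note, N'"T B9997"');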
Again (as with all performance optimisation scenarios), you should still start with the query plan to see if this really is the biggest problem with the query.
I am creating a new database, which I am basically designing for logging/history purposes. I'll make around 8-10 tables in this database, which will hold the data that I'll retrieve to show history information to the user.
I am creating the database in SQL Server 2005 and I can see that there is a check box labeled "Use full Indexing". I am not sure whether to check it or leave it unchecked. As I am not very familiar with databases, can you tell me whether checking it will improve the retrieval performance of my database?
I think that is the check box for FULLTEXT indexing.
You turn it on only if you plan to do some natural language queries or a lot of text-based queries.
See here for a description of what it is used to support.
http://msdn.microsoft.com/en-us/library/ms142571.aspx
From that base link, you can follow through to http://msdn.microsoft.com/en-us/library/ms142547.aspx (amongst others). This quote is interesting:
Comparison of LIKE to Full-Text Search
In contrast to full-text search, the LIKE Transact-SQL predicate works on character patterns only. Also, you cannot use the LIKE predicate to query formatted binary data. Furthermore, a LIKE query against a large amount of unstructured text data is much slower than an equivalent full-text query against the same data. A LIKE query against millions of rows of text data can take minutes to return; whereas a full-text query can take only seconds or less against the same data, depending on the number of rows that are returned.
There is a cost for this, of course, which is in the storage of the patterns and relationships between the words in each record. It is really useful if you are storing articles, for example, where you want to enable searching for "contains A, B and C". The equivalent LIKE pattern would be complicated and extremely slow to process: LIKE '%A%B%C%' OR LIKE '%B%A%C%' OR ... and so on for every permutation of the order in which A, B and C appear.
I am trying to visualize how to create a search for an application that we are building. I would like a suggestion on how to approach 'searching' through large sets of data.
For instance, this particular search would be against a table of at least 750k records, containing product SKUs, sizing, material type, create date, etc.
Is anyone aware of a 'plugin' solution for ColdFusion to do this? I envision a Google-like single-entry search where a customer can type in the part number, or the sizing, etc., and get hits on any or all relevant results.
Currently if I run a 'LIKE' comparison query, it seems to take ages (OK, a few seconds, but still), and that is too long - at times making a user sit there and wait up to 10 seconds for queries and page loads.
Or are there any SQL techniques to help accomplish this? I want to use a proven method to search the data, not just a simple SQL LIKE or = comparison operation.
So this is a multi-approach question: should I attack this at the SQL level (as it ultimately looks like I should), or is there a plug-in/module for ColdFusion that I can grab that will give me speedy, advanced search capability?
You could try indexing your db records with a Verity (or Solr, if CF9) search.
I'm not sure it would be faster, and whether even trying it would be worthwhile depends a lot on how often you update the records you need to search. If you update them rarely, you could do a Verity index update whenever you update them. If you update the records constantly, that's going to be a drag on the web server, and will certainly mitigate any possible gains in search speed.
I've never indexed a database via Verity, but I've indexed large collections of PDFs, Word Docs, etc, and I recall the search being pretty fast. I don't know if it will help your current situation, but it might be worth further research.
If your slowdown is specifically the search of textual fields (as I surmise from your mention of LIKE), the best solution is building an index table (not to be confused with DB table indexes, which are also part of the answer).
Build an index table mapping the unique ID of the records in your main table to a set of words (1 word per row) from the textual field. If it matters, add the field of origin as a 3rd column in the index table, and if you want "relevance" features you may want to consider a word count as well.
Populate the index table with either a trigger (that does the splitting) or from your app - the latter might be better: simply call a stored proc with both the actual data to insert/update and the list of words already split up.
This will immediately and drastically speed up textual search, as it will no longer do "LIKE", and will be able to use indexes on the index table (no pun intended) without interfering with the indexes on SKU and the like on the main table.
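A minimal sketch of such an index table and the lookup it enables (all names are invented; the word splitting itself happens in the trigger or the application, as described above):

-- One row per (record, word) pair; the primary key doubles as the search index.
CREATE TABLE dbo.ProductWordIndex (
    ProductId   int          NOT NULL,
    Word        varchar(100) NOT NULL,
    SourceField varchar(50)  NOT NULL,  -- optional 3rd column: which field the word came from
    CONSTRAINT PK_ProductWordIndex PRIMARY KEY (Word, ProductId, SourceField)
);

-- A keyword search is now an index seek on Word instead of LIKE '%keyword%' on the main table:
SELECT DISTINCT p.*
FROM dbo.Products AS p
JOIN dbo.ProductWordIndex AS w ON w.ProductId = p.ProductId
WHERE w.Word = 'widget';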
Also, ensure that all the relevant fields are indexed fully - not necessarily in the same compound index (SKU, sizing etc.), and any field that is searched as a range (sizing or date) is a good candidate for a clustered index (as long as the records are inserted in approximately increasing order of that field, or you don't care about insert/update speed as much).
For anything more detailed, you will need to post your table structure, existing indexes, the queries that are slow, and the query plans you have now for those slow queries.
Another item is to ensure that as few of the fields as possible are textual, especially ones that are "decodable" - your comment mentioned "is it boxed" among the text fields. If so, I assume the values are "yes"/"no" or some other very limited set; in that case, simply store a numeric code for the valid values, do the encoding/decoding in your app, and search by the numeric code. Not a tremendous speed improvement, but still an improvement.
I've done this using SQL Server's full-text indexes. This requires very few application changes and no changes to the database schema except for the addition of the full-text index.
First, add the Full Text index to the table. Include in the full text index all of the columns the search should perform against. I'd also recommend having the index auto update; this shouldn't be a problem unless your SQL Server is already being highly taxed.
Second, to do the actual search, you need to convert your query to use a full text search. The first step is to convert the search string into a full text search string. I do this by splitting the search string into words (using the Split method) and then building a search string formatted as:
"Word1*" AND "Word2*" AND "Word3*"
The double-quotes are critical; they tell the full text index where the words begin and end.
Next, to actually execute the full text search, use the ContainsTable command in your query:
SELECT *
FROM CONTAINSTABLE(Bugs, *, '"Word1*" AND "Word2*" AND "Word3*"')
This will return two columns:
Key - The value of the column identified as the unique key of the full-text index
Rank - A relative rank of the match (0 - 1000, with a higher rank meaning a better match).
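In practice you would normally join this result back to the base table and sort by the rank, something along these lines (the BugId key column is an assumption):

SELECT b.*, ft.[RANK]
FROM CONTAINSTABLE(Bugs, *, '"Word1*" AND "Word2*" AND "Word3*"') AS ft
JOIN Bugs AS b ON b.BugId = ft.[KEY]  -- BugId is assumed to be the full-text key column
ORDER BY ft.[RANK] DESC;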
I've used approaches similar to this many times and I've had good luck with it.
If you want a truly plug-in solution then you should just go with Google itself. It sounds like you're doing some kind of e-commerce or commercial site (given the use of the term 'SKU'), so you probably have a catalog of some kind with product pages. If you have consistent markup then you can configure a Google appliance or service to do exactly what you want. It will send a bot in to index your pages and find your fields. No SQL, little coding, and it will not be dependent on your database, or even ColdFusion. It will also be quite fast and familiar to customers.
I was able to do this with a ColdFusion site in about 6 hours, done! The only thing to watch out for is that Google's index is limited to what the bot can see, so if you have a situation where you want to limit access based on a user's role, permissions or group, then it may not be the solution for you (although you can configure a permission service for Google to check with).
Because SQL Server is where your data is, that is where your search performance issues are likely to be. Make sure you have indexes on the columns you are searching on. If you use LIKE, an index cannot be used for a query like this: SELECT * FROM TABLEX WHERE last_name LIKE '%FR%'
But it can use an index if you write it like this: SELECT * FROM TABLEX WHERE last_name LIKE 'FR%'. The key here is to keep as many of the leading characters as possible free of wildcards.
Here is a link to a site with some general tips. https://web.archive.org/web/1/http://blogs.techrepublic%2ecom%2ecom/datacenter/?p=173
This has been bugging me for a while and I'm hoping that one of the SQL Server experts can shed some light on it.
The question is:
When you index a SQL Server column containing a UDT (CLR type), how does SQL Server determine what index operation to perform for a given query?
Specifically I am thinking of the hierarchyid (AKA SqlHierarchyID) type. The way Microsoft recommends that you use it - and the way I do use it - is:
Create an index on the hierarchyid column itself (let's call it ID). This enables a depth-first search, so that when you write WHERE ID.IsDescendantOf(@ParentID) = 1, it can perform an index seek.
Create a persisted computed Level column and create an index on (Level, ID). This enables a breadth-first search, so that when you write WHERE ID.GetAncestor(1) = @ParentID, it can perform an index seek (on the second index) for this expression.
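For reference, that two-index setup looks roughly like this (the table name is a placeholder; ID and Level are as described above):

-- Depth-first: index the hierarchyid column itself.
CREATE INDEX IX_Org_ID ON dbo.Org (ID);

-- Breadth-first: persisted computed Level column, then an index on (Level, ID).
ALTER TABLE dbo.Org ADD [Level] AS ID.GetLevel() PERSISTED;
GO
CREATE INDEX IX_Org_Level_ID ON dbo.Org ([Level], ID);
GO

-- Both of these can then be satisfied with index seeks:
DECLARE @ParentID hierarchyid;
SELECT * FROM dbo.Org WHERE ID.IsDescendantOf(@ParentID) = 1;  -- depth-first
SELECT * FROM dbo.Org WHERE ID.GetAncestor(1) = @ParentID;     -- breadth-first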
But what I don't understand is how is this possible? It seems to violate the normal query plan rules - the calls to GetAncestor and IsDescendantOf don't appear to be sargable, so this should result in a full index scan, but it doesn't. Not that I am complaining, obviously, but I am trying to understand if it's possible to replicate this functionality on my own UDTs.
Is hierarchyid simply a "magical" type that SQL Server has a special awareness of, and automatically alters the execution plan if it finds a certain combination of query elements and indexes? Or does the SqlHierarchyID CLR type simply define special attributes/methods (similar to the way IsDeterministic works for persisted computed columns) that are understood by the SQL Server engine?
I can't seem to find any information about this. All I've been able to locate is a paragraph stating that the IsByteOrdered property makes things like indexes and check constraints possible by guaranteeing one unique representation per instance; while this is somewhat interesting, it doesn't explain how SQL Server is able to perform a seek with certain instance methods.
So the question again - how do the index operations work for types like hierarchyid, and is it possible to get the same behaviour in a new UDT?
The query optimizer team is trying to handle scenarios that don't change the order of things. For example, cast(someDateTime as date) is still sargable. I'm hoping that as time continues, they fix up a bunch of old ones, such as dateadd/datediff with a constant.
So... handling Ancestor is effectively like using the LIKE operator with the start of a string. It doesn't change the order, and you can still get away with stuff.
You are correct - HierarchyId and Geometry/Geography are both "magical" types that the Query Optimizer is able to recognize and rewrite the plans for in order to produce optimized queries - it's not as simple as just recognizing sargable operators. There is no way to simulate equivalent behavior with other UDTs.
For HierarchyId, the binary serialization of the type is special in order to represent the hierarchical structure in a binary-ordered fashion. It is similar to the mechanism used by the SQL Xml type and described in a research paper, ORDPATHs: Insert-Friendly XML Node Labels. So while the QO rules that translate queries using IsDescendantOf and GetAncestor are special, the actual underlying index is a regular relational index on the binary hierarchyid data, and you could achieve the same behavior if you were willing to write your original queries to do range seeks instead of calling the simple methods.
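As an illustration of the kind of range seek that last sentence refers to, something like the following could stand in for ID.IsDescendantOf(@Parent) = 1 (this is my own sketch, not from the answer; dbo.Org and the sample value are placeholders, and it assumes @Parent is not the root, since the upper bound is generated as a later sibling of @Parent, which sorts after every descendant of @Parent in the byte-ordered index):

DECLARE @Parent hierarchyid = hierarchyid::Parse('/1/2/');

-- A generated sibling of @Parent that is greater than @Parent; in depth-first
-- (byte) order it therefore sorts after all of @Parent's descendants.
DECLARE @UpperBound hierarchyid = @Parent.GetAncestor(1).GetDescendant(@Parent, NULL);

-- Range seek on a plain index over ID, equivalent to ID.IsDescendantOf(@Parent) = 1:
SELECT *
FROM dbo.Org
WHERE ID >= @Parent AND ID < @UpperBound;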