I have a table with 5-10 million records which has 2 fields.
Example data:
Row  Field1    Field2
----------------------
1    0712334   072342344
2    06344534  083453454
3    06344534  0845645565
Given 2 variables
variable1 : 0634453445645
variable2 : 08345345456756
I need to be able to query the table for best matches as fast as possible
The above example would produce 1 record (i.e. row 2).
What would be the fastest way to query the database for matches?
Note: the data and variables are always in this format (i.e. always a number, which may or may not have a leading zero; the fields are not unique, however the combination of both will be).
My initial thought was to do something like this
Select blah where @variable1 like Field1 + '%' and @variable2 like Field2 + '%'
Please forgive my pseudo-code if it's not correct, as this is more a fact-finding mission. However I think I'm in the ball park.
Note : I don't think any indexing can help here, though a memory-based table I'm guessing would speed this up.
Can anyone think of a better way of solving the problem?
You can get a plan with a seek on an index on Field1 with a query like this.
declare @V1 varchar(20) = '0634453445645'
declare @V2 varchar(20) = '08345345456756'
select Field1,
       Field2
from YourTable
where Field1 like left(@V1, 4) + '%' and
      @V1 like Field1 + '%' and
      @V2 like Field2 + '%'
It does a range seek on the first four characters of Field1 and uses the full comparisons on Field1 and Field2 in a residual predicate.
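For completeness, a minimal sketch of an index that would support this plan (the table and column names are taken from the query above; the index name is made up):
create index IX_YourTable_Field1 on YourTable (Field1) include (Field2);
The seek range derived from left(@V1, 4) narrows the read to a small slice of that index, and the two remaining LIKE predicates filter it down to the actual matches.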
There is no performance tip. Simple as that.
%something% means a table scan; indexes are not used because of the leading %. Full-text indexing won't work either, as it is not a full word you seek but part of a word.
Getting a faster machine to handle the table scans and denormalizing is the only thing you can do. 5-10 million rows should be fast enough on a decent computer. A memory-based table is not needed - just enough RAM to cache that table.
And that pretty much is it. Either find a way to get rid of the initial % or get hardware (mostly memory) fast enough to handle this.
OR - handle it OUTSIDE SQL Server. Load the 5-10 million rows into a search service and use a better data structure. SQL, being generic, has to make compromises. But again, the partial match will kill pretty much any approach.
Postgres has trigram indexes http://www.postgresql.org/docs/current/interactive/pgtrgm.html
Maybe SQL Server has something like that?
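For reference, a minimal pg_trgm sketch in Postgres (the table and column names are just borrowed from the question):
create extension if not exists pg_trgm;
-- a GIN trigram index lets LIKE patterns with a leading % still use an index
create index ix_yourtable_field1_trgm on YourTable using gin (Field1 gin_trgm_ops);
-- e.g.
select * from YourTable where Field1 like '%4453%';
SQL Server has no direct equivalent; its full-text indexing works on whole words, not arbitrary substrings.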
What is the shortest length in Column 'Field1' and 'Field2'? Call this number 'N'.
Then create a select statement which asks for every prefix of each variable, from length N up to the variable's full length. Example (say, N=10):
select distinct * from myTable
where Field1 in ('0634453445','06344534456','063445344564', '0634453445645')
and Field2 in ('0834534545','08345345456','083453454567', '0834534545675','08345345456756')
Write a small script which creates the query for you. Of course there is much more to optimize, but that requires (imho) changes to the structure of your table, and I can imagine that this is something you don't want. At least you can give it a fast try.
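As a rough sketch of such a script in T-SQL (the variable name and the shortest length N = 10 are assumed from the example above), you could build the IN list like this:
declare @V1 varchar(20) = '0634453445645'
declare @N int = 10          -- shortest length found in Field1
declare @list varchar(max) = ''
declare @i int = @N
while @i <= len(@V1)
begin
    set @list = @list + case when @list = '' then '' else ', ' end + '''' + left(@V1, @i) + ''''
    set @i = @i + 1
end
-- @list now holds '0634453445', '06344534456', ..., '0634453445645',
-- ready to be spliced into the Field1 in (...) clause (repeat for variable2 / Field2).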
Also, you should include the actual execution plan when you try this approach in SSMS. The query plan will give you a nice hint on how to organize your index.
Related
I have a full-text index on a column in a table that contains data like this:
searchColumn
90210 Brooks Diana Miami FL diana.brooks@email.com 5612233395
The column is an aggregate of Zip, last name, first name, city, state, e-mail and phone number.
I use this column to search for a customer based on any of this possible information.
The issue I am worried about is with the high number of reads that occurs when doing a query on this column. The query I am using is:
declare @searchTerm varchar(100) = ' "FL" AND "90210*" AND "Diana*" AND "Brooks*" '
select *
from CustomerInformation c
where contains(c.searchColumn, @searchTerm)
Now, when running Profiler I can see that this search does about 50,000 page reads to return a single row, as opposed to when using a different approach with regular indexes and multiple variables, broken down like @FirstName, @LastName, as below:
WHERE C.FirstName like coalesce(@FirstName + '%', C.FirstName)
AND C.LastName like coalesce(@LastName + '%', C.LastName)
etc.
Using this approach I get only around 140 page reads. I know the approaches are quite different, but I'm trying to understand why the full-text version has so many more reads and whether there is any way I can bring that down to something closer to the numbers I get when using regular indexes.
I have a couple of thoughts on this. First, the SELECT * will generate a great number of page reads because it has to pull all columns, which may or may not be indexed. When you pull every column it most likely will not make use of the best index plan out there.
As to your WHERE clause, when using @searchTerm with the value "FL" AND "90210*" AND "Diana*" AND "Brooks*", it has to check the data pages multiple times each time it is run. Think of how you would look up this information if you had to do it yourself. You look at a piece of paper with the info on it and see if the search column contains FL. Now, does it contain FL and 90210*? Now, does it contain both of those plus Diana... etc.
You can see why it would keep having to go back to the page to read it over and over again. The second query only has to look at two narrowly defined columns.
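To illustrate the first point, a hedged sketch (the CustomerID key column is assumed, it isn't named in the question) that avoids SELECT * by returning only the columns you actually need:
declare @searchTerm varchar(100) = ' "FL" AND "90210*" AND "Diana*" AND "Brooks*" '
select c.CustomerID, c.FirstName, c.LastName
from CustomerInformation c
where contains(c.searchColumn, @searchTerm)
If those columns are covered by a narrow nonclustered index, many of the extra lookups against the base table can be avoided.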
If you want more information on this, I would suggest a class by Brent Ozar that is free right now.
How to think like the SQL Server Engine
I hope that helps.
I need to store content keyed by strings, so a database table of key/value pairs, essentially. The keys, however, will be of a hierarchical format, like this:
foo.bar.baz
They'll have multiple categories, delimited by dots. The above value is in a category called "baz" which is in a parent category called "bar" which is in a parent category called "foo."
How can I index this in such a way that it's rapidly searchable for different permutations of the key/dot combo? For example, I want to be able to very quickly find everything that starts with
foo
Or
foo.bar
Yes, I could do a LIKE query, but I never need to find anything like:
fo
So that seems like a waste to me.
Is there any way that SQL would index all permutations of a string delimited by dots? So, in the above case we have:
foo
foo.bar
foo.bar.baz
Is there any type of index that would facilitate searching like that?
Edit
I will never need to search backwards or from the middle. My searches will always begin from the front of the string:
foo.bar
Never:
bar.baz
SQL Server can't really index substrings, no. If you only ever want to search on the first string, this will work fine, and will perform an index seek (depending on other query semantics of course):
WHERE col LIKE 'foo.%';
-- or
WHERE col LIKE 'foo.bar.%';
However when you start needing to search for bar or baz following any leading string, you will need to search on the substring:
WHERE col LIKE '%.bar.%';
-- or
WHERE PATINDEX('%.bar.%', col) > 0;
This won't work well with regular B-tree indexes, and I don't think Full-Text Search will be much help either, because of the special characters (periods) - but you should try it out if this is a requirement.
In general, storing data this way smells wrong to me. It seems to me that you should either have separate columns instead of jamming all the data into one column, or use a more relational EAV design.
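As a hedged sketch of the separate-columns alternative (all table, column and index names here are made up for illustration):
create table KeyValueStore
(
    Level1 varchar(50) not null,
    Level2 varchar(50) null,
    Level3 varchar(50) null,
    Value  nvarchar(max) not null
);
create index IX_KeyValueStore_Levels on KeyValueStore (Level1, Level2, Level3);
-- everything under foo.bar, no LIKE needed:
select Value
from KeyValueStore
where Level1 = 'foo' and Level2 = 'bar';
Each level gets its own column, so prefix searches become plain equality predicates that seek on the composite index.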
It appears to be a job for a CTE!
create table TableA (
    id int identity,
    parentid int null,
    name varchar(50)
)
For a (fixed) two-level hierarchy it's easy:
select t2.name, t1.name
from TableA t1
join TableA t2 on t2.id = t1.parentid
where t2.name = 'father'
To find that kind of hierarchical value in the most general case you will need some kind of recursion over the self-joined table, using a recursive CTE.
http://msdn.microsoft.com/pt-br/library/ms175972.aspx
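A hedged sketch of such a recursive CTE, reusing the TableA adjacency list above to rebuild the dotted paths (the varchar(4000) cast and the 'foo.bar%' filter are just illustrative):
;with Hierarchy as
(
    select id, name, cast(name as varchar(4000)) as path
    from TableA
    where parentid is null
    union all
    select c.id, c.name, cast(h.path + '.' + c.name as varchar(4000))
    from TableA c
    join Hierarchy h on h.id = c.parentid
)
select id, path
from Hierarchy
where path like 'foo.bar%';   -- prefix search over the rebuilt paths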
I have table that has the following schema:
ID,firstName,MiddleName,LastName,FML,[some other columns]
The FML column is created by concatenating FirstName, a space character, MiddleName, a space character, and LastName. I want to search for a person when the FML is known. Therefore my query is
SELECT * from tbl where FML LIKE @Param
But I want to optimize this query. I'm thinking of splitting the input string into FirstName, MiddleName and LastName strings and making the query like this:
SELECT * FROM tbl where FirstName like @FN and MiddleName like @MN and LastName like @LN
Also, will the query
SELECT smth from tbl where Val='test'
be better in terms of performance than
Select smth from tbl where Val like 'test'
Thank you.
If you mean =, then use =. If you mean like, then use like. But once you add wildcards to like, the performance will decrease.
By separating and filtering on separate fields, you lose flexibility, but increase the ability to be more specific in your search. So it's not optimising, per se, as the functionality is different.
Imagine you have two records: Jack Roberts and Robert Jack.
Your first query allows you to find them both if your parameter is '%Robert%', whereas the second allows you to find them with separate queries.
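A hedged illustration with those two (hypothetical) rows, using the column names from the question:
-- FML = 'Jack Roberts' and FML = 'Robert Jack'
SELECT * from tbl where FML LIKE '%Robert%'                                   -- returns both rows
SELECT * from tbl where FirstName like 'Robert%' and LastName like 'Jack%'    -- returns only Robert Jack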
Yes, the '=' operator gives the best performance, whereas LIKE with wildcards has to consider every Val that has 'test' somewhere in its value.
I'm attempting to optimize a T-SQL stored procedure I have. It's for pulling records based on a VIN (a 17-character alphanumeric string); usually people only know a few of the digits—e.g. the first digit could be '1', '2', or 'J'; the second is 'H' but the third could be 'M' or 'G'; and so on.
This leads to a pretty convoluted query whose WHERE clause is something like
WHERE SUBSTRING(VIN,1,1) IN ('J','1','2')
AND SUBSTRING(VIN,2,1) IN ('H')
AND SUBSTRING(VIN,3,1) IN ('M','G')
AND SUBSTRING(VIN,4,1) IN ('E')
AND ... -- and so on for however many digits we need to search on
The table I'm querying on is huge (millions of records) so the queries I'm running that have this kind of WHERE clause can take hours to run if there are more than a couple digits being searched on, even if I'm only requesting the top 3000 records. I feel like there has to be a way to get this substring character matching to run faster. Hours are completely unacceptable; I'd like to have these kinds of queries run in just a few minutes.
I don't have any editing privileges on the database, sadly, so I can't add indexes or anything like that; all I can do is change my stored procedure (although I can try to beg the DBAs to modify the table).
You can use
WHERE VIN LIKE '[J12]H[MG]E%'
At least that should hopefully lead to 3 index seeks on the ranges JH%, 1H%, and 2H% rather than a full scan.
Edit: Testing locally, I found that it does not do multiple index seeks as I had hoped; it converts the above to a single seek on the larger range VIN >= '1' and VIN < 'K', with a residual predicate to evaluate the LIKE.
I'm not sure whether it will do this for larger tables or not, but otherwise it may well be worth trying to encourage that plan with
WHERE (VIN LIKE 'JH%' OR VIN LIKE '1H%' OR VIN LIKE '2H%')
AND VIN LIKE '[J12]H[MG]E%'
You could use the LIKE keyword
SELECT *
FROM Table
WHERE VIN LIKE '[J12]H[MG]E%'
This would even allow you to handle instances where they know the second character is not 'A', by using [^A] in the pattern, such as:
WHERE VIN LIKE '[J12][^A][MG]E%'
Reference
http://msdn.microsoft.com/en-us/library/ms179859.aspx
I like the LIKE answers, but here's another alternative (especially if your input isn't always the same).
I would do this as a series of queries on ever-smaller temp tables (yes, I'm in love with temp tables - sue me).
So I would do something like
SELECT [Fields]
INTO #tempResultsFirstTwoDigits
FROM VIN
WHERE [Clause]
Then keep moving down the chain digit by digit until you've searched each of the provided characters. So you might do this:
if len(@input) > 2
begin
    SELECT [Fields]
    INTO #tempResultsThreeDigits
    FROM #tempResultsFirstTwoDigits
    WHERE Substring(VIN, 3, 1) = Substring(@input, 3, 1)
    -- NOTE: that where clause might be sped up by initializing a variable at
    --       the beginning of the SP for each character you got.
end
else
begin
    Select * From #tempResultsFirstTwoDigits
    GOTO Stop -- where "Stop" just labels the end of the SP, to skip any further checks
end
Again, LIKE might be a better answer for you, but I would try both approaches and benchmark both of them.
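One simple way to benchmark both approaches (a hedged sketch; the table name and the LIKE pattern are placeholders carried over from above) is to compare reads and CPU with SET STATISTICS:
set statistics io, time on;
-- run the LIKE version
SELECT TOP (3000) * FROM VIN WHERE VIN LIKE '[J12]H[MG]E%';
-- then run the temp-table version and compare the logical reads and CPU time
-- reported in the Messages tab
set statistics io, time off;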
By definition (at least from what I've seen) sargable means that a query is capable of having the query engine optimize the execution plan that the query uses. I've tried looking up the answers, but there doesn't seem to be a lot on the subject matter. So the question is, what does or doesn't make an SQL query sargable? Any documentation would be greatly appreciated.
For reference: Sargable
The most common thing that will make a query non-sargable is to include a field inside a function in the where clause:
SELECT ... FROM ...
WHERE Year(myDate) = 2008
The SQL optimizer can't use an index on myDate, even if one exists. It will literally have to evaluate this function for every row of the table. Much better to use:
WHERE myDate >= '2008-01-01' AND myDate < '2009-01-01'
Some other examples:
Bad: Select ... WHERE isNull(FullName,'Ed Jones') = 'Ed Jones'
Fixed: Select ... WHERE ((FullName = 'Ed Jones') OR (FullName IS NULL))
Bad: Select ... WHERE SUBSTRING(DealerName,4) = 'Ford'
Fixed: Select ... WHERE DealerName Like 'Ford%'
Bad: Select ... WHERE DateDiff(mm,OrderDate,GetDate()) >= 30
Fixed: Select ... WHERE OrderDate < DateAdd(mm,-30,GetDate())
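A minimal repro sketch to see the difference in the execution plan (the Orders table and index are made up for illustration):
create table Orders (OrderID int identity primary key, OrderDate datetime not null);
create index IX_Orders_OrderDate on Orders (OrderDate);
-- non-sargable: the function hides OrderDate from the index, forcing a scan
select count(*) from Orders where year(OrderDate) = 2008;
-- sargable: a plain range on the column allows an index seek
select count(*) from Orders where OrderDate >= '2008-01-01' and OrderDate < '2009-01-01';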
Don't do this:
WHERE Field LIKE '%blah%'
That causes a table/index scan, because the LIKE value begins with a wildcard character.
Don't do this:
WHERE FUNCTION(Field) = 'BLAH'
That causes a table/index scan.
The database server will have to evaluate FUNCTION() against every row in the table and then compare it to 'BLAH'.
If possible, do it in reverse:
WHERE Field = INVERSE_FUNCTION('BLAH')
This will run INVERSE_FUNCTION() against the parameter once and will still allow use of the index.
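A hedged illustration of that "reverse the function" idea (the Products table and the 10% markup are made up):
-- non-sargable: the calculation is applied to the column on every row
select * from Products where Price * 1.10 = 110.00;
-- sargable: the calculation is applied to the constant once, so an index on Price can be used
select * from Products where Price = 110.00 / 1.10;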
In this answer I assume the database has sufficient covering indexes. There are enough questions about this topic.
A lot of the time the sargability of a query is determined by the tipping point of the related indexes. The tipping point defines the difference between seeking and scanning an index while joining one table or result set onto another. One seek is of course much faster than scanning a whole table, but when you have to seek a lot of rows, a scan could make more sense.
So among other things a SQL statement is more sargable when the optimizer expects the number of resulting rows of one table to be less than the tipping point of a possible index on the next table.
You can find a detailed post and example here.
For an operation to be considered sargable, it is not sufficient for it to just be able to use an existing index. In the example above, adding a function call against an indexed column in the where clause would still most likely take some advantage of the defined index: it will "scan", i.e. retrieve all values from that column (index), and then eliminate the ones that do not match the filter value provided. That is still not efficient enough for tables with a high number of rows.
What really defines sargability is the query's ability to traverse the b-tree index using the binary search method, which relies on half-set elimination over the sorted item array. In SQL, it would be displayed in the execution plan as an "index seek".