Requirements: table Product has Name and Barcode columns.
I want to create a non-clustered index to search by Name or Barcode.
Sample query:
DECLARE @Filter NVARCHAR(100) = NULL
SET @Filter = '%' + ISNULL(@Filter, '') + '%'
SELECT *
FROM Product
WHERE Name LIKE @Filter
OR Barcode LIKE @Filter
Please help me with a solution, either as two separate indexes for Name and Barcode, or as one index that includes both Name and Barcode.
You have a couple of issues at hand here. First, even if an index existed on Name or Barcode, your filter expression would not be able to benefit from it (in the traditional sense of adding an index) because the expression is not sargable. Brent Ozar has a great article explaining Why %string% Is Slow.
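For example, against a hypothetical index on Name:
-- leading wildcard: the engine cannot seek, it must scan the entire index
WHERE Name LIKE '%phone%'
-- prefix only: the engine can seek to the matching range
WHERE Name LIKE 'phone%'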
Second, you cannot create a single index to cover filters on two separate columns where the filters on each column are independent, meaning the query either uses two separate filters (as in your original post) or only includes a filter on one of the two columns. A query that has a WHERE clause of:
WHERE
Name = 'NameValue'
OR
Barcode = 'BarcodeValue'
would only be able to seek an index for both filter expressions if two separate indexes existed: one with Name as the first listed column and one with Barcode as the first listed column.
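For illustration, a minimal sketch of that two-index approach (the index names are mine):
CREATE NONCLUSTERED INDEX NCIX_Product_Name ON Product (Name);
CREATE NONCLUSTERED INDEX NCIX_Product_Barcode ON Product (Barcode);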
An index containing two columns is primarily meant to serve filters that use both columns in the filter expression, e.g. Name = 'NameValue' AND Barcode = 'BarcodeValue'. It is also very important to think about the ordinal position of each column within the index. For example, let's say you create this index:
CREATE NONCLUSTERED INDEX NCIX_Product_Name_Barcode ON Product (Name,Barcode);
A query with the filter expression Name = 'NameValue' can still seek this index because Name is the first column in the index, but a query with the filter expression Barcode = 'BarcodeValue' cannot.
Before making any long term decisions on index design, you should first familiarize yourself with the guidelines published by Microsoft in General Index Design Guidelines.
Lastly, if you truly need to search Name or Barcode for string matches, you should look into Microsoft's documentation on full-text indexes, which are likely to be your best solution for indexed searches of this kind.
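As a rough sketch of what that could look like (the catalog name is an assumption, and a full-text index requires an existing unique key index on the table, assumed here to be named PK_Product):
CREATE FULLTEXT CATALOG ProductCatalog;
CREATE FULLTEXT INDEX ON Product (Name, Barcode)
    KEY INDEX PK_Product -- assumes a unique index named PK_Product exists
    ON ProductCatalog;
-- word-prefix searches then go through the full-text engine instead of LIKE
SELECT * FROM Product WHERE CONTAINS((Name, Barcode), '"phone*"');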
I want to build a Sybase ASE query to match lastname, firstname for a person. There are a few different formats for the name: it can be "lastname, firstname" OR "lastname,firstname" (no space between the comma and firstname). I have tried using name like 'lastname[,][ ]firstname' but it does not work. I cannot use lastname,%firstname as it would match any characters between the comma and firstname; the only valid characters there are a single space or nothing. Any suggestions?
Unfortunately, SAP/Sybase ASE does not provide support for regex patterns (e.g., 'zero or more spaces'), so you're left with a few basic options ...
union (all) two queries:
select *
from names_table
where name like 'lastname, firstname'
union all
select *
from names_table
where name like 'lastname,firstname'
NOTE: Both queries should use an index on the name column assuming statistics show an index access plan is the best option.
or two where clauses:
select *
from names_table
where (name like 'lastname, firstname' or name like 'lastname,firstname')
NOTE: Whether or not this uses an index on the name column will depend on the statistics for the index and column and/or the complexity of the actual query.
Strip out spaces and match what's left:
select *
from names_table
where str_replace(name,' ',null) like 'lastname,firstname'
NOTE: In most cases this will disable the use of an index on the name column.
From an indexing perspective ...
If you need to run this type of query often, and the performance of said query is less than acceptable, you could look at a couple of additional indexing options (see the sketch after this list):
(materialized) computed column + index on said computed column
function-based index (ASE basically creates a 'system' computed column under the covers and then creates the index on said column)
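A minimal sketch of the first option, assuming ASE 15+ syntax (the column and index names are mine):
-- add a materialized computed column that strips the spaces
alter table names_table
    add name_nospace compute str_replace(name, ' ', null) materialized
-- index the computed column
create index ix_names_nospace on names_table (name_nospace)
-- searches can then seek on the new index
select * from names_table where name_nospace like 'lastname,firstname'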
Could anyone tell me how to change the stored procedure in the article below to recursively expand all the attributes of a JSON file (multiple JSON document schemas)?
https://support.snowflake.net/s/article/Automating-Snowflake-Semi-Structured-JSON-Data-Handling-part-2
Craig Warman's stored procedure posted in that blog is a great idea. I asked him if it was okay to refactor his code, and he agreed. I've used the refactored version in the field, so I know the SP and how it works quite well.
It may be possible to modify the SP to work on your JSON. It will depend on whether or not Snowflake types the JSON in your variant column. The way you have it structured, it may not type everything. You can check by running this SQL and seeing if the result set includes all the columns you need:
set VARIANT_TABLE = 'WEATHER';
set VARIANT_COLUMN = 'V';

with MAIN_TABLE as
(
    select * from identifier($VARIANT_TABLE) sample (1000 rows)
)
select distinct
    -- Generates column names from the path, stripping bracket-enclosed array
    -- references (like "[0]") and replacing non-alphanumeric characters with underscores
    REGEXP_REPLACE(REGEXP_REPLACE(f.path, '\\[(.+)\\]'), '[^a-zA-Z0-9]', '_') AS path_name,
    typeof(f.value) AS attribute_type,  -- Generates column data types
    path_name AS alias_name             -- Generates column aliases based on the path
from
    MAIN_TABLE,
    LATERAL FLATTEN(identifier($VARIANT_COLUMN), RECURSIVE=>true) f
where TYPEOF(f.value) != 'OBJECT'
    AND NOT contains(f.path, '[');
Be sure to replace the variables with your table and column names. If this picks up the type information for the columns in your JSON, then it's possible to modify the SP to do what you need. If it doesn't, but there's a way to modify the query to get it to pick up the columns, that would work too.
If it doesn't pick up the columns at all: based on Craig's idea, I decided to write type inference for non-variant data (such as strings from CSV log files without type information). Try the SQL above first and see what results you get.
I have a full-text index on a column in a table that contains data like this:
searchColumn
90210 Brooks Diana Miami FL diana.brooks@email.com 5612233395
The column is an aggregate of Zip, last name, first name, city, state, e-mail and phone number.
I use this column to search for a customer based on any of this possible information.
The issue I am worried about is the high number of reads that occur when querying this column. The query I am using is:
declare @searchTerm varchar(100) = ' "FL" AND "90210*" AND "Diana*" AND "Brooks*" '
select *
from CustomerInformation c
where contains(c.searchColumn, @searchTerm)
Now, when running Profiler I can see that this search does about 50,000 page reads to return a single row, as opposed to a different approach using regular indexes and multiple variables, broken down into @FirstName, @LastName, like below:
WHERE C.FirstName like coalesce(@FirstName + '%' , C.FirstName)
AND C.LastName like coalesce(@LastName + '%' , C.LastName)
etc.
Using this approach I get only around 140 page reads. I know the approaches are quite different, but I'm trying to understand why the full-text version has so many more reads, and whether there is any way to bring that number down closer to what I get with regular indexes.
I have a couple of thoughts on this. First, SELECT * will generate a great number of page reads because it has to pull all columns, which may or may not be covered by an index. When you pull every column, the optimizer most likely cannot use the best index plan out there.
As to your WHERE clause: with @searchTerm holding "FL" AND "90210*" AND "Diana*" AND "Brooks*", the engine has to check the data pages multiple times each time the query runs. Think of how you would look up this information by hand. You look at a piece of paper with the info on it and see if the search column contains FL. Now, does it contain FL and 90210*? Now does it contain both of those plus Diana*... and so on.
You can see why it would keep having to go back to the page to read it over and over again. The second query only has to look at two narrowly defined columns.
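One easy win, then, is to trim the select list down to just the columns you actually need (the column names below are hypothetical):
-- select only required columns instead of *; names are placeholders
select c.CustomerID, c.FirstName, c.LastName
from CustomerInformation c
where contains(c.searchColumn, @searchTerm)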
If you want more information on this, I would suggest a class by Brent Ozar that is free right now: How to Think Like the SQL Server Engine.
I hope that helps.
I need to store content keyed by strings, so a database table of key/value pairs, essentially. The keys, however, will be of a hierarchical format, like this:
foo.bar.baz
They'll have multiple categories, delimited by dots. The above value is in a category called "baz", which is in a parent category called "bar", which is in a parent category called "foo".
How can I index this in such a way that it's rapidly searchable for different permutations of the key/dot combo? For example, I want to be able to very quickly find everything that starts with
foo
Or
foo.bar
Yes, I could do a LIKE query, but I never need to find anything like:
fo
So that seems like a waste to me.
Is there any way that SQL would index all permutations of a string delimited by dots? So, in the above case we have:
foo
foo.bar
foo.bar.baz
Is there any type of index that would facilitate searching like that?
Edit
I will never need to search backwards or from the middle. My searches will always begin from the front of the string:
foo.bar
Never:
bar.baz
SQL Server can't really index substrings, no. If you only ever want to search on the first string, this will work fine, and will perform an index seek (depending on other query semantics of course):
WHERE col LIKE 'foo.%';
-- or
WHERE col LIKE 'foo.bar.%';
However when you start needing to search for bar or baz following any leading string, you will need to search on the substring:
WHERE col LIKE '%.bar.%';
-- or
WHERE PATINDEX('%.bar.%', col) > 0;
This won't work well with regular B-tree indexes, and I don't think Full-Text Search will be much help either, because of the special characters (periods) - but you should try it out if this is a requirement.
In general, storing data this way smells wrong to me. It seems that you should either have separate columns instead of jamming all the data into one column, or use a more relational EAV design.
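As a rough sketch of the separate-columns idea (table, column, and index names are mine):
CREATE TABLE KeyValue
(
    Level1 varchar(50) NOT NULL,  -- e.g. 'foo'
    Level2 varchar(50) NULL,      -- e.g. 'bar'
    Level3 varchar(50) NULL,      -- e.g. 'baz'
    Value  nvarchar(max) NULL
);
CREATE INDEX IX_KeyValue_Levels ON KeyValue (Level1, Level2, Level3);
-- "everything under foo.bar" becomes a plain, sargable index seek:
SELECT * FROM KeyValue WHERE Level1 = 'foo' AND Level2 = 'bar';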
It appears to be a job for a CTE!
create table TableA(
id int identity,
parentid int null,
name varchar(50)
)
For a (fixed) two-level hierarchy it's easy:
select t2.name, t1.name
from tableA t1
join tableA t2 on t2.id = t1.parentid
where t2.name = 'father'
To find that kind of hierarchical value in the most general case, you will need some kind of recursion over the self-joined table by using a recursive CTE:
http://msdn.microsoft.com/pt-br/library/ms175972.aspx
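A minimal sketch of the recursive version, building the dotted path from the TableA definition above:
with Hierarchy (id, name, path) as
(
    -- anchor: top-level categories
    select id, name, cast(name as varchar(500))
    from TableA
    where parentid is null
    union all
    -- recursive step: append each child to its parent's path
    select t.id, t.name, cast(h.path + '.' + t.name as varchar(500))
    from TableA t
    join Hierarchy h on h.id = t.parentid
)
select *
from Hierarchy
where path like 'foo.bar%'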
By definition (at least from what I've seen), sargable means that a query is capable of having the query engine optimize the execution plan that the query uses. I've tried looking up answers, but there doesn't seem to be a lot on the subject. So the question is: what does or doesn't make a SQL query sargable? Any documentation would be greatly appreciated.
For reference: Sargable
The most common thing that will make a query non-sargable is to include a field inside a function in the where clause:
SELECT ... FROM ...
WHERE Year(myDate) = 2008
The SQL optimizer can't use an index on myDate, even if one exists. It will literally have to evaluate this function for every row of the table. Much better to use:
WHERE myDate >= '2008-01-01' AND myDate < '2009-01-01'
Some other examples:
Bad: Select ... WHERE isNull(FullName,'Ed Jones') = 'Ed Jones'
Fixed: Select ... WHERE ((FullName = 'Ed Jones') OR (FullName IS NULL))
Bad: Select ... WHERE SUBSTRING(DealerName, 1, 4) = 'Ford'
Fixed: Select ... WHERE DealerName Like 'Ford%'
Bad: Select ... WHERE DateDiff(mm,OrderDate,GetDate()) >= 30
Fixed: Select ... WHERE OrderDate < DateAdd(mm,-30,GetDate())
Don't do this:
WHERE Field LIKE '%blah%'
That causes a table/index scan, because the LIKE value begins with a wildcard character.
Don't do this:
WHERE FUNCTION(Field) = 'BLAH'
That causes a table/index scan.
The database server will have to evaluate FUNCTION() against every row in the table and then compare it to 'BLAH'.
If possible, do it in reverse:
WHERE Field = INVERSE_FUNCTION('BLAH')
This will run INVERSE_FUNCTION() against the parameter once and will still allow use of the index.
In this answer I assume the database has sufficient covering indexes; there are enough questions about that topic already.
A lot of the time, the sargability of a query is determined by the tipping point of the related indexes. The tipping point defines the difference between seeking and scanning an index while joining one table or result set onto another. A seek is of course much faster than scanning a whole table, but when you have to seek a lot of rows, a scan can make more sense.
So, among other things, a SQL statement is more sargable when the optimizer expects the number of resulting rows from one table to be less than the tipping point of a possible index on the next table.
You can find a detailed post and example here.
For an operation to be considered sargable, it is not sufficient for it merely to be able to use an existing index. In the example above, adding a function call against an indexed column in the WHERE clause would still most likely take some advantage of the defined index: it would "scan", i.e. retrieve all values from that column (index), and then eliminate the ones that do not match the filter value provided. That is still not efficient enough for tables with a high number of rows.
What really defines sargability is the query's ability to traverse the b-tree index using the binary search method, which relies on half-set elimination of the sorted item array. In SQL, this shows up in the execution plan as an "index seek".