Improving SQL Server query performance

I have written a query in which I build a string and take distinct values from the table based on conditions.
The table has some 5,000 rows, yet this query takes almost 20 seconds to execute.
I believe the string comparisons are what make the query so slow, but I wonder what my alternatives are.
Query:
select distinct
Convert(nvarchar, p1.trafficSerial) +' ('+ p1.sourceTraffic + ' - ' + p1.sinkTraffic + ' )' as traffic
from
portList as p1
inner join
portList as p2 ON p1.pmId = p2.sinkId
AND P1.trafficSerial IS NOT NULL
AND (p1.trafficSerial = p2.trafficSerial)
AND (P1.sourceTraffic = P2.sourceTraffic)
AND (P1.sinkTraffic = P2.sinkTraffic)
where
p1.siteCodeID = @SiteId

One option is to create a computed column and an index on that column.
This article discusses the approach: http://blog.sqlauthority.com/2010/08/22/sql-server-computed-columns-index-and-performance/
ALTER TABLE dbo.portList ADD
traffic AS Convert(nvarchar,trafficSerial) +' ('+ sourceTraffic + ' - ' + sinkTraffic + ' )' PERSISTED
GO
CREATE NONCLUSTERED INDEX IX_portList_traffic
ON dbo.portList (traffic)
GO
select distinct traffic from dbo.portList
You should also make sure each of the columns in your join relationships have indexes on them:
p1.trafficSerial & p2.trafficSerial
P1.sourceTraffic & P2.sourceTraffic
P1.sinkTraffic & P2.sinkTraffic
and the column for your filter: p1.siteCodeID

this will surely help:
if you can remove distinct from your select statement, this query will speed up quite a bit. many times, i handle distinct values in the client or web part of the system, like visual basic, php, c#, etc.
please remove the distinct keyword and time your query again after executing it at least twice.
however, if you cannot remove distinct, then simply leave it there.
this is important: convert the clustered index scan into a clustered index seek (or just an index seek). that will speed up your query quite a bit. you get an index seek instead of an index scan by modifying the index. typically, a clustered index scan comes from a comparison on a column with a clustered index, which many times is a primary key. i suspect this column is portList.siteCodeID
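As an illustration of that scan-to-seek advice, an index like the following might help; the key and included columns here are an assumption based on the query at the top of this question, not something from the original answer:

```sql
-- Hypothetical index: keyed on the filter column, covering the join and
-- output columns so the query can seek instead of scanning the clustered index.
CREATE NONCLUSTERED INDEX IX_portList_siteCodeID
ON dbo.portList (siteCodeID)
INCLUDE (pmId, sinkId, trafficSerial, sourceTraffic, sinkTraffic)
GO
```

With this in place, the filter on siteCodeID can resolve with a seek, and the INCLUDE list avoids key lookups for the self-join columns.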
best regards,
tonci korsano

Related

SQL Server : Text Search Pattern for Performance

I have a requirement in which I periodically have to check 40k names against table of 70k names (on Azure SQL Server).
Table has 2 relevant columns
FIRSTNAME (nvarchar(15))
LASTNAME (nvarchar(20))
Name matches must be exact first and last name match.
Naively, my first approach would be to run 40k select/where firstname='xxx' and lastname='yyy' queries, but I have to believe there is a more performant way of doing it. I guess, on the surface, it sounds like about 280k text-based queries. Obviously, the column is short enough to where I can index it, but surely there is something more I could do?
My first question is, what's the most efficient way to handle a problem like this in SQL Server?
My second question is, does anyone with experience with something like this have any idea of how long a 40k text searches across 70k rows query would take, even just on order of magnitude? I.e. am I looking at minutes, hours, days, etc?
Thanks in advance for any insights.
An index containing both the FIRSTNAME and LASTNAME columns should be enough. If possible, make it clustered.
CREATE CLUSTERED INDEX [idx_yourTable] ON yourTable (
FIRSTNAME ASC,
LASTNAME ASC
)
If you are not able to create an index on your table, you can retrieve all the data into a temp table and make an index on the temp table.
DROP TABLE IF EXISTS #T_Local
DROP TABLE IF EXISTS #T_Azure
SELECT
ID
-- A separator is used to avoid cases like
-- 'FirstName' + 'LastName' = 'FirstNameLast' + 'Name'
,FIRSTNAME + '|' + LASTNAME AS [FULL_NAME]
,FIRSTNAME
,LASTNAME
INTO #T_Local
FROM server1.DB1.dbo.YourTable
SELECT
ID
,FIRSTNAME + '|' + LASTNAME AS [FULL_NAME]
,FIRSTNAME
,LASTNAME
INTO #T_Azure
FROM server2.DB1.dbo.YourTable
CREATE CLUSTERED INDEX [idx_t_local] ON #T_Local (
[FULL_NAME] ASC)
CREATE CLUSTERED INDEX [idx_t_azure] ON #T_Azure (
[FULL_NAME] ASC)
SELECT
tl.ID AS [ID_Local]
,tl.FIRSTNAME AS [FIRSTNAME_Local]
,tl.LASTNAME AS [LASTNAME_Local]
,ta.ID AS [ID_Azure]
,ta.FIRSTNAME AS [FIRSTNAME_Azure]
,ta.LASTNAME AS [LASTNAME_Azure]
FROM #T_Local tl
INNER JOIN #T_Azure ta
ON tl.FULL_NAME = ta.FULL_NAME
Finally, 40k and 70k records are not enough data to cause performance issues, even without a proper index.

A big 'like' matching query

I've got 2 tables,
[Item] with field [name] nvarchar(255)
[Transaction] with field [short_description] nvarchar(3999)
And I need to do this:
Select [Transaction].id, [Item].id
From [Transaction] inner join [Item]
on [Transaction].[short_description] like ('%' + [Item].[name] + '%')
The above works if limited to a handful of items, but unfiltered it just runs past 20 minutes and I cancel it.
I have a NC index on [name], but I cannot index [short_description] due to its length.
[Transaction] has 320,000 rows
[Items] has 42,000.
That's over 13 billion combinations.
Is there a better way to perform this query ?
I did poke at full-text, but I'm not really that familiar, the answer was not jumping out at me there.
Any advice appreciated !!
Starting a comparison string with a wildcard (% or _) will NEVER use an index, and will typically be disastrous for performance. Your query will need to scan indexes rather than seek through them, so indexing won't help.
Ideally, you should have a third table that would allow a many-to-many relationship between Transaction and Item based on IDs. The design is the issue here.
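As a sketch of that design (the table and column names here are hypothetical, not from the original schema), the link table would look something like:

```sql
-- Hypothetical many-to-many link between [Transaction] and [Item].
CREATE TABLE dbo.TransactionItem
(
    transaction_id int NOT NULL,
    item_id        int NOT NULL,
    CONSTRAINT PK_TransactionItem PRIMARY KEY (transaction_id, item_id)
)
GO

-- Once populated (e.g. offline, by the slow LIKE join), lookups become ID joins:
SELECT ti.transaction_id
FROM dbo.TransactionItem ti
WHERE ti.item_id = 42   -- plain index seek, no string matching
```

The expensive string matching then runs once at load time rather than on every query.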
After some more sleuthing I have utilized some Fulltext features.
sp_fulltext_keymappings
gives me my transaction table id, along with the FT docID
(I found out that 'doc' = text field)
sys.dm_fts_index_keywords_by_document
gives me FT documentId along with the individual keywords within it
Once I had that, the rest was simple.
Although, I do have to look into the term 'keyword' a bit more... seems that definition can be variable.
This only works because the text I am searching for has no white space.
I believe that you could tweak the FTI configuration to work with other scenarios... but I couldn't promise.
I need to look more into Fulltext.
My current 'beta' code below.
CREATE TABLE #keyMap
(
docid INT PRIMARY KEY ,
[key] varchar(32) NOT NULL
);
DECLARE @db_id int = db_id(N'<database name>');
DECLARE @table_id int = OBJECT_ID(N'Transactions');
INSERT INTO #keyMap
EXEC sp_fulltext_keymappings @table_id;
select km.[key] as transaction_id, i.[id] as item_id
from
sys.dm_fts_index_keywords_by_document ( @db_id, @table_id ) kbd
INNER JOIN
#keyMap km ON km.[docid]=kbd.document_id
inner join [items] i
on kbd.[display_term] = i.name
;
My actual version of the code includes inserting the data into a final table.
Execution time is coming in at 30 seconds, which serves my needs for now.

Using SQLServer contains for partial words

We are running many product searches on a huge catalog with partially matched barcodes.
We started with a simple like query
select * from products where barcode like '%2345%'
But that takes way too long since it requires a full table scan.
We thought a fulltext search will be able to help us here using contains.
select * from products where contains(barcode, '2345')
But it seems that CONTAINS doesn't support finding words that partially contain a text; it matches only a full word or a prefix. (In this example we're searching for '2345' inside '123456'.)
My answer is: @DenisReznik was right :)
ok, let's take a look.
I have worked with barcodes and big catalogs for many years and I was curious about this question.
So I have made some tests on my own.
I have created a table to store test data:
CREATE TABLE [like_test](
[N] [int] NOT NULL PRIMARY KEY,
[barcode] [varchar](40) NULL
)
I know that there are many types of barcodes: some contain only numbers, others also contain letters, and others can be even more complex.
Let's assume our barcode is a random string.
I have filled it with 10 million records of random alphanumeric data:
insert into like_test
select (select count(*) from like_test)+n, REPLACE(convert(varchar(40), NEWID()), '-', '') barcode
from FN_NUMBERS(10000000)
FN_NUMBERS() is just a function I use in my DBs (a sort of tally table) to get records quickly.
I got 10 million records like that:
N barcode
1 1C333262C2D74E11B688281636FAF0FB
2 3680E11436FC4CBA826E684C0E96E365
3 7763D29BD09F48C58232C7D33551E6C9
Let's declare a var to search for:
declare @s varchar(20) = 'D34F15' -- random alphanumeric string
Let's run a baseline with LIKE to compare results against:
select * from like_test where barcode like '%'+@s+'%'
On my workstation it takes 24.4 secs for a full clustered index scan. Very slow.
SSMS suggests adding an index on the barcode column:
CREATE NONCLUSTERED INDEX [ix_barcode] ON [like_test] ([barcode]) INCLUDE ([N])
500 MB of index. I retry the select; this time 24.0 secs for the non-clustered index seek... less than 2% better, almost the same result, and very far from the 75% improvement estimated by SSMS. It seems to me this index really isn't worth it. Maybe my Samsung 840 SSD is making the difference...
For the moment I let the index active.
Let's try the CHARINDEX solution:
select * from like_test where charindex(@s, barcode) > 0
This time it took 23.5 seconds to complete, not really much better than LIKE.
Now let's check @DenisReznik's suggestion that using a binary collation should speed things up:
select * from like_test
where barcode collate Latin1_General_BIN like '%'+@s+'%' collate Latin1_General_BIN
WOW, it seems to work! Only 4.5 secs, this is impressive! 5 times better..
So, what about CHARINDEX and collation together? Let's try it:
select * from like_test
where charindex(@s collate Latin1_General_BIN, barcode collate Latin1_General_BIN)>0
Unbelievable! 2.4 secs, 10 times better..
Ok, so far I have realized that CHARINDEX is better than LIKE, and that Binary Collation is better than normal string collation, so from now on I will go on only with CHARINDEX and Collation.
Now, can we do anything else to get even better results? Maybe we can try to reduce our very long strings... a scan is always a scan...
First try: a logical string cut using SUBSTRING to virtually work on barcodes of 8 chars:
select * from like_test
where charindex(
@s collate Latin1_General_BIN,
SUBSTRING(barcode, 12, 8) collate Latin1_General_BIN
)>0
Fantastic! 1.8 seconds.. I have tried both SUBSTRING(barcode, 1, 8) (head of the string) and SUBSTRING(barcode, 12, 8) (middle of the string) with the same results.
Then I tried to physically reduce the size of the barcode column; almost no difference from using SUBSTRING().
Finally I have tried to drop the index on barcode column and repeated ALL above tests...
I was very surprised to get almost same results, with very little differences.
The index performs 3-5% better, but at the cost of 500 MB of disk space and maintenance cost if the catalog is updated.
Naturally, for a direct key lookup like where barcode = @s, with the index it takes 20-50 millisecs; without the index we can't get below 1.1 secs, even using the collation syntax where barcode collate Latin1_General_BIN = @s collate Latin1_General_BIN
This was interesting.
I hope this helps
I often use charindex and just as often have this very debate.
As it turns out, depending on your structure you may actually have a substantial performance boost.
http://cc.davelozinski.com/sql/like-vs-substring-vs-leftright-vs-charindex
A good option here for your case is creating your own FTS-like index. Here is how it could be implemented:
1) Create table Terms:
CREATE TABLE Terms
(
Id int IDENTITY NOT NULL,
Term varchar(21) NOT NULL,
CONSTRAINT PK_TERMS PRIMARY KEY (Term),
CONSTRAINT UK_TERMS_ID UNIQUE (Id)
)
Note: index declaration inside the table definition is a feature of SQL Server 2014. If you have a lower version, just move it out of the CREATE TABLE statement and create it separately.
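For reference, a sketch of the same table with the constraints created separately, which should work on older versions:

```sql
CREATE TABLE Terms
(
    Id int IDENTITY NOT NULL,
    Term varchar(21) NOT NULL
)
GO
ALTER TABLE Terms ADD CONSTRAINT PK_TERMS PRIMARY KEY (Term)
ALTER TABLE Terms ADD CONSTRAINT UK_TERMS_ID UNIQUE (Id)
GO
```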
2) Cut barcodes into grams, and save each of them to the Terms table. For example, for barcode = '123456' your table should have 6 rows: '123456', '23456', '3456', '456', '56', '6'.
3) Create table BarcodeIndex:
CREATE TABLE BarcodesIndex
(
TermId int NOT NULL,
BarcodeId int NOT NULL,
CONSTRAINT PK_BARCODESINDEX PRIMARY KEY (TermId, BarcodeId),
CONSTRAINT FK_BARCODESINDEX_TERMID FOREIGN KEY (TermId) REFERENCES Terms (Id),
CONSTRAINT FK_BARCODESINDEX_BARCODEID FOREIGN KEY (BarcodeId) REFERENCES Barcodes (Id)
)
4) Save a pair (TermId, BarcodeId) for the barcode into the table BarcodeIndex. TermId was generated on the second step or exists in the Terms table. BarcodeId - is an identifier of the barcode, stored in Barcodes (or whatever name you use for it) table. For each of the barcodes, there should be 6 rows in the BarcodeIndex table.
5) Select barcodes by their parts using the following query:
SELECT b.* FROM Terms t
INNER JOIN BarcodesIndex bi
ON t.Id = bi.TermId
INNER JOIN Barcodes b
ON bi.BarcodeId = b.Id
WHERE t.Term LIKE 'SomeBarcodePart%'
This solution forces all similar parts of barcodes to be stored nearby, so SQL Server will use an index range scan to fetch data from the Terms table. Terms in the Terms table should be unique, to keep that table as small as possible. This can be done in the application logic (check existence, then insert if the term doesn't exist) or by setting the IGNORE_DUP_KEY option on the clustered index of the Terms table. The BarcodesIndex table is used to reference Terms and Barcodes.
Please note that foreign keys and constraints in this solution are the points of consideration. Personally, I prefer to have foreign keys, until they hurt me.
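Step 2 above shows no code; here is one hedged sketch of generating the suffix grams in bulk using a numbers source (the Barcodes table and its Barcode/Id columns are assumptions from the description):

```sql
-- For each barcode, emit every suffix ('123456' -> '123456','23456',...,'6')
-- and insert the ones not already present in Terms.
INSERT INTO Terms (Term)
SELECT DISTINCT SUBSTRING(b.Barcode, v.number, LEN(b.Barcode)) AS Term
FROM Barcodes b
JOIN master..spt_values v
  ON v.[type] = 'P'                       -- built-in 0..2047 number list
 AND v.number BETWEEN 1 AND LEN(b.Barcode)
WHERE NOT EXISTS (SELECT 1 FROM Terms t
                  WHERE t.Term = SUBSTRING(b.Barcode, v.number, LEN(b.Barcode)))
```

Populating BarcodeIndex is analogous: join the same suffix expression back to Terms to fetch each TermId and pair it with the barcode's Id.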
After further testing and reading and talking with #DenisReznik I think the best option could be to add virtual columns to barcode table to split barcode.
We only need columns for start positions from the 2nd to the 5th, because for the 1st we will use the original barcode column, and the last one I think is not useful at all (what kind of partial match is 1 char out of 6, when 60% of records will match?):
CREATE TABLE [like_test](
[N] [int] NOT NULL PRIMARY KEY,
[barcode] [varchar](6) NOT NULL,
[BC2] AS (substring([BARCODE],(2),(5))),
[BC3] AS (substring([BARCODE],(3),(4))),
[BC4] AS (substring([BARCODE],(4),(3))),
[BC5] AS (substring([BARCODE],(5),(2)))
)
and then to add indexes on this virtual columns:
CREATE NONCLUSTERED INDEX [IX_BC2] ON [like_test2] ([BC2]);
CREATE NONCLUSTERED INDEX [IX_BC3] ON [like_test2] ([BC3]);
CREATE NONCLUSTERED INDEX [IX_BC4] ON [like_test2] ([BC4]);
CREATE NONCLUSTERED INDEX [IX_BC5] ON [like_test2] ([BC5]);
CREATE NONCLUSTERED INDEX [IX_BC6] ON [like_test2] ([barcode]);
now we can simply find partial matches with this query
declare @s varchar(40)
declare @l int
set @s = '654'
set @l = LEN(@s)
select N from like_test
where 1=0
OR ((barcode = @s) and (@l=6)) -- to match full code (rem if not needed)
OR ((barcode like @s+'%') and (@l<6)) -- to match strings up to 5 chars from beginning
or ((BC2 like @s+'%') and (@l<6)) -- to match strings up to 5 chars from 2nd position
or ((BC3 like @s+'%') and (@l<5)) -- to match strings up to 4 chars from 3rd position
or ((BC4 like @s+'%') and (@l<4)) -- to match strings up to 3 chars from 4th position
or ((BC5 like @s+'%') and (@l<3)) -- to match strings up to 2 chars from 5th position
this is HELL fast!
for search strings of 6 chars 15-20 milliseconds (full code)
for search strings of 5 chars 25 milliseconds (20-80)
for search strings of 4 chars 50 milliseconds (40-130)
for search strings of 3 chars 65 milliseconds (50-150)
for search strings of 2 chars 200 milliseconds (190-260)
There will be no additional space used for the table, but each index will take up to 200 MB (for 1 million barcodes).
PAY ATTENTION
Tested on Microsoft SQL Server Express (64-bit) and Microsoft SQL Server Enterprise (64-bit); the optimizer of the latter is slightly better, but the main difference is this:
on Express edition you have to extract ONLY the primary key when searching for your string; if you add other columns to the SELECT, the optimizer will no longer use the indexes and will go for a full clustered index scan, so you will need something like
;with
k as (-- extract only primary key
select N from like_test
where 1=0
OR ((barcode = @s) and (@l=6))
OR ((barcode like @s+'%') and (@l<6))
or ((BC2 like @s+'%') and (@l<6))
or ((BC3 like @s+'%') and (@l<5))
or ((BC4 like @s+'%') and (@l<4))
or ((BC5 like @s+'%') and (@l<3))
)
select N
from like_test t
where exists (select 1 from k where k.n = t.n)
on standard (enterprise) edition you HAVE to go for
select * from like_test -- take a look at the star
where 1=0
OR ((barcode = @s) and (@l=6))
OR ((barcode like @s+'%') and (@l<6))
or ((BC2 like @s+'%') and (@l<6))
or ((BC3 like @s+'%') and (@l<5))
or ((BC4 like @s+'%') and (@l<4))
or ((BC5 like @s+'%') and (@l<3))
You do not include many constraints, which means you want to search for a string within a string -- and if there were a way to optimize an index for searching a string within a string, it would just be built in!
Other things that make it hard to give a specific answer:
It's not clear what "huge" and "too long" mean.
It's not clear how your application works. Are you searching in batch as you add 1,000 new products? Are you allowing a user to enter a partial barcode in a search box?
I can make some suggestions that may or may not be helpful in your case.
Speed up some of the queries
I have a database with lots of license plates; sometimes an officer wants to search by the last 3 characters of the plate. To support this I store the license plate in reverse, then use LIKE ('ZYX%') to match ABCXYZ. When doing the search, they have the option of a 'contains' search (like you have), which is slow, or of a 'Begins/Ends with' search, which is super fast because of the index. This would solve your problem some of the time (which may be good enough), especially if this is a common need.
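The reversed-column trick described above might be sketched like this (the Plates table and its columns are hypothetical, for illustration only):

```sql
-- Store the reverse of the plate so an "ends with" search becomes a prefix seek.
ALTER TABLE dbo.Plates ADD plate_rev AS REVERSE(plate) PERSISTED
GO
CREATE NONCLUSTERED INDEX IX_Plates_rev ON dbo.Plates (plate_rev)
GO

-- "Ends with XYZ" -> prefix search on the reversed column (index seek):
DECLARE @suffix varchar(10) = 'XYZ'
SELECT plate FROM dbo.Plates WHERE plate_rev LIKE REVERSE(@suffix) + '%'
```

Because the wildcard is now at the end of the pattern, the predicate is sargable and the index can seek.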
Parallel Queries
An index works because it organizes data; an index cannot help with a string within a string because there is no organization. Speed seems to be your focus of optimization, so you could store/query your data in a way that searches in parallel. Example: if it takes 10 seconds to sequentially search 10 million rows, then having 10 parallel processes (each searching 1 million rows) will take you from 10 seconds to about 1 second (kind'a-sort'a). Think of it as scaling out. There are various options for this, within your single SQL instance (try data partitioning) or across multiple SQL Servers (if that's an option).
BONUS: If you're not on a RAID setup, adding one can help with reads, since RAID effectively reads in parallel.
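One way that scale-out could be sketched inside a single instance is plain table partitioning on an artificial bucket column (all names here are hypothetical, and the bucket expression is an assumption, not part of the original answer):

```sql
-- Ten hash-style buckets; scans of different partitions can proceed in parallel.
CREATE PARTITION FUNCTION pf_bucket (tinyint)
AS RANGE LEFT FOR VALUES (0, 1, 2, 3, 4, 5, 6, 7, 8)
GO
CREATE PARTITION SCHEME ps_bucket
AS PARTITION pf_bucket ALL TO ([PRIMARY])
GO

-- A persisted bucket column would be the partitioning key, e.g.:
-- ALTER TABLE dbo.products
--     ADD bucket AS CAST(ABS(CHECKSUM(barcode)) % 10 AS tinyint) PERSISTED
```

Mapping all partitions to [PRIMARY] keeps the sketch simple; spreading them over separate filegroups on separate spindles is what actually buys parallel I/O.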
Reduce a bottleneck
One reason searching "huge" datasets takes "too long" is that all that data needs to be read from disk, which is always slow. You can skip the disk and use In-Memory tables. Since "huge" isn't defined, this may not work.
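A minimal sketch of that In-Memory option, assuming SQL Server 2014+ and a database that already has a MEMORY_OPTIMIZED_DATA filegroup (table and column names are illustrative):

```sql
-- Memory-optimized copy of the products table; reads never touch disk.
CREATE TABLE dbo.products_mem
(
    id      int NOT NULL PRIMARY KEY NONCLUSTERED,
    barcode varchar(40) NOT NULL INDEX ix_barcode_mem NONCLUSTERED
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA)
```

Note that memory-optimized tables require the index declarations inline, and the whole table must fit in RAM, which is why "huge" matters here.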
UPDATED:
We know from the MSDN Full-Text Search documentation that FULL-TEXT searches can be used for the following:
One or more specific words or phrases (simple term)
A word or a phrase where the words begin with specified text (prefix term)
Inflectional forms of a specific word (generation term)
A word or phrase close to another word or phrase (proximity term)
Synonymous forms of a specific word (thesaurus)
Words or phrases using weighted values (weighted term)
Are any of these fulfilled by your query requirements? If you are having to search for patterns as you described, without a consistent pattern (such as '1%'), then there may not be a way for SQL to use a SARG.
You could use Boolean statements
Coming from a C++ perspective, B-Trees are traversed pre-order, in-order, or post-order, and searching a B-Tree uses Boolean comparisons. Processed much faster than string comparisons, Booleans offer at the least an improved performance.
We can see this in the following two options:
PATINDEX
Only if your column is not numeric, as PATINDEX is designed for strings.
Returns an integer (like CHARINDEX) which is easier to process than strings.
CHARINDEX is a solution
CHARINDEX has no problem searching INTs and again, returns a number.
May require some extra cases built in (i.e. first number is always ignored), but you can add them like so: CHARINDEX('200', barcode) > 1.
As proof of what I am saying, let us go back to the old [AdventureWorks2012].[Production].[TransactionHistory]. We have TransactionID, which contains the number of the items we want, and let's for fun assume you want every TransactionID that has 200 at the end.
-- WITH LIKE
SELECT TOP 1000 [TransactionID]
,[ProductID]
,[ReferenceOrderID]
,[ReferenceOrderLineID]
,[TransactionDate]
,[TransactionType]
,[Quantity]
,[ActualCost]
,[ModifiedDate]
FROM [AdventureWorks2012].[Production].[TransactionHistory]
WHERE TransactionID LIKE '%200'
-- WITH CHARINDEX(<delimiter>, <column>) > 3
SELECT TOP 1000 [TransactionID]
,[ProductID]
,[ReferenceOrderID]
,[ReferenceOrderLineID]
,[TransactionDate]
,[TransactionType]
,[Quantity]
,[ActualCost]
,[ModifiedDate]
FROM [AdventureWorks2012].[Production].[TransactionHistory]
WHERE CHARINDEX('200', TransactionID) > 3
Note that the CHARINDEX version excludes a value like 200200 (its first '200' is found at position 1, not past position 3), so you may need to adjust your code appropriately. But look at the results:
Clearly, booleans and numbers are faster comparisons.
LIKE uses string comparisons, which again is much slower to process.
I was a bit surprised at the size of the difference, but the fundamentals are the same. Integers and Boolean statements are always faster to process than string comparisons.
I'm late to the game, but here's another way to get a full-text-like index, in the spirit of @MtwStark's second answer.
This is a solution using a search table join
drop table if exists #numbers
select top 10000 row_number() over(order by t1.number) as n
into #numbers
from master..spt_values t1
cross join master..spt_values t2
drop table if exists [like_test]
create TABLE [like_test](
[N] INT IDENTITY(1,1) not null,
[barcode] [varchar](40) not null,
constraint pk_liketest primary key ([N])
)
insert into dbo.like_test (barcode)
select top (1000000) replace(convert(varchar(40), NEWID()), '-', '') barcode
from #numbers t,#numbers t2
drop table if exists barcodesearch
select distinct ps.n, trim(substring(ps.barcode,ty.n,100)) as searchstring
into barcodesearch
from like_test ps
inner join #numbers ty on ty.n < 40
where len(ps.barcode) > ty.n
create clustered index idx_barcode_search_index on barcodesearch (searchstring)
The final search should look like this:
declare @s varchar(20) = 'D34F15'
select distinct lt.* from dbo.like_test lt
inner join barcodesearch bs on bs.N = lt.N
where bs.searchstring like @s+'%'
If you have the option of full-text searching, you can speed this up even further by adding the full-text search column directly to the barcode table
drop table if exists #liketestupdates
select n, string_agg(searchstring, ' ')
within group (order by reverse(searchstring)) as searchstring
into #liketestupdates
from barcodesearch
group by n
alter table dbo.like_test add search_column varchar(559)
update lt
set search_column = searchstring
from like_test lt
inner join #liketestupdates lu on lu.n = lt.n
CREATE FULLTEXT CATALOG ftcatalog as default;
create fulltext index on dbo.like_test ( search_column )
key index pk_liketest
The final full-text search would look like this:
declare @s varchar(20) = 'D34F15'
set @s = '"*' + @s + '*"'
select n,barcode from dbo.like_test where contains(search_column, @s)
I understand that estimated costs aren't the best measure of expected performance, but the numbers aren't wildly off here.
With the search table join, the Estimated Subtree Cost is 2.13
With the full-text search, the Estimated Subtree Cost is 0.008
Full-text is aimed at bigger texts, say texts with more than about 100 chars. You can use LIKE '%string%' (though it depends on how the barcode column is defined). Do you have an index on barcode? If not, create one and it will improve your query.
First, create an index on the columns used in your WHERE clause.
Secondly, for the columns used in the WHERE clause, consider char in place of varchar, which will save you some space in the table and in the indexes that include those columns.
A varchar(1) column carries two extra bytes of length overhead compared to char(1).
Pull only the columns you need and try to avoid *; be specific about the columns you wish to select.
Don't write as
select * from products
Instead, write:
Select Col1, Col2 from products with (Nolock)

Sybase query optimization

I'm looking at how we can improve the performance of the following Sybase query. Currently it takes about 1.5 hrs.
CREATE TABLE #TempTable
(
T_ID numeric,
M_ID numeric,
M_USR_NAME char(10),
M_USR_GROUP char(10),
M_CMP_DATE datetime,
M_CMP_TIME numeric,
M_TYPE char(10),
M_ACTION char(15)
)
select
T.M_USR_NAME,
T.M_USR_GROUP,
T.M_CMP_DATE,
T.M_CMP_TIME,
T.M_TYPE,
T.M_ACTION
from #TempTable T, AUD_TN B
where T.M_ID=B.M_ID
and T.T_ID in
(
select M_NB from TRN H where (M_BENTITY ="KROP" or M_SENTITY = "KROP")
)
UNION
select
A.M_USR_NAME,
A.M_USR_GROUP,
A.M_DATE_CMP,
A.M_TIME_CMP,
A.M_TYPE,
A.M_ACTION
from AUD_VAL A, TRN H
where A.M_DATE_CMP >= '1 May 2012' and A.M_DATE_CMP <= '31 May 2012'
and A.M_ACT_NB0=H.M_NB
and (H.M_BENTITY ="KROP" or H.M_SENTITY = "KROP")
UNION
select
TR.M_USR_NAME,
TR.M_USR_GROUP,
TR.M_DATE_CMP,
TR.M_TIME_CMP,
TR.M_TYPE,
TR.M_ACTION
from TRN_AUD TR, TRN H
where TR.M_DATE_CMP >= '1 May 2012' and TR.M_DATE_CMP <= '31 May 2012'
and TR.M_ACT_NB0=H.M_NB
and (H.M_BENTITY ="KROP" or H.M_SENTITY = "KROP")
DROP table #TempTable
Any help is greatly appreciated. Please note the following
The only table which is not indexed above is AUD_TN
Cheers
RC
Presumably the temporary table is populated, and with a lot of rows?
The temp table doesn't need to be indexed, but all joins in that part will need to use indexes.
Why not try each part of the UNION separately to find out which one is slow?
Are you okay using SET SHOWPLAN ON? You probably need that as well - you need to be able to check that Sybase is using indexes to join correctly.
Are TRN M_BENTITY and M_SENTITY indexed? If not, your IN is going to be a bit slow, although it might be okay, doing a single table scan into a worktable that Sybase'll index internally. Try an EXISTS instead as well - that might/should work better.
2nd part - both have SARGs (look up "search arguments" in Sybooks if you don't know the term). I don't know what proportion of rows they find, but assuming it's a small fraction, you should see an index used on a SARG for whichever table is scanned first, then an index join (or perhaps merge join) to the 2nd - but using indexes.
3rd part - similar discussion to the 2nd.
I reckon it'll be the 2nd or 3rd part
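The EXISTS rewrite suggested above might look like this for the first branch (a sketch only; untested against Sybase):

```sql
select T.M_USR_NAME, T.M_USR_GROUP, T.M_CMP_DATE,
       T.M_CMP_TIME, T.M_TYPE, T.M_ACTION
from #TempTable T, AUD_TN B
where T.M_ID = B.M_ID
  and exists (select 1
              from TRN H
              where H.M_NB = T.T_ID
                and (H.M_BENTITY = "KROP" or H.M_SENTITY = "KROP"))
```

EXISTS can stop probing TRN at the first matching row, whereas IN may materialize the whole subquery into a worktable first.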
How about using a cache for these tables, if the query is used on a regular basis? It's better to get a named cache and bind the tables to it. Also bind tempdb to the cache. This will greatly improve the process execution time. If the temp table is huge then you can create an index on it, which may help with performance, but I need some more details for that.
If you still have this issue open :
1) Try this at top of sql batch
set showplan on
set noexec on
See if the expected indexes are being picked up by the SQL optimizer. If no indexes exist on the columns in the where clause, please create one. Create a clustered index if possible.
2) In the first query you can replace the subquery in where clause with
create table #T_ID (
M_NB datatype
)
insert into #T_ID
select M_NB from TRN H where (M_BENTITY ="KROP" or M_SENTITY = "KROP")
and modify the first query to join against it:
from #TempTable T, AUD_TN B, #T_ID
where T.M_ID = B.M_ID
and T.T_ID = #T_ID.M_NB

Very Slow Sql Server Query

I have 2 tables resulting from merging the following tables:
Authors
-Aid bigint
-Surname nvarchar(500)
-Email nvarchar(500)
Articles
-ArId varchar(50)
-Year int
-……Some other fields……
ArticleAuthors
-ArId varchar(50)
-Aid bigint
Classifications
-ClassNumber int
-ClassDescription nvarchar(100)
ClassArticles
-ArId varchar(50)
-ClassNumber int
After denormalizing these tables, the resulting tables were:
Articles
-FieldId int
-ArId varchar(50)
-ClassNumber int (Foreign key from the Classifications table)
-Year int
Authors
-FieldId int
-ArId varchar(50) (Foreign key from the Articles table)
-Aid bigint
-Surname nvarchar(500)
-Email nvarchar(500)
-Year int
Here are the conditions of the data within the resulted tables:
SQL Server 2008 database
The relationships between the two tables are applied physically
The Authors table has 50 million records
The Articles table has 20 million records
The author has written many articles during the same year with different emails
There are authors in the authors table with ArIds that don’t reference ArIds in the Articles table (Orphan records)
The values within the Year fields range from 2002 to 2009
The Articles table has a unique clustered index on the [FieldId and Year] fields and this index created on 9 partitions (1 partition per year)
The Authors table has a non-unique clustered index on the [Year, ArId, Aid] fields and this index is created on the same 9 partition as the Articles table (1 partition per year)
The question is:
We need to create a stored procedure that gets the following result from the two tables [Aid,Surname,Email] under the following conditions:
Authors that have written articles during and after a specific year (AND)
The total number of articles for the author is greater than a specific number (AND)
The count of the articles written by the author under a specific ClassNumber is greater than a specific percentage of the total number of his articles (AND)
Get only the most recent email of the author (in the last year during which he has written an article)
If the author has more than one email in the same year, get them all.
We need the query to take the least possible time
If anyone can help, Thank you very much.
Without having the data this is very difficult to work on, but I created the tables and duplicated the procedure to get a rough idea of the query plan and potential problems.
The first noticeable thing is the part of the query written as:
SELECT DISTINCT Aid
FROM Authors EAE
WHERE EAE.[Year] >= @year AND EAE.Email IS NOT NULL AND EAE.Email != ' '
is going to table scan; you have Year as your partitioning key, but within each partition there is no index supporting the email clauses in the query. As a side note, the EAE.Email != ' ' might not give you quite what you expect:
If ' ' != '' print 'true' else print 'false'
That will print false on most systems (based on ANSI padding: trailing spaces are ignored in string comparisons).
FROM Articles ED
INNER JOIN Authors EAD ON EAD.ArId = ED.ArId
WHERE EAD.Aid = [YearAuthors].Aid AND ED.ClassNumber = @classNumber
ED.ClassNumber will have no supporting index, causing a clustered index scan.
In the final select statement :
INNER JOIN Authors EA ON EA.Aid = #TT.Aid
This has no supporting index on the #TT side, and there doesn't appear to be one on the Authors table side either.
WHERE EA.Email IS NOT NULL AND EA.Email != ' '
this has no supporting index, causing a scan.
There are a lot more issues in there, with a considerable number of sorts appearing that will probably disappear with suitable indexes. You will have to sort out some of the basic indexing on the tables, then get a new query plan / set of problems and iteratively fix the plan - you will not fix it in a single 'silver bullet' shot.
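As a starting point, hedged sketches of indexes addressing the scans called out above; the key and included columns are assumptions inferred from the posted schema, not a tested recommendation:

```sql
-- Supports the Year filter plus the email predicates without key lookups.
CREATE NONCLUSTERED INDEX IX_Authors_Year_Aid
ON dbo.Authors ([Year], Aid) INCLUDE (Email, Surname)
GO
-- Supports the ED.ClassNumber = @classNumber join branch.
CREATE NONCLUSTERED INDEX IX_Articles_ClassNumber
ON dbo.Articles (ClassNumber) INCLUDE (ArId)
GO
-- Supports the Authors-to-Articles join on ArId.
CREATE NONCLUSTERED INDEX IX_Authors_ArId_Aid
ON dbo.Authors (ArId, Aid)
GO
```

After creating these, a fresh execution plan would show whether the table scans and sorts actually disappear, per the iterative approach described above.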
Do you want help writing the query, or help in fixing the performance? The query itself should be relatively simple. That's not where you're going to get the most bang for your buck.
SQL Server comes with tools for analyzing queries and boosting performance by tuning your indexes. That's where you're going to see the biggest help in getting it to run quickly.
First step would be appropriate indexes. With the where criteria being the primary contenders and then items not used in where but selected can simply be included in the index. As mentioned there are standard tools and queries to find these.
To focus on the query in hand run the Query with "Query | Include Execution Plan" (Ctrl + M) turned on. This should show up any obvious bottlenecks.
Given the aforementioned conditions, this is the query I have created, but it takes 3 minutes (too long for a web page response):
CREATE PROC [dbo].[GetAuthorForMailing]
(
@classNumber int,
@noPapers int,
@year int,
@percent int
)
AS
BEGIN
CREATE TABLE #TT
(
Aid bigint,
allPapers int,
classPapers int,
perc as CEILING(CAST(classPapers AS DECIMAL) / CAST(allPapers AS DECIMAL) * 100)
)
INSERT INTO #TT(Aid,allPapers,classPapers)
SELECT [YearAuthors].Aid,
(
SELECT COUNT(EA.Aid)
FROM Authors EA
WHERE EA.Aid =[YearAuthors].Aid) AS [AllPapers],
(
SELECT COUNT(*)
FROM Articles ED INNER JOIN Authors EAD ON EAD.ArId = ED.ArId
WHERE EAD.Aid = [YearAuthors].Aid AND ED.ClassNumber = @classNumber) AS [ClassPapers]
FROM
(
SELECT DISTINCT Aid
FROM Authors EAE
WHERE EAE.[Year] >= @year AND EAE.Email IS NOT NULL AND EAE.Email != ' '
)AS [YearAuthors]
SELECT DISTINCT EA.Aid,EA.Surname,EA.Email,[Year]
FROM #TT INNER JOIN Authors EA ON EA.Aid = #TT.Aid
AND allPapers > @noPapers
AND perc > @percent
AND EA.[Year] = (SELECT MAX([Year]) FROM Authors WHERE Aid = EA.Aid)
WHERE EA.Email IS NOT NULL AND EA.Email != ' '
DROP TABLE #TT
END
