SQL Server slow results with parameters

I have a query which selects some data; I pass some parameters into it:
DECLARE @FromAccDocNo INT = 1,
        @ToAccDocNo INT = 999999999,
        @FromDate CHAR(10) = '1900/01/01',
        @ToDate CHAR(10) = '2999/12/30',
        @L1Code INT = 129

SELECT ad.AccDocNo,
       ad.AccDocDate,
       add1.Row,
       add1.RowComment,
       add1.Debit,
       add1.Credit
FROM AccDoc ad
     INNER JOIN AccDocDetail add1
         ON add1.AccDocNo = ad.AccDocNo
     INNER JOIN Topic t
         ON t.TopicCode = add1.TopicCode
WHERE t.L1Code = @L1Code -- here is the difference
  AND add1.AccDocNo BETWEEN @FromAccDocNo AND @ToAccDocNo
  AND ad.EffectiveDate BETWEEN @FromDate AND @ToDate
ORDER BY ad.AccDocNo
In the first form, I write the value 129 explicitly in place of @L1Code (it takes 0.010 sec).
In the second form, I pass @L1Code into the query as a variable (it takes 2.500 sec).
Can anyone explain what is happening?

Please read the canonical reference: Slow in the Application, Fast in SSMS? (specifically this bit)
One way to fix this is to add OPTION (RECOMPILE) at the end of the query.
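With the query above, that looks like this (only the tail is shown):

WHERE t.L1Code = @L1Code
  AND add1.AccDocNo BETWEEN @FromAccDocNo AND @ToAccDocNo
  AND ad.EffectiveDate BETWEEN @FromDate AND @ToDate
ORDER BY ad.AccDocNo
OPTION (RECOMPILE) -- compiles a fresh plan using the current variable values, as if they were literals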

Related

How do I use @@ROWCOUNT in a stored procedure, against rows in another table, to work out a percentage?

Firstly, may I state that I'm aware of the ability to, e.g., create a new function, declare variables for rowcount1 and rowcount2, run a stored procedure that returns a subset of rows from a table, then determine the entire row count for that same table, assign it to the second variable, and finally compute (rowcount1 / rowcount2) x 100....
However, is there a cleaner way to do this which doesn't involve running things like this stored procedure numerous times? Something like
select (count(*stored procedure name*) / select count(*) from table) x 100) as Percentage...
Sorry for the crap scenario!
EDIT: Someone has asked for more details. Ultimately, and to cut a very long story short, I wish to know what people would consider the quickest and least processor-intensive method to show the percentage of rows returned by the stored procedure out of ALL the rows available in that table. Does that make more sense?
The code in the stored procedure is below:
SET @SQL = 'SELECT COUNT(DISTINCT c.ElementLabel), r.FirstName, r.LastName, c.LastReview,
    CASE
        WHEN c.LastReview < DateAdd(month, -1, GetDate()) THEN ''OUT of Date''
        WHEN c.LastReview >= DateAdd(month, -1, GetDate()) THEN ''In Date''
        WHEN c.LastReview IS NULL THEN ''Not Yet Reviewed''
    END AS [Update Status]
FROM [Residents-' + @home_name + '] r
LEFT JOIN [CarePlans-' + @home_name + '] c ON r.PersonID = c.PersonID
WHERE r.Location = ''' + @home_name + '''
  AND CarePlanType = 0
GROUP BY r.LastName, r.FirstName, c.LastReview
HAVING COUNT(ElementLabel) >= 14'
Thanks
Ant
I could not tell from your question whether you are attempting to get the count and the result set in one query. If it is OK to execute the SP and calculate the table count separately, then you could store the results of the stored procedure in a temp table.
CREATE TABLE #Results (ID INT, Value INT)

INSERT #Results EXEC myStoreProc @Parameter1, @Parameter2

SELECT
    Result = ((SELECT COUNT(*) FROM #Results) * 100.0) / (SELECT COUNT(*) FROM [table]) -- 100.0 forces decimal division; two int counts would otherwise divide to 0
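Since the question title mentions @@ROWCOUNT: INSERT ... EXEC sets it, so you can also capture the subset count right after filling the temp table. A sketch, using the same placeholders as above:

DECLARE @SubsetCount INT

INSERT #Results EXEC myStoreProc @Parameter1, @Parameter2
SET @SubsetCount = @@ROWCOUNT -- number of rows the stored procedure returned

SELECT Result = @SubsetCount * 100.0 / (SELECT COUNT(*) FROM [table])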

Why does my query hang/run forever with variables instead of literals?

I have a relatively simple query that runs in about 2.5 minutes when I run it with literal values in the WHERE clause. But when I run it with local variables containing those same values, the query hangs or runs, presumably, forever. (I haven't tried letting it run for more than 90 minutes.)
Here's the query, with names obfuscated (because rules). I've tried replacing the EXISTS clauses with INNER JOINs, but it didn't help. The variables below are local variables, not parameters, so this can't be a parameter-sniffing issue. When the variables are replaced with their literal values (two DATETIMEs and three INTs), the query runs fine.
DECLARE @SubsetStart DATETIME = '2013-01-01 00:00:00'
DECLARE @SubsetEnd DATETIME = '2013-12-31 23:59:59'
DECLARE @SCD INT = 217
DECLARE @MFP INT = 8
DECLARE @EXP INT = 39298

SELECT MainTable.MFID ManufacturerID,
       SUM(MainTable.AMT) AMT
FROM MainTable
WHERE EXISTS (SELECT TID
              FROM MTMTable
              WHERE MTMTable.TID = MainTable.TID
                AND MTMTable.DEL = 0
                AND EXISTS (SELECT CID
                            FROM RelatedTable
                            WHERE RelatedTable.CID = MTMTable.CID
                              AND DEL = 0
                              AND RelatedTable.TD BETWEEN @SubsetStart AND @SubsetEnd))
  AND EXISTS (SELECT AID
              FROM OtherTable
              WHERE OtherTable.AID = MainTable.CAID
                AND OtherTable.AHTID = @MFP
                AND OtherTable.DEL = 0)
  AND MainTable.DAID <> @EXP
  AND MainTable.SID = @SCD
GROUP BY MainTable.MFID
I am completely at a loss as to why this simple query should behave this way.
The issue is that when you use local variables, the optimizer ignores their values and uses general statistical assumptions; at compile time it simply does not know what those variables will contain. You can verify this by adding the OPTION (RECOMPILE) hint to your query, which recompiles it using the current values of the variables.
You can read about why using local variables in stored procedures can hurt performance here:
https://www.brentozar.com/archive/2014/06/tuning-stored-procedures-local-variables-problems/
and here:
http://www.sqlbadpractices.com/using-local-variables-in-t-sql-queries/
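Applied to the query above, only the tail changes:

GROUP BY MainTable.MFID
OPTION (RECOMPILE) -- the plan is compiled with the current values of @SubsetStart, @SubsetEnd, @SCD, @MFP and @EXP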

Function in select statement makes my query run very slowly

I have this query in a stored procedure:
SELECT
*,
ISNULL(dbo.ReturnShortageByItemCodeLinePackage(LineId, TestPackageId, MaterialDescriptionId), 0) AS Shortage
FROM
dbo.ViewMTO
I am using a function inside the query to calculate a shortage value, as you can see here:
ALTER FUNCTION [dbo].[ReturnShortageByItemCodeLinePackage]
(@lineId int, @testpackId int, @MaterialDescriptionId int)
RETURNS float
AS
BEGIN
    DECLARE @shortageQuantity float
    DECLARE @MIVQuantity float
    DECLARE @totalQuantity float
    DECLARE @spoolQuantity float
    DECLARE @ExistInSiteQuantity float
    DECLARE @BeforeDoneQuantity float

    SELECT @totalQuantity = Quantity,
           @spoolQuantity = QuantitySpool,
           @ExistInSiteQuantity = QuantityExistInSite,
           @BeforeDoneQuantity = QuantityBeforeDone
    FROM [SPMS2].[dbo].Materials
    WHERE LineId = @lineId
      AND TestPackageId = @testpackId
      AND MaterialDescriptionId = @MaterialDescriptionId

    SELECT @MIVQuantity = SUM(QuantityDeliver)
    FROM MaterialIssueVoucherDetails miv
    JOIN MaterialRequestContractorDetails mrc
        ON miv.MaterialRequestContractorDetailId = mrc.Id
    WHERE TestPackageId = @testpackId
      AND LineId = @lineId
      AND miv.MaterialDescriptionId = @MaterialDescriptionId

    IF @MIVQuantity IS NULL
        SET @MIVQuantity = 0

    SET @shortageQuantity = @totalQuantity - (@BeforeDoneQuantity + @ExistInSiteQuantity + @spoolQuantity + @MIVQuantity)

    RETURN ROUND(@shortageQuantity, 3)
END
My query takes 3 minutes to execute; that is catastrophic for my users! Is there any better solution?
I can recommend three things:
A. The following line...
SELECT @totalQuantity = ...
FROM [SPMS2].[dbo].Materials
Is this accessing a different database via a linked-server connection? How fast is that connection?
B. Your function contains two SELECT statements. Which of them is the bottleneck?
You can add some PRINT statements to show when each one starts:
PRINT convert(nvarchar, GetDate(), 108) + ' This is the time!'
C. Try running the SQL shown on my webpage below, which will highlight missing indexes.
Find missing indexes
Hope this helps.
Convert your scalar function to a table-valued function, and then place the function in the FROM clause with a LEFT JOIN (or an OUTER APPLY). Do check the execution plan for any warnings.
Testing performance of Scalar vs Table-valued functions in sql server
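A minimal sketch of that conversion, built from the function body above (the TVF name, the OUTER APPLY shape, and the column qualifications in the delivery subquery are assumptions; adjust them to your schema):

CREATE FUNCTION [dbo].[ReturnShortageByItemCodeLinePackageTVF]
(@lineId int, @testpackId int, @MaterialDescriptionId int)
RETURNS TABLE
AS RETURN
SELECT Shortage = ROUND(m.Quantity - (m.QuantityBeforeDone
                                      + m.QuantityExistInSite
                                      + m.QuantitySpool
                                      + ISNULL(del.QuantityDelivered, 0)), 3)
FROM [SPMS2].[dbo].Materials m
OUTER APPLY (SELECT SUM(miv.QuantityDeliver) AS QuantityDelivered
             FROM MaterialIssueVoucherDetails miv
             JOIN MaterialRequestContractorDetails mrc
                 ON miv.MaterialRequestContractorDetailId = mrc.Id
             WHERE miv.TestPackageId = @testpackId -- qualification assumed
               AND miv.LineId = @lineId            -- qualification assumed
               AND miv.MaterialDescriptionId = @MaterialDescriptionId) del
WHERE m.LineId = @lineId
  AND m.TestPackageId = @testpackId
  AND m.MaterialDescriptionId = @MaterialDescriptionId

The query in the stored procedure would then apply the function per row instead of calling a scalar function:

SELECT v.*,
       ISNULL(s.Shortage, 0) AS Shortage
FROM dbo.ViewMTO v
OUTER APPLY dbo.ReturnShortageByItemCodeLinePackageTVF(v.LineId, v.TestPackageId, v.MaterialDescriptionId) s

Because the function is an inline TVF, the optimizer can fold its body into the main plan rather than executing a black-box function once per row.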

Performance issue with larger result sets in MSSQL

I currently have a stored procedure in MSSQL in which I execute a SELECT statement multiple times based on the variables I give the stored procedure. The stored procedure counts how many results would be returned for every filter a user can enable.
The stored procedure isn't the issue; I transformed the SELECT statement from the stored procedure into a regular SELECT statement, which looks like:
DECLARE @contentRootId int = 900589
DECLARE @RealtorIdList varchar(2000) = ';880;884;1000;881;885;'
DECLARE @publishSoldOrRentedSinceDate int = 8
DECLARE @isForSale BIT = 1
DECLARE @isForRent BIT = 0
DECLARE @isResidential BIT = 1
--...(another 55 variables)...

--Table to be returned
DECLARE @resultTable TABLE
(
    variableName varchar(100),
    [value] varchar(200)
)

-- Create table based on input variable. Example: turns ';18;118;' into a table containing the two ints 18 AND 118
DECLARE @RealtorIdTable TABLE (RealtorId int)
INSERT INTO @RealtorIdTable SELECT * FROM dbo.Split(@RealtorIdList, ';') OPTION (MAXRECURSION 150)

INSERT INTO @resultTable ([value], variableName)
SELECT [Value], VariableName
FROM
(
    SELECT COUNT(*) AS TotalCount,
           ISNULL(SUM(CASE WHEN reps.ForRecreation = 1 THEN 1 ELSE 0 END), 0) AS ForRecreation,
           ISNULL(SUM(CASE WHEN reps.IsQualifiedForSeniors = 1 THEN 1 ELSE 0 END), 0) AS IsQualifiedForSeniors
           --...(a whole bunch more SUM(CASE)...)...
    FROM TABLE1 reps
    LEFT JOIN temp t
        ON t.ContentRootID = @contentRootId
       AND t.RealEstatePropertyID = reps.ID
    WHERE (EXISTS (SELECT 1 FROM @RealtorIdTable WHERE RealtorId = reps.RealtorID))
      AND (@SelectedGroupIds IS NULL OR EXISTS (SELECT 1 FROM @SelectedGroupIdtable WHERE GroupId = t.RealEstatePropertyGroupID))
      AND (ISNULL(reps.IsForSale, 0) = ISNULL(@isForSale, 0))
      AND (ISNULL(reps.IsForRent, 0) = ISNULL(@isForRent, 0))
      AND (ISNULL(reps.IsResidential, 0) = ISNULL(@isResidential, 0))
      AND (ISNULL(reps.IsCommercial, 0) = ISNULL(@isCommercial, 0))
      AND (ISNULL(reps.IsInvestment, 0) = ISNULL(@isInvestment, 0))
      AND (ISNULL(reps.IsAgricultural, 0) = ISNULL(@isAgricultural, 0))
      --...(around 50 more of these WHERE conditions)...
) AS tbl
UNPIVOT
(
    [Value]
    FOR [VariableName] IN
    (
        [TotalCount],
        [ForRecreation],
        [IsQualifiedForSeniors]
        --...(all the other things I selected in the above query)...
    )
) AS d

SELECT * FROM @resultTable
A given combination of realtor and content ID returns a default set of X records. When I choose a combination that gives me ~4600 records, the execution time is around 250 ms; when I execute the statement with a combination that gives me ~600 records, the execution time is about 20 ms.
I would like to know why this is happening. I tried removing all the SUM(CASE...) expressions from the SELECT, I tried removing almost everything from the WHERE clause, and I tried removing the JOIN, but I keep seeing this huge difference between the result sets of 4600 and 600 records.
Table variables can perform worse when the number of records is large. Consider using a temporary table instead. See When should I use a table variable vs temporary table in sql server?
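A sketch of that change for the realtor-ID list above (the PRIMARY KEY is an illustrative extra; dbo.Split is the same helper used in the question):

CREATE TABLE #RealtorIdTable (RealtorId int PRIMARY KEY)
INSERT INTO #RealtorIdTable
SELECT * FROM dbo.Split(@RealtorIdList, ';') OPTION (MAXRECURSION 150)

-- reference #RealtorIdTable instead of @RealtorIdTable in the EXISTS check;
-- unlike a table variable, the temp table carries statistics the optimizer can use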
Also, consider replacing the UNPIVOT with alternative SQL code. Writing the T-SQL yourself gives you more control and may even improve performance. See for example PIVOT, UNPIVOT and performance
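One common rewrite is CROSS APPLY with a VALUES constructor. A sketch (the aggregate list is abbreviated, and the CONVERTs are there because the value column is varchar(200)):

INSERT INTO @resultTable (variableName, [value])
SELECT v.VariableName, v.[Value]
FROM
(
    SELECT COUNT(*) AS TotalCount,
           ISNULL(SUM(CASE WHEN reps.ForRecreation = 1 THEN 1 ELSE 0 END), 0) AS ForRecreation
           --...same aggregates, joins and WHERE clause as above...
    FROM TABLE1 reps
) AS tbl
CROSS APPLY
(
    VALUES ('TotalCount',    CONVERT(varchar(200), tbl.TotalCount)),
           ('ForRecreation', CONVERT(varchar(200), tbl.ForRecreation))
           --...one row per aggregate...
) AS v (VariableName, [Value])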

Optimizing sql server scalar-valued function

Here is my question,
I have a view calling another view, and that second view uses a scalar function, which obviously runs for each row of the table. For only 322 rows, it takes around 30 seconds; when I take out the calculated field, it takes 1 second.
I would appreciate it if you could give me an idea of whether I can optimize the function, or whether there is any other way to increase the performance.
Here is the function:
ALTER FUNCTION [dbo].[fnCabinetLoad]
(
    @site nvarchar(15),
    @cabrow nvarchar(50),
    @cabinet nvarchar(50)
)
RETURNS float
AS
BEGIN
    -- Declare the return variable here
    DECLARE @ResultVar float

    -- Add the T-SQL statements to compute the return value here
    SELECT @ResultVar = SUM(d.Value)
    FROM
    (
        SELECT dt.*,
               ROW_NUMBER() OVER (PARTITION BY dt.tagname ORDER BY dt.timestamp DESC) AS RowNum
        FROM vDataLog dt
        WHERE dt.Timestamp BETWEEN dateadd(minute, -15, getdate()) AND GetDate()
    ) d
    INNER JOIN [SKY_EGX_CONFIG].[dbo].[vPanelSchedule] AS p
        ON p.rpp = left(d.TagName, 3) + substring(d.TagName, 5, 5) + substring(d.TagName, 11, 8)
       AND right(p.pole, 2) = substring(d.TagName, 23, 2)
       AND p.site = @site
       AND p.EqpRowNumber = @cabrow
       AND p.EqpCabinetName = @cabinet
    WHERE d.RowNum = 1
      AND Right(d.TagName, 6) = 'kW Avg'

    RETURN @ResultVar
END
Scalar-valued functions have atrocious performance. Your function looks like an excellent candidate for an inline table-valued function that you can CROSS APPLY:
CREATE FUNCTION [dbo].[fnCabinetLoad]
(
    @site nvarchar(15),
    @cabrow nvarchar(50),
    @cabinet nvarchar(50)
)
RETURNS TABLE
AS RETURN
SELECT SUM(d.Value) AS [TotalLoad]
FROM
(
    SELECT dt.*,
           ROW_NUMBER() OVER (PARTITION BY dt.tagname ORDER BY dt.timestamp DESC) AS RowNum
    FROM vDataLog dt
    WHERE dt.Timestamp BETWEEN dateadd(minute, -15, getdate()) AND GetDate()
) d
INNER JOIN [SKY_EGX_CONFIG].[dbo].[vPanelSchedule] AS p
    ON p.rpp = left(d.TagName, 3) + substring(d.TagName, 5, 5) + substring(d.TagName, 11, 8)
   AND right(p.pole, 2) = substring(d.TagName, 23, 2)
   AND p.site = @site
   AND p.EqpRowNumber = @cabrow
   AND p.EqpCabinetName = @cabinet
WHERE d.RowNum = 1
  AND Right(d.TagName, 6) = 'kW Avg'
In your view:
SELECT ..., cabinetLoad.TotalLoad
FROM ... CROSS APPLY dbo.fnCabinetLoad(.., .., ..) AS cabinetLoad
My understanding is that the returned result set is 322 rows, but if the vDataLog table is significantly larger, I would run that subquery first and dump its result set into a table variable. Then you can use the table variable instead of the nested query.
Otherwise, as it stands now, I think the joins are being done on all rows of the nested query, and the WHERE clause then strips them off to get the rows you want.
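A sketch of that approach (the column names and types in the table variable are assumptions; adjust them to match vDataLog):

DECLARE @RecentReadings TABLE (TagName nvarchar(50), Value float, RowNum int)

INSERT INTO @RecentReadings (TagName, Value, RowNum)
SELECT dt.TagName,
       dt.Value,
       ROW_NUMBER() OVER (PARTITION BY dt.tagname ORDER BY dt.timestamp DESC)
FROM vDataLog dt
WHERE dt.Timestamp BETWEEN dateadd(minute, -15, getdate()) AND GetDate()

-- join @RecentReadings to vPanelSchedule instead of the nested query,
-- so the join only sees the pre-filtered 15-minute window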
You really don't need a function; get rid of the nested view (very poorly performant)! Encapsulate the entire logic in a stored procedure to get the desired result, so that instead of computing everything row by row, it is computed as a set. Instead of the view, use the source table to do the computation inside the stored procedure.
Apart from that, you are using the functions RIGHT, LEFT and SUBSTRING inside your code. Never have them in a WHERE or JOIN. Try to compute them beforehand and dump the results into a temp table so that they are computed once; then index the temp table on those columns.
Sorry for the theoretical answer, but right now the code is a mess. It needs to go through layers of changes to have decent performance.
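A sketch of that precomputation, using the same string expressions as the join above (#DataLogKeys and its column names are illustrative):

SELECT dt.*,
       left(dt.TagName, 3) + substring(dt.TagName, 5, 5) + substring(dt.TagName, 11, 8) AS RppKey,
       substring(dt.TagName, 23, 2) AS PoleKey,
       right(dt.TagName, 6) AS TagSuffix
INTO #DataLogKeys
FROM vDataLog dt
WHERE dt.Timestamp BETWEEN dateadd(minute, -15, getdate()) AND GetDate()

CREATE INDEX IX_DataLogKeys ON #DataLogKeys (RppKey, PoleKey)

-- the join to vPanelSchedule can now compare plain, indexed columns
-- instead of evaluating LEFT/SUBSTRING/RIGHT per row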
Turn the function into a view.
Use it by filtering on the columns site, cabrow, cabinet and Timestamp. When doing that, try storing GetDate() and dateadd(minute, -15, getdate()) in variables; not doing so can prevent you from taking advantage of any index on Timestamp.
SELECT SUM(d.Value) AS [TotalLoad],
       d.Timestamp,
       p.site,
       p.EqpRowNumber AS cabrow,
       p.EqpCabinetName AS cabinet
FROM
(
    SELECT dt.*,
           ROW_NUMBER() OVER (PARTITION BY dt.tagname ORDER BY dt.timestamp DESC) AS RowNum
    FROM vDataLog dt
) d
INNER JOIN [SKY_EGX_CONFIG].[dbo].[vPanelSchedule] AS p
    ON p.rpp = left(d.TagName, 3) + substring(d.TagName, 5, 5) + substring(d.TagName, 11, 8)
   AND right(p.pole, 2) = substring(d.TagName, 23, 2)
WHERE d.RowNum = 1
  AND d.TagName LIKE '%kW Avg'
GROUP BY d.Timestamp, p.site, p.EqpRowNumber, p.EqpCabinetName
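Querying the view would then look something like this (the view name vCabinetLoad is illustrative, and @site/@cabrow/@cabinet stand for the caller's values):

DECLARE @Now datetime = GetDate()
DECLARE @Start datetime = dateadd(minute, -15, @Now)

SELECT TotalLoad
FROM dbo.vCabinetLoad -- the SELECT above wrapped in CREATE VIEW
WHERE site = @site
  AND cabrow = @cabrow
  AND cabinet = @cabinet
  AND [Timestamp] BETWEEN @Start AND @Now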
