I have this query in a stored procedure:
SELECT
*,
ISNULL(dbo.ReturnShortageByItemCodeLinePackage(LineId, TestPackageId, MaterialDescriptionId), 0) AS Shortage
FROM
dbo.ViewMTO
I am using a function inside the query to calculate a numeric value, as you can see here:
ALTER FUNCTION [dbo].[ReturnShortageByItemCodeLinePackage]
(@lineId int, @testpackId int, @MaterialDescriptionId int)
RETURNS float
AS
BEGIN
    DECLARE @shortageQuantity float
    DECLARE @MIVQuantity float
    DECLARE @totalQuantity float
    DECLARE @spoolQuantity float
    DECLARE @ExistInSiteQuantity float
    DECLARE @BeforeDoneQuantity float

    SELECT
        @totalQuantity = Quantity,
        @spoolQuantity = QuantitySpool,
        @ExistInSiteQuantity = QuantityExistInSite,
        @BeforeDoneQuantity = QuantityBeforeDone
    FROM
        [SPMS2].[dbo].Materials
    WHERE
        LineId = @lineId
        AND TestPackageId = @testpackId
        AND MaterialDescriptionId = @MaterialDescriptionId

    SELECT
        @MIVQuantity = SUM(QuantityDeliver)
    FROM
        MaterialIssueVoucherDetails miv
    JOIN
        MaterialRequestContractorDetails mrc ON miv.MaterialRequestContractorDetailId = mrc.Id
    WHERE
        TestPackageId = @testpackId
        AND LineId = @lineId
        AND miv.MaterialDescriptionId = @MaterialDescriptionId

    IF @MIVQuantity IS NULL
    BEGIN
        SET @MIVQuantity = 0
    END

    SET @shortageQuantity = @totalQuantity - (@BeforeDoneQuantity + @ExistInSiteQuantity + @spoolQuantity + @MIVQuantity)

    RETURN ROUND(@shortageQuantity, 3)
END
My query takes 3 minutes to execute, which is catastrophic for my users! Is there any better solution?
I can recommend three things:
A. The following lines...
SELECT @totalQuantity = ...
FROM [SPMS2].[dbo].Materials
Is this accessing a different database via a linked server connection? How fast is that connection?
B. Your SP contains two SELECT statements. Which of them is the bottleneck?
You can add some PRINT statements to show when each is started:
PRINT convert(nvarchar, GetDate(), 108) + ' This is the time !'
C. Try running the SQL shown on my webpage below, which will highlight missing indexes.
Find missing indexes
Hope this helps.
Convert your scalar function to an inline table-valued function, and then place the function in the FROM clause via CROSS APPLY or OUTER APPLY. Also check the execution plan for any warnings.
Testing performance of Scalar vs Table-valued functions in sql server
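For illustration only, here is a sketch of what that conversion could look like for the function in the question. The schema is taken from the question as-is, and the table ownership of TestPackageId and LineId in the second filter is an assumption, since the original WHERE clause does not qualify them:

```sql
-- Sketch: inline TVF equivalent of the scalar UDF. The optimizer can fold
-- this into the outer query's plan instead of calling it once per row.
CREATE FUNCTION dbo.ReturnShortageTVF
(@lineId int, @testpackId int, @MaterialDescriptionId int)
RETURNS TABLE
AS
RETURN
    SELECT ROUND(m.Quantity
               - (m.QuantityBeforeDone + m.QuantityExistInSite
                  + m.QuantitySpool + ISNULL(miv.Delivered, 0)), 3) AS Shortage
    FROM [SPMS2].[dbo].Materials m
    OUTER APPLY (SELECT SUM(d.QuantityDeliver) AS Delivered
                 FROM MaterialIssueVoucherDetails d
                 JOIN MaterialRequestContractorDetails mrc
                      ON d.MaterialRequestContractorDetailId = mrc.Id
                 WHERE mrc.TestPackageId = @testpackId  -- ownership assumed
                   AND mrc.LineId = @lineId             -- ownership assumed
                   AND d.MaterialDescriptionId = @MaterialDescriptionId) miv
    WHERE m.LineId = @lineId
      AND m.TestPackageId = @testpackId
      AND m.MaterialDescriptionId = @MaterialDescriptionId
GO

-- Usage: OUTER APPLY keeps ViewMTO rows with no Materials match,
-- mirroring the ISNULL(..., 0) in the original query
SELECT v.*, ISNULL(s.Shortage, 0) AS Shortage
FROM dbo.ViewMTO v
OUTER APPLY dbo.ReturnShortageTVF(v.LineId, v.TestPackageId, v.MaterialDescriptionId) s
```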
I have a relatively simple query that runs in about 2.5 minutes when I run it with literal values in the where clause. But when I run it with local variables containing those same values, the query hangs or runs presumably forever. (I haven't tried letting it run more than 90 minutes.)
Here's the query with names obfuscated (because rules). I've tried replacing the EXISTS with INNER JOINs, but it didn't help. The variables below are local variables, not parameters. This can't be a parameter sniffing issue. But when the variables are replaced with their literal values (two DATETIMEs and three INTs) the query runs fine.
DECLARE @SubsetStart DATETIME = '2013-01-01 00:00:00'
DECLARE @SubsetEnd DATETIME = '2013-12-31 23:59:59'
DECLARE @SCD INT = 217
DECLARE @MFP INT = 8
DECLARE @EXP INT = 39298
SELECT
MainTable.MFID ManufacturerID
,SUM(MainTable.AMT) AMT
FROM
MainTable
WHERE
EXISTS (
SELECT TID
FROM MTMTable
WHERE MTMTable.TID = MainTable.TID
AND MTMTable.DEL = 0
AND EXISTS (
SELECT CID
FROM RelatedTable
WHERE RelatedTable.CID = MTMTable.CID
AND DEL = 0
AND RelatedTable.TD BETWEEN @SubsetStart AND @SubsetEnd
)
)
AND EXISTS (
SELECT AID
FROM OtherTable
WHERE OtherTable.AID = MainTable.CAID
AND OtherTable.AHTID = @MFP
AND OtherTable.DEL = 0)
AND MainTable.DAID <> @EXP
AND MainTable.SID = @SCD
GROUP BY
MainTable.MFID
I am completely at a loss as to why this simple query should behave this way.
The issue is that when you use local variables, the optimizer ignores their values and uses general statistical assumptions; at compile time it simply does not know the values of those variables. You can verify this by using the OPTION (RECOMPILE) hint on your query, which recompiles the query using the current values in your variables.
You can read about why using local variables in stored procedures can hurt performance here:
https://www.brentozar.com/archive/2014/06/tuning-stored-procedures-local-variables-problems/
and here:
http://www.sqlbadpractices.com/using-local-variables-in-t-sql-queries/
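As a quick check, the hint is simply appended to the statement. This sketch uses a trimmed-down version of the query above, not the full query:

```sql
-- With OPTION (RECOMPILE), the plan is compiled for the current values of
-- the local variables instead of density-based "average" estimates.
SELECT MainTable.MFID ManufacturerID,
       SUM(MainTable.AMT) AMT
FROM MainTable
WHERE MainTable.DAID <> @EXP
  AND MainTable.SID = @SCD
GROUP BY MainTable.MFID
OPTION (RECOMPILE)
```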
I created a database with NBA player statistics just to practice SQL and SSRS. I am new to working with stored procedures, but I created the following procedure that should (I think) allow me to specify the team and number of minutes.
CREATE PROCEDURE extrapstats
--Declare variables for the team and the amount of minutes to use in calculations
@team NCHAR OUTPUT,
@minutes DECIMAL OUTPUT
AS
BEGIN
SELECT p.Fname + ' ' + p.Lname AS Player_Name,
p.Position,
--Creates averages based on the number of minutes per game specified in @minutes
(SUM(plg.PTS)/SUM(plg.MP))*@minutes AS PTS,
(SUM(plg.TRB)/SUM(plg.MP))*@minutes AS TRB,
(SUM(plg.AST)/SUM(plg.MP))*@minutes AS AST,
(SUM(plg.BLK)/SUM(plg.MP))*@minutes AS BLK,
(SUM(plg.STL)/SUM(plg.MP))*@minutes AS STL,
(SUM(plg.TOV)/SUM(plg.MP))*@minutes AS TOV,
(SUM(plg.FT)/SUM(plg.MP))*@minutes AS FTs,
SUM(plg.FT)/SUM(plg.FTA) AS FT_Percentage,
(SUM(plg.FG)/SUM(plg.MP))*@minutes AS FGs,
SUM(FG)/SUM(FGA) as Field_Percentage,
(SUM(plg.[3P])/SUM(plg.MP))*@minutes AS Threes,
SUM([3P])/SUM([3PA]) AS Three_Point_Percentage
FROM PlayerGameLog plg
--Joins the Players and PlayerGameLog tables
INNER JOIN Players p
ON p.PlayerID = plg.PlayerID
AND TeamID = @team
GROUP BY p.Fname, p.Lname, p.Position, p.TeamID
ORDER BY PTS DESC
END;
I then tried to use the SP by executing the query below:
DECLARE @team NCHAR,
@minutes DECIMAL
EXECUTE extrapstats @team = 'OKC', @minutes = 35
SELECT *
SELECT *
When I do that, I encounter this message:
Msg 263, Level 16, State 1, Line 5
Must specify table to select from.
I've tried different variations of this, but nothing has worked. I thought the SP specified the tables from which to select the data.
Any ideas?
Declaring the stored procedure parameters with the OUTPUT clause means their values will be returned by the stored procedure to the caller. However, you are using them as input parameters; remove the OUTPUT clause from both parameters and try again.
Also remove the trailing SELECT * from your execute batch. It is not required: the stored procedure already returns the data, since it contains the SELECT statement.
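A minimal corrected sketch, using the same objects as the question. The NCHAR(3) and DECIMAL(5, 2) sizes are assumptions added here, because a bare NCHAR parameter holds only a single character (so 'OKC' would be truncated) and a bare DECIMAL has scale 0:

```sql
CREATE PROCEDURE extrapstats
    @team NCHAR(3),          -- input parameter, no OUTPUT; length is assumed
    @minutes DECIMAL(5, 2)   -- scale assumed so per-minute rates are usable
AS
BEGIN
    SELECT p.Fname + ' ' + p.Lname AS Player_Name,
           (SUM(plg.PTS) / SUM(plg.MP)) * @minutes AS PTS
           -- ...remaining columns unchanged from the question...
    FROM PlayerGameLog plg
    INNER JOIN Players p
        ON p.PlayerID = plg.PlayerID
       AND p.TeamID = @team
    GROUP BY p.Fname, p.Lname, p.Position, p.TeamID
    ORDER BY PTS DESC
END;
GO

-- Call it directly: no DECLARE block and no trailing SELECT * needed
EXECUTE extrapstats @team = 'OKC', @minutes = 35;
```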
I currently have a stored procedure in MSSQL where I execute a SELECT statement multiple times based on the variables I give the stored procedure. The stored procedure counts how many results will be returned for every filter a user can enable.
The stored procedure isn't the issue; I transformed the SELECT statement from the stored procedure into a regular SELECT statement, which looks like:
DECLARE @contentRootId int = 900589
DECLARE @RealtorIdList varchar(2000) = ';880;884;1000;881;885;'
DECLARE @publishSoldOrRentedSinceDate int = 8
DECLARE @isForSale BIT= 1
DECLARE @isForRent BIT= 0
DECLARE @isResidential BIT= 1
--...(another 55 variables)...
--Table to be returned
DECLARE @resultTable TABLE
(
variableName varchar(100),
[value] varchar(200)
)
-- Create table based on input variable. Example: turns ';18;118;' into a table containing two ints 18 AND 118
DECLARE @RealtorIdTable table(RealtorId int)
INSERT INTO @RealtorIdTable SELECT * FROM dbo.Split(@RealtorIdList,';') option (maxrecursion 150)
INSERT INTO @resultTable ([value], variableName)
SELECT [Value], VariableName FROM(
Select count(*) as TotalCount,
ISNULL(SUM(CASE WHEN reps.ForRecreation = 1 THEN 1 else 0 end), 0) as ForRecreation,
ISNULL(SUM(CASE WHEN reps.IsQualifiedForSeniors = 1 THEN 1 else 0 end), 0) as IsQualifiedForSeniors,
--...(A whole bunch more SUM(CASE)...
FROM TABLE1 reps
LEFT JOIN temp t on
t.ContentRootID = @contentRootId
AND t.RealEstatePropertyID = reps.ID
WHERE
(EXISTS(select 1 from @RealtorIdTable where RealtorId = reps.RealtorID))
AND (@SelectedGroupIds IS NULL OR EXISTS(select 1 from @SelectedGroupIdtable where GroupId = t.RealEstatePropertyGroupID))
AND (ISNULL(reps.IsForSale,0) = ISNULL(@isForSale,0))
AND (ISNULL(reps.IsForRent, 0) = ISNULL(@isForRent,0))
AND (ISNULL(reps.IsResidential, 0) = ISNULL(@isResidential,0))
AND (ISNULL(reps.IsCommercial, 0) = ISNULL(@isCommercial,0))
AND (ISNULL(reps.IsInvestment, 0) = ISNULL(@isInvestment,0))
AND (ISNULL(reps.IsAgricultural, 0) = ISNULL(@isAgricultural,0))
--...(Around 50 more of these WHERE-statements)...
) as tbl
UNPIVOT (
[Value]
FOR [VariableName] IN(
[TotalCount],
[ForRecreation],
[IsQualifiedForSeniors],
--...(All the other things i selected in above query)...
)
) as d
select * from @resultTable
The combination of a realtor ID and content ID gives me a default set of X records. When I choose a combination that gives me ~4600 records, the execution time is around 250 ms. When I execute the statement with a combination that gives me ~600 records, the execution time is about 20 ms.
I would like to know why this is happening. I tried removing all the SUM(CASE) expressions in the SELECT, I tried removing almost everything from the WHERE clause, and I tried removing the JOIN. But I keep seeing the huge difference between the result set of 4600 and 600.
Table variables can perform worse when the number of records is large. Consider using a temporary table instead. See When should I use a table variable vs temporary table in sql server?
Also, consider replacing the UNPIVOT by alternative SQL code. Writing your own TSQL code will give you more control and even increase performance. See for example PIVOT, UNPIVOT and performance
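As a sketch of the temp-table swap: dbo.Split and @RealtorIdList come from the question, nothing else here is verified against the real schema:

```sql
-- A temp table gets real statistics and accurate cardinality estimates,
-- whereas a table variable is typically estimated at very few rows.
CREATE TABLE #RealtorIdTable (RealtorId int PRIMARY KEY);
-- (PRIMARY KEY assumes the ID list contains no duplicates)

INSERT INTO #RealtorIdTable (RealtorId)
SELECT * FROM dbo.Split(@RealtorIdList, ';') OPTION (MAXRECURSION 150);

-- ...run the counting query against #RealtorIdTable instead of @RealtorIdTable...

DROP TABLE #RealtorIdTable;
```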
I have a big table (1.5 million rows) that includes a column of floats, which I join to a slightly smaller table (15k rows) that also has floats; I then multiply various floats.
I have discovered I get significant performance gains (over 10 times faster) using numerics rather than floats in the big table.
Trouble is, I don't know the size of the floats in advance, so I was hoping to calculate the length of the biggest float and then use that information to cast the float to a numeric using a variable in the declaration, i.e. cast(MyFloatColumn as numeric(@varInt,2)).
It seems I'm not allowed to do this (an "Incorrect syntax" error), so is there an alternative?
Below is some code that shows what I am trying to do; the final statement is where the error occurs.
Many thanks for your help,
Simon
CREATE TABLE dbo.MyTable
(
MyFloatColumn float
);
GO
INSERT INTO dbo.MyTable VALUES (12345.12041);
INSERT INTO dbo.MyTable VALUES (123.1);
GO
declare @precisionofbiggest int
SELECT @precisionofbiggest = sizeofintpart + 2
FROM (SELECT TOP (1) Len(Cast(Cast(myfloatcolumn AS BIGINT) AS VARCHAR)) AS
sizeofintpart
FROM dbo.mytable
ORDER BY myfloatcolumn DESC) AS atable
SELECT cast(myfloatcolumn AS numeric(@precisionofbiggest,2)) AS anewnumericcolumn
FROM dbo.mytable
(@precisionofbiggest will be 7 in this example, so if it worked I would get
aNewNumericColumn
12345.12
123.10
)
The last statement has to use dynamic SQL so it can pick up the variable's value:
declare @sql nvarchar(max)
SET @sql = 'SELECT cast(myfloatcolumn AS numeric('
+ CONVERT(VARCHAR(20), @precisionofbiggest)
+ ',2)) AS anewnumericcolumn FROM dbo.mytable'
exec sp_executesql @sql
I have a query which selects some data, I pass some parameters in it:
DECLARE @FromAccDocNo INT = 1,
@ToAccDocNo INT = 999999999,
@FromDate CHAR(10) = '1900/01/01',
@ToDate CHAR(10) = '2999/12/30',
@L1Code INT = 129
SELECT ad.AccDocNo,
ad.AccDocDate,
add1.Row,
add1.RowComment,
add1.Debit,
add1.Credit
FROM AccDoc ad
INNER JOIN AccDocDetail add1
ON add1.AccDocNo = ad.AccDocNo
INNER JOIN Topic t
ON t.TopicCode = add1.TopicCode
WHERE t.L1Code = @L1Code -- here is the difference
AND add1.AccDocNo BETWEEN @FromAccDocNo AND @ToAccDocNo
AND ad.EffectiveDate BETWEEN @FromDate AND @ToDate
ORDER BY
ad.AccDocNo
At first I write the value 129 explicitly as @L1Code (it takes 0.010 sec).
In the second form I pass @L1Code into the query (it takes 2.500 sec).
Can anyone explain what happens?
Please read the canonical reference: Slow in the Application, Fast in SSMS? (specifically this bit)
One way to fix this is to add OPTION (RECOMPILE) at the end of the query.
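Applied to the query in the question, the hint goes after the ORDER BY. This is a sketch with the column list trimmed for brevity:

```sql
SELECT ad.AccDocNo, add1.Debit, add1.Credit
FROM AccDoc ad
INNER JOIN AccDocDetail add1 ON add1.AccDocNo = ad.AccDocNo
INNER JOIN Topic t ON t.TopicCode = add1.TopicCode
WHERE t.L1Code = @L1Code
  AND add1.AccDocNo BETWEEN @FromAccDocNo AND @ToAccDocNo
  AND ad.EffectiveDate BETWEEN @FromDate AND @ToDate
ORDER BY ad.AccDocNo
OPTION (RECOMPILE)  -- plan is compiled for the current variable values
```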