I am looking for a way to write the procedure below without using a CURSOR, or just to find a better-performing query.
CREATE TABLE #OrderTransaction (OrderTransactionId int, ProductId int, Quantity int);
CREATE TABLE #Product (ProductId int, MediaTypeId int);
CREATE TABLE #OrderDelivery (OrderTransactionId int, MediaTypeId int);
INSERT INTO #Product (ProductId, MediaTypeId) VALUES (1,1);
INSERT INTO #Product (ProductId, MediaTypeId) VALUES (2,2);
INSERT INTO #OrderTransaction(OrderTransactionId, ProductId, Quantity) VALUES (1,1,1);
INSERT INTO #OrderTransaction(OrderTransactionId, ProductId, Quantity) VALUES (2,2,6);
DECLARE @OrderTransactionId int, @MediaTypeId int, @Quantity int;
DECLARE ordertran CURSOR FAST_FORWARD FOR
SELECT OT.OrderTransactionId, P.MediaTypeId, OT.Quantity
FROM #OrderTransaction OT WITH (NOLOCK)
INNER JOIN #Product P WITH (NOLOCK)
ON OT.ProductId = P.ProductId
OPEN ordertran;
FETCH NEXT FROM ordertran INTO @OrderTransactionId, @MediaTypeId, @Quantity;
WHILE @@FETCH_STATUS = 0
BEGIN
WHILE @Quantity > 0
BEGIN
INSERT INTO #OrderDelivery ([OrderTransactionId], [MediaTypeId])
VALUES (@OrderTransactionId, @MediaTypeId)
SELECT @Quantity = @Quantity - 1;
END
FETCH NEXT FROM ordertran INTO @OrderTransactionId, @MediaTypeId, @Quantity;
END
CLOSE ordertran;
DEALLOCATE ordertran;
SELECT * FROM #OrderTransaction
SELECT * FROM #Product
SELECT * FROM #OrderDelivery
DROP TABLE #OrderTransaction;
DROP TABLE #Product;
DROP TABLE #OrderDelivery;
Begin with a Numbers table that is large enough to handle the maximum order amount:
CREATE TABLE Numbers (
Num int NOT NULL PRIMARY KEY CLUSTERED
)
-- SQL 2000 version
SET NOCOUNT ON
INSERT Numbers VALUES (1)
GO
INSERT Numbers (Num) SELECT Num + (SELECT Max(Num) FROM Numbers) FROM Numbers
GO 15
-- SQL 2005 and up version
WITH
L0 AS (SELECT c = 1 UNION ALL SELECT 1),
L1 AS (SELECT c = 1 FROM L0 A, L0 B),
L2 AS (SELECT c = 1 FROM L1 A, L1 B),
L3 AS (SELECT c = 1 FROM L2 A, L2 B),
L4 AS (SELECT c = 1 FROM L3 A, L3 B),
L5 AS (SELECT c = 1 FROM L4 A, L4 B),
N AS (SELECT Num = ROW_NUMBER() OVER (ORDER BY c) FROM L5)
INSERT Numbers(Num)
SELECT Num FROM N
WHERE Num <= 32768;
Then, immediately after your INSERT statements:
INSERT #OrderDelivery (OrderTransactionId, MediaTypeId)
SELECT
OT.OrderTransactionId,
P.MediaTypeId
FROM
#OrderTransaction OT
INNER JOIN #Product P ON OT.ProductId = P.ProductId
INNER JOIN Numbers N ON N.Num BETWEEN 1 AND OT.Quantity
That should do it!
If for some reason you have qualms about putting a permanent Numbers table in your database (which I don't understand as it is a wonderful tool), then you can simply join to the CTE given instead of the table itself. In SQL 2000 you can create a temp table and use a loop, but I would advise against this strongly.
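For example, here is a minimal sketch of that variation, inlining the 2005-style stacked CTEs from above directly into the INSERT (trimmed to L4, which already yields 65,536 rows):
WITH
L0 AS (SELECT c = 1 UNION ALL SELECT 1),
L1 AS (SELECT c = 1 FROM L0 A, L0 B),
L2 AS (SELECT c = 1 FROM L1 A, L1 B),
L3 AS (SELECT c = 1 FROM L2 A, L2 B),
L4 AS (SELECT c = 1 FROM L3 A, L3 B),
N  AS (SELECT Num = ROW_NUMBER() OVER (ORDER BY c) FROM L4)
INSERT #OrderDelivery (OrderTransactionId, MediaTypeId)
SELECT OT.OrderTransactionId, P.MediaTypeId
FROM #OrderTransaction OT
INNER JOIN #Product P ON OT.ProductId = P.ProductId
INNER JOIN N ON N.Num BETWEEN 1 AND OT.Quantity;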
A Numbers table is highly recommended. There is no concern about some future change breaking it (the set of whole numbers won't change any time soon). Some people use a Numbers table with a million numbers in it, which is only around 4MB of storage.
To answer critics of the Numbers table: if the database design uses a numbers table, then that table won't need to change. It is like any other table in the database and can be relied on. You don't worry about queries against an Orders table failing because some day the table might not exist, so there is no reason for a similar concern about another table that is equally required and depended on.
UPDATE
In the time since writing this answer I have learned about the master.dbo.spt_values table, which has a number column. When queried with WHERE type = 'P' you get 0 - 255 in SQL 2000 and 0 - 8191 in SQL 2005 and up. (There are also potentially useful low and high columns.) You can cross join this table to itself a couple of times if necessary to get, even in SQL 2000, a bunch of rows very quickly.
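For instance, something along these lines (a sketch; it uses ROW_NUMBER, so SQL 2005+) turns spt_values into an ad hoc numbers source:
-- Sketch: derive a numbers list from master.dbo.spt_values.
-- One cross join of the type = 'P' rows with itself gives millions of rows;
-- exact counts vary by SQL Server version.
SELECT TOP (100000) Num = ROW_NUMBER() OVER (ORDER BY (SELECT NULL))
FROM master.dbo.spt_values a
CROSS JOIN master.dbo.spt_values b
WHERE a.type = 'P' AND b.type = 'P';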
The trick is to introduce a table of values (named, in the example below, MyTableOfIntegers) which contains all the integer values between 1 and (at least) some ceiling (in the case at hand, the biggest possible Quantity value from the OrderTransaction table).
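If you don't already have such a table, here is a sketch of one way to build it (the name MyTableOfIntegers and the 10,000 ceiling are just illustrative, and it assumes SQL 2005+ for sys.all_objects and ROW_NUMBER; any row source with enough rows works):
CREATE TABLE MyTableOfIntegers (Num int NOT NULL PRIMARY KEY);

INSERT INTO MyTableOfIntegers (Num)
SELECT TOP (10000) ROW_NUMBER() OVER (ORDER BY (SELECT NULL))
FROM sys.all_objects a
CROSS JOIN sys.all_objects b;
With that table in place, the cursor reduces to a single set-based INSERT: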
INSERT INTO #OrderDelivery ([OrderTransactionId], [MediaTypeId])
SELECT OT.OrderTransactionId, P.MediaTypeId
FROM #OrderTransaction OT WITH (NOLOCK)
INNER JOIN #Product P WITH (NOLOCK)
ON OT.ProductId = P.ProductId
JOIN MyTableOfIntegers I ON I.Num <= OT.Quantity
--WHERE some optional conditions
Essentially, the extra JOIN on MyTableOfIntegers produces as many duplicate rows as OT.Quantity, which seems to be what the cursor was for: inserting that many duplicated rows into the OrderDelivery table.
I didn't check the rest of the logic with the temporary tables and all (I'm assuming these are temp tables for the purpose of checking the logic rather than being part of the process proper), but the above is the type of construct needed to express the logic in declarative fashion only, without any cursor or even any loop.
Here is a slight variation on the previous answers, that avoids a permanent numbers table (though I am not sure why people are so afraid of this construct), and allows you to build a run-time CTE that contains exactly the set of numbers you'll need to perform the correct number of inserts (by checking for the highest quantity). I commented out the CROSS JOIN in the initial CTE, but you can use it if your quantity for any given order can exceed the number of rows in sys.columns. Hopefully that is an unlikely scenario. Note that this is for SQL Server 2005 and up ... it is always useful to let us know which specific version(s) you are targeting.
DECLARE @numsNeeded INT;
SELECT @numsNeeded = MAX(Quantity) FROM #OrderTransaction;
WITH n AS
(
SELECT TOP (@numsNeeded) i = ROW_NUMBER()
OVER (ORDER BY c.[object_id])
FROM sys.columns AS c --CROSS JOIN sys.columns AS c2
)
INSERT #OrderDelivery
(
OrderTransactionID,
MediaTypeID
)
SELECT t.OrderTransactionID, p.MediaTypeID
FROM #OrderTransaction AS t
INNER JOIN #Product AS p
ON t.ProductID = p.ProductID
INNER JOIN n
ON n.i <= t.Quantity;
INSERT INTO #OrderDelivery ([OrderTransactionId], [MediaTypeId])
SELECT OT.OrderTransactionId, P.MediaTypeId
FROM #OrderTransaction OT
INNER JOIN #Product P
ON OT.ProductId = P.ProductId
WHERE OT.Quantity > 0
I feel like I'm misreading the logic here, but isn't that the equivalent?
This still uses a loop but it has gotten rid of the cursor. Short of creating a table of numbers to join on, I think this is the best answer.
DECLARE @Count AS INTEGER
SET @Count = 1
WHILE (1 = 1)
BEGIN
INSERT INTO #OrderDelivery ([OrderTransactionId], [MediaTypeId])
SELECT OT.OrderTransactionId, P.MediaTypeId
FROM #OrderTransaction OT WITH (NOLOCK)
INNER JOIN #Product P WITH (NOLOCK)
ON OT.ProductId = P.ProductId
WHERE OT.Quantity >= @Count
IF @@ROWCOUNT = 0
BREAK
SET @Count = @Count + 1
END
Related
I have 2 tables, Table 1 (temp table in SP) has around 400 records. Table 2 has around 30,550,284 records.
I need to run a loop over table 1 for each record and get the top 1 row from table 2 based on a few conditions (where clause), ordered by modified date in descending order.
There is an index on the modified date.
declare @iPos int;
declare @iCount int;
select @iCount = count(*) from Table1;
set @iPos = 1;
declare @Table2 table(......)
declare @timestampLocal2 datetime
while (@iPos <= @iCount)
BEGIN
select @val1 = Col1, @timestampLocal = TimeStamp
from #Table1 where ID = @iPos
set @timestampLocal2 = DATEADD(HH,-96,@timestampLocal)
INSERT INTO #Temp3 ( .... )
select top 1 r.LastModified, r.[Col2], r.Col3, @iPos
from Table2 (NOLOCK) r
where Col1 = @val1 and
r.LastModified <= @timestampLocal
and r.LastModified >= @timestampLocal2
and (r.Col2 is not null and r.Col3 is not null)
order by LastModified desc
SELECT @iPos = @iPos + 1;
END
This query is very slow.
I have also thought about archiving table 2, but I want to keep that as the second option for now.
Do I really need to add an index on the columns which are involved in the where clause?
So my question is, in terms of performance is there a better way to do this?
I believe a CROSS APPLY or OUTER APPLY may do the trick. These can be thought of as being similar to INNER JOIN or LEFT JOIN, except that they allow you to reference a subquery having more complex conditions such as TOP 1 and ORDER BY. Ideal for cases like this.
-- INSERT INTO #Temp3 ( .... )
select r.LastModified, r.[Col2], r.Col3, t1.ID
from #Table1 t1
cross apply (
SELECT TOP 1 r.*
from Table2 r -- Don't use (NOLOCK)
where r.Col1 = t1.Col1
and r.LastModified <= t1.[TimeStamp]
and r.LastModified >= DATEADD(HH,-96,t1.[TimeStamp])
and (r.Col2 is not null and r.Col3 is not null)
order by r.LastModified desc
) r
For efficiency, I recommend an index on Table2(Col1,LastModified) or as an absolute minimum, an index on Table2(Col1).
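For instance (a sketch; the index name is assumed, and the INCLUDE list is optional but lets the TOP 1 be answered from the index alone):
CREATE NONCLUSTERED INDEX IX_Table2_Col1_LastModified
ON Table2 (Col1, LastModified DESC)
INCLUDE (Col2, Col3);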
I would strongly discourage the use of (NOLOCK) or READ UNCOMMITTED in queries that update the database (like the INSERT INTO #Temp3 above). While the query may appear to work most of the time, seemingly random occurrences of missing or duplicate rows may result.
Do you need to handle cases where no matching Table2 record is found? The above will quietly ignore such cases. Changing the CROSS APPLY to an OUTER APPLY together with logic to handle null r.xxx values could be what you need.
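A sketch of that variant (same query, but rows from #Table1 with no Table2 match now come back with NULL r.* columns instead of disappearing):
select t1.ID, r.LastModified, r.[Col2], r.Col3
from #Table1 t1
outer apply (
    SELECT TOP 1 r2.*
    from Table2 r2
    where r2.Col1 = t1.Col1
      and r2.LastModified <= t1.[TimeStamp]
      and r2.LastModified >= DATEADD(HH,-96,t1.[TimeStamp])
      and (r2.Col2 is not null and r2.Col3 is not null)
    order by r2.LastModified desc
) r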
Here is the sql:
-- Schema
DECLARE @ModelItem TABLE (
ModelItemId UNIQUEIDENTIFIER,
MetamodelItemId UNIQUEIDENTIFIER
)
DECLARE @MetamodelItemAncestor TABLE (
MetamodelItemId UNIQUEIDENTIFIER,
ParentMetamodelItemId UNIQUEIDENTIFIER,
AncestorLevel INT
)
DECLARE @SolutionMetamodelItem TABLE (
MetamodelItemId UNIQUEIDENTIFIER,
SolutionId UNIQUEIDENTIFIER
)
INSERT INTO @ModelItem VALUES ('EC6AC6A9-684E-E611-8117-00155D026308', '2AB1F075-684E-E611-8117-00155D026308')
INSERT INTO @MetamodelItemAncestor
VALUES ('2AB1F075-684E-E611-8117-00155D026308', '2AB1F075-684E-E611-8117-00155D026308', 0),
('2AB1F075-684E-E611-8117-00155D026308', 'AA12E380-CA4D-E611-8117-00155D026308', 1)
INSERT INTO @SolutionMetamodelItem
VALUES ('2AB1F075-684E-E611-8117-00155D026308', 'f612a333-ca4d-e611-8117-00155d026308'),
('AA12E380-CA4D-E611-8117-00155D026308', 'fc160f3e-ca4d-e611-8117-00155d026308')
-- query
DECLARE @ModelItemId TABLE (EntityId UNIQUEIDENTIFIER)
DECLARE @SolutionId TABLE (EntityId UNIQUEIDENTIFIER)
INSERT INTO @ModelItemId
VALUES ('EC6AC6A9-684E-E611-8117-00155D026308')
INSERT INTO @SolutionId
VALUES ('f612a333-ca4d-e611-8117-00155d026308'), ('fc160f3e-ca4d-e611-8117-00155d026308')
SELECT mia.*
FROM (
SELECT M.EntityId AS ModelItemId, S.EntityId AS SolutionId
FROM @ModelItemId AS M
CROSS JOIN @SolutionId AS S
) AS m
CROSS APPLY (
SELECT
MI.ModelItemId,
OTA.ParentMetamodelItemId AS [MetamodelItemId],
ROW_NUMBER() OVER (PARTITION BY [MI].[ModelItemId] ORDER BY [OTA].[AncestorLevel] ASC) AS [AspectRank]
FROM @ModelItem AS MI
INNER JOIN @MetamodelItemAncestor AS OTA
ON MI.MetamodelItemId = OTA.MetamodelItemId
WHERE
MI.ModelItemId = m.ModelItemId
AND EXISTS (
SELECT 1
FROM @SolutionMetamodelItem AS MSMI
WHERE MSMI.MetamodelItemId = OTA.ParentMetamodelItemId
AND MSMI.SolutionId = m.SolutionId
)
) mia
SELECT mia.*
FROM @ModelItemId AS m
CROSS APPLY (
SELECT
MI.ModelItemId,
OTA.ParentMetamodelItemId AS [MetamodelItemId],
ROW_NUMBER() OVER (PARTITION BY [MI].[ModelItemId] ORDER BY [OTA].[AncestorLevel] ASC) AS [AspectRank]
FROM @ModelItem AS MI
INNER JOIN @MetamodelItemAncestor AS OTA
ON MI.MetamodelItemId = OTA.MetamodelItemId
WHERE
MI.ModelItemId = m.EntityId
AND EXISTS (
SELECT 1
FROM @SolutionMetamodelItem MSMI
WHERE MSMI.MetamodelItemId = OTA.ParentMetamodelItemId
AND MSMI.SolutionId IN (SELECT s.EntityId FROM @SolutionId AS s)
)
) mia
Notice the AspectRank. In the second query it has correctly increased the value sequentially within the partition.
Looking at the execution plan for the first query, it seems like the ROW_NUMBER (sequence project) is running concurrently with the scan of the @SolutionId table, but I am still not fully sure why it has not increased the row number value, since there are duplicate items.
Could someone explain this? I need to use the first approach because the CROSS APPLY query is in fact a UDF with the ModelItemId and SolutionId as parameters.
I would assume the CROSS APPLY is executed separately for each of the rows in your outer query, so each of the rows returned is the 1st (and only) row of its partition.
Why do you need to have the row number inside the CROSS APPLY, instead of in the outer query, if that's actually where your data is?
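In other words, something like this (a sketch of the first query with the numbering hoisted out of the APPLY; the inner query now also returns AncestorLevel so the outer ROW_NUMBER can order by it):
SELECT
    mia.ModelItemId,
    mia.MetamodelItemId,
    ROW_NUMBER() OVER (PARTITION BY mia.ModelItemId
                       ORDER BY mia.AncestorLevel ASC) AS AspectRank
FROM (
    SELECT M.EntityId AS ModelItemId, S.EntityId AS SolutionId
    FROM @ModelItemId AS M
    CROSS JOIN @SolutionId AS S
) AS m
CROSS APPLY (
    SELECT MI.ModelItemId, OTA.ParentMetamodelItemId AS MetamodelItemId, OTA.AncestorLevel
    FROM @ModelItem AS MI
    INNER JOIN @MetamodelItemAncestor AS OTA
        ON MI.MetamodelItemId = OTA.MetamodelItemId
    WHERE MI.ModelItemId = m.ModelItemId
      AND EXISTS (
          SELECT 1
          FROM @SolutionMetamodelItem AS MSMI
          WHERE MSMI.MetamodelItemId = OTA.ParentMetamodelItemId
            AND MSMI.SolutionId = m.SolutionId
      )
) mia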
Looking at:
;WITH cte AS(
SELECT 1 AS x UNION
SELECT 2 AS x UNION
SELECT 3 AS x
)
I can create a permutation table for all 3 values:
SELECT T1.x , y=T2.x , z=t3.x
FROM cte T1
JOIN cte T2
ON T1.x != T2.x
JOIN cte T3
ON T2.x != T3.x AND T1.x != T3.x
This uses the power of SQL's Cartesian product, plus the elimination of equal values.
OK.
But is it possible to enhance this recursive pseudo-CTE:
;WITH cte AS(
SELECT 1 AS x , 2 AS y , 3 AS z
UNION ALL
...
)
SELECT * FROM cte
so that it will yield the same result as the permutation query above?
NB: there are other solutions on SO that use a recursive CTE, but the result is not spread across columns; it is a string representation of the permutations.
I tried to do the lot in a CTE.
However, trying to "redefine" a rowset dynamically is a little tricky. While the task is relatively easy using dynamic SQL, doing it without poses some issues.
While this answer may not be the most efficient or straightforward, or even correct in the sense that it's not all CTE, it may give others a basis to work from.
To best understand my approach, read the comments, but it might be worthwhile looking at each CTE expression in turn by altering the line below in the main block and commenting out the section beneath it.
SELECT * FROM <CTE NAME>
Good luck.
IF OBJECT_ID('tempdb..#cteSchema') IS NOT NULL
DROP Table #cteSchema
GO
-- BASE CTE
;WITH cte AS( SELECT 1 AS x, 2 AS y, 3 AS z),
-- So we know what columns we have from the CTE we extract it to XML
Xml_Schema AS ( SELECT CONVERT(XML,(SELECT * FROM cte FOR XML PATH(''))) AS MySchema ),
-- Next we need to get a list of the columns from the CTE, by querying the XML, getting the values and assigning a num to the column
MyColumns AS (SELECT D.ROWS.value('fn:local-name(.)','SYSNAME') AS ColumnName,
D.ROWS.value('.','SYSNAME') as Value,
ROW_NUMBER() OVER (ORDER BY D.ROWS.value('fn:local-name(.)','SYSNAME')) AS Num
FROM Xml_Schema
CROSS APPLY Xml_Schema.MySchema.nodes('/*') AS D(ROWS) ),
-- How many columns we have in the CTE, used a couple of times below
ColumnStats AS (SELECT MAX(NUM) AS ColumnCount FROM MyColumns),
-- create a Cartesian product of the column names and values, so now we get each column with its possible values,
-- so {x=1, x =2, x=3, y=1, y=2, y=3, z=1, z=2, z=3} -- you get the idea.
PossibleValues AS (SELECT MyC.ColumnName, MyC.Num AS ColumnNum, MyColumns.Value, MyColumns.Num,
ROW_NUMBER() OVER (ORDER BY MyC.ColumnName, MyColumns.Value, MyColumns.Num ) AS ID
FROM MyColumns
CROSS APPLY MyColumns MyC
),
-- Now that we have the possible values of each "column", we have to concat the values together using this recursive CTE.
AllRawXmlRows AS (SELECT CONVERT(VARCHAR(MAX),'<'+ISNULL((SELECT ColumnName FROM MyColumns WHERE MyColumns.Num = 1),'')+'>'+Value) as ConcatedValue, Value,ID, Counterer = 1 FROM PossibleValues
UNION ALL
SELECT CONVERT(VARCHAR(MAX),CONVERT(VARCHAR(MAX), AllRawXmlRows.ConcatedValue)+'</'+(SELECT ColumnName FROM MyColumns WHERE MyColumns.Num = Counterer)+'><'+(SELECT ColumnName FROM MyColumns WHERE MyColumns.Num = Counterer+1)+'>'+CONVERT(VARCHAR(MAX),PossibleValues.Value)) AS ConcatedValue, PossibleValues.Value, PossibleValues.ID,
Counterer = Counterer+1
FROM AllRawXmlRows
INNER JOIN PossibleValues ON AllRawXmlRows.ConcatedValue NOT LIKE '%'+PossibleValues.Value+'%' -- I hate this, there has to be a better way of making sure we don't duplicate values....
AND AllRawXmlRows.ID <> PossibleValues.ID
AND Counterer < (SELECT ColumnStats.ColumnCount FROM ColumnStats)
),
-- The above made a list but was missing the final closing XML element. so we add it.
-- we also restrict the list to the items that contain all columns; the section above builds it up over many columns
XmlRows AS (SELECT DISTINCT
ConcatedValue +'</'+(SELECT ColumnName FROM MyColumns WHERE MyColumns.Num = Counterer)+'>'
AS ConcatedValue
FROM AllRawXmlRows WHERE Counterer = (SELECT ColumnStats.ColumnCount FROM ColumnStats)
),
-- Wrap the output in row and table tags to create the final XML
FinalXML AS (SELECT (SELECT CONVERT(XML,(SELECT CONVERT(XML,ConcatedValue) FROM XmlRows FOR XML PATH('row'))) FOR XML PATH('table') )as XMLData),
-- Prepare a CTE that represents the structure of the original CTE, with the XML data alongside
DataTable AS (SELECT cte.*, XmlData
FROM FinalXML, cte)
--SELECT * FROM <CTE NAME>
-- GETS destination columns with XML data.
SELECT *
INTO #cteSchema
FROM DataTable
DECLARE @XML VARCHAR(MAX) ='';
SELECT @Xml = XMLData FROM #cteSchema -- Extract the XML data from the temp table
ALTER TABLE #cteSchema DROP Column XMLData -- Removes the superfluous column
DECLARE @h INT
EXECUTE sp_xml_preparedocument @h OUTPUT, @XML
SELECT *
FROM OPENXML(@h, '/table/row', 2)
WITH #cteSchema -- just use the #cteSchema to define the structure of the xml that has been constructed
EXECUTE sp_xml_removedocument @h
How about translating 1,2,3 into a column, which will look exactly like the example you started from, and using the same approach?
;WITH origin (x,y,z) AS (
SELECT 1,2,3
), translated (x) AS (
SELECT col
FROM origin
UNPIVOT ( col FOR cols IN (x,y,z)) AS up
)
SELECT T1.x , y=T2.x , z=t3.x
FROM translated T1
JOIN translated T2
ON T1.x != T2.x
JOIN translated T3
ON T2.x != T3.x AND T1.x != T3.x
ORDER BY 1,2,3
If I understood the request correctly, this might just do the trick.
And to run it on more columns, you just need to add them to the origin CTE definition and the UNPIVOT column list.
Now, I don't know how you pass your 1..n values for it to be dynamic, but if you tell me, I could try to edit the script to be dynamic too.
This is my first question on here, so I apologize if I break any rules.
Here's the situation. I have a table that lists all the employees and the building to which they are assigned, plus training hours, with ssn as the id column. I have another table that lists all the employees in the company, also keyed by ssn, but including name and other personal data. The second table contains multiple records for each employee, from different points in time. What I need to do is select all the records in the first table for a certain building, then get the most recent name from the second table, and allow the result set to be sorted by any of the columns returned.
I have this in place, and it works fine, it is just very slow.
A very simplified version of the tables are:
table1 (ssn CHAR(9), buildingNumber CHAR(7), trainingHours DEC(5,2)) (7200 rows)
table2 (ssn CHAR(9), fName VARCHAR(20), lName VARCHAR(20), sequence INT) (708,000 rows)
The sequence column in table 2 is a number that corresponds to a predetermined date on which these records were entered; the higher the number, the more recent the entry. It is common/expected that each employee has several records, but several may not have the most recent one (i.e. '8').
My SProc is:
@BuildingNumber CHAR(7), @SortField VARCHAR(25)
BEGIN
DECLARE @returnValue TABLE(ssn CHAR(9), buildingNumber CHAR(7), fname VARCHAR(20), lName VARCHAR(20), rowNumber INT)
INSERT INTO @returnValue(...)
SELECT ssn, buildingNum, fname, lname, rowNum
FROM (SELECT ..., CASE @SortField WHEN ... THEN ROW_NUMBER() OVER (PARTITION BY buildingNumber ORDER BY {sortField column}) END AS RowNumber
FROM table1 a
OUTER APPLY(SELECT TOP 1 fName, lName FROM table2 WHERE ssn = a.ssn ORDER BY sequence DESC) AS e
WHERE buildingNumber = @BuildingNumber
SELECT * FROM @returnValue ORDER BY RowNumber
END
END
I have indexes for the following:
table1: buildingNumber(non-unique,nonclustered)
table2: sequence_ssn(unique,nonclustered)
Like I said this gets me the correct result set, but it is rather slow. Is there a better way to go about doing this?
It's not possible to change the database structure or the way table 2 operates. Trust me, if it were, it would be done. Are there any indexes I could make that would help speed this up?
I've looked at the execution plans, and it has a clustered index scan on table 2(18%), then a compute scalar(0%), then an eager spool(59%), then a filter(0%), then top n sort(14%).
That's 78% of the execution, so I know it's in the section that gets the names; I'm just not sure of a better (faster) way to do it.
The reason I'm asking is that table 1 needs to be updated with current data. This is done through a webpage with a RadGrid control. It has a range, start index, all that, and it takes forever for the users to update their data.
I can change how the update process is done, but I thought I'd ask about the query first.
Thanks in advance.
I would approach this with window functions. The idea is to assign a sequence number to records in the table with duplicates (I think table2), such that the most recent record has a value of 1. Then just select this as the most recent record:
select t1.*, t2.*
from table1 t1 join
(select t2.*,
row_number() over (partition by ssn order by sequence desc) as seqnum
from table2 t2
) t2
on t1.ssn = t2.ssn and t2.seqnum = 1
where t1.buildingNumber = @BuildingNumber;
My second suggestion is to use a user-defined function rather than a stored procedure:
create function XXX (
@BuildingNumber char(7)
)
returns table as
return (
select t1.ssn, t1.buildingNum, t2.fname, t2.lname, t2.seqnum as rowNum
from table1 t1 join
(select t2.*,
row_number() over (partition by ssn order by sequence desc) as seqnum
from table2 t2
) t2
on t1.ssn = t2.ssn and t2.seqnum = 1
where t1.buildingNumber = @BuildingNumber
);
(This doesn't have the logic for the ordering because that doesn't seem to be the central focus of the question.)
You can then call it as:
select *
from dbo.XXX(<building number>);
EDIT:
The following may speed it up further, because you are only selecting a small(ish) subset of the employees:
select *
from (select t1.*, t2.fName, t2.lName, row_number() over (partition by t2.ssn order by t2.sequence desc) as seqnum
from table1 t1 join
table2 t2
on t1.ssn = t2.ssn
where t1.buildingNumber = @BuildingNumber
) t
where seqnum = 1;
And, finally, I suspect that the following might be the fastest:
select t1.*, t2.*, row_number() over (partition by t2.ssn order by t2.sequence desc) as seqnum
from table1 t1 join
table2 t2
on t1.ssn = t2.ssn
where t1.buildingNumber = @BuildingNumber and
t2.sequence = (select max(sequence) from table2 t2a where t2a.ssn = t1.ssn)
In all these cases, an index on table2(ssn, sequence) should help performance.
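For example (a sketch; the INCLUDE of the name columns is optional but makes the index covering for these queries):
create nonclustered index ix_table2_ssn_sequence
    on table2 (ssn, sequence desc)
    include (fName, lName);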
Try using some temp tables instead of the table variables. Not sure what kind of system you are working on, but I have had pretty good luck with them. Temp tables actually write to the drive, so you won't be holding and processing so much in memory. Depending on other system usage, this might do the trick.
Simply define the temp table using #Tablename instead of @Tablename. Put the name-sorting subquery in a temp table before everything else fires off and make a join to it.
Just make sure to drop the table at the end. It will be dropped at the end of the SP when it disconnects, but it is a good idea to tell it to drop explicitly, to be on the safe side.
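A sketch of what that might look like (#LatestNames is an assumed name; adjust the columns to your real schema):
-- Materialize the most recent name per ssn once, up front
SELECT ssn, fName, lName
INTO #LatestNames
FROM (SELECT ssn, fName, lName,
             ROW_NUMBER() OVER (PARTITION BY ssn ORDER BY sequence DESC) AS rn
      FROM table2) x
WHERE rn = 1;

-- ...the main query then joins table1 to #LatestNames on ssn...

DROP TABLE #LatestNames;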
I have this code
ALTER PROCEDURE [dbo].[Model_Core_BlogPost_GetLatestPaging]
@PageSize INT,
@CurrentPage INT
AS
BEGIN
DECLARE @PageStart int, @PageEnd int
SET @PageStart = @CurrentPage * @PageSize
SET @PageEnd = @PageStart + @PageSize
;with C as (
SELECT
e.blogpostid,
e.PreviewText,
e.Headline,
e.URLHeadline,
u.Blogname,
u.imageurl AS ImageURL,
e.CommentsCount,
e.HitsCount,
e.Created,
ROW_NUMBER() over (order by e.created desc) as rownum
FROM BlogPosts e
INNER JOIN Users u ON e.BlogUserID = u.UserID
WHERE e.[Status] = 1 and e.Deleteddate is null
)
SELECT *
FROM C
WHERE rownum > @PageStart
AND rownum <= @PageEnd
END
I have issues when @CurrentPage is a large number, and I often get SQL timeouts in my application.
Any ideas for a solution?
As long as indexes are in place, I would suggest splitting this query into 2 separate ones.
First run the ranking function and filter on BlogPosts, insert the result into a temporary table, and afterwards join the temporary table with Users, possibly using an OPTION (LOOP JOIN) hint (the temp table will have very few rows compared to Users, and a loop join is perfect for this situation).
This way your join will have far fewer rows to process. Also, are you sure the BlogPosts.created column has an index? ROW_NUMBER will perform ordering on this field.
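A sketch of that split (only a few columns are carried through for brevity, and #Page is an illustrative name; the remaining BlogPosts columns would be pulled the same way):
;WITH C AS (
    SELECT e.blogpostid, e.BlogUserID,
           ROW_NUMBER() OVER (ORDER BY e.created DESC) AS rownum
    FROM BlogPosts e
    WHERE e.[Status] = 1 AND e.Deleteddate IS NULL
)
SELECT blogpostid, BlogUserID, rownum
INTO #Page
FROM C
WHERE rownum > @PageStart AND rownum <= @PageEnd;

-- The page is tiny, so force a nested loops join against Users
SELECT p.rownum, p.blogpostid, u.Blogname, u.imageurl AS ImageURL
FROM #Page p
INNER JOIN Users u ON p.BlogUserID = u.UserID
ORDER BY p.rownum
OPTION (LOOP JOIN);

DROP TABLE #Page;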