I'm working on a huge SQL codebase, and unfortunately it has a cursor that contains two more nested cursors (three cursors in total inside one stored procedure), which handle millions of rows to be deleted, updated, and inserted. This takes a very long time because of the row-by-row execution, and I want to change it to a set-based approach.
Many articles say that cursors are not recommended and that the alternative is to use WHILE loops instead, so I tried replacing the three cursors with three WHILE loops and nothing more. I get the same result, but there is no improvement in performance; it took the same time as the cursors did.
Below is the basic structure of the code I'm working on (I will try to keep it as simple as possible), with comments explaining what each part is supposed to do.
DECLARE @projects TABLE (
    ProjectID INT,
    fieldA INT,
    fieldB INT,
    fieldC INT,
    fieldD INT)

INSERT INTO @projects
SELECT ProjectID, fieldA, fieldB, fieldC, fieldD
FROM ProjectTable
DECLARE projects1 CURSOR LOCAL FOR /*First cursor - fetch each project from ProjectTable*/
    SELECT ProjectID FROM @projects
OPEN projects1
FETCH NEXT FROM projects1 INTO @ProjectID
WHILE @@FETCH_STATUS = 0
BEGIN
    BEGIN TRY
        BEGIN TRAN

        DELETE td FROM T_PROJECTGROUPSDATA td
        WHERE td.ID = @ProjectID

        DECLARE datasets CURSOR FOR /*Second cursor - get the 'CollectionDate' field from datasetsTable for every project fetched by the cursor above*/
            SELECT DataID, GroupID, CollectionDate
            FROM datasetsTable
            WHERE datasetsTable.projectID = @ProjectID /*let's say this fetches ten records for a single ProjectID*/
        OPEN datasets
        FETCH NEXT FROM datasets INTO @DataID, @GroupID, @CollectionDate
        WHILE @@FETCH_STATUS = 0
        BEGIN
            DECLARE period CURSOR FOR /*Third cursor - process the records from another table called T_PERIODS using the @CollectionDate fetched above*/
                SELECT ID, dbo.fn_GetEndOfPeriod(ID)
                FROM T_PERIODS
                WHERE DATEDIFF(dd, @CollectionDate, dbo.fn_GetEndOfPeriod(ID)) >= 0 /*let's say this fetches 20 records for a single @CollectionDate*/
                ORDER BY [YEAR], [Quarter]
            OPEN period
            FETCH NEXT FROM period INTO @PeriodID, @EndDate
            WHILE @@FETCH_STATUS = 0
            BEGIN
                IF EXISTS (some conditions No - 1)
                BEGIN
                    BREAK
                END
                IF EXISTS (some conditions No - 2)
                BEGIN
                    FETCH NEXT FROM period INTO @PeriodID, @EndDate
                    CONTINUE
                END

                /*get the appropriate ID from the T_UPLOADS table for the current ProjectID and PeriodID*/
                SET @UploadID = (SELECT ID FROM T_UPLOADS u WHERE u.project_savix_ID = @ProjectID AND u.PERIOD_ID = @PeriodID AND u.STATUS = 3)

                /*update some fields in the T_UPLOADS table for the current ProjectID and PeriodID*/
                UPDATE T_UPLOADS
                SET fieldA = mp.fieldA, fieldB = mp.fieldB
                FROM @projects mp
                WHERE T_UPLOADS.ID = @UploadID AND mp.ProjectID = @ProjectID

                /*insert some records into the T_PROJECTGROUPSDATA table for the current ProjectID and PeriodID*/
                INSERT INTO T_PROJECTGROUPSDATA tpd (fieldA, fieldB, fieldC, fieldD, uploadID)
                SELECT fieldA, fieldB, fieldC, fieldD, @UploadID
                FROM @projects
                WHERE tpd.DataID = @DataID

                FETCH NEXT FROM period INTO @PeriodID, @EndDate
            END
            CLOSE period
            DEALLOCATE period

            FETCH NEXT FROM datasets INTO @DataID, @GroupID, @CollectionDate, @Status, @Createdate
        END
        CLOSE datasets
        DEALLOCATE datasets
        COMMIT
    END TRY
    BEGIN CATCH
        /* Error handling */
        IF @@TRANCOUNT > 0
            ROLLBACK
    END CATCH

    FETCH NEXT FROM projects1 INTO @ProjectID, @FAID
END
CLOSE projects1
DEALLOCATE projects1
SELECT 1 AS success
Can you suggest any methods to rewrite this code so that it follows a set-based approach?
Until the table structures and sample data with the expected results are provided, here are a few quick things I see that can be improved (some of them have already been mentioned by others above):
A WHILE loop is effectively still a cursor, so changing to a WHILE loop is not going to make things any faster.
Use a LOCAL FAST_FORWARD cursor unless you need to scroll back to a previous record. This alone makes execution much faster.
Yes, I agree that a set-based approach would be the fastest in most cases; however, if you must store an intermediate result set somewhere, I would suggest using a temp table instead of a table variable. The temp table is the 'lesser evil' of these two options. Here are a few reasons why you should try to avoid using a table variable:
SQL Server has no statistics on a table variable when it builds the execution plan, so it always assumes the table variable will return only one row, and the storage engine grants only that much memory for the query. In reality, the table variable might hold millions of rows during execution. When that happens, SQL Server is forced to spill the data to disk (and you will see lots of PAGEIOLATCH waits in sys.dm_os_wait_stats), making the query far slower.
One way around this is to add the statement-level hint OPTION (RECOMPILE) at the end of each query that uses the table variable (see the sketch after this list). This forces SQL Server to build the execution plan of those queries at runtime, avoiding the too-small memory grant. The downside is that SQL Server can no longer reuse a cached execution plan for that stored procedure and must recompile it on every execution, which hurts performance to some extent. So unless you know the data in the underlying tables changes frequently, or the stored procedure itself is not executed frequently, this approach is not recommended by Microsoft MVPs.
Blindly replacing a cursor with a WHILE loop is not a recommended option; it will not improve your performance and might even make it worse.
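For illustration, here is roughly how the OPTION (RECOMPILE) hint would look on the UPDATE from the question (a sketch; it only appends the hint to the original statement):

UPDATE T_UPLOADS
SET fieldA = mp.fieldA, fieldB = mp.fieldB
FROM @projects mp
WHERE T_UPLOADS.ID = @UploadID AND mp.ProjectID = @ProjectID
OPTION (RECOMPILE)   -- plan is rebuilt at runtime using the actual row count of @projects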
When you define a cursor with a plain DECLARE C CURSOR, you are in fact creating a SCROLL cursor, which makes all fetch options (FIRST, LAST, PRIOR, NEXT, RELATIVE, ABSOLUTE) available.
When FETCH NEXT is the only scroll option you need, you can declare the cursor as FAST_FORWARD.
Here is the quote about the FAST_FORWARD cursor from the Microsoft docs:
Specifies that the cursor can only move forward and be scrolled from
the first to the last row. FETCH NEXT is the only supported fetch
option. All insert, update, and delete statements made by the current
user (or committed by other users) that affect rows in the result set
are visible as the rows are fetched. Because the cursor cannot be
scrolled backward, however, changes made to rows in the database after
the row was fetched are not visible through the cursor. Forward-only
cursors are dynamic by default, meaning that all changes are detected
as the current row is processed. This provides faster cursor opening
and enables the result set to display updates made to the underlying
tables. While forward-only cursors do not support backward scrolling,
applications can return to the beginning of the result set by closing
and reopening the cursor.
So you can declare your cursors using DECLARE <cursor name> CURSOR FAST_FORWARD FOR ... and you will get a noticeable improvement.
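For example, the outer cursor from the question could be declared like this (a sketch reusing the names from the original code):

DECLARE projects1 CURSOR LOCAL FAST_FORWARD FOR   -- forward-only, read-only, FETCH NEXT only
    SELECT ProjectID FROM @projects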
I think all the cursor code above can be simplified to something like this:
DROP TABLE IF EXISTS #Source;
SELECT DISTINCT p.ProjectID,p.fieldA,p.fieldB,p.fieldC,p.fieldD,u.ID AS [UploadID]
INTO #Source
FROM ProjectTable p
INNER JOIN DatasetsTable d ON d.ProjectID = p.ProjectID
INNER JOIN T_PERIODS s ON DATEDIFF(DAY,d.CollectionDate,dbo.fn_GetEndOfPeriod(s.ID)) >= 0
INNER JOIN T_UPLOADS u ON u.project_savix_ID = p.ProjectID AND u.PERIOD_ID = s.ID AND u.STATUS = 3
WHERE NOT EXISTS (some conditions No - 1)
AND NOT EXISTS (some conditions No - 2)
;
UPDATE u SET u.fieldA = s.fieldA, u.fieldB = s.fieldB
FROM T_UPLOADS u
INNER JOIN #Source s ON s.UploadID = u.ID
;
INSERT INTO T_PROJECTGROUPSDATA (fieldA,fieldB,fieldC,fieldD,uploadID)
SELECT DISTINCT s.fieldA,s.fieldB,s.fieldC,s.fieldD,s.UploadID
FROM #Source s
;
DROP TABLE IF EXISTS #Source;
Also, it would be nice to know the details of "some conditions No - 1/2", as the query may differ depending on them.
First, I know that using a cursor is not the best way of doing this, but the IT manager is an old-school SQL person and that is the way he wants it done; otherwise I wouldn't be doing it this way.
I am using a stored procedure that, among other things, creates a temp table and tries to fill it with data from another table, concatenated inside the stored procedure. I cannot get the problem description, the field that needs to be concatenated, to update correctly. Actually, it doesn't update at all.
Here is the part of the stored procedure that builds the temp table and tries to update it.
--Build problem entry
Create Table ##tmp_problem
(
prbqarnum varchar(7),
prbdesc varchar(max)
)
--Dump problem(s) into tbl based on qar#
Insert Into ##tmp_problem(prbqarnum) Select qarnum From tbl_qarbase Where currstatus = @qarstatus
--Declare tbl cursor
Declare tbl_Cursor CURSOR For
Select tbl_problems.qarnum, tbl_problems.problemdesc
From tbl_problems
Join tbl_qarbase On tbl_problems.qarnum = tbl_qarbase.qarnum
Where tbl_qarbase.currstatus = @qarstatus
--Open the tbl cursor
Open tbl_Cursor
--Fetch first row of data
Fetch next From tbl_Cursor Into @qarparm, @desc
--Declare temp problem desc variable
Declare @tmpproblem varchar(max)
--Loop to get problem data
While @@FETCH_STATUS = 0
Begin
Set @tmpproblem = (Select prbdesc From ##tmp_problem Where prbqarnum = @qarparm) + ' ' + @desc
Update ##tmp_problem Set prbdesc = @tmpproblem
--Get Next Row of Data
Fetch next From tbl_Cursor Into @qarparm, @desc
End
--Close tbl cursor
Close tbl_Cursor
--Deallocate tbl cursor
Deallocate tbl_Cursor
I know that the temp table is working because after the procedure runs I can query it and see that the qarnums are being inserted.
What isn't happening: there is a description field that may have anywhere from one to N rows per qar #, and based on the qar # I need to concatenate the descriptions into one string and then insert it into the temp table, and that part is not working.
Here is a picture of the query outputs. The top is the temp table and the bottom is the table that the cursor is built on.
So the question is: besides using a cursor, what am I doing wrong? I have been Googling for several hours but nothing seems to work.
One last note: I am not seeing any errors anywhere.
You could GREATLY simplify this. There is no need for an update at all; just populate both columns of your temp table. Your entire code could be reduced to this:
Create Table #tmp_problem
(
prbqarnum varchar(7),
prbdesc varchar(max)
)
--Dump problem(s) into tbl based on qar#
Insert Into #tmp_problem(prbqarnum, prbdesc)
Select p.qarnum, p.problemdesc
From tbl_problems p
Join tbl_qarbase q On p.qarnum = q.qarnum
Where q.currstatus = @qarstatus
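Note that this inserts one row per problem description. If the descriptions really do need to be combined into a single string per qar # (as the question describes), one possible sketch, assuming SQL Server 2016 or earlier, concatenates them with FOR XML PATH at insert time:

Insert Into #tmp_problem (prbqarnum, prbdesc)
Select q.qarnum,
       STUFF((Select ' ' + p.problemdesc
              From tbl_problems p
              Where p.qarnum = q.qarnum
              For XML PATH(''), TYPE).value('.', 'varchar(max)'), 1, 1, '')  -- drop the leading space
From tbl_qarbase q
Where q.currstatus = @qarstatus

On SQL Server 2017 and later, STRING_AGG(p.problemdesc, ' ') in a grouped query would achieve the same thing more directly.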
You should handle NULLs:
Set @tmpproblem = ISNULL((Select prbdesc From ##tmp_problem
    Where prbqarnum = @qarparm), '') + ' '
    + ISNULL(@desc, '');
Plus, you need to add some condition to the UPDATE:
Update ##tmp_problem Set prbdesc = @tmpproblem -- updates the entire table
--WHERE ...
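Based on the cursor variables in the question, the missing condition would most likely correlate on the qar number (a sketch; adjust if a different key applies):

Update ##tmp_problem
Set prbdesc = @tmpproblem
Where prbqarnum = @qarparm  -- only update the row for the qar # currently being processed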
I have a MERGE statement that I expected to fire a trigger multiple times.
At first I thought my trigger wasn't executing, but after some research I found that triggers fire only once per statement (a MERGE being one statement).
But all the posts out there are old, and I thought there might now be a simple way to make my trigger execute multiple times.
So is there anything I can add to my trigger or my merge statement to make my trigger do so?
Thanks
TRIGGER
TRIGGER [dbo].[Sofi_TERA_Trigger]
ON [dbo].[ZZ]
AFTER INSERT,UPDATE
AS
BEGIN
-- SET NOCOUNT ON added to prevent extra result sets from
-- interfering with SELECT statements.
SET NOCOUNT ON;
IF EXISTS(SELECT 1 FROM inserted WHERE inserted.Statut LIKE '%CLOT%' OR inserted.Statut LIKE '%CLTT%' OR inserted.Statut LIKE '%CONF%')
BEGIN
DECLARE @Id int;
DECLARE @Matricule varchar(10);
DECLARE @IdAction int;
DECLARE @NumeroOF int;
SELECT @NumeroOF = inserted.Ordre from inserted;
DECLARE OF_CURSOR CURSOR
LOCAL STATIC READ_ONLY FORWARD_ONLY
FOR
SELECT Id, Log.Matricule, IdAction from Log inner join (select max(Id) as maxID, Matricule from LOG where Log.NumeroOF = @NumeroOF group by Matricule) maxID
on maxID.maxID = Log.Id where Log.NumeroOF = @NumeroOF;
OPEN OF_CURSOR
FETCH NEXT FROM OF_CURSOR INTO @Id, @Matricule, @IdAction
WHILE @@FETCH_STATUS = 0
BEGIN
IF @IdAction != 13
BEGIN
IF @IdAction <= 2
BEGIN
insert into Log(NumeroOF,Matricule,IdAction,Date,EstAdmin) values (@NumeroOF,@Matricule,13,GETDATE(),1);
END
ELSE
BEGIN
insert into Log(NumeroOF,Matricule,IdAction,Date,EstAdmin) values (@NumeroOF,@Matricule,2,GETDATE(),1);
insert into Log(NumeroOF,Matricule,IdAction,Date,EstAdmin) values (@NumeroOF,@Matricule,13,GETDATE(),1);
END
END
FETCH NEXT FROM OF_CURSOR INTO @Id, @Matricule, @IdAction
END
CLOSE OF_CURSOR;
DEALLOCATE OF_CURSOR;
END
END
MERGE STATEMENT
Merge ZZ AS TARGET USING ZZTemp AS SOURCE
ON (Target.Operation=Source.Operation AND Target.Ordre=Source.Ordre)
WHEN MATCHED THEN
UPDATE SET TARGET.DateTERA=SOURCE.DateTERA, TARGET.MatTERA=SOURCE.MatTERA, TARGET.MatTERC=SOURCE.MatTERC
WHEN NOT MATCHED THEN
INSERT(Operation,Ordre,ElementOTP,Article,DesignationOF,PosteTravail,ValeurTemps,DHT,Statut,StatutOF,TexteActivite,DateTERA,MatTERA,MatTERC,StatutMat)
VALUES(SOURCE.Operation,SOURCE.Ordre,SOURCE.ElementOTP,SOURCE.Article,SOURCE.DesignationOF,SOURCE.PosteTravail,SOURCE.ValeurTemps,SOURCE.DHT,
SOURCE.Statut,SOURCE.StatutOF,SOURCE.TexteActivite,SOURCE.DateTERA,SOURCE.MatTERA,SOURCE.MatTERC,SOURCE.StatutMat);
Your problem is that the cursor is incorrectly written to handle sets of data. Any trigger that assigns a value from inserted or deleted to a scalar variable is incorrect and, for reasons of data integrity, MUST be rewritten. This trigger is buggy. Period. There is no getting around the fact that it must be rewritten (along with any others that use the same technique).
The code inside your trigger should be something like:
INSERT INTO Log (NumeroOF, Matricule, IdAction, Date, EstAdmin)
SELECT i.Ordre, l.Matricule, 13, GETDATE(), 1
FROM Log l
JOIN Inserted i ON l.NumeroOF = i.Ordre
WHERE i.Statut LIKE '%CLOT%' OR i.Statut LIKE '%CLTT%' OR i.Statut LIKE '%CONF%'
GROUP BY i.Ordre, l.Matricule

INSERT INTO Log (NumeroOF, Matricule, IdAction, Date, EstAdmin)
SELECT i.Ordre, l.Matricule, 2, GETDATE(), 1
FROM Log l
JOIN Inserted i ON l.NumeroOF = i.Ordre
WHERE l.IdAction <= 2
  AND (i.Statut LIKE '%CLOT%' OR i.Statut LIKE '%CLTT%' OR i.Statut LIKE '%CONF%')
GROUP BY i.Ordre, l.Matricule
Make sure to test with both single-record and multi-record inserts, as all triggers should be tested. Then try your MERGE once you are confident the trigger is correct.
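For instance, a minimal test could look like this (a sketch; Operation, Ordre and Statut are taken from the MERGE above, and the remaining ZZ columns are assumed to be nullable or to have defaults):

-- single-row insert
INSERT INTO ZZ (Operation, Ordre, Statut) VALUES ('OP-1', 1001, 'CLOT');
-- multi-row insert: the trigger still fires only once and must handle both rows
INSERT INTO ZZ (Operation, Ordre, Statut)
VALUES ('OP-2', 1002, 'CONF'),
       ('OP-3', 1003, 'CLTT');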
For a homework assignment, I'm trying to build a trigger that allows for multiple inserts/updates/deletes by utilizing a cursor. We have to use a cursor in order to practice the syntax. We know that there are very few practical scenarios for cursors in a production environment.
Here's what I'm trying to accomplish:
For each row inserted into the TAL_ORDER_LINE table, update the ON_HAND value in the TAL_ITEM table by subtracting the NUM_ORDERED value from the stored ON_HAND value.
Table Structure:
Current Query:
ALTER TRIGGER update_on_hand
ON TAL_ORDER_LINE
AFTER INSERT AS
DECLARE #vItemNum as char
DECLARE #vNumOrdered as int
DECLARE new_order CURSOR FOR
SELECT ITEM_NUM, NUM_ORDERED
FROM inserted
OPEN new_order;
FETCH NEXT FROM new_order INTO #vItemNum, #vNumOrdered;
WHILE ##FETCH_STATUS=0
BEGIN
UPDATE TAL_ITEM
SET ON_HAND = ON_HAND - #vNumOrdered
WHERE ITEM_NUM = #vItemNum
FETCH NEXT FROM new_order INTO #vItemNum, #vNumOrdered;
END
CLOSE new_order
DEALLOCATE new_order
My Insert Query:
INSERT INTO TAL_ORDER_LINE (ORDER_NUM, ITEM_NUM, NUM_ORDERED, QUOTED_PRICE)
VALUES (51626, 'KL78', 10, 10.95), (51626, 'DR67', 10, 29.95)
It runs successfully, but does not affect the ON_HAND value. I think the biggest problem is that I'm struggling to understand cursor syntax, especially the INTO clause in the FETCH statement and how data from the 'inserted' table is passed into the cursor. What do I need to know to get this to work? Thanks in advance!
Your problem is likely due to this:
DECLARE @vItemNum as char
It is HIGHLY unlikely that the ITEM_NUM column is a single character. For future reference, you should always verify that your variable definitions are consistent with the values you expect to store in them. And as has been hinted, you will get better answers by posting a complete script rather than a picture.
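To illustrate why this matters (a sketch; char(4) is an assumption, since the actual ITEM_NUM definition was only posted as a picture): char with no length defaults to char(1) in a DECLARE, so the fetched value is silently truncated and the UPDATE matches nothing.

DECLARE @vItemNum as char          -- defaults to char(1): 'KL78' becomes 'K'
DECLARE @vItemNumFixed as char(4)  -- hypothetical length; size it to match ITEM_NUM
SET @vItemNum = 'KL78'
SET @vItemNumFixed = 'KL78'
PRINT @vItemNum        -- K
PRINT @vItemNumFixed   -- KL78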
The big question is: how are you going to debug this?
If the ON_HAND column can be NULL, then wrap it as ISNULL(ON_HAND, 0).
DECLARE @vItemNum as char
DECLARE @vNumOrdered as int
DECLARE new_order CURSOR FOR
SELECT ITEM_NUM, NUM_ORDERED
FROM TAL_ORDER_LINE
OPEN new_order;
FETCH NEXT FROM new_order INTO @vItemNum, @vNumOrdered;
WHILE @@FETCH_STATUS = 0
BEGIN
--UPDATE TAL_ITEM
--SET ON_HAND = ON_HAND - @vNumOrdered
--WHERE ITEM_NUM = @vItemNum
print @vItemNum
print @vNumOrdered
FETCH NEXT FROM new_order INTO @vItemNum, @vNumOrdered;
END
CLOSE new_order
DEALLOCATE new_order
Try this :
ALTER TRIGGER update_on_hand ON TAL_ORDER_LINE
FOR INSERT AS
BEGIN
UPDATE TI
SET TI.ON_HAND = TI.ON_HAND - I.NUM_ORDERED
FROM TAL_ITEM TI INNER JOIN
INSERTED I ON I.ITEM_NUM = TI.ITEM_NUM
END
Changed Trigger to FOR INSERT Trigger
Removed Cursor
Note: NOT Tested. ( If you post the sql scripts for create table + sample inserts I can give it a try )
On our SQL Server (Version 10.0.1600), I have a stored procedure that I wrote.
It is not throwing any errors, and it is returning the correct values after making the insert in the database.
However, the last command spSendEventNotificationEmail (which sends out email notifications) is not being run.
I can run the spSendEventNotificationEmail script manually using the same data, and the notifications show up, so I know it works.
Is there something wrong with how I call it in my stored procedure?
[dbo].[spUpdateRequest](@packetID int, @statusID int output, @empID int, @mtf nVarChar(50)) AS
BEGIN
-- SET NOCOUNT ON added to prevent extra result sets from
-- interfering with SELECT statements.
SET NOCOUNT ON;
DECLARE @id int
SET @id=-1
-- Insert statements for procedure here
SELECT A.ID, PacketID, StatusID
INTO #act FROM Action A JOIN Request R ON (R.ID=A.RequestID)
WHERE (PacketID=@packetID) AND (StatusID=@statusID)
IF ((SELECT COUNT(ID) FROM #act)=0) BEGIN -- this statusID has not been entered. Continue
SELECT ID, MTF
INTO #req FROM Request
WHERE PacketID=@packetID
WHILE (0 < (SELECT COUNT(ID) FROM #req)) BEGIN
SELECT TOP 1 @id=ID FROM #req
INSERT INTO Action (RequestID, StatusID, EmpID, DateStamp)
VALUES (@id, @statusID, @empID, GETDATE())
IF ((@mtf IS NOT NULL) AND (0 < LEN(RTRIM(@mtf)))) BEGIN
UPDATE Request SET MTF=@mtf WHERE ID=@id
END
DELETE #req WHERE ID=@id
END
DROP TABLE #req
SELECT @id=@@IDENTITY, @statusID=StatusID FROM Action
SELECT TOP 1 @statusID=ID FROM Status
WHERE (@statusID<ID) AND (-1 < Sequence)
EXEC spSendEventNotificationEmail @packetID, @statusID, 'http:\\cpweb:8100\NextStep.aspx'
END ELSE BEGIN
SET @statusID = -1
END
DROP TABLE #act
END
Idea of how the data tables are connected:
From your comments I gather that you mainly do C# development. A basic test is to make sure the sproc is called with the exact same arguments you expect:
PRINT '@packetID: ' + CAST(@packetID AS varchar(20))
PRINT '@statusID: ' + CAST(@statusID AS varchar(20))
EXEC spSendEventNotificationEmail @packetID, @statusID, 'http:\\cpweb:8100\NextStep.aspx'
This way you 1. know that the EXEC statement is reached and 2. see the exact values.
If this all works, then a very good candidate is that you have permission to run the sproc and your (C#?) code that calls it doesn't. I would expect an error to be thrown in that case, though.
A quick test to see if the EXEC is executed fine is to do an insert into a dummy table right after it.
Update 1
I suggested adding PRINT statements, but indeed, as you say, you cannot (easily) capture them from C#. What you could do instead is insert the two variables into a log table that you create for this purpose. That way you know the exact values that flow in from the C# execution.
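A minimal sketch of that idea (the table name SprocDebugLog is made up for illustration):

CREATE TABLE dbo.SprocDebugLog (
    LoggedAt datetime NOT NULL DEFAULT GETDATE(),
    packetID int NULL,
    statusID int NULL
);

-- inside spUpdateRequest, right before the EXEC:
INSERT INTO dbo.SprocDebugLog (packetID, statusID)
VALUES (@packetID, @statusID);
EXEC spSendEventNotificationEmail @packetID, @statusID, 'http:\\cpweb:8100\NextStep.aspx'

After running the C# code, query SprocDebugLog to confirm the values and that the line before the EXEC was actually reached.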
As to why it now works after you added the permissions, I can't give you a ready answer; SQL Server security is not transparent to me either. But it is worth researching a bit further yourself. Do you really have to add both guest and public?
It would also help to see what's going on inside spSendEventNotificationEmail. Chances are good that that sproc is using a resource it didn't have permission on before; this could be an object like a table, or maybe another sproc. Security is heavily dependent on context/settings and not an easy problem to tackle on a Q&A site like SO.