Delete row from cursor source SQL Server - sql-server

I have a SQL Server 2005 cursor operating over a table variable called @workingSet.
Sometimes rows can be related, and in that case I process the row I have fetched and the related rows at the same time. I then remove the related records from @workingSet as I don't need to process them again in the loop.
In a @workingSet with 7 rows, the first two are related, so when I process row 1 I also process row 2. I remove row 2 from the cursor source (@workingSet) and then fetch next. The problem is that the fetch returns the second row in @workingSet (the one I deleted on the previous iteration).
I was under the impression that this could be done, i.e. deleting an item from the source a cursor operates on, and that the cursor would honour the delete in subsequent fetches.

The answer appears to be that the table variable being used as the source of the cursor needs to have a primary key. I've added one and everything works correctly.
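For illustration, a minimal sketch of that fix; the column names here are hypothetical, not taken from the original table:
DECLARE @workingSet TABLE
(
    Id INT NOT NULL PRIMARY KEY, -- the primary key lets the cursor remain dynamic over the table variable
    Payload VARCHAR(50) NULL
);
DECLARE work_cursor CURSOR LOCAL FOR
    SELECT Id FROM @workingSet;
-- Rows deleted from @workingSet between fetches are then no longer
-- returned by subsequent FETCH NEXT calls against work_cursor.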

I'm not massively familiar with cursors, but from a quick test at this end: as long as you avoid declaring the cursor with the STATIC or KEYSET options, changes to the underlying table are reflected in the cursor.
SET NOCOUNT ON;
DECLARE @WorkingTable TABLE(C int)
INSERT INTO @WorkingTable VALUES (1),(2),(3)
DECLARE @C int
DECLARE wt_cursor CURSOR
DYNAMIC /*Or left blank, but not STATIC or KEYSET*/
FOR
SELECT C
FROM @WorkingTable
OPEN wt_cursor;
FETCH NEXT FROM wt_cursor
INTO @C
DELETE FROM @WorkingTable /*Delete every row after the first fetch*/
WHILE @@FETCH_STATUS = 0
BEGIN
    PRINT @C; /*Only 1 is printed: the next fetch sees the delete and returns no row*/
    FETCH NEXT FROM wt_cursor
    INTO @C;
END
CLOSE wt_cursor;
DEALLOCATE wt_cursor;
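For contrast, declaring the same cursor STATIC takes a snapshot of the result set into tempdb when the cursor is opened, so the DELETE above is no longer visible through it (only the DECLARE changes; the rest of the batch stays the same):
DECLARE wt_cursor CURSOR
STATIC /*A private copy of the result set is made at OPEN time*/
FOR
SELECT C
FROM @WorkingTable
-- With this declaration the batch prints 1, 2 and 3:
-- the deleted rows still exist in the cursor's snapshot.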

Related

Cursor is repeating first record forever

Can someone tell me what is wrong with my code, please? I'm simply trying to loop through a table with 2 records and get it to return 2 records. But as you may see in the image below, it keeps repeating the first record (forever, until I hit cancel). Thank you.
SET NOCOUNT ON -- Improves performance by not returning the number of rows affected
--General variables
DECLARE @ImportGUID uniqueidentifier = NEWID() -- Declares and sets a new unique identifier. Can be used to remove records at a later stage
--Cursor variables
DECLARE @FirstNameVariable varchar(50)
DECLARE @SurnameVariable varchar(50)
PRINT 'Starting import ' + CONVERT(varchar(255), @ImportGUID); --just display this on the screen
--Declare the first cursor which will loop through a table collecting data
DECLARE NewPersonTableImportCursor CURSOR FOR
SELECT Firstname, Surname
FROM dbo.A_NewPersonTable
--Open NewPersonTableImportCursor
OPEN NewPersonTableImportCursor
--Start looping through the data and updating the cursor variables with data from this cursor
FETCH NEXT FROM NewPersonTableImportCursor INTO @FirstNameVariable, @SurnameVariable
WHILE @@FETCH_STATUS = 0 --Fetch status 0 means the fetch succeeded, so only proceed on rows that were fetched successfully
BEGIN
PRINT @FirstNameVariable;
END
CLOSE NewPersonTableImportCursor
Add FETCH NEXT FROM NewPersonTableImportCursor INTO @FirstNameVariable, @SurnameVariable before the end of the loop to get the next record.
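For clarity, the corrected loop would look something like this (I have also added the DEALLOCATE that the original code omits):
FETCH NEXT FROM NewPersonTableImportCursor INTO @FirstNameVariable, @SurnameVariable
WHILE @@FETCH_STATUS = 0
BEGIN
    PRINT @FirstNameVariable;
    -- Advance the cursor; without this fetch the loop re-prints the same row forever
    FETCH NEXT FROM NewPersonTableImportCursor INTO @FirstNameVariable, @SurnameVariable
END
CLOSE NewPersonTableImportCursor
DEALLOCATE NewPersonTableImportCursor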

How to convert row-by-row execution into a SET-based approach in SQL

I'm working on a huge piece of SQL code and unfortunately it has a CURSOR which contains another two nested CURSORs (three cursors in total inside a stored procedure), handling millions of rows that need to be DELETEd, UPDATEd and INSERTed. It takes a very long time because of the row-by-row execution, and I wish to convert it to a SET-based approach.
Many articles say that using CURSORs is not recommended and that the alternative is to use WHILE loops instead, so I tried replacing the three CURSORs with three WHILE loops and nothing more. I get the same result, but there is no improvement in performance; it took the same time as the CURSORs did.
Below is the basic structure of the code I'm working on (I will try to keep it as simple as possible), with comments describing what the parts are supposed to do.
declare @projects table (
    ProjectID INT,
    fieldA int,
    fieldB int,
    fieldC int,
    fieldD int)
INSERT INTO @projects
SELECT ProjectID, fieldA, fieldB, fieldC, fieldD
FROM ProjectTable
DECLARE projects1 CURSOR LOCAL FOR /*First cursor - fetch each ProjectID from ProjectTable*/
SELECT ProjectID FROM @projects
OPEN projects1
FETCH NEXT FROM projects1 INTO @ProjectID
WHILE @@FETCH_STATUS = 0
BEGIN
    BEGIN TRY
        BEGIN TRAN
        DELETE FROM T_PROJECTGROUPSDATA td
        WHERE td.ID = @ProjectID
        DECLARE datasets CURSOR FOR /*Second cursor - get the CollectionDate from datasetsTable for every project fetched by the outer cursor*/
        SELECT DataID, GroupID, CollectionDate
        FROM datasetsTable
        WHERE datasetsTable.projectID = @ProjectID /*let's say this fetches ten records for a single ProjectID*/
        OPEN datasets
        FETCH NEXT FROM datasets INTO @DataID, @GroupID, @CollectionDate
        WHILE @@FETCH_STATUS = 0
        BEGIN
            DECLARE period CURSOR FOR /*Third cursor - process the records from another table called T_PERIODS with the @CollectionDate fetched above*/
            SELECT ID, dbo.fn_GetEndOfPeriod(ID)
            FROM T_PERIODS
            WHERE DATEDIFF(dd, @CollectionDate, dbo.fn_GetEndOfPeriod(ID)) >= 0 /*let's say this fetches 20 records for a single @CollectionDate*/
            ORDER BY [YEAR], [Quarter]
            OPEN period
            FETCH NEXT FROM period INTO @PeriodID, @EndDate
            WHILE @@FETCH_STATUS = 0
            BEGIN
                IF EXISTS (some conditions No - 1)
                BEGIN
                    BREAK
                END
                IF EXISTS (some conditions No - 2)
                BEGIN
                    FETCH NEXT FROM period INTO @PeriodID, @EndDate
                    CONTINUE
                END
                /*get the appropriate ID from the T_UPLOADS table for the current ProjectID and PeriodID*/
                SET @UploadID = (SELECT ID FROM T_UPLOADS u WHERE u.project_savix_ID = @ProjectID AND u.PERIOD_ID = @PeriodID AND u.STATUS = 3)
                /*Update some fields in the T_UPLOADS table for the current ProjectID and PeriodID*/
                UPDATE T_UPLOADS
                SET fieldA = mp.fieldA, fieldB = mp.fieldB
                FROM @projects mp
                WHERE T_UPLOADS.ID = @UploadID AND mp.ProjectID = @ProjectID
                /*Insert some records into the T_PROJECTGROUPSDATA table for the current ProjectID and PeriodID*/
                INSERT INTO T_PROJECTGROUPSDATA tpd (fieldA, fieldB, fieldC, fieldD, uploadID)
                SELECT fieldA, fieldB, fieldC, fieldD, @UploadID
                FROM @projects
                WHERE tpd.DataID = @DataID
                FETCH NEXT FROM period INTO @PeriodID, @EndDate
            END
            CLOSE period
            DEALLOCATE period
            FETCH NEXT FROM datasets INTO @DataID, @GroupID, @CollectionDate, @Status, @Createdate
        END
        CLOSE datasets
        DEALLOCATE datasets
        COMMIT
    END TRY
    BEGIN CATCH
        /*Error handling*/
        IF @@TRANCOUNT > 0
            ROLLBACK
    END CATCH
    FETCH NEXT FROM projects1 INTO @ProjectID, @FAID
END
CLOSE projects1
DEALLOCATE projects1
SELECT 1 as success
I request you to suggest any methods to rewrite this code to follow a SET-based approach.
Until the table structure and sample data with the expected result are provided, here are a few quick things I can see that could be improved (some of these have already been mentioned by others above):
A WHILE loop is effectively a cursor as well, so changing the cursors into WHILE loops is not going to make things any faster.
Use a LOCAL FAST_FORWARD cursor unless you need to backtrack over records. This makes the execution much faster.
Yes, I agree that a SET-based approach would be the fastest in most cases. However, if you must store an intermediate result set somewhere, I would suggest using a temp table instead of a table variable; the temp table is the lesser evil of the two options. Here are a few reasons why you should try to avoid using a table variable:
Since SQL Server has no statistics on a table variable while building the execution plan, it always assumes that the table variable will return only one row, and the storage engine is granted only that much memory for executing the query. In reality the table variable might hold millions of rows during execution. If that happens, SQL Server is forced to spill the data to disk during execution (and you will see lots of PAGEIOLATCH waits in sys.dm_os_wait_stats), making the queries much slower.
One way to get around this issue is to add the statement-level hint OPTION (RECOMPILE) at the end of each query where the table variable is used, as sketched below. This forces SQL Server to build the execution plan for those queries at run time, which avoids the low memory-grant issue. The downside is that SQL Server can no longer take advantage of an already cached execution plan for the stored procedure and must recompile it every time, which hurts performance to some extent. So unless you know that the data in the underlying tables changes frequently, or that the stored procedure itself is executed infrequently, this approach is not recommended.
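A minimal sketch of that hint, reusing the T_UPLOADS table and @projects table variable from the question (the query itself is only an illustration):
DECLARE @projects TABLE (ProjectID INT); -- stand-in for the question's @projects table variable

SELECT u.ID
FROM T_UPLOADS u
INNER JOIN @projects p
        ON p.ProjectID = u.project_savix_ID
WHERE u.STATUS = 3
OPTION (RECOMPILE); -- the plan is built at run time, when the actual row count of @projects is known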
Blindly replacing the cursors with WHILE loops is not a recommended option; it will not improve your performance and might even have a negative impact on it.
When you define a cursor using a plain DECLARE C CURSOR, you are in fact creating a SCROLL cursor, which makes all fetch options (FIRST, LAST, PRIOR, NEXT, RELATIVE, ABSOLUTE) available. When FETCH NEXT is the only scroll option you need, you can declare the cursor as FAST_FORWARD.
Here is the quote about FAST_FORWARD cursors in the Microsoft docs:
Specifies that the cursor can only move forward and be scrolled from the first to the last row. FETCH NEXT is the only supported fetch option. All insert, update, and delete statements made by the current user (or committed by other users) that affect rows in the result set are visible as the rows are fetched. Because the cursor cannot be scrolled backward, however, changes made to rows in the database after the row was fetched are not visible through the cursor. Forward-only cursors are dynamic by default, meaning that all changes are detected as the current row is processed. This provides faster cursor opening and enables the result set to display updates made to the underlying tables. While forward-only cursors do not support backward scrolling, applications can return to the beginning of the result set by closing and reopening the cursor.
So you can declare your cursors using DECLARE <cursor name> CURSOR FAST_FORWARD FOR ... and you will get a noticeable improvement.
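Applied to the outer cursor from the question, the declaration would look something like this (a sketch only; @projects is the table variable from the question's code):
DECLARE projects1 CURSOR LOCAL FAST_FORWARD /*forward-only, read-only, FETCH NEXT only*/
FOR
SELECT ProjectID
FROM @projects;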
I think all the cursor code above can be simplified to something like this:
DROP TABLE IF EXISTS #Source;
SELECT DISTINCT p.ProjectID,p.fieldA,p.fieldB,p.fieldC,p.fieldD,u.ID AS [UploadID]
INTO #Source
FROM ProjectTable p
INNER JOIN DatasetsTable d ON d.ProjectID = p.ProjectID
INNER JOIN T_PERIODS s ON DATEDIFF(DAY,d.CollectionDate,dbo.fn_GetEndOfPeriod(s.ID)) >= 0
INNER JOIN T_UPLOADS u ON u.project_savix_ID = p.ProjectID AND u.PERIOD_ID = s.ID AND u.STATUS = 3
WHERE NOT EXISTS (some conditions No - 1)
AND NOT EXISTS (some conditions No - 2)
;
UPDATE u SET u.fieldA = s.fieldA, u.fieldB = s.fieldB
FROM T_UPLOADS u
INNER JOIN #Source s ON s.UploadID = u.ID
;
INSERT INTO T_PROJECTGROUPSDATA (fieldA,fieldB,fieldC,fieldD,uploadID)
SELECT DISTINCT s.fieldA,s.fieldB,s.fieldC,s.fieldD,s.UploadID
FROM #Source s
;
DROP TABLE IF EXISTS #Source;
Also, it would be nice to know the details of the "some conditions No" parts, as the query can differ depending on them.

SQL Server - Triggers & Cursors - For Each Inserted Row, Update an Associated Value on Another Table

For a homework assignment, I'm trying to build a trigger that allows for multiple inserts/updates/deletes by utilizing a cursor. We have to use a cursor in order to practice the syntax. We know that there are very few practical scenarios for cursors in a production environment.
Here's what I'm trying to accomplish:
For each row inserted into the TAL_ORDER_LINE table, update the ON_HAND value in the TAL_ITEM table by subtracting the NUM_ORDERED value from the stored ON_HAND value.
Table Structure:
Current Query:
ALTER TRIGGER update_on_hand
ON TAL_ORDER_LINE
AFTER INSERT AS
DECLARE @vItemNum as char
DECLARE @vNumOrdered as int
DECLARE new_order CURSOR FOR
SELECT ITEM_NUM, NUM_ORDERED
FROM inserted
OPEN new_order;
FETCH NEXT FROM new_order INTO @vItemNum, @vNumOrdered;
WHILE @@FETCH_STATUS = 0
BEGIN
    UPDATE TAL_ITEM
    SET ON_HAND = ON_HAND - @vNumOrdered
    WHERE ITEM_NUM = @vItemNum
    FETCH NEXT FROM new_order INTO @vItemNum, @vNumOrdered;
END
CLOSE new_order
DEALLOCATE new_order
My Insert Query:
INSERT INTO TAL_ORDER_LINE (ORDER_NUM, ITEM_NUM, NUM_ORDERED, QUOTED_PRICE)
VALUES (51626, 'KL78', 10, 10.95), (51626, 'DR67', 10, 29.95)
It runs successfully, but does not affect the ON_HAND value. I think the biggest problem is that I'm struggling to understand cursor syntax, especially the INTO clause in the FETCH statement and how data from the 'inserted' table is passed into the cursor. What do I need to know to get this to work? Thanks in advance!
Your problem is likely due to this:
DECLARE @vItemNum as char
Declared without a length, char defaults to char(1), and it is HIGHLY unlikely that the ITEM_NUM column is a single character. For future reference, you should always verify that your variable definitions are consistent with the values you expect to store in them. And as has been hinted, you will get better answers by posting a complete script rather than a picture.
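A quick way to see the effect (my own illustration, using one of the item numbers from the question's INSERT):
DECLARE @vItemNum AS char; -- with no length specified, DECLARE gives you char(1)
SET @vItemNum = 'KL78';
PRINT @vItemNum;           -- prints just 'K', so WHERE ITEM_NUM = @vItemNum never matches 'KL78'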
The big question: how are you going to debug this?
Is the ON_HAND column NULL? Then wrap it as ISNULL(ON_HAND, 0).
DECLARE @vItemNum as char
DECLARE @vNumOrdered as int
DECLARE new_order CURSOR FOR
SELECT ITEM_NUM, NUM_ORDERED
FROM TAL_ORDER_LINE
OPEN new_order;
FETCH NEXT FROM new_order INTO @vItemNum, @vNumOrdered;
WHILE @@FETCH_STATUS = 0
BEGIN
    --UPDATE TAL_ITEM
    --SET ON_HAND = ON_HAND - @vNumOrdered
    --WHERE ITEM_NUM = @vItemNum
    PRINT @vItemNum
    PRINT @vNumOrdered
    FETCH NEXT FROM new_order INTO @vItemNum, @vNumOrdered;
END
CLOSE new_order
DEALLOCATE new_order
Try this:
ALTER TRIGGER update_on_hand ON TAL_ORDER_LINE
FOR INSERT AS
BEGIN
    UPDATE TI
    SET TI.ON_HAND = TI.ON_HAND - I.NUM_ORDERED
    FROM TAL_ITEM TI INNER JOIN
         INSERTED I ON I.ITEM_NUM = TI.ITEM_NUM
END
Changed the trigger to a FOR INSERT trigger
Removed the cursor
Note: NOT tested. (If you post the SQL scripts for CREATE TABLE plus sample inserts, I can give it a try.)
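A quick way to verify either version (my own check, assuming the tables from the question): run the INSERT from the question, then compare the stock levels before and after.
SELECT ITEM_NUM, ON_HAND
FROM TAL_ITEM
WHERE ITEM_NUM IN ('KL78', 'DR67'); -- ON_HAND should have dropped by 10 for each item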

Assign result of SELECT * statement to variable SQL SERVER

I have a table with 700 rows. What I want to do is execute a 'select * from table_name' query on it, store whatever result I get in a variable and, once that is done, traverse each record for processing. How do I achieve this? Any help?
Thanks in adv,
-saurabh
You want something which is called a cursor.
Cursors
You use a cursor to fetch rows returned by a query. You retrieve the rows into the cursor using a query and then fetch the rows one at a time from the cursor.
Steps
1. Declare variables to store the column values for a row.
2. Declare the cursor, which contains a query.
3. Open the cursor.
4. Fetch the rows from the cursor one at a time and store the column values in the variables declared in step 1. You would then do something with those variables, such as display them on the screen, use them in a calculation, and so on.
5. Close the cursor.
Hopefully this might help you: cursor
Here is an example I use to start with:
USE pubs
GO
-- Declare the variables to store the values returned by FETCH.
DECLARE @au_lname varchar(40), @au_fname varchar(20)
DECLARE authors_cursor CURSOR FOR
SELECT au_lname, au_fname FROM authors
--WHERE au_lname LIKE 'B%'
ORDER BY au_lname, au_fname
OPEN authors_cursor
-- Perform the first fetch and store the values in variables.
-- Note: the variables are in the same order as the columns
-- in the SELECT statement.
FETCH NEXT FROM authors_cursor
INTO @au_lname, @au_fname
-- Check @@FETCH_STATUS to see if there are any more rows to fetch.
WHILE @@FETCH_STATUS = 0
BEGIN
    -- Display the current value fetched into the variable.
    PRINT @au_fname
    -- This is executed as long as the previous fetch succeeds.
    FETCH NEXT FROM authors_cursor
    INTO @au_lname, @au_fname
END
CLOSE authors_cursor
DEALLOCATE authors_cursor
GO

Table Variable inside cursor, strange behaviour - SQL Server

I observed a strange thing inside a stored procedure with a select on a table variable: it always returns the value (on subsequent iterations) that was fetched in the first iteration of the cursor. Here is some sample code that demonstrates this.
DECLARE @id AS INT;
DECLARE @outid AS INT;
DECLARE sub_cursor CURSOR FAST_FORWARD
FOR SELECT [TestColumn]
FROM testtable1;
OPEN sub_cursor;
FETCH NEXT FROM sub_cursor INTO @id;
WHILE @@FETCH_STATUS = 0
BEGIN
    DECLARE @Log TABLE (LogId BIGINT NOT NULL);
    PRINT 'id: ' + CONVERT (VARCHAR (10), @id);
    INSERT INTO Testtable2 (TestColumn)
    OUTPUT inserted.[TestColumn] INTO @Log
    VALUES (@id);
    IF @@ERROR = 0
    BEGIN
        SELECT TOP 1 @outid = LogId
        FROM @Log;
        PRINT 'Outid: ' + CONVERT (VARCHAR (10), @outid);
        INSERT INTO [dbo].[TestTable3] ([TestColumn])
        VALUES (@outid);
    END
    FETCH NEXT FROM sub_cursor INTO @id;
END
CLOSE sub_cursor;
DEALLOCATE sub_cursor;
However, while I was posting the code on SO and trying various combinations, I observed that removing TOP from the line below gives me the right values out of the table variable inside the cursor.
SELECT TOP 1 @outid = LogId FROM @Log;
which would make it like this:
SELECT @outid = LogId FROM @Log;
I am not sure what is happening here. I thought TOP 1 on the table variable should work, since I assumed a new table is created on every iteration of the loop. Can someone throw light on table variable scoping and lifetime?
Update: I have found a solution to circumvent the strange behaviour.
As a workaround, I have declared the table variable once at the top, before the loop, and I delete all of its rows at the beginning of each iteration.
There are numerous things a bit off with this code.
First off, you roll back your embedded transaction on error, but I never see you commit it on success. As written, this will leak a transaction, which could cause major issues for you in the following code.
What might be confusing you about the @Log table situation is that SQL Server doesn't use the same variable scoping and lifetime rules as C++ or other standard programming languages. Even though you declare the table variable inside the cursor block, you only ever get a single @Log table, which then lives for the remainder of the batch and has multiple rows inserted into it.
As a result, your use of TOP 1 is not really meaningful, since there's no ORDER BY clause to impose any sort of deterministic ordering on the table. Without that, you get whatever order SQL Server sees fit to give you, which in this case appears to be the insertion order, giving you the first inserted element of that log table every time you run the SELECT.
If you truly want only the last ID value, you will need to provide some real ordering criterion for your @Log table -- some form of autonumber or date field alongside the data column that can be used to provide the proper ordering for what you want to do.
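As an illustration of that suggestion (a sketch only, reusing the names from the question), the log table could carry an IDENTITY column so the ordering is explicit:
DECLARE @outid AS INT;
DECLARE @Log TABLE (LogSeq INT IDENTITY(1,1), LogId BIGINT NOT NULL);

INSERT INTO Testtable2 (TestColumn)
OUTPUT inserted.[TestColumn] INTO @Log (LogId) -- name the target column because of the IDENTITY column
VALUES (42);

SELECT TOP (1) @outid = LogId
FROM @Log
ORDER BY LogSeq DESC; -- the ORDER BY makes TOP (1) deterministic: the most recently logged row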
