Cursor inside cursor: Alternative?

So, I have the following problem:
I have 4 tables: RangeInformation, RangePercentage, DGRange and SGRange.
RangeInformation has a 1-to-N relationship with RangePercentage (i.e. N percentages to 1 range).
Both DGRange and SGRange connect these two tables to other external sources.
It shouldn't be that way, but that is how the system was built, and now I can only fix the problems that arise from that stupid decision.
In any case, we sometimes find that both DGRange and SGRange point to the same primary key when they shouldn't, and once the system changes any information in that range it breaks something else. So I must find every occurrence of these duplicates (very easy to do), duplicate the whole record in RangeInformation/RangePercentage, and point one of them to the new record.
My problem is that right now I am thinking of using a cursor inside a cursor, and I believe there may be an easier way to do it.
Is there a better way?
DECLARE @range nvarchar(10)
DECLARE @rangeinfoid nvarchar(10)
DECLARE @lowerlimit money
DECLARE @Upperlimit money
DECLARE @CurrentYear smallint
DECLARE @Percentage float

DECLARE subgroup_cursor CURSOR FOR
SELECT distinct a.RangeInformationId
FROM SubgroupRange a, DiscountGroupRange b
where a.RangeInformationId = b.RangeInformationId

DECLARE rangeperc_cursor CURSOR FOR
SELECT CurrentYear,
       Percentage
from RangePercentage
where RangeInformationId = @range

OPEN subgroup_cursor
FETCH NEXT FROM subgroup_cursor INTO @range
WHILE @@FETCH_STATUS = 0
BEGIN
    select @rangeinfoid = RangeInformationId,
           @lowerlimit = LowerLimit,
           @Upperlimit = UpperLimit
    from RangeInformation
    where RangeInformationId = @range
    --Add insert here

    OPEN rangeperc_cursor
    FETCH NEXT FROM rangeperc_cursor INTO @CurrentYear, @Percentage
    WHILE @@FETCH_STATUS = 0
    BEGIN
        print(@CurrentYear)
        print(@Percentage)
        --Add insert here
        FETCH NEXT FROM rangeperc_cursor INTO @CurrentYear, @Percentage
    END
    CLOSE rangeperc_cursor -- close it so it can be re-opened for the next outer row

    FETCH NEXT FROM subgroup_cursor INTO @range
END
CLOSE subgroup_cursor
DEALLOCATE subgroup_cursor
DEALLOCATE rangeperc_cursor

As I commented above, it is difficult to know exactly what you are trying to do, but does something like this get you the data you need?
select ri.RangeInformationId
, ri.LowerLimit
, ri.UpperLimit
from RangeInformation ri
join SubgroupRange sr on sr.RangeInformationId = ri.RangeInformationId
join DiscountGroupRange dgr on dgr.RangeInformationId = sr.RangeInformationId
You really should get in the habit of this style of join. It is a little cleaner code-wise and helps prevent an accidental cross join caused by forgetting the join predicates in the WHERE clause.
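For comparison, here is the old comma-join version of the same query; leave out the WHERE predicates and it silently becomes a cross join, which the explicit JOIN syntax makes much harder to do by accident:
select ri.RangeInformationId
     , ri.LowerLimit
     , ri.UpperLimit
from RangeInformation ri, SubgroupRange sr, DiscountGroupRange dgr
where sr.RangeInformationId = ri.RangeInformationId
  and dgr.RangeInformationId = sr.RangeInformationId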

Here is an example of the comment I left, for clarity.
Essentially, try removing the subgroup_cursor altogether by creating the same population in a temp table:
DECLARE @rangeinfoid int
DECLARE @CurrentYear smallint
DECLARE @Percentage float

CREATE TABLE #temp_subgroup
(
    RangeInformationID int
)

INSERT INTO #temp_subgroup
    (RangeInformationID)
SELECT distinct a.RangeInformationId
FROM SubgroupRange a, DiscountGroupRange b
where a.RangeInformationId = b.RangeInformationId

DECLARE rangeperc_cursor CURSOR FOR
SELECT CurrentYear,
       Percentage,
       RangeInformationID
from RangePercentage
where RangeInformationId IN (SELECT RangeInformationID FROM #temp_subgroup)

OPEN rangeperc_cursor
FETCH NEXT FROM rangeperc_cursor INTO @CurrentYear, @Percentage, @rangeinfoid
WHILE @@FETCH_STATUS = 0
BEGIN
    CREATE TABLE #temp_rangeupdate
    (
        RangeInformationID int,
        LowerLimit money,
        UpperLimit money
    )

    INSERT INTO #temp_rangeupdate
        (RangeInformationID, LowerLimit, UpperLimit)
    SELECT RangeInformationID,
           LowerLimit,
           UpperLimit
    FROM RangeInformation
    WHERE RangeInformationID = @rangeinfoid

    -- UPDATE/INSERT #temp_rangeupdate
    -- UPDATE/INSERT Production Tables from #temp_rangeupdate

    DROP TABLE #temp_rangeupdate

    FETCH NEXT FROM rangeperc_cursor INTO @CurrentYear, @Percentage, @rangeinfoid
END

CLOSE rangeperc_cursor
DEALLOCATE rangeperc_cursor
DROP TABLE #temp_subgroup
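For what it's worth, the duplicate-and-repoint job itself can also be done without any cursor. A minimal sketch, assuming RangeInformationId is an integer IDENTITY column, that RangeInformation has only the columns shown in the question, and that SubgroupRange is the table to re-point at the copies (the MERGE ... ON 1 = 0 trick is used only so OUTPUT can capture the old-to-new id mapping):
DECLARE @map TABLE (OldId int, NewId int);

-- 1) Copy every RangeInformation row that is shared by both link tables,
--    recording old -> new id pairs.
MERGE RangeInformation AS tgt
USING (
        SELECT ri.RangeInformationId, ri.LowerLimit, ri.UpperLimit
        FROM RangeInformation ri
        WHERE ri.RangeInformationId IN (SELECT sr.RangeInformationId
                                        FROM SubgroupRange sr
                                        JOIN DiscountGroupRange dgr
                                          ON dgr.RangeInformationId = sr.RangeInformationId)
      ) AS src
ON 1 = 0                                   -- never matches, so every source row is inserted
WHEN NOT MATCHED THEN
    INSERT (LowerLimit, UpperLimit) VALUES (src.LowerLimit, src.UpperLimit)
OUTPUT src.RangeInformationId, inserted.RangeInformationId INTO @map (OldId, NewId);

-- 2) Copy the child percentages for each duplicated range.
INSERT INTO RangePercentage (RangeInformationId, CurrentYear, Percentage)
SELECT m.NewId, rp.CurrentYear, rp.Percentage
FROM @map m
JOIN RangePercentage rp ON rp.RangeInformationId = m.OldId;

-- 3) Re-point one of the two link tables (SubgroupRange here, as an example) at the copies.
UPDATE sr
SET sr.RangeInformationId = m.NewId
FROM SubgroupRange sr
JOIN @map m ON m.OldId = sr.RangeInformationId;
Plain INSERT ... SELECT statements cover steps 2 and 3 on their own; the MERGE is only there because OUTPUT on a plain INSERT cannot reference the source table's old id.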

Related

FOR DO in SQL Server

Just curious, can I do this in SQL Server?
FOR
SELECT columns
FROM table_name
DO
---do some logic
--proc call
ENDFOR;
which means: for every record from the first SELECT, do something in the DO block.
This works perfectly in Ingres, but I'm not sure whether it will work with SQL Server, or whether I should just use a cursor?
This syntax is not supported in SQL Server's T-SQL. But, as you mention yourself in your question, there is CURSOR:
--Some *mockup* data
DECLARE @tbl TABLE(ID INT IDENTITY, SomeData VARCHAR(100));
INSERT INTO @tbl VALUES('Row 1'),('Row 2'),('Row 3');
--Declare variables to buffer all of the row's values
DECLARE @WorkingVariable VARCHAR(100);
--never forget the `ORDER BY` if sort order matters!
DECLARE cur CURSOR FOR SELECT SomeData FROM @tbl ORDER BY ID;
OPEN cur;
--a first fetch outside of the loop
FETCH NEXT FROM cur INTO @WorkingVariable
--loop until nothing more to read
WHILE @@FETCH_STATUS = 0
BEGIN
    --Do whatever you like with the value(s) read into your variable(s).
    SELECT @WorkingVariable;
    --Pick the next value
    FETCH NEXT FROM cur INTO @WorkingVariable
END
--Don't forget to get rid of the used resources
CLOSE cur;
DEALLOCATE cur;
But please keep in mind that using a loop (however it is coded) is procedural thinking and goes against the principles of set-based thinking. There are very rare situations where a CURSOR (or any other kind of loop) is the right choice...
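For contrast, a minimal set-based version of the same mock-up: the whole "loop" collapses into one statement, which is usually the better first attempt before reaching for a cursor:
--Same *mockup* data, consumed set-based in a single statement
DECLARE @tbl TABLE(ID INT IDENTITY, SomeData VARCHAR(100));
INSERT INTO @tbl VALUES('Row 1'),('Row 2'),('Row 3');

--No loop: the engine processes all rows at once
SELECT SomeData
FROM @tbl
ORDER BY ID;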
SQL Server doesn't do any behind-the-scenes work for building WHILE loops.
One way to do something like this in SQL Server would look like:
declare @indexTable table (fieldIndex bigint identity(1,1), field (whatever your type of field is))

insert into @indexTable(field)
select field
from table_name

declare @pointer bigint = 1
declare @maxIndexValue bigint = (select max(fieldIndex) from @indexTable)
declare @fieldValue (fieldtype)

while @pointer <= @maxIndexValue
BEGIN
    select @fieldValue = field from @indexTable where fieldIndex = @pointer
    ---do some logic
    --proc call
    set @pointer = @pointer + 1
END
This is an alternative to using a cursor to loop over your rowset.
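If the "do some logic / proc call" in the question really is a stored-procedure call, the body of either loop boils down to a single EXEC per fetched value. dbo.ProcessField and its @field parameter below are hypothetical placeholders for whatever the DO block needs to run:
-- Inside the WHILE loop, once @fieldValue has been populated:
EXEC dbo.ProcessField @field = @fieldValue;   -- hypothetical per-row procedure call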

Procedure to award every 3rd person

I'm working on a stored procedure which should reward every 3rd person with an extra bonus on their current credit. The amount of the bonus and the (3rd person) option should be parameterized. Below is my current code, but when I try to execute it with SQLFiddle, I always get the error Incorrect syntax near 'INTEGER', and I can't find the mistake in my code. I'm using MS SQL Server 2014.
CREATE TABLE Customer (
    custnr INTEGER PRIMARY KEY IDENTITY,
    name VARCHAR(40) NOT NULL,
    firstname VARCHAR(40) NOT NULL,
    credit DECIMAL(12,2)
);

CREATE PROCEDURE awardBonus
    @position INTEGER;
    @bonus DECIMAL(5,2)
AS
BEGIN
    DECLARE @creditCustomer DECIMAL(12,2);
    DECLARE customer_cursor CURSOR FOR
        SELECT custnr
        FROM Customer
        ORDER BY custnr ASC;
    OPEN customer_cursor;
    FETCH NEXT FROM customer_cursor INTO @custnr;
    WHILE @@FETCH_STATUS = 0
    BEGIN
        IF (@custnr % @position = 0)
        BEGIN
            SELECT @creditCustomer = credit
            FROM Customer
            WHERE custnr = @custnr;
            SET @creditCustomer = @creditCustomer + @bonus;
            UPDATE Customer
            SET credit = @creditCustomer
            WHERE custnr = @custnr;
        END;
        FETCH NEXT FROM customer_cursor INTO @custnr;
    END;
    CLOSE customer_cursor;
    DEALLOCATE customer_cursor;
END;
EXECUTE awardBonus 3, 100
You need to remove the ; in the parameter list:
@position INTEGER;
Also, you should first declare @custnr:
DECLARE @custnr INT;
You also have an invalid column name error in your ORDER BY clause:
ORDER BY knr ASC;
should be:
ORDER BY custnr ASC;
Not so fast!
You can rewrite this in a set-based fashion and remove the use of CURSOR
CREATE PROCEDURE awardBonus
    @position INTEGER,
    @bonus DECIMAL(5,2)
AS
BEGIN
    WITH Cte AS (
        SELECT *,
               rn = ROW_NUMBER() OVER(ORDER BY custnr)
        FROM Customer
    )
    UPDATE Cte
    SET credit = credit + @bonus
    WHERE rn % @position = 0
END
CREATE PROCEDURE awardBonus
    @position INTEGER;
    @bonus DECIMAL(5,2)
There is a semicolon after INTEGER; it should be a comma. Corrected version:
CREATE PROCEDURE awardBonus
    @position INTEGER,
    @bonus DECIMAL(5,2)
On a different note, how are you selecting the 3rd person; should this be a random selection or an ordered one? And why are you using the cursor? The set-based solution seems to be a better choice. In either case, random or not, you could construct the query using ROW_NUMBER() and select every 3rd record, for example, as sketched below.
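A sketch of what that could look like against the Customer table from the question; NEWID() is one way to get a random ordering, while ranking by custnr keeps it deterministic:
DECLARE @position INT = 3, @bonus DECIMAL(5,2) = 100;

WITH Ranked AS (
    SELECT credit,
           rnOrdered = ROW_NUMBER() OVER (ORDER BY custnr),  -- deterministic: every 3rd by customer number
           rnRandom  = ROW_NUMBER() OVER (ORDER BY NEWID())  -- random draw instead
    FROM Customer
)
UPDATE Ranked
SET credit = credit + @bonus
WHERE rnOrdered % @position = 0;   -- swap in rnRandom for the random variant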
You've got a ; where you need a ,:
CREATE PROCEDURE awardBonus
    @position INTEGER;
    @bonus DECIMAL(5,2)
Furthermore, CREATE PROCEDURE must be the only statement in a batch. So you'll have to create the table in a separate batch.
Also, you use ORDER BY knr ASC; knr does not exist.
You also use a variable @custnr which is not declared.

SQL Server SELECT variables showing NULL

In the code below, when I run it in Debug mode I can see the variables contain values; however, when I SELECT them they show NULL. Any ideas? I eventually need to do an UPDATE back to the table [dbo].[HistData]
with the values where RecordID = some number. Any ideas welcome.
-- Declare the variables to store the values returned by FETCH.
DECLARE @HD_TckrPercent decimal(6,3)  -- H2 in above formula
DECLARE @HD_CloseLater decimal(9,2)   -- F2 in above formula
DECLARE @HD_CloseEarlier decimal(9,2) -- F3 in above formula
DECLARE @RowsNeeded INT
DECLARE @RecordCOUNT INT
SET @RowsNeeded = 2
SET @RecordCOUNT = 0 -- to initialize it

DECLARE stocks_cursor CURSOR FOR
SELECT TOP (@RowsNeeded) [TCKR%], [Stock_Close] FROM [dbo].[HistData]
ORDER BY [RecordID]

OPEN stocks_cursor

-- Perform the first fetch and store the values in variables.
-- Note: The variables are in the same order as the columns
-- in the SELECT statement.
-- Check @@FETCH_STATUS to see if there are any more rows to fetch.
WHILE @@FETCH_STATUS = 0
BEGIN
    -- Concatenate and display the current values in the variables.
    -- This is executed as long as the previous fetch succeeds.
    SET @RecordCOUNT = (@RecordCOUNT + 1)
    Print @HD_CloseLater
    IF @RecordCOUNT = 1
    BEGIN
        FETCH NEXT FROM stocks_cursor
        INTO @HD_TckrPercent, @HD_CloseLater
    END
    ELSE
    BEGIN
        FETCH NEXT FROM stocks_cursor
        INTO @HD_TckrPercent, @HD_CloseEarlier
    END
    Select @HD_TckrPercent
    Select @HD_CloseLater
    Select @HD_CloseEarlier
END

CLOSE stocks_cursor
DEALLOCATE stocks_cursor
GO
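One detail that stands out in the code above: the comment block announces "Perform the first fetch", but no FETCH statement actually follows it before the WHILE, so whether the loop runs at all (and what the variables hold when it starts) depends on whatever @@FETCH_STATUS happened to be beforehand. A minimal sketch of the usual priming-fetch pattern applied to this cursor, using the names from the question and relying on the declarations above (the Print bookkeeping is left out):
OPEN stocks_cursor

-- Priming fetch: the first row (the "later" close, F2) lands in the variables
-- before @@FETCH_STATUS is tested.
FETCH NEXT FROM stocks_cursor INTO @HD_TckrPercent, @HD_CloseLater

WHILE @@FETCH_STATUS = 0
BEGIN
    SET @RecordCOUNT = @RecordCOUNT + 1
    SELECT @HD_TckrPercent, @HD_CloseLater, @HD_CloseEarlier

    -- Subsequent rows (the "earlier" close, F3) go into the other variable.
    FETCH NEXT FROM stocks_cursor INTO @HD_TckrPercent, @HD_CloseEarlier
END

CLOSE stocks_cursor
DEALLOCATE stocks_cursor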

Can I spread out a long-running stored proc across multiple CPUs?

[Also on SuperUser - https://superuser.com/questions/116600/can-i-spead-out-a-long-running-stored-proc-accross-multiple-cpus]
I have a stored procedure in SQL Server that gets and decrypts a block of data (credit cards in this case).
Most of the time the performance is tolerable, but there are a couple of customers where the process is painfully slow, taking literally 1 minute to complete (well, 59377 ms to return from SQL Server to be exact, but it can vary by a few hundred ms based on load).
When I watch the process, I see that SQL Server is only using a single processor to perform the whole process, and typically only processor 0.
Is there a way I can change my stored proc so that SQL Server can multi-thread the process? Is it even feasible to cheat and break the calls in half (top 50%, bottom 50%) and spread the load, as a gross hack? (Just spit-balling here.)
My stored proc:
USE [Commerce]
GO
/****** Object: StoredProcedure [dbo].[GetAllCreditCardsByCustomerId] Script Date: 03/05/2010 11:50:14 ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
ALTER PROCEDURE [dbo].[GetAllCreditCardsByCustomerId]
    @companyId UNIQUEIDENTIFIER, @DecryptionKey NVARCHAR (MAX)
AS
SET NOCOUNT ON

DECLARE @cardId uniqueidentifier
DECLARE @tmpdecryptedCardData VarChar(MAX);
DECLARE @decryptedCardData VarChar(MAX);
DECLARE @tmpTable as Table
(
    CardId uniqueidentifier,
    DecryptedCard NVarChar(Max)
)

DECLARE creditCards CURSOR FAST_FORWARD READ_ONLY
FOR select cardId from CreditCards where companyId = @companyId and Active=1 order by addedBy desc
--2
OPEN creditCards
--3
FETCH creditCards INTO @cardId -- prime the cursor
WHILE @@Fetch_Status = 0
BEGIN
    --OPEN creditCards
    DECLARE creditCardData CURSOR FAST_FORWARD READ_ONLY
    FOR select convert(nvarchar(max), DecryptByCert(Cert_Id('Oh-Nay-Nay'), EncryptedCard, @DecryptionKey)) FROM CreditCardData where cardid = @cardId order by valueOrder
    OPEN creditCardData
    FETCH creditCardData INTO @tmpdecryptedCardData -- prime the cursor
    WHILE @@Fetch_Status = 0
    BEGIN
        print 'CreditCardData'
        print @tmpdecryptedCardData
        set @decryptedCardData = ISNULL(@decryptedCardData, '') + @tmpdecryptedCardData
        print '@decryptedCardData'
        print @decryptedCardData;
        FETCH NEXT FROM creditCardData INTO @tmpdecryptedCardData -- fetch next
    END
    CLOSE creditCardData
    DEALLOCATE creditCardData

    insert into @tmpTable (CardId, DecryptedCard) values ( @cardId, @decryptedCardData )
    set @decryptedCardData = ''
    FETCH NEXT FROM creditCards INTO @cardId -- fetch next
END

select CardId, DecryptedCard FROM @tmpTable

CLOSE creditCards
DEALLOCATE creditCards
What about using FOR XML to do concatenation in a single correlated subquery:
DECLARE @cards TABLE
(
    cardid INT NOT NULL
    ,addedBy INT NOT NULL
)
DECLARE @data TABLE
(
    cardid INT NOT NULL
    ,valueOrder INT NOT NULL
    ,encrypted VARCHAR(MAX) NOT NULL
)

INSERT INTO @cards
VALUES ( 0, 1 )
INSERT INTO @cards
VALUES ( 1, 0 )
INSERT INTO @data
VALUES ( 0, 0, '0encrypted0' )
INSERT INTO @data
VALUES ( 0, 1, '0encrypted1' )
INSERT INTO @data
VALUES ( 0, 2, '0encrypted2' )
INSERT INTO @data
VALUES ( 1, 0, '1encrypted0' )
INSERT INTO @data
VALUES ( 1, 1, '1encrypted1' )

-- INSERT INTO output_table ()
SELECT cardid, decrypted
FROM @cards AS cards
OUTER APPLY ( SELECT REPLACE(encrypted, 'encrypted', 'decrypted') + '' -- Put your UDF here
              FROM @data AS data
              WHERE data.cardid = cards.cardid
              ORDER BY data.valueOrder
              FOR XML PATH('')
            ) AS data ( decrypted )
ORDER BY cards.addedBy DESC
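One caveat with plain FOR XML PATH('') concatenation: characters such as & and < in the data come back XML-entitized. If that could matter for the decrypted values, the usual variant is to add TYPE and unwrap the result with .value(); a sketch of how the OUTER APPLY above could change:
OUTER APPLY ( SELECT ( SELECT REPLACE(encrypted, 'encrypted', 'decrypted') + ''  -- Put your UDF here
                       FROM @data AS data
                       WHERE data.cardid = cards.cardid
                       ORDER BY data.valueOrder
                       FOR XML PATH(''), TYPE
                     ).value('.', 'VARCHAR(MAX)')   -- unwraps the XML, undoing the entitization
            ) AS data ( decrypted )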
This may be a better question for the SuperUser group (DBAs).
Consider that credit card numbers hash very nicely: the final digit in Visa/MasterCard 16-digit CCs is a checksum value. Have you considered roll-your-own parallelism by, for example, having each thread grab those CC numbers where modulo(4) = thread_id? Assuming n CPUs/cores/whatever they're calling them today, you'd not want more than 4 (2*cores) parallel processing threads.
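A rough sketch of that bucketing idea against the tables in the question; since the proc keys on cardId (a uniqueidentifier) rather than the card number itself, CHECKSUM stands in for the modulo here, and @bucket / @bucketCount are hypothetical values each worker process would be given:
-- Each of N callers runs the same query with a different @bucket (0..N-1),
-- so the rows are split roughly evenly across the workers.
DECLARE @companyId UNIQUEIDENTIFIER = NEWID();   -- stand-in for the proc parameter
DECLARE @bucket INT = 0, @bucketCount INT = 4;   -- worker 0 of 4

SELECT cardId
FROM CreditCards
WHERE companyId = @companyId
  AND Active = 1
  AND ABS(CHECKSUM(cardId)) % @bucketCount = @bucket
ORDER BY addedBy DESC;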
Yes: rewrite the cursors as a set-based query, and the SQL Server optimizer should automatically parallelize (or not) depending on the size of the underlying data. No "special" dev work is required to make SQL Server use parallelism, beyond some basic best practices like avoiding cursors. It will decide automatically whether it is possible and useful to use parallel threads on multiple processors, and then it can split the work for you at run time.

Why does my cursor stop in the middle of a loop?

The code posted here is 'example' code; it's not production code. I've done this to make the problem I'm explaining readable and concise.
Using code similar to that below, we're coming across a strange bug. After every INSERT the WHILE loop stops.
The table contains 100 rows; when the insert happens at row 50, the cursor stops, having only touched the first 50 rows. When the insert happens at row 55 it stops after 55, and so on.
-- This code is a hypothetical example written to express
-- a problem seen in production
DECLARE @v1 int
DECLARE @v2 int

DECLARE MyCursor CURSOR FAST_FORWARD FOR
SELECT Col1, Col2
FROM table

OPEN MyCursor
FETCH NEXT FROM MyCursor INTO @v1, @v2

WHILE (@@FETCH_STATUS = 0)
BEGIN
    IF (@v1 > 10)
    BEGIN
        INSERT INTO table2(col1) VALUES (@v2)
    END

    FETCH NEXT FROM MyCursor INTO @v1, @v2
END

CLOSE MyCursor
DEALLOCATE MyCursor
There is an AFTER INSERT trigger on table2 which is used to log mutations on table2 into a third table, aptly named Mutaties ('mutations'). This contains a cursor to handle the insert (mutations are logged per column in a very specific manner, which requires the cursor).
A bit of background: this exists on a set of small support tables. It is a requirement for the project that every change made to the source data is logged, for auditing purposes. The tables being logged contain things such as bank account numbers, into which vast sums of money will be deposited. There are at most a few thousand records, and they should only be modified very rarely. The auditing functionality is there to discourage fraud, as we log 'what changed' along with 'who did it'.
The obvious, fast and logical way to implement this would be to store the entire row each time an update is made. Then we wouldn't need the cursor, and it would perform a factor better. However, the politics of the situation mean my hands are tied.
Phew. Now back to the question.
Simplified version of the trigger (real version does an insert per column, and it also inserts the old value):
--This cursor is a hypothetical cursor written to express
--a problem seen in production.
--On UPDATE a new record must be added to table Mutaties for
--every column of every updated row. This is required
--for auditing purposes.
--A set-based approach which stores the previous state of the row
--is expressly forbidden by the customer
DECLARE @col1 int
DECLARE @col2 int
DECLARE @col1_old int
DECLARE @col2_old int

--Loop through old values next to new values
DECLARE MyTriggerCursor CURSOR FAST_FORWARD FOR
SELECT i.col1, i.col2, d.col1 as col1_old, d.col2 as col2_old
FROM Inserted i
INNER JOIN Deleted d ON i.id = d.id

OPEN MyTriggerCursor
FETCH NEXT FROM MyTriggerCursor INTO @col1, @col2, @col1_old, @col2_old

--Loop through all rows which were updated
WHILE (@@FETCH_STATUS = 0)
BEGIN
    --In production code a few more details are logged, such as userid, times etc etc
    --First column
    INSERT Mutaties (tablename, columnname, newvalue, oldvalue)
    VALUES ('table2', 'col1', @col1, @col1_old)

    --Second column
    INSERT Mutaties (tablename, columnname, newvalue, oldvalue)
    VALUES ('table2', 'col2', @col2, @col2_old)

    FETCH NEXT FROM MyTriggerCursor INTO @col1, @col2, @col1_old, @col2_old
END

CLOSE MyTriggerCursor
DEALLOCATE MyTriggerCursor
Why is the code exiting in the middle of the loop?
Your problem is that you should NOT be using a cursor for this at all! Here is the code for the example given above:
INSERT INTO table2(col1)
SELECT Col2
FROM table
WHERE Col1 > 10
You also should never, ever use a cursor in a trigger; that will kill performance. If someone added 100,000 rows in an insert, this could take minutes (or even hours) instead of milliseconds or seconds. We replaced one here (that predated my coming to this job) and reduced an import to that table from 40 minutes to 45 seconds.
Any production code that uses a cursor should be examined to replace it with correct set-based code. In my experience, 90+% of all cursors can be rewritten in a set-based fashion.
Ryan, your problem is that @@FETCH_STATUS is global to all cursors in a connection.
So the cursor within the trigger ends with a @@FETCH_STATUS of -1. When control returns to the code above, the last @@FETCH_STATUS was -1, so the outer loop ends.
That's explained in the documentation on MSDN.
What you can do is use a local variable to store @@FETCH_STATUS, and use that local variable in the loop condition. So you get something like this:
DECLARE @v1 int
DECLARE @v2 int
DECLARE @FetchStatus int

DECLARE MyCursor CURSOR FAST_FORWARD FOR
SELECT Col1, Col2
FROM table

OPEN MyCursor
FETCH NEXT FROM MyCursor INTO @v1, @v2
SET @FetchStatus = @@FETCH_STATUS

WHILE (@FetchStatus = 0)
BEGIN
    IF (@v1 > 10)
    BEGIN
        INSERT INTO table2(col1) VALUES (@v2)
    END

    FETCH NEXT FROM MyCursor INTO @v1, @v2
    SET @FetchStatus = @@FETCH_STATUS
END

CLOSE MyCursor
DEALLOCATE MyCursor
It's worth noting that this behaviour does not apply to nested cursors. I've made a quick example, which on SQL Server 2008 returns the expected result (50).
USE AdventureWorks
GO

DECLARE @LocationId smallint
DECLARE @ProductId smallint
DECLARE @Counter int
SET @Counter = 0

DECLARE MyFirstCursor CURSOR FOR
SELECT TOP 10 LocationId
FROM Production.Location

OPEN MyFirstCursor
FETCH NEXT FROM MyFirstCursor INTO @LocationId

WHILE (@@FETCH_STATUS = 0)
BEGIN
    DECLARE MySecondCursor CURSOR FOR
    SELECT TOP 5 ProductID
    FROM Production.Product

    OPEN MySecondCursor
    FETCH NEXT FROM MySecondCursor INTO @ProductId

    WHILE (@@FETCH_STATUS = 0)
    BEGIN
        SET @Counter = @Counter + 1
        FETCH NEXT FROM MySecondCursor INTO @ProductId
    END

    CLOSE MySecondCursor
    DEALLOCATE MySecondCursor

    FETCH NEXT FROM MyFirstCursor INTO @LocationId
END

CLOSE MyFirstCursor
DEALLOCATE MyFirstCursor
--
--Against the initial version of AdventureWorks, counter should be 50.
--
IF (@Counter = 50)
    PRINT 'All is good with the world'
ELSE
    PRINT 'Something''s wrong with the world today'
This is a simple misunderstanding of triggers... you don't need a cursor at all for this:
if UPDATE(Col1)
begin
    insert into Mutaties
    (
        tablename,
        columnname,
        newvalue,
        oldvalue
    )
    select
        'table2',
        'Col1',
        coalesce(i.Col1, ''),
        coalesce(d.Col1, '')
    from inserted i
    join deleted d on i.ID = d.ID
        and coalesce(d.Col1, -666) <> coalesce(i.Col1, -666)
end
Basically, what this code does is check whether that column's data was updated. If it was, it compares the new and old data, and if it's different it inserts a row into your log table.
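A quick, hypothetical way to exercise the trigger, assuming table2 has the ID and Col1 columns used in the join above and that a row with ID = 1 exists (the key value is made up):
-- Change one row, then look at the audit row(s) the trigger produced.
UPDATE table2
SET Col1 = Col1 + 1
WHERE ID = 1;                       -- made-up key value

SELECT tablename, columnname, oldvalue, newvalue
FROM Mutaties;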
Your first code example could easily be replaced with something like this:
insert into table2 (col1)
select Col2
from table
where Col1>10
This code does not fetch any further values from the cursor, nor does it increment any values. As it is, there is no reason to implement a cursor here.
Your entire code could be rewritten as:
DECLARE @v1 int
DECLARE @v2 int

SELECT @v1 = Col1, @v2 = Col2
FROM table

IF (@v1 > 10)
    INSERT INTO table2(col1) VALUES (@v2)
Edit: Post has been edited to fix the problem I was referring to.
You do not have to use a cursor to insert each column as a separate row.
Here is an example:
INSERT LOG.DataChanges
SELECT
    SchemaName = 'Schemaname',
    TableName = 'TableName',
    ColumnName = CASE ColumnID WHEN 1 THEN 'Column1' WHEN 2 THEN 'Column2' WHEN 3 THEN 'Column3' WHEN 4 THEN 'Column4' END,
    ID = Key1,
    ID2 = Key2,
    ID3 = Key3,
    DataBefore = CASE ColumnID WHEN 1 THEN D.Column1 WHEN 2 THEN D.Column2 WHEN 3 THEN D.Column3 WHEN 4 THEN D.Column4 END,
    DataAfter = CASE ColumnID WHEN 1 THEN I.Column1 WHEN 2 THEN I.Column2 WHEN 3 THEN I.Column3 WHEN 4 THEN I.Column4 END,
    DateChange = GETDATE(),
    USER = WhateverFunctionYouAreUsingForThis
FROM
    Inserted I
    FULL JOIN Deleted D ON I.Key1 = D.Key1 AND I.Key2 = D.Key2
    CROSS JOIN (
        SELECT 1 UNION ALL SELECT 2 UNION ALL SELECT 3 UNION ALL SELECT 4
    ) X (ColumnID)
In the X table, you could code additional behavior with a second column that specially describes how to handle just that column (let's say you wanted some to post all the time, but others only when the value changes). What's important is that this is an example of the cross join technique of splitting rows into each column, but there is a lot more that can be done. Note that the full join allows this to work on inserts and deletes as well as updates.
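A self-contained sketch of what that extra behavior column could look like; LogAlways is a made-up name, with 1 meaning "post on every update" and 0 meaning "post only when the value changed":
SELECT X.ColumnID, X.LogAlways
FROM (
    SELECT 1, 0 UNION ALL   -- Column1: log only when the value changes
    SELECT 2, 0 UNION ALL   -- Column2: log only when the value changes
    SELECT 3, 1 UNION ALL   -- Column3: log on every update
    SELECT 4, 1             -- Column4: log on every update
) X (ColumnID, LogAlways);
The flag can then drive a WHERE clause (or another CASE expression) in the main SELECT to decide whether a given column's row is posted.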
I also fully agree that storing each row is FAR superior. See this forum for more about this.
As ck mentioned, you are not fetching any further values. @@FETCH_STATUS thus gets its value from the cursor contained in your AFTER INSERT trigger.
You should change your code to
DECLARE @v1 int
DECLARE @v2 int

DECLARE MyCursor CURSOR FAST_FORWARD FOR
SELECT Col1, Col2
FROM table

OPEN MyCursor
FETCH NEXT FROM MyCursor INTO @v1, @v2

WHILE (@@FETCH_STATUS = 0)
BEGIN
    IF (@v1 > 10)
    BEGIN
        INSERT INTO table2(col1) VALUES (@v2)
    END

    FETCH NEXT FROM MyCursor INTO @v1, @v2
END

CLOSE MyCursor
DEALLOCATE MyCursor
