First attempt at a cursor, so take it easy =P The cursor is supposed to grab a list of company IDs that are all under an umbrella group, then target a specific company and copy its workflow records to the companies in the cursor.
It infinitely inserts these workflow records into all the companies ... what is the issue here?
Where is the n00b mistake?
DECLARE @GroupId int = 36;
DECLARE @CompanyToCopy int = 190;
DECLARE @NextId int;
DECLARE @Companies CURSOR;

SET @Companies = CURSOR FOR
    SELECT CompanyId
    FROM Company C
    INNER JOIN [Group] G
        ON C.GroupID = G.GroupID
    WHERE C.CompanyID != 190
      AND G.GroupId = @GroupId
      AND C.CompanyID != 0

OPEN @Companies

FETCH NEXT
FROM @Companies INTO @NextId

WHILE (@@FETCH_STATUS = 0)
BEGIN
    INSERT INTO COI.Workflow(CompanyID, EndOfWorkflowAction, LetterType, Name)
    (SELECT
        @NextId,
        W.EndOfWorkflowAction,
        W.LetterType,
        W.Name
    FROM COI.Workflow W)

    FETCH NEXT
    FROM @Companies INTO @NextId
END

CLOSE @Companies;
DEALLOCATE @Companies;
Edit:
I decided to attempt making this set-based, because after being told to do it ... I realized I didn't really quite have the answer as to how to do it as a set-based query.
Thanks for all the help, everyone. I'll post the set-based version for posterity.
INSERT INTO COI.Workflow(CompanyID, EndOfWorkflowAction, LetterType, Name)
(
    SELECT
        CG.CompanyId,
        W.EndOfWorkflowAction,
        W.LetterType,
        W.Name
    FROM COI.Workflow W
    CROSS JOIN (SELECT C.CompanyID
                FROM Company C
                INNER JOIN [Group] G
                    ON G.GroupID = C.GroupID
                WHERE C.CompanyID != 190
                  AND C.CompanyID != 0
                  AND G.GroupID = 36
               ) AS CG
    WHERE W.CompanyID = 190
)
You have no WHERE condition on this:
SELECT
    @NextId,
    W.EndOfWorkflowAction,
    W.LetterType,
    W.Name
FROM COI.Workflow W
-- WHERE W.CompanyID = @CompanyToCopy -- This should be here
So you are getting a kind of doubling effect.
initial state, company 190, seed row (0)
pass one, company 2, copy of seed row (1)
now 2 rows
pass two, company 3, copy of seed row (0) - call this (2)
pass two, company 3, copy of copy of seed row (1) - call this (3)
now 4 rows
then 8 rows, etc
You are inserting a new copy of all workflow records in the workflow table on each iteration, so the table doubles in size each time. If, for example, you have 30 items in your cursor, you will end up with a workflow table holding 2^30 = 1,073,741,824 times more records than it had before.
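With the missing filter restored (a sketch reusing the @CompanyToCopy variable declared at the top, which the original loop never actually used), each pass copies only the template company's rows:

INSERT INTO COI.Workflow (CompanyID, EndOfWorkflowAction, LetterType, Name)
SELECT
    @NextId,
    W.EndOfWorkflowAction,
    W.LetterType,
    W.Name
FROM COI.Workflow W
WHERE W.CompanyID = @CompanyToCopy  -- copy only the source company's rows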
I believe your logic is wrong (it's somewhat hidden because of the use of a cursor!).
Your posted code is attempting to insert a row into COI.Workflow for every row in COI.Workflow, times the number of companies matching your first select's conditions. (Notice how your insert's SELECT statement has no condition: you are selecting the whole table.) Each time through the loop, you are doubling the number of rows in COI.Workflow.
So, it's not infinite, but it could well be very, very long!
I suggest you rewrite it as a set-based statement and the logic will become clearer.
The use of the cursor itself is OK; all the problems are in the INSERT ... SELECT logic.
I cannot understand exactly what you need to insert into the COI.Workflow table.
I agree with the previous commentators that your missing WHERE condition doubles the records, but I cannot believe that you want to insert the full doubled set of records for each company each time.
So, I think you need something like
INSERT INTO COI.Workflow(CompanyID, EndOfWorkflowAction, LetterType, Name)
(SELECT TOP 1
    @NextId,
    W.EndOfWorkflowAction,
    W.LetterType,
    W.Name
FROM COI.Workflow W)
Or, we need to know more about your logic of inserting the records.
I am working on an ETL optimization problem that requires creating a temp table that can be merged with the final table. Currently I have a couple of views that are used to load the final table, and that is taking a lot of time. I took the SQL logic from the view and created a temp table, and noticed that the values in the temp table do not match the values in the final table. To look deeper, I ran count(*) on the view a couple of times and noticed that the total row count differs on every run by about 10-15 rows, give or take. The view has 16 columns from 9 tables, which load only once a day. So at the time I run the count(*) the underlying data does not change, but the result of the count from the view does.
This is on a SQL Server 2016 server. I have looked into the view logic and nothing stands out as odd. I have tried doing a count(*) on the tables that load this view, and those counts do not change. I have also created a 2-column table from the view logic to simplify the problem and tried an EXCEPT between two copies built from the same exact view logic, and that still yields about 20 rows of inconsistent values.
Here is a reproduction of the VIEW definition that has the row count inconsistency
USE [PROD]
GO
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE VIEW Base_View
AS
select
concat(x, y, z)feild1
,*
,ROW_NUMBER() OVER(PARTITION BY a,b ORDER BY some_Date) AS rec_num
,count(a) OVER(PARTITION BY a) AS rec_total
from (
SELECT
case when RESULT='stored value' and e.code is not null then 'x' else '' end x
,case when RESULT='stored value 2' and r.l_id is not null then 'y' else '' end y
,case when RESULT in ('stored value 3','stored value 4') and t.amount is not null then 'z' else '' end z
,case when
CASE WHEN
(m.status = 'stored value 4' OR m.status = 'stored value 5')
AND m.bal < 0
THEN
CASE WHEN DATEDIFF(day,m.due,m.SNAP_DATE) < 0
THEN 0
ELSE DATEDIFF(day,m.due,m.SNAP_DATE)
END
ELSE 0
END=0 AND w.W_ID is null AND m.status<>'stored value 5'
then case
when RESULT in ('stored value 5','stored value 4')
then case when isnull(AMOUNT,0)<>0
then 'abc'
else 'def' end
else 'abc' end
else 'def'
end imp_feild
,result
,es.emp_id
,concat(es.fname,' ',es.lname)task_emp
,concat(e.fname,' ',e.lname)ext_emp
,case when RESULT ='stored value' then t.P_STATUS else null end p_status
,t.CREATE_DATE
,t.l_key
,t.l_id
,m.status
,cast(w.wodate as date)wo_date
,rm.balance refi_balance,rnl.LOAN_key refi_loan,r.effective refi_effective
,case trancode when 'ext' then m.payment else null end ext_amount,e.entered ext_entered,e.effective ext_effective
FROM
(
select t0.*,ROW_NUMBER() OVER(PARTITION BY t0.some_KEY,cast(t0.CREATE_DATE as date),t0.output
ORDER BY t0.some_KEY,cast(t0.CREATE_DATE as date),t0.output ) AS SEQ_NUM
from base_table_1 t0
left join base_table_2 e0
on t0.c_e_key=e0.e_key
where t0.active_rec_ind='Y'
and t0.output in (d,e,f,g)
and (t0.output2 in (j,k)
or ISNULL(e0.some_KEY,'h') in ('u','w'))
) t
join
base_table_3 l
on t.loan_sf_id=l.loan_sf_id
and t.active_rec_ind='Y'
join base_table_4 m
on
t.SOME_DATE=m.SNAP_DATE
and t.L_ID=m.L_ID
left
join base_table_5 es
on t.c_emp_key=es.emp_key
left
join base_table_6 r
on l.l_id=r.l_old_id
and r.entered between dateadd(day,0,cast(t.CREATE_DATE as date)) and dateadd(day,0,t.SOME_DATE)
left
join base_table_7 w
on l.l_id=w.l_id
and w.wodate between cast(t.CREATE_DATE_ETZ as date) and dateadd(day,0,t.SOME_DATE)
left
join base_table_8 wl
on w.l_id=wl.l_id
left
join base_table_8 rnl
on r.l_new_id=rnl.l_id
left
join base_table_8 rol
on r.l_old_id=rol.l_id
left
join base_table_4 rm
on
dateadd(day,-1,r.effective)=rm.SNAP_DATE
and rol.L_ID=rm.L_ID
left
join
(select e0.*,ew.value_1,ew.new_key,ROW_NUMBER() OVER(PARTITION BY e0.L_ID,e0.ENT ORDER BY e0.L_ID,e0.ENT) AS SEQ_NUM
from base_table_9 e0
join base_table_5 ew
on e0.EMP_ID=ew.EMP_ID
where e0.code='a'
) e
on l.sid=e.sid
and e.code='a' and RESULT='stored value 5'
and e.entered between cast(t.CREATE_DATE as date) and dateadd(day,0,t.HOLD_DATE)
AND e.SEQ_NUM=t.SEQ_NUM
and ((isnumeric(e.roll_key)=1 and isnumeric(es.roll_key)=1 and e.roll_key=es.roll_key)
or ((isnumeric(e.roll_key)=0 or isnumeric(es.roll_key)=0) and e.FNAME+e.LNAME=es.FNAME+es.LNAME))
where t.RESULT in ('abc','def')
and cast(t.CREATE_DATE as date) between cast(dateadd(month,-12,getdate()) as date) and cast(getdate() as date)
and (AGENT in ('lmn', 'pqr')
or ISNULL(es.VKEY,'stored value 8') in ('xx','yy','zz'))
)x
where imp_feild='abc'
and concat(x, y, z)<>''
or imp_feild='def'
GO
The expected result is a consistent row count, which hopefully should also solve the inconsistent-values problem in the temp table.
Your query has between cast(dateadd(month,-12,getdate()) as date) and cast(getdate() as date) near the bottom. Of course the result of getdate() will be different with each execution and each call to getdate(), and that will affect the result.
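A minimal sketch of the effect, using a hypothetical #events temp table (not from your schema): the same COUNT(*) changes between runs when the boundary is re-evaluated, and stabilizes when you evaluate it once into a variable.

CREATE TABLE #events (created datetime NOT NULL);
INSERT INTO #events VALUES (DATEADD(ms, 500, GETDATE()));  -- lands just after "now"

SELECT COUNT(*) AS moving_1 FROM #events WHERE created <= GETDATE();  -- 0
WAITFOR DELAY '00:00:01';
SELECT COUNT(*) AS moving_2 FROM #events WHERE created <= GETDATE();  -- 1, with no data change

DECLARE @to datetime = GETDATE();  -- pin the boundary once, then reuse it
SELECT COUNT(*) AS pinned FROM #events WHERE created <= @to;
DROP TABLE #events;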
BTW, having * in your SELECT list is not a good idea. You should only return the columns needed; it makes the view results vulnerable to changes in the underlying tables.
There are a few other things that wouldn't pass code review where I work, but that's kinda OT, I think.
This is too long for a comment. Using * in a view is a very bad idea. Not only does the view NOT update (unless you execute sp_refreshview) when you change the base table; you can actually get some very interesting things happening.
Check this out as an example of just how bad this can be.
create table ViewExample (Col1 int, Col2 int)
go
create view ViewExampleView as select * from ViewExample
go
insert ViewExample select 1, 2
go
select * from ViewExampleView --obviously we get just the one row, with Col1 and Col2
alter table ViewExample add Col3 int --add a new column to the table, surely the view will pick this up?
go
insert ViewExample select 3, 4, 5 --insert a new row with data in all three columns
go
select * from ViewExampleView --what??? The view says select * but we only get Col1 and Col2?
alter table ViewExample drop column Col2 --Oops we decide to drop this column because we don't need it anymore
select * from ViewExampleView --What in the world? Col2 doesn't exist in the table, why is it in the view? And what the heck is going on here. The data from Col3 is now moved to Col2
drop view ViewExampleView
drop table ViewExample
Notice how in the last select from the view the data from Col3 is being displayed in Col2. If this doesn't convince you to stop using * in views (and pretty much everywhere else), I don't know what will.
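For completeness, the metadata drift shown above can be repaired with sp_refreshview (a standard system procedure), run after each base-table change:

EXEC sp_refreshview 'ViewExampleView';
SELECT * FROM ViewExampleView  -- now reflects the table's current columns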
I am looking for help with optimization techniques, or a hint to move ahead with the problem I have. Using a temp table in an IN clause makes my query run for more than 5 seconds; changing it to a static value returns the data in under a second. I am trying to understand how to optimize this.
-- details about the number of rows in table
dept_activity table
- total rows - 17,319,666
- rows for (dept_id = 10) - 36054
-- temp table
CREATE TABLE #tbl_depts (
Id INT Identity(1, 1) PRIMARY KEY
,dept_id integer
);
-- for example I inserted one row but based on conditions multiple department numbers are inserted in this temp table
insert into #tbl_depts(dept_id) values(10);
-- this query takes more than 5 seconds
SELECT activity_type,count(1) rc
FROM dept_activity da
WHERE (
@filter_by_dept IS NULL
OR da.depart_id IN (
SELECT td.dept_id
FROM #tbl_depts td
)
)
group by activity_type;
-- this query takes less than 500 milli seconds
SELECT activity_type,count(1) rc
FROM dept_activity da
WHERE (
@filter_by_dept IS NULL
OR da.depart_id IN (
10 -- changed to static value
)
)
group by activity_type;
What ways can I optimize to return data for the first query in under a second?
You're testing this with just one value, but isn't your real case different?
The problem the optimizer has here is that it can't know how many rows the temp table in the IN clause will actually contain, so it has to guess, and that is probably why the results differ. Looking at estimated vs. actual row counts might give some insight into this.
If your clause only contains this one criterion:
@filter_by_dept IS NULL OR da.depart_id IN
it might be good to test what happens if you separate your logic into IF blocks: one that fetches everything, and one that filters the data.
If that's not the real case, you might want to test OPTION (RECOMPILE), which could result in a better plan but will use (a little bit) more CPU since the plan is regenerated every time. Or construct the clause with dynamic SQL (either keeping the temp table but optimizing away the OR, or building a full IN list if there isn't a ridiculous number of values), but that might get really ugly.
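To see the estimate-vs-actual gap mentioned above, one quick sketch (SET STATISTICS XML is standard T-SQL; compare EstimateRows vs. ActualRows on the temp-table operators in the returned plan):

SET STATISTICS XML ON;

SELECT activity_type, COUNT(1) AS rc
FROM dept_activity da
WHERE da.depart_id IN (SELECT td.dept_id FROM #tbl_depts td)
GROUP BY activity_type;

SET STATISTICS XML OFF;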
There are different ways of writing the same thing. Use whichever fits your requirements -
Separate Block
IF @filter_by_dept IS NULL
BEGIN
SELECT da.activity_type, count(1) rc
FROM dept_activity da
GROUP BY da.activity_type
END
ELSE
BEGIN
SELECT da.activity_type,COUNT(1) rc
FROM dept_activity da
INNER JOIN #tbl_depts td ON td.dept_id = da.depart_id
GROUP BY da.activity_type
END
Dynamic Query
DECLARE @sql_stmt VARCHAR(5000)
SET @sql_stmt = '
SELECT activity_type, COUNT(1) rc
FROM dept_activity da
'
IF @filter_by_dept IS NOT NULL
    SET @sql_stmt = @sql_stmt + ' INNER JOIN #tbl_depts td ON td.dept_id = da.depart_id'
SET @sql_stmt = @sql_stmt + ' GROUP BY da.activity_type '
EXEC(@sql_stmt)
Simple Left Join
Comparatively, it can be slower than the two options above.
SELECT da.activity_type, count(1) rc
FROM dept_activity da
LEFT JOIN #tbl_depts td ON td.dept_id = da.depart_id
WHERE @filter_by_dept IS NULL OR td.id IS NOT NULL
GROUP BY da.activity_type
The biggest issue is most likely the use of an "optional parameter". The query optimizer has no idea whether or not @filter_by_dept is going to have a value the next time the query is executed, so it plays it safe and opts for an index scan rather than an index seek. This is where OPTION (RECOMPILE) can be your friend, especially on simple, easy-to-compile queries like this one.
Also, there are potential gains from using WHERE EXISTS in place of the IN.
Try the following...
DECLARE @filter_by_dept INT = 10;
SELECT
da.activity_type,
rc = COUNT(1)
FROM
dbo.dept_activity da
WHERE
@filter_by_dept IS NULL
OR
EXISTS (SELECT 1 FROM #tbl_depts td WHERE da.depart_id = td.dept_id)
GROUP BY
da.activity_type
OPTION (RECOMPILE);
HTH, Jason
I have a table for bookings (table_b) that has around 1.3M rows. A second table (table_s) is used to note when these rows need to be accessed by a separate application.
Currently there are triggers to make a record in table_s, but this doesn't help with all the existing data.
I believe I need a query that selects the rows that exist in table_b but not in table_s, and then inserts a row for each one.
Here is my current syntax, but I don't think it has been formed correctly:
DECLARE @b_id [INT] = 0;
WHILE(1 = 1)
BEGIN
    SELECT TOP 10
        @b_id = MIN([b].[b_id])
    FROM
        [table_b] AS [b]
    LEFT JOIN
        [table_s] AS [s] ON [b].[b_id] = [s].[b_id]
    WHERE
        [s].[b_id] IS NULL;

    IF @b_id IS NULL
        BREAK;

    INSERT INTO [table_s] ([b_id], [processed])
    VALUES (@b_id, 0);
END;
Syntactically everything is fine, but there are some misconceptions present in your query:
select top 10 @b_id = MIN(b.b_id)
A variable can hold just one value; even though you select TOP 10, it will assign a single value to the variable, so your current approach will loop once for each non-existing record.
I don't think we need to split the insert into batches for ~1.3 million records. Try this way:
INSERT INTO table_s
(b_id,
processed)
SELECT b_id,
0
FROM table_b AS b
WHERE NOT EXISTS (SELECT 1
FROM table_s AS s
WHERE b.b_id = s.b_id)
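If you did want to chew through the backlog in batches anyway (a sketch; the 100,000-row chunk size is an arbitrary choice, useful mainly to keep the transaction log and locking modest on a busy system):

WHILE 1 = 1
BEGIN
    INSERT INTO table_s (b_id, processed)
    SELECT TOP (100000) b.b_id, 0
    FROM table_b AS b
    WHERE NOT EXISTS (SELECT 1
                      FROM table_s AS s
                      WHERE b.b_id = s.b_id);

    IF @@ROWCOUNT = 0
        BREAK;  -- nothing left to backfill
END;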
I have a problem which my limited SQL knowledge is keeping me from understanding.
First the problem:
I have a database which I need to run a report on; it contains configurations of a user's entitlements. The report needs to show a distinct list of these configurations and a count against each one.
So a line in my DB looks like this:
USER_ID  SALE_ITEM_ID  SALE_ITEM_NAME  PRODUCT_NAME  CURRENT_LINK_NUM  PRICE_SHEET_ID
37715    547           CultFREE        CultPlus      0                 561
The above line is one row of a user's configuration; for every user ID there can be 1-5 of these lines. So the definition of a configuration is multiple rows of data sharing a common user ID, with variable attributes.
I need to get a distinct list of these configurations across the whole table, leaving me just one configuration set for every instance where more than one user has that configuration, plus a count of instances of that configuration.
Hope this is clear?
Any ideas?!?!
I have tried various GROUP BYs and UNIONs, and also GROUPING SETS, to no avail.
Will be very grateful if anyone can give me some pointers!
Ouch, that hurt ...
OK, so, the problem:
a row represents a configurable line
users may be linked to more than 1 row of configuration
configuration rows, when grouped together, form a configuration set
we want to figure out all of the distinct configuration sets
we want to know which users are using them.
Solution (it's a bit messy but the idea is there; copy and paste into SQL Server Management Studio) ...
-- ok so i imported the data to a table named SampleData ...
-- 1. import the data
-- 2. add a new column
-- 3. select all the values of the config in to the new column (Configuration_id)
--UPDATE [dbo].[SampleData]
--SET [Configuration_ID] = CONCAT(SALE_ITEM_ID, SALE_ITEM_NAME, [PRODUCT_NAME], [CURRENT_LINK_NUM], [PRICE_SHEET_ID])
-- 4. i then selected just the distinct values of those and found 6 distinct Configuration_id's
--SELECT DISTINCT [Configuration_ID] FROM [dbo].[SampleData]
-- 5. to make them a bit easier to read and work with i gave them int values instead
-- for me it was easy to do this manually but you might wanna do some trickery here to autonumber them or something
-- basic idea is to run the step 4 statement but select into a new table then add a new primary key column and set identity spec on it
-- that will generate u a bunch of incremental numbers for your config id's so u can then do something like ...
--UPDATE [dbo].[SampleData] sd
--SET Configuration_ID = (SELECT ID FROM TempConfigTable WHERE Config_ID = sd.Configuration_ID)
-- at this point you have all your existing rows with a unique ident for the values combined in each row.
-- so for example in my dataset i have several rows where only the user_id has changed but all look like this ...
--SALE_ITEM_ID SALE_ITEM_NAME PRODUCT_NAME CURRENT_LINK_NUM PRICE_SHEET_ID Configuration_ID
--54101 TravelFREE TravelPlus 0 56101 1
-- now you have a config id you can start to work on building sets up ...
-- each user is now matched with 1 or more config id
-- 6. we use a CTE (common table expression) to link the possibles (keeps the join small) ...
--WITH Temp (ConfigID)
--AS
--(
-- SELECT DISTINCT SD.Configuration_Id --SD2.Configuration_Id, SD3.Configuration_Id, SD4.Configuration_Id, SD5.Configuration_Id,
-- FROM [dbo].[SampleData] SD
--)
-- this extracts all the possible combinations using the CTE
-- on the basis of what you told me, max rows per user is 6, in the result set i have i only have 5 distinct configs
-- meaning i gain nothing by doing a 6th join.
-- cross joins basically give you every combination of unique values from the 2 tables but we joined back on the same table
-- so its every possible combination of Temp + Temp (ConfigID + ConfigID) ... per cross join so with 5 joins its every combination of
-- Temp + Temp + Temp + Temp + Temp .. good job temp only has 1 column with 5 values in it
-- 7. uncomment both this and the CTE above ... need to use them together
--SELECT DISTINCT T.ConfigID C1, T2.ConfigID C2, T3.ConfigID C3, T4.ConfigID C4, T5.ConfigID C5
--INTO [SETS]
--FROM Temp T
--CROSS JOIN Temp T2
--CROSS JOIN Temp T3
--CROSS JOIN Temp T4
--CROSS JOIN Temp T5
-- notice the INTO clause ... this dumps me out a new [SETS] table in my db
-- if i go add a primary key to this and set its ident spec i now have unique set id's
-- for each row in the table.
--SELECT *
--FROM [dbo].[SETS]
-- now here's where it gets interesting ... row 1 defines a set as being config id 1 and nothing else
-- row 2 defines set 2 as being config 1 and config 2 and nothing else ... and so on ...
-- the problem here of course is that 1,2,1,1,1 is technically the same set as 1,1,1,2,1 from our point of view
-- ok lets assign a set to each userid ...
-- 8. first we pull the distinct id's out ...
--SELECT DISTINCT USER_ID usr, null SetID
--INTO UserSets
--FROM SampleData
-- now we need to do bit a of operating on these that's a bit much for a single update or select so ...
-- 9. process findings in a loop
DECLARE @currentUser int
DECLARE @set int
-- while there's a userid not linked to a set
WHILE EXISTS(SELECT 1 FROM UserSets WHERE SetId IS NULL)
BEGIN
-- grab one unassigned user to work on
SELECT TOP 1 @currentUser = usr FROM UserSets WHERE SetId IS NULL
-- figure out a set to link it to
SET @set = (
SELECT TOP 1 ID
FROM [SETS]
-- shouldn't really do this ... basically need to refactor in to a table variable then compare to that
-- that way the table lookup on ur main data is only 1 per User_id
WHERE C1 IN (SELECT DISTINCT Configuration_id FROM SampleData WHERE USER_ID = @currentUser)
AND C2 IN (SELECT DISTINCT Configuration_id FROM SampleData WHERE USER_ID = @currentUser)
AND C3 IN (SELECT DISTINCT Configuration_id FROM SampleData WHERE USER_ID = @currentUser)
AND C4 IN (SELECT DISTINCT Configuration_id FROM SampleData WHERE USER_ID = @currentUser)
AND C5 IN (SELECT DISTINCT Configuration_id FROM SampleData WHERE USER_ID = @currentUser)
)
-- hopefully that worked
IF(@set IS NOT NULL)
BEGIN
-- tell the usersets table
UPDATE UserSets SET SetId = @set WHERE usr = @currentUser
SET @set = NULL
END
ELSE -- something went wrong ... set to 0 to prevent endless loop but any userid linked to set 0 is a problem u need to look at
UPDATE UserSets SET SetId = 0 WHERE usr = @currentUser
-- and round we go again ... until we are done
END
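For what it's worth, on a newer SQL Server the whole set-matching exercise collapses into one statement by building a signature string per user. A sketch, assuming SQL Server 2017+ for STRING_AGG and the SampleData table from the walkthrough above:

WITH UserConfig AS (
    -- one signature string per user; attribute order is fixed so equal sets compare equal
    SELECT USER_ID,
           STRING_AGG(CONCAT(SALE_ITEM_ID, '|', SALE_ITEM_NAME, '|', PRODUCT_NAME,
                             '|', CURRENT_LINK_NUM, '|', PRICE_SHEET_ID), ';')
               WITHIN GROUP (ORDER BY SALE_ITEM_ID) AS ConfigSignature
    FROM SampleData
    GROUP BY USER_ID
)
SELECT ConfigSignature, COUNT(*) AS UserCount  -- one row per distinct configuration set
FROM UserConfig
GROUP BY ConfigSignature;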
SELECT
USER_ID,
SALE_ITEM_ID, ETC...,
COUNT(*) WhateverYouWantToNameCount
FROM TableName
GROUP BY USER_ID, SALE_ITEM_ID, ETC...
I am looking for a way to write the below procedure without using a CURSOR, or just to find a better-performing query.
CREATE TABLE #OrderTransaction (OrderTransactionId int, ProductId int, Quantity int);
CREATE TABLE #Product (ProductId int, MediaTypeId int);
CREATE TABLE #OrderDelivery (OrderTransactionId int, MediaTypeId int);
INSERT INTO #Product (ProductId, MediaTypeId) VALUES (1,1);
INSERT INTO #Product (ProductId, MediaTypeId) VALUES (2,2);
INSERT INTO #OrderTransaction(OrderTransactionId, ProductId, Quantity) VALUES (1,1,1);
INSERT INTO #OrderTransaction(OrderTransactionId, ProductId, Quantity) VALUES (2,2,6);
DECLARE @OrderTransactionId int, @MediaTypeId int, @Quantity int;

DECLARE ordertran CURSOR FAST_FORWARD FOR
SELECT OT.OrderTransactionId, P.MediaTypeId, OT.Quantity
FROM #OrderTransaction OT WITH (NOLOCK)
INNER JOIN #Product P WITH (NOLOCK)
    ON OT.ProductId = P.ProductId

OPEN ordertran;
FETCH NEXT FROM ordertran INTO @OrderTransactionId, @MediaTypeId, @Quantity;

WHILE @@FETCH_STATUS = 0
BEGIN
    WHILE @Quantity > 0
    BEGIN
        INSERT INTO #OrderDelivery ([OrderTransactionId], [MediaTypeId])
        VALUES (@OrderTransactionId, @MediaTypeId)

        SELECT @Quantity = @Quantity - 1;
    END

    FETCH NEXT FROM ordertran INTO @OrderTransactionId, @MediaTypeId, @Quantity;
END

CLOSE ordertran;
DEALLOCATE ordertran;
SELECT * FROM #OrderTransaction
SELECT * FROM #Product
SELECT * FROM #OrderDelivery
DROP TABLE #OrderTransaction;
DROP TABLE #Product;
DROP TABLE #OrderDelivery;
Begin with a Numbers table that is large enough to handle the maximum order amount:
CREATE TABLE Numbers (
Num int NOT NULL PRIMARY KEY CLUSTERED
)
-- SQL 2000 version
INSERT Numbers VALUES (1)
SET NOCOUNT ON
GO
INSERT Numbers (Num) SELECT Num + (SELECT Max(Num) FROM Numbers) FROM Numbers
GO 15
-- SQL 2005 and up version
WITH
L0 AS (SELECT c = 1 UNION ALL SELECT 1),
L1 AS (SELECT c = 1 FROM L0 A, L0 B),
L2 AS (SELECT c = 1 FROM L1 A, L1 B),
L3 AS (SELECT c = 1 FROM L2 A, L2 B),
L4 AS (SELECT c = 1 FROM L3 A, L3 B),
L5 AS (SELECT c = 1 FROM L4 A, L4 B),
N AS (SELECT Num = ROW_NUMBER() OVER (ORDER BY c) FROM L5)
INSERT Numbers(Num)
SELECT Num FROM N
WHERE Num <= 32768;
Then, immediately after your INSERT statements:
INSERT #OrderDelivery (OrderTransactionId, MediaTypeId)
SELECT
OT.OrderTransactionId,
P.MediaTypeId
FROM
#OrderTransaction OT
INNER JOIN #Product P ON OT.ProductId = P.ProductId
INNER JOIN Numbers N ON N.Num BETWEEN 1 AND OT.Quantity
That should do it!
If for some reason you have qualms about putting a permanent Numbers table in your database (which I don't understand as it is a wonderful tool), then you can simply join to the CTE given instead of the table itself. In SQL 2000 you can create a temp table and use a loop, but I would advise against this strongly.
A Numbers table is highly recommended. There is no concern about some future change breaking it (the set of whole numbers won't change any time soon). Some people use a Numbers table with a million numbers in it, which is only around 4MB of storage.
To answer critics of the Numbers table: if the database design uses a numbers table, then that table won't need to change. It is like any other table in the database and can be relied on. You don't worry too much about queries against an Orders table failing because some day the table might not exist, so I don't see why there would be any similar concern about another table that is required and depended on.
UPDATE
In the time since writing this answer I have learned about the master.dbo.spt_values table which has a number column. When queried with where type='P' you get 0 - 255 in SQL 2000 and 0 - 8191 in SQL 2005 and up. (There are also potentially useful low and high columns.) You can cross join this table to itself a couple of times if necessary to get, even in SQL 2000, a bunch of rows very quickly.
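A quick sketch of using it as an ad-hoc numbers source (spt_values is undocumented, so treat it as a convenience rather than something to depend on):

SELECT v.number
FROM master.dbo.spt_values AS v
WHERE v.type = 'P'
  AND v.number BETWEEN 1 AND 100  -- clamp to the range you actually need
ORDER BY v.number;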
The trick is to introduce a table of values (named, in the example below, MyTableOfIntegers) which contains all the integer values between 1 and (at least) some value (in the case at hand, that would be the biggest possible Quantity value from OrderTransaction table).
INSERT INTO #OrderDelivery ([OrderTransactionId], [MediaTypeId])
SELECT OT.OrderTransactionId, P.MediaTypeId
FROM #OrderTransaction OT WITH (NOLOCK)
INNER JOIN #Product P WITH (NOLOCK)
ON OT.ProductId = P.ProductId
JOIN MyTableOfIntegers I ON I.Num <= OT.Quantity
--WHERE some optional conditions
Essentially the extra JOIN on MyTableOfIntegers produces as many duplicate rows as OT.Quantity, and that seems to be what the purpose of the cursor was: to insert that many duplicated rows into the OrderDelivery table.
I didn't check the rest of the logic with the temporary tables and all (I'm assuming these are temp tables for the purpose of checking the logic rather than being part of the process proper), but the above is the type of construct needed to express the required logic in declarative fashion, without any cursor or even any loop.
Here is a slight variation on the previous answers that avoids a permanent numbers table (though I am not sure why people are so afraid of this construct) and lets you build a run-time CTE containing exactly the set of numbers you'll need to perform the correct number of inserts (by checking for the highest quantity). I commented out the CROSS JOIN in the initial CTE, but you can use it if the quantity for any given order can exceed the number of rows in sys.columns; hopefully that is an unlikely scenario. Note that this is for SQL Server 2005 and up ... it is always useful to let us know which specific version(s) you are targeting.
DECLARE @numsNeeded INT;
SELECT @numsNeeded = MAX(Quantity) FROM #OrderTransaction;
WITH n AS
(
SELECT TOP (@numsNeeded) i = ROW_NUMBER()
OVER (ORDER BY c.[object_id])
FROM sys.columns AS c --CROSS JOIN sys.columns AS c2
)
INSERT #OrderDelivery
(
OrderTransactionID,
MediaTypeID
)
SELECT t.OrderTransactionID, p.MediaTypeID
FROM #OrderTransaction AS t
INNER JOIN #Product AS p
ON t.ProductID = p.ProductID
INNER JOIN n
ON n.i <= t.Quantity;
INSERT INTO #OrderDelivery ([OrderTransactionId], [MediaTypeId])
SELECT OT.OrderTransactionId, P.MediaTypeId
FROM #OrderTransaction OT
INNER JOIN #Product P
    ON OT.ProductId = P.ProductId
WHERE OT.Quantity > 0
I feel like I'm misreading the logic here, but isn't that the equivalent?
This still uses a loop, but it has gotten rid of the cursor. Short of creating a table of numbers to join on, I think this is the best answer.
DECLARE @Count AS INTEGER
SET @Count = 1

WHILE (1 = 1)
BEGIN
    INSERT INTO #OrderDelivery ([OrderTransactionId], [MediaTypeId])
    SELECT OT.OrderTransactionId, P.MediaTypeId
    FROM #OrderTransaction OT WITH (NOLOCK)
    INNER JOIN #Product P WITH (NOLOCK)
        ON OT.ProductId = P.ProductId
    WHERE OT.Quantity >= @Count

    IF @@ROWCOUNT = 0
        BREAK

    SET @Count = @Count + 1
END