Does anyone have or know of a SQL script that will generate test data for a given table?
Ideally it will look at the schema of the table and create row(s) with test data based on the datatype for each column.
If this doesn't exist, would anyone else find it useful? If so I'll pull my finger out and write one.
Well I thought I would pull my finger out and write myself a lightweight data generator:
declare @select varchar(max), @insert varchar(max), @column varchar(100),
@type varchar(100), @identity bit, @db nvarchar(100)
set @db = N'Orders'
set @select = 'select '
set @insert = 'insert into ' + @db + ' ('
declare crD cursor fast_forward for
select column_name, data_type,
COLUMNPROPERTY(
OBJECT_ID(
TABLE_SCHEMA + '.' + TABLE_NAME),
COLUMN_NAME, 'IsIdentity') AS COLUMN_ID
from Northwind.INFORMATION_SCHEMA.COLUMNS
where table_name = @db
open crD
fetch crD into @column, @type, @identity
while @@fetch_status = 0
begin
if @identity = 0 or @identity is null
begin
set @insert = @insert + @column + ', '
set @select = @select +
case @type
when 'int' then '1'
when 'varchar' then '''test'''
when 'nvarchar' then '''test'''
when 'smalldatetime' then 'getdate()'
when 'bit' then '0'
else 'NULL'
end + ', '
end
fetch crD into @column, @type, @identity
end
set @select = left(@select, len(@select) - 1)
set @insert = left(@insert, len(@insert) - 1) + ')'
exec(@insert + @select)
close crD
deallocate crD
Given any table, the script will create one record with some arbitrary values for the types int, varchar, nvarchar, smalldatetime and bit. The case expression could be replaced with a function. It won't travel down dependencies, but it will skip any seeded (identity) columns.
My motivation for creating this was to test my NHibernate mapping files against a table with some 50 columns, so I was after a quick and simple script that can be re-used.
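For illustration, suppose a hypothetical table (the names here are made up for the example):
create table Orders (OrderID int identity, CustomerName nvarchar(50), OrderDate smalldatetime, Shipped bit)
The script would then build and execute:
insert into Orders (CustomerName, OrderDate, Shipped) select 'test', getdate(), 0
OrderID is skipped because COLUMNPROPERTY reports it as an identity column, and any type without a branch in the case expression falls back to NULL.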
Have you tried ApexSQL Generate (https://www.apexsql.com/sql_tools_generate.aspx)?
I stumbled upon it during my own search for a similar thing, and it did the job quite well. It's not free, but you get a free trial with all features available, so you can try before you buy.
I think it will suit your needs quite well, since it keeps track of the relations between your tables, column types and even constraints (for more complex databases).
One thing I liked (and actually needed) was that it has built-in values for actual names, addresses etc. It helps so much when querying the created test data not to get random strings back.
Also, you can export to SQL (or a few other formats) and use the created data at any time to repopulate the database.
There is a program from Red Gate Software which will do this for you. It's called SQL Data Generator.
If you need to create test data for a table step by step, here is the approach I used:
1. Create a table:
CREATE TABLE dbo.TestTableSize
(
MyKeyField VARCHAR(10) NOT NULL,
MyDate1 DATETIME NOT NULL,
MyDate2 DATETIME NOT NULL,
MyDate3 DATETIME NOT NULL,
MyDate4 DATETIME NOT NULL,
MyDate5 DATETIME NOT NULL
)
2. Declare the variables:
DECLARE @RowCount INT
DECLARE @RowString VARCHAR(10)
DECLARE @Random INT
DECLARE @Upper INT
DECLARE @Lower INT
DECLARE @InsertDate DATETIME
3. Set the date offsets and the counter:
SET @Lower = -730
SET @Upper = -1
SET @RowCount = 0
4. Populate the table:
WHILE @RowCount < 3000000
BEGIN
5. Prepare the values:
SET @RowString = CAST(@RowCount AS VARCHAR(10))
SELECT @Random = ROUND(((@Upper - @Lower - 1) * RAND() + @Lower), 0)
SET @InsertDate = DATEADD(dd, @Random, GETDATE())
6. Write the insert statement:
INSERT INTO TestTableSize
(MyKeyField
,MyDate1
,MyDate2
,MyDate3
,MyDate4
,MyDate5)
VALUES
(REPLICATE('0', 10 - DATALENGTH(@RowString)) + @RowString
,@InsertDate
,DATEADD(dd, 1, @InsertDate)
,DATEADD(dd, 2, @InsertDate)
,DATEADD(dd, 3, @InsertDate)
,DATEADD(dd, 4, @InsertDate))
SET @RowCount = @RowCount + 1
END
7. Complete code:
DECLARE @RowCount INT
DECLARE @RowString VARCHAR(10)
DECLARE @Random INT
DECLARE @Upper INT
DECLARE @Lower INT
DECLARE @InsertDate DATETIME
SET @Lower = -730
SET @Upper = -1
SET @RowCount = 0
WHILE @RowCount < 3000000
BEGIN
SET @RowString = CAST(@RowCount AS VARCHAR(10))
SELECT @Random = ROUND(((@Upper - @Lower - 1) * RAND() + @Lower), 0)
SET @InsertDate = DATEADD(dd, @Random, GETDATE())
INSERT INTO TestTableSize
(MyKeyField
,MyDate1
,MyDate2
,MyDate3
,MyDate4
,MyDate5)
VALUES
(REPLICATE('0', 10 - DATALENGTH(@RowString)) + @RowString
,@InsertDate
,DATEADD(dd, 1, @InsertDate)
,DATEADD(dd, 2, @InsertDate)
,DATEADD(dd, 3, @InsertDate)
,DATEADD(dd, 4, @InsertDate))
SET @RowCount = @RowCount + 1
END
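As an aside, a set-based variant can generate the same rows far faster than a three-million-iteration loop. The following is only a sketch of the idea, not part of the recipe above; it uses a cascaded numbers CTE, and CHECKSUM(NEWID()) instead of RAND() because RAND() is evaluated once per statement and would give every row the same offset:
;WITH
E1(N) AS (SELECT 1 FROM (VALUES (1),(1),(1),(1),(1),(1),(1),(1),(1),(1)) v(n)), -- 10 rows
E2(N) AS (SELECT 1 FROM E1 a, E1 b),  -- 100 rows
E4(N) AS (SELECT 1 FROM E2 a, E2 b),  -- 10,000 rows
E8(N) AS (SELECT 1 FROM E4 a, E4 b),  -- 100,000,000 rows
Nums(rn) AS (SELECT TOP (3000000) ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) FROM E8)
INSERT INTO dbo.TestTableSize (MyKeyField, MyDate1, MyDate2, MyDate3, MyDate4, MyDate5)
SELECT RIGHT('0000000000' + CAST(rn - 1 AS VARCHAR(10)), 10), -- zero-padded key, as in the loop
       d.InsertDate,
       DATEADD(dd, 1, d.InsertDate),
       DATEADD(dd, 2, d.InsertDate),
       DATEADD(dd, 3, d.InsertDate),
       DATEADD(dd, 4, d.InsertDate)
FROM Nums
CROSS APPLY (SELECT DATEADD(dd, -1 - ABS(CHECKSUM(NEWID())) % 730, GETDATE())) d(InsertDate); -- random day within the past two years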
Some flavours of Visual Studio have data generation built in.
If you use database projects in it you can create data generation plans. Here's the MSDN article
I used the following approach: it simply copies the table's data back into itself, so the row count doubles with every execution. The caveat is that you have to have some sample data to start with, and you have to execute the query repeatedly. For example, starting with 10 rows of data I had 327,680 rows after executing the query just 15 times; execute it one more time and I will have 655,360 rows!
insert into mytable select [col1], [col2], [col3] from mytable
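As a side note, in SSMS you don't have to re-run the statement by hand: following the batch with GO and a count repeats it that many times (GO n is a feature of the SSMS/sqlcmd batch separator, not of T-SQL itself). For example, to grow 10 rows into 327,680:
insert into mytable select [col1], [col2], [col3] from mytable
GO 15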
I have a quite large script which has been shrunk and simplified for this question. The overall principle is that I have some code that needs to be run several times with only small adjustments for every iteration. The script is built around a major loop that has several subloops in it. Today the whole select statement is hard-coded in the loops. My thought was that I could write the select statement once and let the parts that need to be changed for every loop be the only thing that changes in the loop. The purpose is easier maintenance.
Example of the script:
declare
@i1 int,
@i2 int,
@t nvarchar(50),
@v nvarchar(50),
@s nvarchar(max)
set @i1 = 1
while @i1 < 3
begin
if @i1 = 1
begin
set @i2 = 1
set @t = 'Ansokningsomgang'
set @s = '
select ' + @v + '_Ar,count(*) as N
from (
select left(' + @v + ',4) as ' + @v + '_Ar
from Vinnova_' + @t + '
) a
group by ' + @v + '_Ar
order by ' + @v + '_Ar
'
while @i2 < 4
begin
if @i2 = 1
begin
set @v = 'diarienummer'
exec sp_executesql
@stmt = @s,
@params = N'@tab as nvarchar(50), @var as nvarchar(50)',
@tab = @t, @var = @v
end
else if @i2 = 2
begin
set @v = 'utlysning_diarienummer'
exec sp_executesql
@stmt = @s,
@params = N'@tab as nvarchar(50), @var as nvarchar(50)',
@tab = @t, @var = @v
end
else if @i2 = 3
begin
set @v = 'utlysning_program_diarienummer'
exec sp_executesql
@stmt = @s,
@params = N'@tab as nvarchar(50), @var as nvarchar(50)',
@tab = @t, @var = @v
end
set @i2 = @i2 + 1
end
end
else
print('Nr: ' + cast(@i1 as char))
set @i1 = @i1 + 1
end
This script doesn't work. It runs through but produces no output. If I declare @v above the declaration of @s it works, but then I would need to re-declare @s every time I change the value of @v, and then there is no point in doing this.
@i1 iterates far more times than what is shown here.
The else branch of "if @i1" doesn't exist in the real script. It stands in for a bunch of subloops that run for every value that is allowed for @i1 in this example.
I also tried to just execute #s like:
exec(@s)
in every loop. Same result.
So what am I missing?
Database engine is MS SQL Server.
Your parallel-structured tables are not 'normalized' to any degree, and you are now suffering the consequences. Typically, the best approach is to go ahead and make the data more normalized before you take any other action.
Dynamic SQL could work for making this task easier, and it is okay as long as it's an ad-hoc task that you hopefully use to begin building permanent tables, in the name of making your various parallel tables obsolete. It is not okay if it is part of a regular process, because someone could enter some malicious code into one of your table values and do some damage. This is particularly true in your case because your use of the left function implies that your columns are character-based.
Here's some code to put your data in more normal form. It can be made more normal after this, so it is only a first step. But it gets you to the point where using it for your purpose is far easier, and so hopefully it will motivate you to redesign.
-- plug in the parallel tables you want to normalize
declare @tablesToNormalize table (id int identity(1,1), tbl sysname);
insert @tablesToNormalize values ('Ansokningsomgang'), ('Ansokningsomgang2');
-- create a table that will hold the restructured data
create table ##normalized (
tbl sysname,
rowKey int, -- optional, but needed if restructure is permanent
col sysname,
category varchar(50),
value varchar(50)
);
-- create template code to restructure and insert a table's data
-- into the normalized table (notice the use of @tbl as a string
-- placeholder, not as a variable)
declare @templateSql nvarchar(max) = '
insert ##normalized
select tbl = ''Vinnova_@tbl'',
rowKey = t.somePrimaryKey, -- optional, but needed if restructure is permanent
ap.col,
category = left(ap.value, 4),
ap.value
from Vinnova_@tbl t
cross apply (values
(''diarienummer'', diarienummer),
(''utlysning_diarienummer'', utlysning_diarienummer),
(''utlysning_program_diarienummer'', utlysning_program_diarienummer)
-- ... and so on (much better than writing a nested loop for every row)
) ap (col, value)
';
-- loop the table names and run the template (notice the 'replace' function)
declare @id int = 1;
while @id <= (select max(id) from @tablesToNormalize)
begin
declare @tbl sysname = (select tbl from @tablesToNormalize where id = @id);
declare @sql nvarchar(max) = replace(@templateSql, '@tbl', @tbl);
exec (@sql);
set @id = @id + 1;
end
Now that your data is in a more normal form, code for your purpose
is much simpler, and the output far cleaner.
select tbl, col, category, n = count(value)
from ##normalized
group by tbl, col, category
order by tbl, col, category;
I am having trouble converting a UDF into a stored procedure.
Here is what I've got: this is the stored procedure that calls the function (I am using it to search for and remove all UNICODE characters that are not between 32 and 126):
ALTER PROCEDURE [dbo].[spRemoveUNICODE]
@FieldList varchar(250) = '',
@Multiple int = 0,
@TableName varchar(100) = ''
AS
BEGIN
SET NOCOUNT ON;
DECLARE @SQL VARCHAR(MAX), @counter INT = 0
IF @Multiple > 0
BEGIN
DECLARE @Field VARCHAR(100)
SELECT splitdata
INTO #TempValue
FROM dbo.fnSplitString(@FieldList,',')
WHILE (SELECT COUNT(*) FROM #TempValue) >= 1
BEGIN
DECLARE @Column VARCHAR(100) = (SELECT TOP 1 splitdata FROM #TempValue)
SET @SQL = 'UPDATE ' + @TableName + ' SET ' + @Column + ' = dbo.RemoveNonASCII(' + @Column + ')'
EXEC (@SQL)
--print @SQL
SET @counter = @counter + 1
PRINT @Column + ' was checked for ' + CONVERT(VARCHAR(10), @counter) + ' rows.'
DELETE FROM #TempValue
WHERE splitdata = @Column
END
END
ELSE IF @Multiple = 0
BEGIN
SET @SQL = 'UPDATE ' + @TableName + ' SET ' + @FieldList + ' = dbo.RemoveNonASCII(' + @FieldList + ')'
EXEC (@SQL)
--print @SQL
SET @counter = @counter + 1
PRINT @FieldList + ' was checked for ' + CONVERT(VARCHAR(10), @counter) + ' rows.'
END
END
And here is the UDF that I created to help with the update (RemoveNonASCII):
ALTER FUNCTION [dbo].[RemoveNonASCII]
(@nstring nvarchar(max))
RETURNS varchar(max)
AS
BEGIN
-- Variables
DECLARE @Result varchar(max) = '', @nchar nvarchar(1), @position int
-- T-SQL statements to compute the return value
set @position = 1
while @position <= LEN(@nstring)
BEGIN
set @nchar = SUBSTRING(@nstring, @position, 1)
if UNICODE(@nchar) between 32 and 127
set @Result = @Result + @nchar
set @position = @position + 1
set @Result = REPLACE(@Result,'))','')
set @Result = REPLACE(@Result,'?','')
END
if (@Result = '')
set @Result = null
-- Return the result
RETURN @Result
END
I've been trying to convert it into a stored procedure. I want to track how many rows actually get updated when this is run. Right now it just says that all rows, however many I run this on, are updated. I want to know if, say, only half of them had bad characters. The stored procedure is already set up so that it tells me which column it is looking at; I want to include how many rows were updated. Here is what I've tried so far:
DECLARE @Result varchar(max) = '', @nchar nvarchar(1), @position int, @nstring nvarchar(max), @counter int = 0, @CountRows int = 0, @Length int
--select Notes from #Temp where Notes is not null order by Notes OFFSET @counter ROWS FETCH NEXT 1 ROWS ONLY
set @nstring = (select Notes from #Temp where Notes is not null order by Notes OFFSET @counter ROWS FETCH NEXT 1 ROWS ONLY)
set @Length = LEN(@nstring)
if @Length = 0 set @Length = 1
-- Add the T-SQL statements to compute the return value here
set @position = 1
while @position <= @Length
BEGIN
print @counter
print @CountRows
select @nstring
set @nchar = SUBSTRING(@nstring, @position, 1)
if UNICODE(@nchar) between 32 and 127
begin
print unicode(@nchar)
set @Result = @Result + @nchar
set @counter = @counter + 1
end
if UNICODE(@nchar) not between 32 and 127
begin
set @CountRows = @CountRows + 1
end
set @position = @position + 1
END
print 'Rows found with invalid UNICODE: ' + convert(varchar,@CountRows)
Right now I'm purposely creating a temp table, adding a bunch of notes, and then inserting a bunch of invalid characters.
I created a list of 700+ notes and then updated 2 of them with some invalid characters (outside the 32-127 range). There are a few that are null and a few that are not null but don't have anything in them. What happens is that I get 0 updates.
Rows found with invalid UNICODE: 0
Though it does see that the UNICODE value for the one character it pulls is 32.
Obviously I'm missing something, I just don't see what it is.
Here is a set-based solution to handle your bulk replacements. Instead of a slow scalar function, this utilizes an inline table-valued function. These are far faster than their scalar ancestors. I am using a tally table here, which I keep as a view on my system, like this:
create View [dbo].[cteTally] as
WITH
E1(N) AS (select 1 from (values (1),(1),(1),(1),(1),(1),(1),(1),(1),(1))dt(n)),
E2(N) AS (SELECT 1 FROM E1 a, E1 b), --10E+2 or 100 rows
E4(N) AS (SELECT 1 FROM E2 a, E2 b), --10E+4 or 10,000 rows max
cteTally(N) AS
(
SELECT ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) FROM E4
)
select N from cteTally
If you are interested about tally tables here is an excellent article on the topic. http://www.sqlservercentral.com/articles/T-SQL/62867/
create function RemoveNonASCII
(
@SearchVal nvarchar(max)
) returns table as
RETURN
with MyValues as
(
select substring(@SearchVal, N, 1) as MyChar
, t.N
from cteTally t
where N <= len(@SearchVal)
and UNICODE(substring(@SearchVal, N, 1)) between 32 and 127
)
select distinct MyResult = STUFF((select MyChar + ''
from MyValues mv2
order by mv2.N
--for xml path('')), 1, 0, '')
FOR XML PATH(''),TYPE).value('.','NVARCHAR(MAX)'), 1, 0, '')
from MyValues mv
;
Now instead of being forced to call this once for every single row as a scalar, you can utilize CROSS APPLY. The performance benefit of just this portion of your original code should be pretty huge.
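For example (a sketch only; it assumes the #Temp table and Notes column from the question), the per-row scalar call becomes one set-based update:
UPDATE t
SET Notes = ca.MyResult
FROM #Temp t
CROSS APPLY dbo.RemoveNonASCII(t.Notes) ca
WHERE t.Notes IS NOT NULL;
Note that CROSS APPLY drops rows for which the function returns no row (for example an empty string); use OUTER APPLY if you need to keep them.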
I also alluded to your string splitter being a potential performance issue. Here is an excellent article with a number of very fast set-based string splitters: http://sqlperformance.com/2012/07/t-sql-queries/split-strings
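In the same spirit (this is not the article's code, just a sketch of the tally-based pattern, reusing the cteTally view above), a set-based splitter that could stand in for dbo.fnSplitString looks like this:
CREATE FUNCTION dbo.SplitStringTally
(
@List varchar(8000),
@Delim char(1)
) RETURNS TABLE AS
RETURN
SELECT splitdata = SUBSTRING(@List, N,
ISNULL(NULLIF(CHARINDEX(@Delim, @List, N), 0) - N, 8000))
FROM cteTally
WHERE N <= LEN(@List)
AND (N = 1 OR SUBSTRING(@List, N - 1, 1) = @Delim);
-- usage: SELECT splitdata FROM dbo.SplitStringTally('Notes,Notes2,Notes3', ',');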
The last step here would be to eliminate the first loop in your procedure. That can be done as well, but I am not entirely certain what your code is doing there. I will look closer and see what I can find out. In the meantime, parse through this and feel free to ask questions about any parts you don't understand.
Here is what I've got working, based on the great help from Sean Lange.
How I call the stored procedure:
exec spRemoveUNICODE @FieldList='Notes,Notes2,Notes3,Notes4,Notes5', @Multiple=1, @TableName='#Temp'
The #Temp table is created:
create table #Temp (ID int,Notes nvarchar(Max),Notes2 nvarchar(max),Notes3 nvarchar(max),Notes4 nvarchar(max),Notes5 nvarchar(max))
Then I fill it with comments from 5 fields from a couple of different tables that range in length from NULL to blank (but not null) to 5000 characters.
I then insert some random characters like this:
update #Temp
set Notes2 = SUBSTRING(Notes2,1,LEN(Notes2)/2) + N'㹊潮Ņࢹᖈư㹨ƶ槹鎤⻄ƺ綐ڌ⸀ƺ삸)䀤ƍ샄)Ņᛡ鎤ꗘᖃᒨ쬵Ğᘍ鎤ᐜᏰ>֔υ赸Ƹ쳰డ촜)鉀촜)쮜)Ἡ屰山舰霡ࣆ 耏Аం畠Ư놐ᓜતᏛ֔Ꮫ֨Ꮫᓜƒ 邰厰ఆ邰드)抉鎤듄)繟Ĺ띨)ࢹ䮸ࣉࢹ䮸ࣉ샰)ԌƏ
I am using dynamic SQL to retrieve datasets from multiple tables in order to monitor our daily data extraction from the iSeries system.
I have the dynamic SQL code below, which works fine, but I want to run the per-table SELECT only if data has been extracted for the day:
-- Create a table variable to store user data
DECLARE @myTable TABLE
(
docID INT IDENTITY(1,1),
docRef VARCHAR(50),
letterDir VARCHAR(500)
);
insert @myTable select docRef, saveDir from alpsMaster.dbo.uConfigData
-- Get the number of rows in the looping table
DECLARE @RowCount INT, @SQL nvarchar(500), @LoopSQL nvarchar(2000), @Date varchar(20)
set @Date='29 Oct 2013'
SET @RowCount = (SELECT COUNT(docID) FROM @myTable)
-- Declare an iterator
DECLARE @I INT
-- Initialize the iterator
SET @I = 1
-- Loop through the rows of table @myTable
WHILE (@I <= @RowCount)
BEGIN
-- Declare variables to hold the data which we get after looping each record
DECLARE @docRef VARCHAR(10), @saveDir VARCHAR(500)
-- Get the data from table and set to variables
SELECT @docRef = docref FROM @myTable WHERE docID = @I
SELECT @saveDir = letterDir FROM @myTable WHERE docID = @I
-- Display the looped data
--PRINT 'Row No = ' + CONVERT(VARCHAR(2), @I) + '; docRef = ' + @docRef
select @LoopSQL='
use alpsProduction;
declare @SQL nvarchar(500);
IF EXISTS (SELECT * FROM sys.objects WHERE object_id = OBJECT_ID(''[dbo].['+@docRef+']''))
begin
if exists(select * from sys.columns
where Name = ''YPTMPID'' and Object_ID = OBJECT_ID(''[dbo].['+@docRef+']''))
begin
set @SQL=''SELECT t.template_name,'''''+@saveDir+''''', Y.*
FROM [alpsProduction].[dbo].'+@docRef+' Y, alpsMaster.dbo.uDocumentTemplates t
where DTEINP='''''+@Date+''''' and t.template_Id=y.YPTMPID and t.docRef='''''+@docRef+'''''''
exec sp_executesql @SQL
end
end
'
--print @LoopSQL
exec sp_executesql @LoopSQL
-- Increment the iterator
SET @I = @I + 1
END
so I tried using
IF @@ROWCOUNT > 0
BEGIN
exec sp_executesql @SQL
END
but @@ROWCOUNT never seems to be populated.
What's the best way to only run that statement (exec sp_executesql @SQL) if the current table (identified by @docRef) has records in it for today's date (in the format dd mmm yyyy)?
Create a job to execute a SQL script in which you check whether data was inserted on the current day, and then execute your stored procedure, like this:
IF EXISTS ( SELECT * FROM #TABLE T WHERE DATEDIFF(DD, GETUTCDATE(), T.CREATEDON) = 0 )
BEGIN
EXEC sp_executesql @SQL
END
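Applied to the loop in the question (a sketch under assumptions; the probe mirrors the question's table and date column names), you can count today's rows with sp_executesql and an OUTPUT parameter, and only run the main statement when the count is positive:
DECLARE @cnt int, @probe nvarchar(500);
SET @probe = N'SELECT @c = COUNT(*) FROM [alpsProduction].[dbo].[' + @docRef + N'] WHERE DTEINP = @d';
EXEC sp_executesql @probe, N'@c int OUTPUT, @d varchar(20)', @c = @cnt OUTPUT, @d = @Date;
IF @cnt > 0
    EXEC sp_executesql @SQL;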
I have an arbitrary list of values and I want to delete records across multiple tables using T-SQL. I would like to re-use the script in the future with different lists of values. This is for debugging purposes only (I just want to clear out records so they can be re-imported with the new version of the software), so it doesn't need to be pretty.
So far I have:
DECLARE @RequestIDList table(Request_ID nvarchar(50) NOT NULL)
INSERT INTO @RequestIDList (Request_ID) VALUES
('00987172'),
('01013218'),
('01027886'),
('01029552'),
('01031476'),
('01032882'),
('01033085'),
('01034446'),
('01039261')
DELETE FROM Request WHERE Request_ID IN (SELECT Request_ID FROM @RequestIDList)
DELETE FROM RequestTest WHERE Request_ID IN (SELECT Request_ID FROM @RequestIDList)
It seems to work, but is there a better way? I can't seem to work out how to use a table variable directly with an IN clause (e.g. "WHERE Request_ID IN @RequestIDList").
Quick script:
SET NOCOUNT ON
-- Temp table so it can be joined against in dynamic SQL
IF OBJECT_ID('tempdb..#RequestIDList') IS NOT NULL
DROP TABLE #RequestIDList
GO
CREATE TABLE #RequestIDList (Request_ID nvarchar(50) NOT NULL)
INSERT INTO #RequestIDList (Request_ID) VALUES
('00987172'),('01013218'),('01027886'),('01029552'),
('01031476'),('01032882'),('01033085'),('01034446'),
('01039261')
DECLARE @TableList TABLE (TableName NVARCHAR(128) NOT NULL)
INSERT @TableList VALUES
('Request'),
('RequestTest')
DECLARE
@sqlcmd VARCHAR(4000),
@table VARCHAR(128)
-- Loop through the tables in your delete list
DECLARE c CURSOR LOCAL FORWARD_ONLY STATIC READ_ONLY FOR
SELECT TableName
FROM @TableList
ORDER BY TableName
OPEN c
FETCH NEXT FROM c INTO @table
WHILE @@FETCH_STATUS = 0
BEGIN
-- Assuming all tables in schema dbo
-- Assuming all tables have column Request_ID
SET @sqlcmd = 'DELETE FROM t FROM ' + QUOTENAME(@table)
+ ' t JOIN #RequestIDList r ON r.Request_ID = t.Request_ID'
-- PRINT @sqlcmd
EXEC (@sqlcmd)
FETCH NEXT FROM c INTO @table
END
CLOSE c
DEALLOCATE c
-- Clean up
DROP TABLE #RequestIDList
First you need to create a function which parses the input
CREATE FUNCTION inputParser (@list nvarchar(MAX))
RETURNS @tbl TABLE (number int NOT NULL) AS
BEGIN
DECLARE @pos int,
@nextpos int,
@valuelen int
SELECT @pos = 0, @nextpos = 1
WHILE @nextpos > 0
BEGIN
SELECT @nextpos = charindex(',', @list, @pos + 1)
SELECT @valuelen = CASE WHEN @nextpos > 0
THEN @nextpos
ELSE len(@list) + 1
END - @pos - 1
INSERT @tbl (number)
VALUES (convert(int, substring(@list, @pos + 1, @valuelen)))
SELECT @pos = @nextpos
END
RETURN
END
Then use that function in the SP
CREATE PROCEDURE usp_delete
@RequestIDList varchar(50)
AS
BEGIN
DELETE req
FROM Request AS req
INNER JOIN dbo.inputParser(@RequestIDList) i ON req.Request_ID = i.number
END
EXEC usp_delete '1, 2, 3, 4'
For further details please have a look at this article. It explains different methods depending on the SQL Server version. For SQL Server 2008 it uses a TVP (table-valued parameter), which further simplifies the input parsing.
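For completeness, here is a minimal TVP sketch of that SQL Server 2008+ approach (the type and procedure names are my own illustration, not taken from the article):
CREATE TYPE dbo.RequestIDTableType AS TABLE (Request_ID nvarchar(50) NOT NULL);
GO
CREATE PROCEDURE usp_delete_tvp
@RequestIDs dbo.RequestIDTableType READONLY
AS
BEGIN
-- set-based delete; no string parsing needed
DELETE req
FROM Request AS req
INNER JOIN @RequestIDs i ON req.Request_ID = i.Request_ID;
END
GO
-- usage: fill the table type and pass it to the procedure
DECLARE @ids dbo.RequestIDTableType;
INSERT @ids VALUES ('00987172'), ('01013218');
EXEC usp_delete_tvp @RequestIDs = @ids;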
I'm attempting to remove the cursors from this stored procedure, but I'm not sure of the current best practice for rewriting this kind of operation as an efficient set-based statement.
Can anyone offer any pseudo-code on how to eliminate them, from a dev perspective?
--Generate the channel data from a specified date
DECLARE @ConvDate DATETIME
SET @ConvDate = DateAdd(day, -100, getDate())
WHILE DateDiff(day, GetDate(), @ConvDate) < 0
BEGIN
EXEC mltGenerateChannelData @ConvDate
SET @ConvDate = DateAdd(day, 1, @ConvDate)
END
CREATE PROCEDURE [dbo].[mltGenerateChannelData] (@ConvDate DATETIME) AS
BEGIN
DECLARE @ChannelId INT,
@URLSignature Varchar(30),
@RawSQL VARCHAR(2000),
@SQLQuery VARCHAR(4000),
@ThisUTMId BIGINT
DECLARE cursChannels CURSOR STATIC FOR
SELECT
ChannelId,
URLSignature,
RawSQL
FROM dbo.TrackingChannel_tbl (NOLOCK)
WHERE ProcessVisitDate = 1
SET @ConvDate = dbo.datePart_fn(@ConvDate)
--Clear out any existing data for this conversion date
DELETE FROM TrackingChannelDailyTotal_tbl
WHERE TrackingDate = @ConvDate
OPEN cursChannels
FETCH cursChannels INTO @ChannelId, @URLSignature, @RawSQL
CREATE TABLE #UTM
(trpUTMID BIGINT PRIMARY KEY,
TotalMargin MONEY,
RawURLRequest Varchar(2000),
Keywords VARCHAR(1000),
VisitDate DATETIME,
RefererURL VARCHAR(2000))
INSERT INTO #UTM (trpUTMID, TotalMargin)
SELECT trpUTMID, SUM(b.TotalMargin)
FROM TrackingConversion_tbl c (NOLOCK), Booking_tbl b (NOLOCK)
WHERE c.BookingId = b.BookingId
AND c.BookedDate >= @ConvDate
GROUP BY trpUTMID
UPDATE u
SET RawURLRequest = v.RawURLRequest,
Keywords = v.Keywords,
VisitDate = v.VisitDate,
RefererURL = v.RefererURL
FROM #UTM u,
TrackingVisit_tbl (NOLOCK) v
WHERE v.trpUTMID = u.trpUTMID
CREATE TABLE #UTM2 (trpUTMID BIGINT PRIMARY KEY)
WHILE @@FETCH_STATUS = 0
BEGIN
PRINT 'Processing Channel Id : ' + Convert(varchar(10), @ChannelId)
TRUNCATE TABLE #UTM2
SET @SQLQuery = ' INSERT INTO #UTM2 (trpUTMId)
SELECT u.TrpUTMId
FROM #UTM u
WHERE u.VisitDate >= ''' + Convert(varchar, @ConvDate) + '''
AND u.VisitDate < DateAdd(day,1,''' + Convert(varchar, @ConvDate) + ''') '
IF @URLSignature <> ''
BEGIN
SET @SQLQuery = @SQLQuery + 'AND u.RawURLRequest like ''%' + @URLSignature + '%'' '
END
IF @RawSQL <> ''
BEGIN
SET @SQLQuery = @SQLQuery + @RawSQL
END
EXEC (@SQLQuery)
INSERT INTO TrackingChannelDailyTotal_tbl (ChannelId, TrackingDate, Conversions, TotalMargin)
SELECT @ChannelId, @ConvDate, Count(u1.trpUTMID), IsNull(SUM(TotalMargin), 0)
FROM #UTM u1, #UTM2 u2
WHERE u1.trpUTMID = u2.trpUTMID
FETCH cursChannels INTO @ChannelId, @URLSignature, @RawSQL
END
CLOSE cursChannels
DEALLOCATE cursChannels
END
If you use the SSMS Tools Pack (http://www.ssmstoolspack.com/) and 'Include Actual Execution Plan' with the query, you can break down the bottlenecks in the query, take each part, and try to isolate and improve it.
You may find that the issue is with a different part of the query, and not the cursor.
If by "old server" you mean one with low memory or processing power, and if it is feasible to upgrade the hardware, then as a DBA I would suggest upgrading the hardware.