SQL Server: inserting a huge number of rows into a table with default values and an identity column

I need to insert about 6,400,000 rows into a table with 2 columns:
CREATE TABLE [DBName].[DBO].[BigList]
(
[All_ID] [int] identity(1,1) NOT NULL,
[Is_It_Occupied] [int] default(0) not null
)
I am using the following code today, which takes a very long time (about 100 minutes).
SET @NumberOfRecordsToInsert = 6400000;
WHILE (@NumberOfRecordsToInsert > 0)
BEGIN
INSERT [DBName].[DBO].[BigList] DEFAULT VALUES;
SET @NumberOfRecordsToInsert = @NumberOfRecordsToInsert - 1
END
Does anyone have a better way to do this?

Grab hold of 6,400,000 rows from somewhere and insert them all at once:
insert into BigList(Is_It_Occupied)
select top(6400000) 0
from sys.all_objects as o1
cross join sys.all_objects as o2
cross join sys.all_objects as o3
I did some testing of how long the different solutions took on my computer:
Solution Seconds
-------------------------------------------------- -----------
Mikael Eriksson 13
Naresh 832
Dd2 25
TToni 92
Milica Medic 90
marc_s 2239
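For what it's worth, on SQL Server 2022 or later the same set-based idea can be written with GENERATE_SERIES instead of cross-joining system views. This is only a sketch of the approach and was not one of the timed solutions above:
-- Generate 6,400,000 rows in one set-based statement (SQL Server 2022+).
INSERT INTO BigList (Is_It_Occupied)
SELECT 0
FROM GENERATE_SERIES(1, 6400000);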

Your main problem is that each statement runs within a separate transaction. Putting everything in one transaction isn't advisable because very large transactions create their own problems.
But the biggest bottleneck in your code will be the I/O on the transaction log. The following code achieves a 14 MB/s overall write rate on my Laptop (with a Samsung 840 SSD) and runs in 75 seconds:
DECLARE @NumberOfRecordsToInsert INT = 6400000;
DECLARE @Inner INT = 10000;
SET NOCOUNT ON
WHILE (@NumberOfRecordsToInsert > 0)
BEGIN
BEGIN TRAN
SET @Inner = 0
WHILE (@Inner < 10000)
BEGIN
INSERT [BigList] DEFAULT VALUES;
SET @Inner = @Inner + 1
END
COMMIT
SET @NumberOfRecordsToInsert = @NumberOfRecordsToInsert - @Inner
END

Why don't you use this:
INSERT [DBName].[DBO].[BigList] DEFAULT VALUES;
GO 6400000
In SQL Server Management Studio, this will execute the command as many times as you specify in the GO xxxx statement.
But even so: inserting over 6 million rows will take some time!

You can try something like this; it took me less time than running your query:
SET NOCOUNT ON
BEGIN TRAN
DECLARE @i INT
SET @i = 1
WHILE @i <= 6400000
BEGIN
INSERT INTO [DBName].[DBO].[BigList] DEFAULT VALUES
SET @i = @i + 1
END
COMMIT TRAN
Hope it helps

DECLARE @NoRows INT
DECLARE @temp AS TABLE (Is_It_Occupied INT)
SET @NoRows = 1000
WHILE (@NoRows > 0)
BEGIN
INSERT INTO @temp (Is_It_Occupied) VALUES (0)
SET @NoRows = @NoRows - 1
END
SET @NoRows = 6400
WHILE (@NoRows > 0)
BEGIN
INSERT INTO BigList (Is_It_Occupied)
SELECT Is_It_Occupied FROM @temp
SET @NoRows = @NoRows - 1
END

Related

delete in small chunks from cursor data - T-SQL stored Procedure

I have written this T-SQL stored procedure.
It works fine, but since it will be used to delete a lot of data (for example 1M to 2M rows), I think it can cause locks on the table or hurt database performance. So I am thinking of deleting in batches, for example 1000 records at a time. I am not totally sure how to do this without causing any issues in the database.
ALTER PROCEDURE [schema].[purge_data] @count INT -- (count input can be in millions)
AS
DECLARE @p_number VARCHAR(22)
,@p_r_number VARCHAR(5)
DECLARE data_cursor CURSOR
FOR
SELECT TOP (@count) JRC_policy_number
,jrc_part_range_nbr
FROM [staging].[test].[p_location]
WHERE JRC_POLICY_TERM_DT < CAST('19950101 00:00:00.000' AS DATETIME)
AND jrc_policy_status = 'T'
OPEN data_cursor
FETCH NEXT
FROM data_cursor
INTO @p_number
,@p_r_number
WHILE @@FETCH_STATUS = 0
BEGIN
DELETE
FROM [staging].[test].[p_location]
WHERE JRC_policy_number = @p_number
AND jrc_part_range_nbr = @p_r_number
FETCH NEXT
FROM data_cursor
INTO @p_number
,@p_r_number
END
CLOSE data_cursor
DEALLOCATE data_cursor
Edit:
I had already tried a direct delete query without a cursor, like the one below.
DELETE TOP (1000) FROM [MyTab] WHERE YourConditions
It was very fast; it took 34 seconds to delete 1M records, but during those 34 seconds the table was locked completely. In production the p_locator table is used 24/7 by a very critical application that expects response times in milliseconds, and our purge script should not impact the main application in any way. That's why I have chosen this cursor approach. Please guide.
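One common pattern for this kind of purge is to delete in small batches, each committed in its own short transaction, with a brief pause between batches so other sessions can acquire locks. The following is only a minimal sketch against the question's table and predicate; the batch size and delay are illustrative and would need tuning:
DECLARE @rows INT = 1;
WHILE @rows > 0
BEGIN
-- each small DELETE is committed on its own, so locks are held only briefly
DELETE TOP (1000)
FROM [staging].[test].[p_location]
WHERE JRC_POLICY_TERM_DT < CAST('19950101 00:00:00.000' AS DATETIME)
AND jrc_policy_status = 'T';
SET @rows = @@ROWCOUNT;
-- short pause between batches so the critical application is not starved
WAITFOR DELAY '00:00:01';
END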
With some of your references I've written the stored procedure below. Of course there will be a lot of scope for improvement; please share.
ALTER PROCEDURE [dbo].[purge_data] @count INT
AS
DECLARE @iteration INT
,@remainder INT
,@current_count INT
BEGIN
SELECT @current_count = count(*)
FROM PROD_TBL
WHERE JRC_POLICY_TERM_DT < CAST('19950101 00:00:00.000' AS DATETIME)
AND JRC_POLICY_STATUS = 'T'
AND JRC_PLCY_ADMIN_SYS_CD = 'X'
IF (@current_count < @count)
BEGIN
SET @count = @current_count
END
SET @iteration = @count / 10000
SET @remainder = @count % 10000
WHILE (@iteration > 0)
BEGIN
DELETE
FROM PROD_TBL
FROM (
SELECT TOP 10000 JRC_POLICY_NUMBER
,JRC_PART_RANGE_NBR
,JRC_PLCY_ADMIN_SYS_CD
FROM PROD_TBL
WHERE JRC_POLICY_TERM_DT < CAST('19950101 00:00:00.000' AS DATETIME)
AND JRC_POLICY_STATUS = 'T'
AND JRC_PLCY_ADMIN_SYS_CD = 'X'
) pol_locator_tbl
WHERE PROD_TBL.JRC_POLICY_NUMBER = pol_locator_tbl.JRC_POLICY_NUMBER
AND PROD_TBL.JRC_PART_RANGE_NBR = pol_locator_tbl.JRC_PART_RANGE_NBR
AND PROD_TBL.JRC_PLCY_ADMIN_SYS_CD=pol_locator_tbl.JRC_PLCY_ADMIN_SYS_CD
SET @iteration = @iteration - 1
END
IF (@remainder > 0)
BEGIN
DELETE
FROM PROD_TBL
FROM (
SELECT TOP (@remainder) JRC_POLICY_NUMBER
,JRC_PART_RANGE_NBR
,JRC_PLCY_ADMIN_SYS_CD
FROM PROD_TBL
WHERE JRC_POLICY_TERM_DT < CAST('19950101 00:00:00.000' AS DATETIME)
AND JRC_POLICY_STATUS = 'T'
AND JRC_PLCY_ADMIN_SYS_CD = 'X'
) pol_locator_tbl
WHERE PROD_TBL.JRC_POLICY_NUMBER = pol_locator_tbl.JRC_POLICY_NUMBER
AND PROD_TBL.JRC_PART_RANGE_NBR = pol_locator_tbl.JRC_PART_RANGE_NBR
AND PROD_TBL.JRC_PLCY_ADMIN_SYS_CD=pol_locator_tbl.JRC_PLCY_ADMIN_SYS_CD
END
END

Is there a better way to DELETE 80 million+ rows from a table?

WHILE EXISTS (SELECT TOP 1 * FROM large_table)
BEGIN
WITH LT AS
(
SELECT TOP 60000 *
FROM large_table
)
DELETE FROM LT
END
This does the job of keeping my transaction logs from becoming too large, but I need to know if there is a way to make this process go faster? I've had my computer on for 5+ days now running this script and I haven't gotten very far, very fast.
You can simply truncate the table with:
TRUNCATE TABLE large_table
GO
You can also use DELETE with a WHERE condition. The time taken by the delete depends on various factors. You can reduce the cost by eliminating the SELECT query in the condition of the WHILE loop:
DECLARE @rows INT = 1
WHILE (@rows > 0)
BEGIN
DELETE TOP (1000)
FROM large_table
SET @rows = @@ROWCOUNT
END
Bulk deletion will create a lot of log records, and a rollback happens if the log file fills up.
You can do the delete in batches and ensure every transaction is committed:
DECLARE @IDCollection TABLE (ID INT)
DECLARE @Batch INT = 1000;
DECLARE @ROWCOUNT INT;
WHILE (1 = 1)
BEGIN
BEGIN TRANSACTION;
INSERT INTO @IDCollection
SELECT TOP (@Batch) ID
FROM table
ORDER BY id
DELETE
FROM table
WHERE id IN (
SELECT *
FROM @IDCollection
)
SET @ROWCOUNT = @@ROWCOUNT
COMMIT TRANSACTION; -- commit before the exit check so no transaction is left open
IF (@ROWCOUNT = 0)
BREAK
DELETE FROM @IDCollection -- clear the collected IDs before the next batch
END

Resume a WHILE loop from where it stopped SQL

I have a WHILE loop query that I only want to run until 11 PM every day. I'm aware this can be achieved with a WAITFOR statement and then simply ending the query.
However, on the following day, once I re-run my query, I want it to continue from where it stopped on the last run. So I'm thinking of creating a log table that will contain the last processed ID.
How can I achieve this?
DECLARE @MAX_Value BIGINT = ( SELECT MAX(ID) FROM dbo.TableA )
DECLARE @MIN_Value BIGINT = ( SELECT MIN(ID) FROM dbo.TableA )
WHILE (@MIN_Value < @MAX_Value )
BEGIN
INSERT INTO dbo.MyResults
/* Do some processing*/
….
….
….
SET @MIN_Value = @MIN_Value + 1
/*I only want the above processing to run until 11PM*/
/* Once it’s 11PM, I want to save the last used @MIN_Value
into my LoggingTable (dbo.Logging) and kill the above processing.*/
/* Once I re-run the query I want my processing to restart from the
above @MIN_Value which is recorded in dbo.Logging */
END
Disclaimer: I do not recommend using WHILE loops in SQL Server but considering the comment that you want a solution in SQL, here you go:
-- First of all, I strongly recommend using a different way of assigning variable values to avoid scenarios with the variable being NULL when the table is empty, also you can do it in a single select.
-- Also, if something started running at 10:59:59 it will let the processing for the value finish and will not simply rollback at 11.
CREATE TABLE dbo.ProcessingValueLog (
LogEntryId BIGINT IDENTITY(1,1) NOT NULL,
LastUsedValue BIGINT NOT NULL,
LastUsedDateTime DATETIME NOT NULL DEFAULT(GETDATE()),
CompletedProcessing BIT NOT NULL DEFAULT(0)
)
DECLARE @MAX_Value BIGINT = 0;
DECLARE @MIN_Value BIGINT = 0;
SELECT
@MIN_Value = MIN(ID),
@MAX_Value = MAX(ID)
FROM
dbo.TableA
SELECT TOP 1
@MIN_Value = LastUsedValue
FROM
dbo.ProcessingValueLog
WHERE
CompletedProcessing = 1
ORDER BY
LastUsedDateTime DESC
DECLARE @CurrentHour TINYINT = DATEPART(HOUR, GETDATE());
DECLARE @LogEntryID BIGINT;
WHILE (@MIN_Value < @MAX_Value AND @CurrentHour < 23)
BEGIN
INSERT INTO dbo.ProcessingValueLog (LastUsedValue)
VALUES (@MIN_Value)
SELECT @LogEntryID = SCOPE_IDENTITY()
-- Do some processing...
SET @MIN_Value = @MIN_Value + 1;
UPDATE dbo.ProcessingValueLog
SET CompletedProcessing = 1
WHERE LogEntryId = @LogEntryID
SET @CurrentHour = DATEPART(HOUR, GETDATE())
END

Copy data from one table to another using cursors

I have a really large table (about 13 million rows) called Book. I want to set the primary key on a column of the Book table, but as it is a very large table the server crashes during the update: it runs out of memory. So I created a BookTemp table, set all the primary keys on this empty table, and now I want to insert the data from the Book table into the BookTemp table. But if I do it all at once, the memory runs out again. So I thought of using cursors to insert 10,000 rows at a time and then free the RAM, but I am really new to cursors, so at this point I would like your help.
I use SQL Server 2008 R2
I would suggest using a while loop to iterate over your temporary table. The example here should get you started.
Or you could just modify this:
DECLARE @counter AS INT = 0;
DECLARE @batch_size AS INT = 10000;
WHILE (@counter < (SELECT MAX(id) FROM temp_table))
BEGIN
INSERT INTO the_table
SELECT * FROM temp_table
WHERE id BETWEEN @counter AND (@counter + @batch_size - 1);
SET @counter = @counter + @batch_size;
END
When executed, the following three commands will free up memory for SQL Server by cleaning up its caches.
DBCC FREESYSTEMCACHE ('ALL')
DBCC FREESESSIONCACHE
DBCC FREEPROCCACHE
However, they can also be used in an ongoing operation, so please see the query below: after every 10,000 inserts the memory is cleared by executing the DBCC commands above.
DECLARE @counter INT = 1
DECLARE @column1 INT, @column2 INT -- use the data types of the columns you select
DECLARE cur_Data_Transfer CURSOR FOR -- Cursor declared
SELECT column1, column2 -- select desired columns
FROM Book
OPEN cur_Data_Transfer -- Opening cursor
FETCH NEXT FROM cur_Data_Transfer INTO @column1, @column2 -- Put values into variables
WHILE @@FETCH_STATUS = 0 -- Fetching succeeded
BEGIN
INSERT INTO BookTemp (column1, column2) -- Inserting into temp table
VALUES(@column1, @column2)
IF @counter = 10000
BEGIN
DBCC FREESYSTEMCACHE ('ALL') -- Clear system cache
DBCC FREEPROCCACHE -- Clear proc cache
SET @counter = 0 -- Restarting counter
END
FETCH NEXT FROM cur_Data_Transfer INTO @column1, @column2
SET @counter = @counter + 1
END
CLOSE cur_Data_Transfer -- Closing cursor
DEALLOCATE cur_Data_Transfer -- De-allocating

Is there a way to make a TSQL variable constant?

No, but you can create a function and hardcode it in there and use that.
Here is an example:
CREATE FUNCTION fnConstant()
RETURNS INT
AS
BEGIN
RETURN 2
END
GO
SELECT dbo.fnConstant()
One solution, offered by Jared Ko, is to use pseudo-constants.
As explained in SQL Server: Variables, Parameters or Literals? Or… Constants?:
Pseudo-Constants are not variables or parameters. Instead, they're simply views with one row, and enough columns to support your constants. With these simple rules, the SQL Engine completely ignores the value of the view but still builds an execution plan based on its value. The execution plan doesn't even show a join to the view!
Create like this:
CREATE SCHEMA ShipMethod
GO
-- Each view can only have one row.
-- Create one column for each desired constant.
-- Each column is restricted to a single value.
CREATE VIEW ShipMethod.ShipMethodID AS
SELECT CAST(1 AS INT) AS [XRQ - TRUCK GROUND]
,CAST(2 AS INT) AS [ZY - EXPRESS]
,CAST(3 AS INT) AS [OVERSEAS - DELUXE]
,CAST(4 AS INT) AS [OVERNIGHT J-FAST]
,CAST(5 AS INT) AS [CARGO TRANSPORT 5]
Then use like this:
SELECT h.*
FROM Sales.SalesOrderHeader h
JOIN ShipMethod.ShipMethodID const
ON h.ShipMethodID = const.[OVERNIGHT J-FAST]
Or like this:
SELECT h.*
FROM Sales.SalesOrderHeader h
WHERE h.ShipMethodID = (SELECT TOP 1 [OVERNIGHT J-FAST] FROM ShipMethod.ShipMethodID)
My workaround for the missing constants is to give the optimizer hints about the value.
DECLARE @Constant INT = 123;
SELECT *
FROM [some_relation]
WHERE [some_attribute] = @Constant
OPTION( OPTIMIZE FOR (@Constant = 123))
This tells the query compiler to treat the variable as if it was a constant when creating the execution plan. The down side is that you have to define the value twice.
No, but good old naming conventions should be used.
declare @MY_VALUE as int
There is no built-in support for constants in T-SQL. You could use SQLMenace's approach to simulate it (though you can never be sure whether someone else has overwritten the function to return something else…), or possibly write a table containing constants, as suggested over here. Perhaps write a trigger that rolls back any changes to the ConstantValue column?
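A minimal sketch of that constants-table idea follows; the table, column, and trigger names are hypothetical, since the linked suggestion isn't reproduced here. The trigger simply rolls back any attempt to change or delete a constant:
-- Hypothetical constants table
CREATE TABLE dbo.Constants
(
ConstantName SYSNAME NOT NULL PRIMARY KEY,
ConstantValue INT NOT NULL
);
INSERT dbo.Constants (ConstantName, ConstantValue) VALUES ('Supplement', 240);
GO
-- Reject any modification of existing constants
CREATE TRIGGER dbo.tr_Constants_Protect
ON dbo.Constants
AFTER UPDATE, DELETE
AS
BEGIN
RAISERROR('Constants may not be modified.', 16, 1);
ROLLBACK TRANSACTION;
END;
GO
-- Usage:
SELECT ConstantValue FROM dbo.Constants WHERE ConstantName = 'Supplement';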
Prior to using a SQL function run the following script to see the differences in performance:
IF OBJECT_ID('fnFalse') IS NOT NULL
DROP FUNCTION fnFalse
GO
IF OBJECT_ID('fnTrue') IS NOT NULL
DROP FUNCTION fnTrue
GO
CREATE FUNCTION fnTrue() RETURNS INT WITH SCHEMABINDING
AS
BEGIN
RETURN 1
END
GO
CREATE FUNCTION fnFalse() RETURNS INT WITH SCHEMABINDING
AS
BEGIN
RETURN ~ dbo.fnTrue()
END
GO
DECLARE @TimeStart DATETIME = GETDATE()
DECLARE @Count INT = 100000
WHILE @Count > 0 BEGIN
SET @Count -= 1
DECLARE @Value BIT
SELECT @Value = dbo.fnTrue()
IF @Value = 1
SELECT @Value = dbo.fnFalse()
END
DECLARE @TimeEnd DATETIME = GETDATE()
PRINT CAST(DATEDIFF(ms, @TimeStart, @TimeEnd) AS VARCHAR) + ' elapsed, using function'
GO
DECLARE @TimeStart DATETIME = GETDATE()
DECLARE @Count INT = 100000
DECLARE @FALSE AS BIT = 0
DECLARE @TRUE AS BIT = ~ @FALSE
WHILE @Count > 0 BEGIN
SET @Count -= 1
DECLARE @Value BIT
SELECT @Value = @TRUE
IF @Value = 1
SELECT @Value = @FALSE
END
DECLARE @TimeEnd DATETIME = GETDATE()
PRINT CAST(DATEDIFF(ms, @TimeStart, @TimeEnd) AS VARCHAR) + ' elapsed, using local variable'
GO
DECLARE @TimeStart DATETIME = GETDATE()
DECLARE @Count INT = 100000
WHILE @Count > 0 BEGIN
SET @Count -= 1
DECLARE @Value BIT
SELECT @Value = 1
IF @Value = 1
SELECT @Value = 0
END
DECLARE @TimeEnd DATETIME = GETDATE()
PRINT CAST(DATEDIFF(ms, @TimeStart, @TimeEnd) AS VARCHAR) + ' elapsed, using hard coded values'
GO
If you are interested in getting the optimal execution plan for a value held in a variable, you can use dynamic SQL. It effectively makes the variable a constant:
DECLARE @var varchar(100) = 'some text'
DECLARE @sql varchar(MAX)
SET @sql = 'SELECT * FROM table WHERE col = '''+@var+''''
EXEC (@sql)
For enums or simple constants, a view with a single row has great performance and compile-time checking / dependency tracking (because each constant is a column name).
See Jared Ko's blog post https://blogs.msdn.microsoft.com/sql_server_appendix_z/2013/09/16/sql-server-variables-parameters-or-literals-or-constants/
create the view
CREATE VIEW ShipMethods AS
SELECT CAST(1 AS INT) AS [XRQ - TRUCK GROUND]
,CAST(2 AS INT) AS [ZY - EXPRESS]
,CAST(3 AS INT) AS [OVERSEAS - DELUXE]
, CAST(4 AS INT) AS [OVERNIGHT J-FAST]
,CAST(5 AS INT) AS [CARGO TRANSPORT 5]
use the view
SELECT h.*
FROM Sales.SalesOrderHeader h
WHERE ShipMethodID = ( select [OVERNIGHT J-FAST] from ShipMethods )
Okay, let's see:
Constants are immutable values which are known at compile time and do not change for the life of the program
that means you can never have a constant in SQL Server
declare @myvalue as int
set @myvalue = 5
set @myvalue = 10 --oops we just changed it
the value just changed
Since there is no built-in support for constants, my solution is very simple.
Since this is not supported:
Declare Constant @supplement int = 240
SELECT price + @supplement
FROM what_does_it_cost
I would simply convert it to
SELECT price + 240/*CONSTANT:supplement*/
FROM what_does_it_cost
Obviously, this relies on the whole thing (the value without trailing space and the comment) to be unique. Changing it is possible with a global search and replace.
There is no such thing as "creating a constant" in database literature. Constants exist as they are and are often called values. One can declare a variable and assign a value (constant) to it. From a scholastic view:
DECLARE @two INT
SET @two = 2
Here @two is a variable and 2 is a value/constant.
SQL Server 2022 (currently only available as a preview) is now able to inline the function proposed by SQLMenace, which should prevent the performance hit described in some comments.
CREATE FUNCTION fnConstant() RETURNS INT AS BEGIN RETURN 2 END
GO
SELECT is_inlineable FROM sys.sql_modules WHERE [object_id]=OBJECT_ID('dbo.fnConstant');
is_inlineable
1
SELECT dbo.fnConstant()
(execution plan screenshot)
To test whether it also uses the value coming from the function, I added a second function returning the value 1:
CREATE FUNCTION fnConstant1()
RETURNS INT
AS
BEGIN
RETURN 1
END
GO
Create Temp Table with about 500k rows with Value 1 and 4 rows with Value 2:
DROP TABLE IF EXISTS #temp ;
create table #temp (value_int INT)
DECLARE @counter INT;
SET @counter = 0
WHILE @counter <= 500000
BEGIN
INSERT INTO #temp VALUES (1);
SET @counter = @counter + 1
END
SET @counter = 0
WHILE @counter <= 3
BEGIN
INSERT INTO #temp VALUES (2);
SET @counter = @counter + 1
END
create index i_temp on #temp (value_int);
Using the estimated execution plan, we can see that the optimizer expects 500k rows for
select * from #temp where value_int = dbo.fnConstant1(); --Returns 500001 rows
(execution plan screenshot)
and 4 rows for
select * from #temp where value_int = dbo.fnConstant(); --Returns 4rows
(execution plan screenshot)
Robert's performance test is interesting. And even in late 2022, the scalar functions are much slower (by an order of magnitude) than variables or literals. A view (as suggested by mbobka) is somewhere in between when used for this same test.
That said, using a loop like that in SQL Server is not something I'd ever do, because I'd normally be operating on a whole set.
In SQL 2019, if you use schema-bound functions in a set operation, the difference is much less noticeable.
I created and populated a test table:
create table #testTable (id int identity(1, 1) primary key, value tinyint);
And changed the test so that instead of looping and changing a variable, it queries the test table and returns true or false depending on the value in the test table, e.g.:
insert #testTable(value)
select case when value > 127
then @FALSE
else @TRUE
end
from #testTable with(nolock)
I tested 5 scenarios:
hard-coded values
local variables
scalar functions
a view
a table-valued function
Running the test 10 times yielded the following results:
scenario                  min    max    avg
----------------------    ---    ---    ---
scalar functions          233    259    240
hard-coded values         236    265    243
local variables           235    278    245
table-valued function     243    272    253
view                      244    267    254
Suggesting to me, that for set-based work in (at least) 2019 and better, there's not much in it.
set nocount on;
go
-- create test data table
drop table if exists #testTable;
create table #testTable (id int identity(1, 1) primary key, value tinyint);
-- populate test data
insert #testTable (value)
select top (1000000) convert(binary (1), newid())
from sys.all_objects a
, sys.all_objects b
go
-- scalar function for True
drop function if exists fnTrue;
go
create function dbo.fnTrue() returns bit with schemabinding as
begin
return 1
end
go
-- scalar function for False
drop function if exists fnFalse;
go
create function dbo.fnFalse () returns bit with schemabinding as
begin
return 0
end
go
-- table-valued function for booleans
drop function if exists dbo.tvfBoolean;
go
create function tvfBoolean() returns table with schemabinding as
return
select convert(bit, 1) as true, convert(bit, 0) as false
go
-- view for booleans
drop view if exists dbo.viewBoolean;
go
create view dbo.viewBoolean with schemabinding as
select convert(bit, 1) as true, convert(bit, 0) as false
go
-- create table for results
drop table if exists #testResults
create table #testResults (id int identity(1,1), test int, elapsed bigint, message varchar(1000));
-- define tests
declare @tests table(testNumber int, description nvarchar(100), sql nvarchar(max))
insert @tests values
(1, N'hard-coded values', N'
declare @testTable table (id int, value bit);
insert @testTable(id, value)
select id, case when t.value > 127
then 0
else 1
end
from #testTable t')
, (2, N'local variables', N'
declare @FALSE as bit = 0
declare @TRUE as bit = 1
declare @testTable table (id int, value bit);
insert @testTable(id, value)
select id, case when t.value > 127
then @FALSE
else @TRUE
end
from #testTable t'),
(3, N'scalar functions', N'
declare @testTable table (id int, value bit);
insert @testTable(id, value)
select id, case when t.value > 127
then dbo.fnFalse()
else dbo.fnTrue()
end
from #testTable t'),
(4, N'view', N'
declare @testTable table (id int, value bit);
insert @testTable(id, value)
select id, case when value > 127
then b.false
else b.true
end
from #testTable t with(nolock), viewBoolean b'),
(5, N'table-valued function', N'
declare @testTable table (id int, value bit);
insert @testTable(id, value)
select id, case when value > 127
then b.false
else b.true
end
from #testTable with(nolock), dbo.tvfBoolean() b')
;
declare @testNumber int, @description varchar(100), @sql nvarchar(max)
declare @testRuns int = 10;
-- execute tests
while @testRuns > 0 begin
set @testRuns -= 1
declare testCursor cursor for select testNumber, description, sql from @tests;
open testCursor
fetch next from testCursor into @testNumber, @description, @sql
while @@FETCH_STATUS = 0 begin
declare @TimeStart datetime2(7) = sysdatetime();
execute sp_executesql @sql;
declare @TimeEnd datetime2(7) = sysdatetime()
insert #testResults(test, elapsed, message)
select @testNumber, datediff_big(ms, @TimeStart, @TimeEnd), @description
fetch next from testCursor into @testNumber, @description, @sql
end
close testCursor
deallocate testCursor
end
-- display results
select test, message, count(*) runs, min(elapsed) as min, max(elapsed) as max, avg(elapsed) as avg
from #testResults
group by test, message
order by avg(elapsed);
SQLMenace's answer is the best approach if the requirement is to create a temporary constant for use within scripts, i.e. across multiple GO statements/batches.
Just create the function in tempdb; then you have no impact on the target database.
One practical example of this is a database create script which writes a control value at the end of the script containing the logical schema version. At the top of the file are some comments with change history etc... But in practice most developers will forget to scroll down and update the schema version at the bottom of the file.
Using the above code allows a visible schema version constant to be defined at the top before the database script (copied from the generate scripts feature of SSMS) creates the database but used at the end. This is right in the face of the developer next to the change history and other comments, so they are very likely to update it.
For example:
use tempdb
go
create function dbo.MySchemaVersion()
returns int
as
begin
return 123
end
go
use master
go
-- Big long database create script with multiple batches...
print 'Creating database schema version ' + CAST(tempdb.dbo.MySchemaVersion() as NVARCHAR) + '...'
go
-- ...
go
-- ...
go
use MyDatabase
go
-- Update schema version with constant at end (not normally possible as GO puts
-- local @variables out of scope)
insert MyConfigTable values ('SchemaVersion', tempdb.dbo.MySchemaVersion())
go
-- Clean-up
use tempdb
drop function MySchemaVersion
go
