Table variables inside while loop not initializing every time : SQL Server - sql-server

I am wondering why table variables inside a while loop do not behave like other variables. The table variable is created only once and keeps its contents across the whole loop, while I expected it to be reinitialized on every iteration like the other variables.
See the code below for more detail.
declare @tt int
set @tt = 10
while @tt > 0
begin
    declare @temptable table (id int identity(1,1), sid bigint)
    insert into @temptable
    select @tt union all
    select @tt + 1
    select * from @temptable
    --delete from @temptable
    set @tt = @tt - 1
end
Is this a bug?

Your premise is wrong. Other variables don't get reinitialised every time the declare statement is encountered either.
set nocount on
declare @tt int
set @tt = 10
while @tt > 0
begin
    declare @i int
    set @i = isnull(@i, 0) + 1
    print @i
    set @tt = @tt - 1
end
Prints
1
2
...
9
10

As expected
SQL Server variable scope is per batch or per entire function/procedure/trigger, not per block/nested construct.
http://msdn.microsoft.com/en-us/library/ms187953.aspx:
The scope of a variable is the range of Transact-SQL statements that can reference the variable. The scope of a variable lasts from the point it is declared until the end of the batch or stored procedure in which it is declared.
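A minimal sketch of that rule (names here are illustrative): a variable declared inside a nested block is still in scope afterwards, right up to the end of the batch.
declare @outer int = 1
if @outer = 1
begin
    declare @inner int = 99  -- declared inside the block...
end
select @inner                -- ...but still visible here; the scope only ends with the batch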

Though it is an old post, I just want to add my comments.
set nocount on
declare @tt int
set @tt = 10
while @tt > 0
begin
    declare @i int = 0
    set @i = @i + 1
    print @i
    set @tt = @tt - 1
end
Results:
1
1
1
1
1
1
1
1
1
1

If you want to reload the table variable on each pass of the loop, DELETE FROM the table variable once the work within the loop is done, as sketched below.
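Applied to the code from the question, that looks something like the sketch below; the DELETE empties the table variable at the end of each iteration, so each SELECT only ever sees the current iteration's rows.
declare @tt int = 10
declare @temptable table (id int identity(1,1), sid bigint)
while @tt > 0
begin
    insert into @temptable
    select @tt union all
    select @tt + 1

    select * from @temptable    -- only this iteration's rows

    delete from @temptable      -- reset the table variable for the next pass
    set @tt = @tt - 1
end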

Related

using script in a variable

Can you declare a variable, set it to a query definition, and then execute it repeatedly throughout your script? I understand how to set a variable to the result of a query, but I want to re-use the definition itself. This is because I sometimes want the count from the query and sometimes its top result throughout the rest of my script, and I want the script to be easily customized by only needing to change the query definition once at the beginning.
An example:
declare @RepeatScript nvarchar(200)
declare @count int
declare @topresult int
set @RepeatScript = ' from Table1 where something = 1 and something else > getdate()-5'
set @count = select count(ID) & @RepeatScript
set @topresult = select top 1 (ID) & @RepeatScript
This very simple case would be simple to fix, but if I wanted to reference the same set of information multiple times without having to create and drop a temp_table over and over, this would be very helpful. I do this kind of thing in MS Access all the time, but I can't seem to figure out how to do it in SSMS.
You don't need to repeatedly run these queries; you don't even need more than one query to capture this information. The following captures both pieces of data in a single query. You can then reference that information anywhere else within the current batch, which meets your criterion of only changing the script at the beginning.
declare @count int
      , @topresult int
select @count = count(ID)
     , @topresult = MAX(ID) -- MAX is the same as top 1 ... order by ID desc
from Table1
where something = 1
declare @RepeatScript nvarchar(200)
declare @count varchar(200)
declare @topresult varchar(200)
set @RepeatScript = ' from Table1 where something = 1 and something else > getdate()-5'
set @count = 'select count(ID) ' + @RepeatScript
set @topresult = 'select top 1 (ID)' + @RepeatScript
print (@count)
print (@topresult)
Something like that? But instead of using PRINT you would use EXEC to run the SELECT statement. Does that help?
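If the goal is to get the values back into variables rather than just run the text, sp_executesql with OUTPUT parameters will do it; here is a sketch along the lines of the answers further down (Table1 and the WHERE clause are the asker's placeholders):
declare @RepeatScript nvarchar(200) = N' from Table1 where something = 1'
declare @count int, @topresult int
declare @sql nvarchar(max)

set @sql = N'select @count = count(ID), @topresult = max(ID)' + @RepeatScript
exec sp_executesql @sql,
     N'@count int output, @topresult int output',
     @count = @count output, @topresult = @topresult output

select @count as [count], @topresult as [topresult]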

T-SQL stored procedure - Detecting if a parameter is supplied as OUTPUT

Consider the following T-SQL code snippet:
CREATE PROC dbo.SquareNum(@i INT OUTPUT)
AS
BEGIN
    SET @i = @i * @i
    --SELECT @i
END
GO
DECLARE @a INT = 3, @b INT = 5
EXEC dbo.SquareNum @a OUTPUT
EXEC dbo.SquareNum @b
SELECT @a AS ASQUARE, @b AS BSQUARE
GO
DROP PROC dbo.SquareNum
The result set is:
ASQUARE BSQUARE
----------- -----------
9 5
As can be seen, @b is not squared, because it was not passed in as an output parameter (no OUTPUT qualifier when passing the parameter).
I would like to know if there is a way to check, within the stored procedure body (the body of dbo.SquareNum in this case), whether a parameter has indeed been passed in as an OUTPUT parameter.
------ THIS WILL GIVE YOU BOTH VALUES SQUARED ------
CREATE PROC dbo.SquareNum(@i INT OUTPUT)
AS
BEGIN
    SET @i = @i * @i
    --SELECT @i
END
GO
DECLARE @a INT = 3, @b INT = 5
EXEC dbo.SquareNum @a OUTPUT
EXEC dbo.SquareNum @b OUTPUT
SELECT @a AS ASQUARE, @b AS BSQUARE
GO
DROP PROC dbo.SquareNum
-----TO CHECK THE STORED PROCEDURE BODY-----
SELECT OBJECT_NAME(object_id),
       OBJECT_DEFINITION(object_id)
FROM sys.procedures
WHERE OBJECT_NAME(object_id) = 'SquareNum'
Actually, there is a very simple way!
Make the parameter optional by setting a default value (@Qty AS Money = 0 below).
Then pass a value OTHER THAN THE DEFAULT when calling the procedure. Immediately test the value; if it is anything other than the default, you know the parameter has been passed.
Create Procedure MyProcedure (@PN AS NVarchar(50), @Rev AS NVarchar(5), @Qty AS Money = 0 OUTPUT) AS
BEGIN
    DECLARE @QtyPassed AS Bit = 0
    IF @Qty <> 0 SET @QtyPassed = 1
    -- ... rest of the procedure ...
END
Of course that means the variable cannot be used for anything other than OUTPUT unless you have a default value that you know will never be used as an INPUT value.
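A quick usage sketch of that trick (hypothetical calls; the caller signals interest in the output by passing any non-default value in):
DECLARE @Qty MONEY = -1   -- anything other than the default 0

-- Caller that wants the output: pass a non-default value
EXEC MyProcedure @PN = 'PART-1', @Rev = 'A', @Qty = @Qty OUTPUT
-- inside the procedure, @QtyPassed ends up as 1

-- Caller that does not care about @Qty: simply omit it (defaults to 0)
EXEC MyProcedure @PN = 'PART-1', @Rev = 'A'
-- inside the procedure, @QtyPassed stays 0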
You can do this by querying the sys views:
select
p.name as proc_name,
par.name as parameter_name,
par.is_output
from sys.procedures p
inner join sys.parameters par on par.object_id=p.object_id
where p.name = 'SquareNum'
or check in Management Studio in database tree:
[database] -> Programmability -> Stored Procedures -> [procedure] -> Parameters
Maybe I'm wrong, but I don't believe it's possible. OUTPUT is part of the stored procedure definition, so you should know whether a parameter is OUTPUT or not. There is no way to set it dynamically, so I think it's pointless to determine by code whether a parameter is an output parameter, because you already know it.
If you are trying to write a dynamic code, Piotr Lasota's answer should drive you to the correct way to realize when a parameter is Output.
Use the following query to get the names of all the parameters and to check whether each one is an output parameter:
select name, is_output from sys.parameters
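To scope that to a single procedure, such as the one from the question, one might filter on object_id (a sketch):
select name, is_output
from sys.parameters
where object_id = object_id('dbo.SquareNum')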

why do block-scope variables exist outside of the block?

I have the following stored procedure to test the scope of variables:
alter proc updatePrereq
    @pcntr int,
    @pmax int
as
begin
    if @pcntr = 1
    begin
        declare @i int
        set @i = @pcntr
    end
    set @i = @pcntr
    select @i;
end
go
In the above script @i is declared in the if block only when the @pcntr value is 1. Assume the above stored procedure is called 5 times from this script:
declare @z int
set @z = 1
declare @max int
set @max = 5
while @z <= @max
begin
    exec dbo.updatePrereq @z, @max
    set @z = @z + 1
end
go
As I said earlier, the @i variable exists only when @pcntr is 1. Therefore, when I call the stored procedure for the second time and onwards, control cannot enter the if block, so the @i variable shouldn't even exist. But the script prints the value of @i on every iteration. How is this possible? Shouldn't it throw an error saying the @i variable does not exist when the @pcntr value is greater than 1?
Here is a video showing the issue.
thanks
The scope of a variable is the range of Transact-SQL statements that can reference the variable. The scope of a variable lasts from the point it is declared until the end of the batch or stored procedure in which it is declared. (Source: MSDN)
Its scope doesn't end at the IF statement.
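A minimal sketch of the same behaviour outside a procedure (illustrative names): DECLARE is honoured at compile time, so the variable exists for the rest of the batch even though the IF branch never runs.
if 1 = 0
begin
    declare @x int   -- this branch never executes...
end
set @x = 42          -- ...yet @x is declared for the rest of the batch
select @x            -- returns 42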

Must declare the scalar variable

I wrote this SQL in a stored procedure but it is not working:
declare @tableName varchar(max) = 'TblTest'
declare @col1Name varchar(max) = 'VALUE1'
declare @col2Name varchar(max) = 'VALUE2'
declare @value1 varchar(max)
declare @value2 varchar(200)
execute('Select TOP 1 @value1=' + @col1Name + ', @value2=' + @col2Name + ' From ' + @tableName + ' Where ID = 61')
select @value1
execute('Select TOP 1 @value1=VALUE1, @value2=VALUE2 From TblTest Where ID = 61')
This SQL throws this error:
Must declare the scalar variable "@value1".
I am generating the SQL dynamically and I want to get the value into a variable. What should I do?
The reason you are getting the DECLARE error from your dynamic statement is because dynamic statements are handled in separate batches, which boils down to a matter of scope. While there may be a more formal definition of the scopes available in SQL Server, I've found it sufficient to generally keep the following three in mind, ordered from highest availability to lowest availability:
Global:
Objects that are available server-wide, such as temporary tables created with a double hash/pound sign ( ##GLOBALTABLE, however you like to call # ). Be very wary of global objects, just as you would with any application, SQL Server or otherwise; these types of things are generally best avoided altogether. What I'm essentially saying is to keep this scope in mind specifically as a reminder to stay out of it.
IF ( OBJECT_ID( 'tempdb.dbo.##GlobalTable' ) IS NULL )
BEGIN
CREATE TABLE ##GlobalTable
(
Val BIT
);
INSERT INTO ##GlobalTable ( Val )
VALUES ( 1 );
END;
GO
-- This table may now be accessed by any connection in any database,
-- assuming the caller has sufficient privileges to do so, of course.
Session:
Objects which are reference locked to a specific spid. Off the top of my head, the only type of session object I can think of is a normal temporary table, defined like #Table. Being in session scope essentially means that after the batch ( terminated by GO ) completes, references to this object will continue to resolve successfully. These are technically accessible by other sessions, but it would be somewhat of a feat to do so programmatically, as they get sort of randomized names in tempdb and accessing them is a bit of a pain in the ass anyway.
-- Start of session;
-- Start of batch;
IF ( OBJECT_ID( 'tempdb.dbo.#t_Test' ) IS NULL )
BEGIN
CREATE TABLE #t_Test
(
Val BIT
);
INSERT INTO #t_Test ( Val )
VALUES ( 1 );
END;
GO
-- End of batch;
-- Start of batch;
SELECT *
FROM #t_Test;
GO
-- End of batch;
Opening a new session ( a connection with a separate spid ), the second batch above would fail, as that session would be unable to resolve the #t_Test object name.
Batch:
Normal variables, such as your @value1 and @value2, are scoped only for the batch in which they are declared. Unlike #Temp tables, as soon as your query block hits a GO, those variables stop being available to the session. This is the scope level which is generating your error.
-- Start of session;
-- Start of batch;
DECLARE @test BIT = 1;
PRINT @test;
GO
-- End of batch;
-- Start of batch;
PRINT @Test; -- Msg 137, Level 15, State 2, Line 2
-- Must declare the scalar variable "@Test".
GO
-- End of batch;
Okay, so what?
What is happening here with your dynamic statement is that the EXECUTE() command effectively evaluates as a separate batch, without breaking the batch you executed it from. EXECUTE() is good and all, but since the introduction of sp_executesql(), I use the former only in the most simple of instances ( explicitly, when there is very little that is "dynamic" about my statements at all, primarily to "trick" otherwise unaccommodating DDL CREATE statements into running in the middle of other batches ). @AaronBertrand's answer below is similar, and will be similar in performance, to the following, leveraging how the optimizer evaluates dynamic statements, but I thought it might be worthwhile to expand on the @params, well, parameter.
IF NOT EXISTS ( SELECT 1
                FROM sys.objects
                WHERE name = 'TblTest'
                AND type = 'U' )
BEGIN
    --DROP TABLE dbo.TblTest;
    CREATE TABLE dbo.TblTest
    (
        ID INTEGER,
        VALUE1 VARCHAR( 1 ),
        VALUE2 VARCHAR( 1 )
    );
    INSERT INTO dbo.TblTest ( ID, VALUE1, VALUE2 )
    VALUES ( 61, 'A', 'B' );
END;
SET NOCOUNT ON;
DECLARE @SQL NVARCHAR( MAX ),
        @PRM NVARCHAR( MAX ),
        @value1 VARCHAR( MAX ),
        @value2 VARCHAR( 200 ),
        @Table VARCHAR( 32 ),
        @ID INTEGER;
SET @Table = 'TblTest';
SET @ID = 61;
SET @PRM = '
    @_ID INTEGER,
    @_value1 VARCHAR( MAX ) OUT,
    @_value2 VARCHAR( 200 ) OUT';
SET @SQL = '
    SELECT @_value1 = VALUE1,
           @_value2 = VALUE2
    FROM dbo.[' + REPLACE( @Table, '''', '' ) + ']
    WHERE ID = @_ID;';
EXECUTE dbo.sp_executesql @stmt = @SQL, @params = @PRM,
    @_ID = @ID, @_value1 = @value1 OUT, @_value2 = @value2 OUT;
PRINT @value1 + ' ' + @value2;
SET NOCOUNT OFF;
Declare @v1 varchar(max), @v2 varchar(200);
Declare @sql nvarchar(max);
Set @sql = N'SELECT @v1 = value1, @v2 = value2
    FROM dbo.TblTest -- always use schema
    WHERE ID = 61;';
EXEC sp_executesql @sql,
    N'@v1 varchar(max) output, @v2 varchar(200) output',
    @v1 output, @v2 output;
You should also pass your input, like wherever 61 comes from, as proper parameters (but you won't be able to pass table and column names that way).
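A sketch of that combination (the value travels as a real parameter while the table name is validated with QUOTENAME; @tableName and @id here are illustrative):
DECLARE @tableName sysname = N'TblTest';
DECLARE @id int = 61;
DECLARE @v1 varchar(max), @v2 varchar(200);
DECLARE @sql nvarchar(max);

-- QUOTENAME guards the identifier; the ID is passed as a proper parameter
SET @sql = N'SELECT @v1 = value1, @v2 = value2
             FROM dbo.' + QUOTENAME(@tableName) + N'
             WHERE ID = @id;';

EXEC sp_executesql @sql,
    N'@id int, @v1 varchar(max) output, @v2 varchar(200) output',
    @id, @v1 output, @v2 output;

SELECT @v1 AS value1, @v2 AS value2;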
Here is a simple example :
Create or alter PROCEDURE getPersonCountByLastName (
    @lastName varchar(20),
    @count int OUTPUT
)
As
Begin
    select @count = count(personSid) from Person where lastName like @lastName
End;
Execute the statements below in one batch (by selecting them all):
1. Declare @count int
2. Exec getPersonCountByLastName kumar, @count output
3. Select @count
When I tried to execute statements 1, 2 and 3 individually, I got the same error.
But when I executed them all at once, it worked fine.
The reason is that SQL Server treats each separate execution as its own batch, and a variable only exists within the batch in which it is declared.
Open to further corrections.
This will also occur in SQL Server if you don't run all of the statements at once, i.e. if you highlight and execute only the following set of statements:
DECLARE @LoopVar INT
SET @LoopVar = (SELECT COUNT(*) FROM SomeTable)
And then try to highlight another set of statements such as:
PRINT 'LoopVar is: ' + CONVERT(NVARCHAR(255), @LoopVar)
You will receive this error.
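For reference, it is the same error 137 quoted in the question (level, state and line number may vary):
Msg 137, Level 15, State 2
Must declare the scalar variable "@LoopVar".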
-- CREATE OR ALTER PROCEDURE
ALTER PROCEDURE out (
    @age INT,
    @salary INT OUTPUT)
AS
BEGIN
    SELECT @salary = (SELECT SALARY FROM new_testing WHERE AGE = @age ORDER BY AGE OFFSET 0 ROWS FETCH NEXT 1 ROWS ONLY);
END
-----------------DECLARE THE OUTPUT VARIABLE---------------------------------
DECLARE @test INT
---------------------THEN EXECUTE THE QUERY---------------------------------
EXECUTE out 25, @salary = @test OUTPUT
print @test
-------------------the same output, obtained without the procedure-------------------------------------------
SELECT * FROM new_testing WHERE AGE = 25 ORDER BY AGE OFFSET 0 ROWS FETCH NEXT 1 ROWS ONLY

Is there a way to make a TSQL variable constant?

Is there a way to make a TSQL variable constant?
No, but you can create a function and hardcode it in there and use that.
Here is an example:
CREATE FUNCTION fnConstant()
RETURNS INT
AS
BEGIN
RETURN 2
END
GO
SELECT dbo.fnConstant()
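A usage sketch (assuming the function above has been created in dbo): the "constant" can then be used anywhere an expression is allowed, for example in a WHERE clause.
SELECT *
FROM Sales.SalesOrderHeader h   -- any table will do; AdventureWorks is used elsewhere in this thread
WHERE h.ShipMethodID = dbo.fnConstant()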
One solution, offered by Jared Ko, is to use pseudo-constants.
As explained in SQL Server: Variables, Parameters or Literals? Or… Constants?:
Pseudo-Constants are not variables or parameters. Instead, they're simply views with one row, and enough columns to support your constants. With these simple rules, the SQL Engine completely ignores the value of the view but still builds an execution plan based on its value. The execution plan doesn't even show a join to the view!
Create like this:
CREATE SCHEMA ShipMethod
GO
-- Each view can only have one row.
-- Create one column for each desired constant.
-- Each column is restricted to a single value.
CREATE VIEW ShipMethod.ShipMethodID AS
SELECT CAST(1 AS INT) AS [XRQ - TRUCK GROUND]
,CAST(2 AS INT) AS [ZY - EXPRESS]
,CAST(3 AS INT) AS [OVERSEAS - DELUXE]
,CAST(4 AS INT) AS [OVERNIGHT J-FAST]
,CAST(5 AS INT) AS [CARGO TRANSPORT 5]
Then use like this:
SELECT h.*
FROM Sales.SalesOrderHeader h
JOIN ShipMethod.ShipMethodID const
ON h.ShipMethodID = const.[OVERNIGHT J-FAST]
Or like this:
SELECT h.*
FROM Sales.SalesOrderHeader h
WHERE h.ShipMethodID = (SELECT TOP 1 [OVERNIGHT J-FAST] FROM ShipMethod.ShipMethodID)
My workaround for missing constants is to give hints about the value to the optimizer.
DECLARE @Constant INT = 123;
SELECT *
FROM [some_relation]
WHERE [some_attribute] = @Constant
OPTION( OPTIMIZE FOR (@Constant = 123))
This tells the query compiler to treat the variable as if it was a constant when creating the execution plan. The down side is that you have to define the value twice.
No, but good old naming conventions should be used.
declare @MY_VALUE as int
There is no built-in support for constants in T-SQL. You could use SQLMenace's approach to simulate it (though you can never be sure whether someone else has overwritten the function to return something else…), or possibly write a table containing constants, as suggested over here. Perhaps write a trigger that rolls back any changes to the ConstantValue column?
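A rough sketch of that table-plus-trigger idea (all object names here are made up for illustration):
CREATE TABLE dbo.Constants
(
    ConstantName  sysname PRIMARY KEY,
    ConstantValue int NOT NULL
);
INSERT dbo.Constants (ConstantName, ConstantValue) VALUES ('MaxRetries', 3);
GO
CREATE TRIGGER trConstantsReadOnly ON dbo.Constants
AFTER UPDATE, DELETE
AS
BEGIN
    RAISERROR('Constants are read-only.', 16, 1);
    ROLLBACK TRANSACTION;
END
GO
-- Reading the "constant":
DECLARE @MaxRetries int = (SELECT ConstantValue FROM dbo.Constants WHERE ConstantName = 'MaxRetries');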
Prior to using a SQL function run the following script to see the differences in performance:
IF OBJECT_ID('fnFalse') IS NOT NULL
    DROP FUNCTION fnFalse
GO
IF OBJECT_ID('fnTrue') IS NOT NULL
    DROP FUNCTION fnTrue
GO
CREATE FUNCTION fnTrue() RETURNS INT WITH SCHEMABINDING
AS
BEGIN
    RETURN 1
END
GO
CREATE FUNCTION fnFalse() RETURNS INT WITH SCHEMABINDING
AS
BEGIN
    RETURN ~ dbo.fnTrue()
END
GO
DECLARE @TimeStart DATETIME = GETDATE()
DECLARE @Count INT = 100000
WHILE @Count > 0 BEGIN
    SET @Count -= 1
    DECLARE @Value BIT
    SELECT @Value = dbo.fnTrue()
    IF @Value = 1
        SELECT @Value = dbo.fnFalse()
END
DECLARE @TimeEnd DATETIME = GETDATE()
PRINT CAST(DATEDIFF(ms, @TimeStart, @TimeEnd) AS VARCHAR) + ' elapsed, using function'
GO
DECLARE @TimeStart DATETIME = GETDATE()
DECLARE @Count INT = 100000
DECLARE @FALSE AS BIT = 0
DECLARE @TRUE AS BIT = ~ @FALSE
WHILE @Count > 0 BEGIN
    SET @Count -= 1
    DECLARE @Value BIT
    SELECT @Value = @TRUE
    IF @Value = 1
        SELECT @Value = @FALSE
END
DECLARE @TimeEnd DATETIME = GETDATE()
PRINT CAST(DATEDIFF(ms, @TimeStart, @TimeEnd) AS VARCHAR) + ' elapsed, using local variable'
GO
DECLARE @TimeStart DATETIME = GETDATE()
DECLARE @Count INT = 100000
WHILE @Count > 0 BEGIN
    SET @Count -= 1
    DECLARE @Value BIT
    SELECT @Value = 1
    IF @Value = 1
        SELECT @Value = 0
END
DECLARE @TimeEnd DATETIME = GETDATE()
PRINT CAST(DATEDIFF(ms, @TimeStart, @TimeEnd) AS VARCHAR) + ' elapsed, using hard coded values'
GO
If you are interested in getting an optimal execution plan for a value held in a variable, you can use dynamic SQL. It effectively makes the variable a constant.
DECLARE @var varchar(100) = 'some text'
DECLARE @sql varchar(MAX)
SET @sql = 'SELECT * FROM table WHERE col = ''' + @var + ''''
EXEC (@sql)
For enums or simple constants, a view with a single row has great performance and compile-time checking / dependency tracking (because it's a column name).
See Jared Ko's blog post https://blogs.msdn.microsoft.com/sql_server_appendix_z/2013/09/16/sql-server-variables-parameters-or-literals-or-constants/
create the view
CREATE VIEW ShipMethods AS
SELECT CAST(1 AS INT) AS [XRQ - TRUCK GROUND]
,CAST(2 AS INT) AS [ZY - EXPRESS]
,CAST(3 AS INT) AS [OVERSEAS - DELUXE]
, CAST(4 AS INT) AS [OVERNIGHT J-FAST]
,CAST(5 AS INT) AS [CARGO TRANSPORT 5]
use the view
SELECT h.*
FROM Sales.SalesOrderHeader h
WHERE h.ShipMethodID = ( select [OVERNIGHT J-FAST] from ShipMethods )
Okay, let's see:
Constants are immutable values which are known at compile time and do not change for the life of the program
that means you can never have a constant in SQL Server
declare @myvalue as int
set @myvalue = 5
set @myvalue = 10 -- oops, we just changed it
the value just changed
Since there is no built-in support for constants, my solution is very simple.
Since this is not supported:
Declare Constant @supplement int = 240
SELECT price + @supplement
FROM what_does_it_cost
I would simply convert it to
SELECT price + 240/*CONSTANT:supplement*/
FROM what_does_it_cost
Obviously, this relies on the whole thing (the value without trailing space and the comment) to be unique. Changing it is possible with a global search and replace.
There is no such thing as "creating a constant" in database literature. Constants exist as they are and are often simply called values. One can declare a variable and assign a value (constant) to it. From a scholastic view:
DECLARE @two INT
SET @two = 2
Here @two is a variable and 2 is a value/constant.
SQL Server 2022 (currently only available as a preview) is now able to inline the function proposed by SQLMenace, which should prevent the performance hit described in some comments.
CREATE FUNCTION fnConstant() RETURNS INT AS
BEGIN
    RETURN 2
END
GO
SELECT is_inlineable FROM sys.sql_modules WHERE [object_id]=OBJECT_ID('dbo.fnConstant');
is_inlineable
1
SELECT dbo.fnConstant()
(screenshot: execution plan)
To test whether it also uses the value coming from the function, I added a second function returning the value 1:
CREATE FUNCTION fnConstant1()
RETURNS INT
AS
BEGIN
RETURN 1
END
GO
Create Temp Table with about 500k rows with Value 1 and 4 rows with Value 2:
DROP TABLE IF EXISTS #temp;
create table #temp (value_int INT)
DECLARE @counter INT;
SET @counter = 0
WHILE @counter <= 500000
BEGIN
    INSERT INTO #temp VALUES (1);
    SET @counter = @counter + 1
END
SET @counter = 0
WHILE @counter <= 3
BEGIN
    INSERT INTO #temp VALUES (2);
    SET @counter = @counter + 1
END
create index i_temp on #temp (value_int);
Using the estimated execution plan, we can see that the optimizer expects ~500k rows for
select * from #temp where value_int = dbo.fnConstant1(); --Returns 500001 rows
(screenshot: estimated plan for fnConstant1)
and 4 rows for
select * from #temp where value_int = dbo.fnConstant(); --Returns 4rows
(screenshot: estimated plan for fnConstant)
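To reproduce those estimates without the screenshots, the estimated plan can be captured in text form, for example with SET SHOWPLAN_ALL (a sketch; the EstimateRows column carries the numbers above):
set showplan_all on;
go
select * from #temp where value_int = dbo.fnConstant1();
select * from #temp where value_int = dbo.fnConstant();
go
set showplan_all off;
go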
Robert's performance test is interesting. And even in late 2022, the scalar functions are much slower (by an order of magnitude) than variables or literals. A view (as suggested by mbobka) is somewhere in between when used for this same test.
That said, using a loop like that in SQL Server is not something I'd ever do, because I'd normally be operating on a whole set.
In SQL 2019, if you use schema-bound functions in a set operation, the difference is much less noticeable.
I created and populated a test table:
create table #testTable (id int identity(1, 1) primary key, value tinyint);
And changed the test so that instead of looping and changing a variable, it queries the test table and returns true or false depending on the value in the test table, e.g.:
insert #testTable(value)
select case when value > 127
            then @FALSE
            else @TRUE
       end
from #testTable with(nolock)
I tested 5 scenarios:
hard-coded values
local variables
scalar functions
a view
a table-valued function
Running the test 10 times yielded the following results:
scenario                min  max  avg
scalar functions        233  259  240
hard-coded values       236  265  243
local variables         235  278  245
table-valued function   243  272  253
view                    244  267  254
This suggests to me that, for set-based work in (at least) 2019 and later, there's not much in it.
set nocount on;
go
-- create test data table
drop table if exists #testTable;
create table #testTable (id int identity(1, 1) primary key, value tinyint);
-- populate test data
insert #testTable (value)
select top (1000000) convert(binary (1), newid())
from sys.all_objects a
, sys.all_objects b
go
-- scalar function for True
drop function if exists fnTrue;
go
create function dbo.fnTrue() returns bit with schemabinding as
begin
return 1
end
go
-- scalar function for False
drop function if exists fnFalse;
go
create function dbo.fnFalse () returns bit with schemabinding as
begin
return 0
end
go
-- table-valued function for booleans
drop function if exists dbo.tvfBoolean;
go
create function tvfBoolean() returns table with schemabinding as
return
select convert(bit, 1) as true, convert(bit, 0) as false
go
-- view for booleans
drop view if exists dbo.viewBoolean;
go
create view dbo.viewBoolean with schemabinding as
select convert(bit, 1) as true, convert(bit, 0) as false
go
-- create table for results
drop table if exists #testResults
create table #testResults (id int identity(1,1), test int, elapsed bigint, message varchar(1000));
-- define tests
declare @tests table(testNumber int, description nvarchar(100), sql nvarchar(max))
insert @tests values
(1, N'hard-coded values', N'
declare @testTable table (id int, value bit);
insert @testTable(id, value)
select id, case when t.value > 127
                then 0
                else 1
           end
from #testTable t')
, (2, N'local variables', N'
declare @FALSE as bit = 0
declare @TRUE as bit = 1
declare @testTable table (id int, value bit);
insert @testTable(id, value)
select id, case when t.value > 127
                then @FALSE
                else @TRUE
           end
from #testTable t'),
(3, N'scalar functions', N'
declare @testTable table (id int, value bit);
insert @testTable(id, value)
select id, case when t.value > 127
                then dbo.fnFalse()
                else dbo.fnTrue()
           end
from #testTable t'),
(4, N'view', N'
declare @testTable table (id int, value bit);
insert @testTable(id, value)
select id, case when value > 127
                then b.false
                else b.true
           end
from #testTable t with(nolock), viewBoolean b'),
(5, N'table-valued function', N'
declare @testTable table (id int, value bit);
insert @testTable(id, value)
select id, case when value > 127
                then b.false
                else b.true
           end
from #testTable with(nolock), dbo.tvfBoolean() b')
;
declare @testNumber int, @description varchar(100), @sql nvarchar(max)
declare @testRuns int = 10;
-- execute tests
while @testRuns > 0 begin
    set @testRuns -= 1
    declare testCursor cursor for select testNumber, description, sql from @tests;
    open testCursor
    fetch next from testCursor into @testNumber, @description, @sql
    while @@FETCH_STATUS = 0 begin
        declare @TimeStart datetime2(7) = sysdatetime();
        execute sp_executesql @sql;
        declare @TimeEnd datetime2(7) = sysdatetime()
        insert #testResults(test, elapsed, message)
        select @testNumber, datediff_big(ms, @TimeStart, @TimeEnd), @description
        fetch next from testCursor into @testNumber, @description, @sql
    end
    close testCursor
    deallocate testCursor
end
-- display results
select test, message, count(*) runs, min(elapsed) as min, max(elapsed) as max, avg(elapsed) as avg
from #testResults
group by test, message
order by avg(elapsed);
The best answer is SQLMenace's, if the requirement is to create a temporary constant for use within scripts, i.e. across multiple GO statements/batches.
Just create the function in tempdb; then you have no impact on the target database.
One practical example of this is a database create script which writes a control value at the end of the script containing the logical schema version. At the top of the file are some comments with change history etc... But in practice most developers will forget to scroll down and update the schema version at the bottom of the file.
Using the above code allows a visible schema version constant to be defined at the top before the database script (copied from the generate scripts feature of SSMS) creates the database but used at the end. This is right in the face of the developer next to the change history and other comments, so they are very likely to update it.
For example:
use tempdb
go
create function dbo.MySchemaVersion()
returns int
as
begin
return 123
end
go
use master
go
-- Big long database create script with multiple batches...
print 'Creating database schema version ' + CAST(tempdb.dbo.MySchemaVersion() as NVARCHAR) + '...'
go
-- ...
go
-- ...
go
use MyDatabase
go
-- Update schema version with constant at end (not normally possible as GO puts
-- local @variables out of scope)
insert MyConfigTable values ('SchemaVersion', tempdb.dbo.MySchemaVersion())
go
-- Clean-up
use tempdb
drop function MySchemaVersion
go
