Avoid while loop to check row "state change" - sql-server

I have a table that stores an Id, a datetime, and an increasing int value. The value grows until it "breaks" and resets to near 0. E.g.: ...1000, 1200, 1350, 8, 10, 25...
I need to count how many times this "overflow" happens, BUT... I'm talking about a table that stores 200k rows per day!
I had already solved it, using a procedure with a cursor that iterates over the rows in a while loop. But I KNOW that isn't the fastest way to do it.
Can someone help me find another way?
Thanks!
Table structure:
Id BIGINT PRIMARY KEY, CreatedAt DATETIME, Value INT NOT NULL.
Problem:
If the delta between the values of two consecutive rows is < 0, increase a counter. For the example above (..., 1350, 8, ...), the counter increases by 1.
The table gains 200k new rows every day.
No triggers allowed.
[FIRST EDIT]
The table's actual structure:
CREATE TABLE ValuesLog (
    Id BIGINT PRIMARY KEY,
    Machine BIGINT,
    CreatedAt DATETIME,
    Value INT
)
I need:
To detect when the [Value] of some [Machine] suddenly decreases.
Some users said to use LEAD/LAG. But there's a problem: if I choose many machines, a plain LEAD/LAG doesn't care which machine each row belongs to. So if I query machine-1 and machine-2 together, and machine-1 increases while machine-2 decreases, LEAD/LAG will give me a false positive.
So, how my table actually looks:
(screenshot: many rows of the actual table)
(The screenshot shows a SELECT for 3 or 4 machines. In this example the machines happen not to be interleaved, but they can be! And in that case, LEAD/LAG doesn't care whether the row above is machine-1 or machine-2.)
What I want:
At row 85 of the screenshot, the [Value] breaks and restarts. I'd like to count every occurrence of this for the selected machines.
So:
"Machine-1 restarted 6 times... Machine-9 restarted 10 times..."
I had done something like this:
CREATE PROCEDURE CountProduction @Machine INT_ARRAY READONLY, @Start DATETIME, @End DATETIME AS
SET NOCOUNT ON

-- Declare counter table and insert start values
DECLARE @Counter TABLE(
    machine INT PRIMARY KEY,
    lastValue BIGINT DEFAULT 0,
    count BIGINT DEFAULT 0
)
INSERT INTO @Counter(machine) SELECT n FROM @Machine

-- Declare cursor to iterate over the values log in insertion order
DECLARE valueCursor CURSOR LOCAL FOR
    SELECT
        ValuesLog.Machine,
        ValuesLog.Value,
        Counter.lastValue,
        Counter.count
    FROM
        ValuesLog,
        @Machine AS Machine,
        @Counter AS Counter
    WHERE
        ValuesLog.Machine = Machine.n
        AND Counter.machine = ValuesLog.Machine
        AND ValuesLog.CreatedAt BETWEEN @Start AND @End
    ORDER BY
        ValuesLog.Id;

-- Start iteration
OPEN valueCursor
DECLARE @RowMachine INT
DECLARE @RowValue BIGINT
DECLARE @RowLastValue BIGINT
DECLARE @RowCount BIGINT
FETCH NEXT FROM valueCursor INTO @RowMachine, @RowValue, @RowLastValue, @RowCount

-- Iteration
DECLARE @increment INT
WHILE @@FETCH_STATUS = 0
BEGIN
    IF @RowValue < @RowLastValue
        SET @increment = 1
    ELSE
        SET @increment = 0

    -- Update counters
    UPDATE
        @Counter
    SET
        lastValue = @RowValue,
        count = count + @increment
    WHERE
        machine = @RowMachine

    -- Proceed to next row
    FETCH NEXT FROM valueCursor INTO @RowMachine, @RowValue, @RowLastValue, @RowCount
END

-- Close iteration
CLOSE valueCursor
DEALLOCATE valueCursor

SELECT machine, count FROM @Counter

Use LEAD(). If the next row < current row, count that occurrence.
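A minimal sketch of that approach against the ValuesLog structure above (my example, not code from the original answer); the PARTITION BY Machine is what keeps rows from different machines from being compared to each other:
DECLARE @Start DATETIME = '2019-01-01';
DECLARE @End DATETIME = GETDATE();

-- A restart is any row whose next value (same machine, Id order) is smaller.
SELECT Machine, COUNT(*) AS Restarts
FROM (
    SELECT
        Machine,
        Value,
        LEAD(Value) OVER (PARTITION BY Machine ORDER BY Id) AS NextValue
    FROM ValuesLog
    WHERE Machine IN (1, 9, 10)
      AND CreatedAt BETWEEN @Start AND @End
) AS T
WHERE NextValue < Value
GROUP BY Machine;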

Solved using @jeroen-mostert's suggestion:
DECLARE @Start DATETIME
DECLARE @End DATETIME
SET @Start = '2019-01-01'
SET @End = GETDATE()

SELECT
    Machine,
    COUNT(DeltaValue) AS Prod
FROM
    (SELECT
        Log.Machine,
        Log.Value - LAG(Log.Value) OVER (PARTITION BY Log.Machine ORDER BY Log.Id) AS DeltaValue
    FROM
        ValuesLog AS Log
    WHERE
        Log.CreatedAt BETWEEN @Start AND @End
        AND Log.Machine IN (1, 9, 10)
    ) AS TB1
WHERE
    DeltaValue < 0
GROUP BY
    Machine
ORDER BY
    Machine
In this case, the inner LAG didn't mix up the machines' contents (which happened, for some reason, when I created a view... I'll try to understand that later).
Thanks everybody! I'm new at DB, and this question drove me crazy for a whole day.

Related

Create a number of new rows in SQL Server DB with null values

I would like to create a set of new rows in a DB where ID = 10, 11, 12, 13, 14, 15 but all other values are NULL. This assumes rows 1 through 9 already exist (in this example). My application will set the first-row and last-row parameters.
Here's my query to create one row, but I need a way to loop through rows 10 through 15 until all the rows are created:
-- (procedure header reconstructed; the original post omitted it, and the name is illustrative)
CREATE PROCEDURE dbo.InsertPlaceholderRows
    @FirstRow int = 10 --will be set by application
    ,@LastRow int = 15 --will be set by application
    ,@FileName varchar(100) = NULL
    ,@CreatedDate date = NULL
    ,@CreatedBy varchar (50) = NULL
AS
BEGIN
    INSERT INTO TABLE(TABLE_ID, FILENAME, CREATED_BY, CREATED_DATE)
    VALUES (@FirstRow, @FileName, @CreatedBy, @CreatedDate)
END
The reason I need blank rows is because the application needs to update an existing row in a table. My application will be uploading thousands of documents to rows in a table based on file ID. The application requires that the rows already be inserted. The files are inserted after rows are added. The app then deletes all rows that are null.
Assuming the rows you're inserting are always consecutive, you can use a ghetto FOR loop like the one below to accomplish your goal:
--put all the other variable assignments above this line
DECLARE @i int = @FirstRow
WHILE (@i <= @LastRow)
BEGIN
    INSERT INTO TABLE(TABLE_ID, FILENAME, CREATED_BY, CREATED_DATE)
    VALUES (@i, @FileName, @CreatedBy, @CreatedDate)
    SET @i = @i + 1;
END
Basically, we assign @i to the lowest index, and then just iterate through, one by one, until we're at the max index.
If performance is a concern, the above approach will not be ideal.
If you don't have a numbers table (as SMor mentioned), you can use an ad-hoc tally table:
Example
Declare @FirstRow int = 10 --will be set by application
       ,@LastRow int = 15 --will be set by application
       ,@FileName varchar(100) = NULL
       ,@CreatedDate date = NULL
       ,@CreatedBy varchar (50) = NULL

INSERT INTO TABLE(TABLE_ID, FILENAME, CREATED_BY, CREATED_DATE)
Select Top (@LastRow-@FirstRow+1)
       @FirstRow-1+Row_Number() Over (Order By (Select NULL))
      ,@FileName
      ,@CreatedBy
      ,@CreatedDate
 From master..spt_values n1, master..spt_values n2
(screenshot: the data generated)
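On SQL Server 2022 and later, GENERATE_SERIES (available under database compatibility level 160) produces the consecutive numbers directly, so no tally table is needed; a sketch, reusing the placeholder table and variable names from the question:
-- One row per integer in [@FirstRow, @LastRow].
INSERT INTO TABLE(TABLE_ID, FILENAME, CREATED_BY, CREATED_DATE)
SELECT value, @FileName, @CreatedBy, @CreatedDate
FROM GENERATE_SERIES(@FirstRow, @LastRow);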

Simple logic but not working as it should

This is what I want to achieve:
The trigger fires on the opportunities table when a record with opp_type = 0 is inserted.
The next part of the code does the calculation: pick up the last used number from my custom table and add 1 to it, storing the new value in a variable.
The next part does the insert into the user field.
Finally, the custom table is updated to record the last used number.
I am getting the number to increment by one in the NEXTEXP1 table; however, the user field called O_Quote is not populating via the GUI.
Is the code below doing what it should, in terms of the explanation above?
Going by the steps, my trigger seems to do the same, but the user field is not populating with the last number used:
alter TRIGGER [dbo].[Q2] ON [dbo].[AMGR_opportunity_Tbl] AFTER INSERT
AS
BEGIN
    Declare @Opp_Type int
    Select @Opp_Type = 0 from inserted
    If @Opp_Type = 0
    BEGIN
        SET NOCOUNT ON;
        DECLARE @Client_Id varchar(24)
        DECLARE @Contact_Number int
        DECLARE @NewNumber varchar(250)
        DECLARE @NextQNo float
        DECLARE @UDFName varchar(50)
        DECLARE @GeneratorPrefix varchar(10)
        DECLARE @GeneratorLength float
        DECLARE @Opptype int
        DECLARE @Type_id int
        DECLARE @Oppid varchar (24)
        --select top 1 nextqno = nextqno from nextexp1
        SELECT @NewNumber = NextQno + 1 from dbo.NextEXP1
        ----insert into user field
        insert into O_Quote(Client_Id, Contact_Number, Type_Id, Code_Id, [O_Quote])
        values (@Client_Id,0,15,0,@NextQNo)
        -------update table with last number used
        UPDATE [dbo].[NextEXP1] SET NextQNo = @NewNumber
    End
End
GO
@Leonidas199x is right on all points. There are too many unclear things in this question and a lot of the data is missing; however, this is what I can suggest (this code also handles bulk inserts):
alter TRIGGER [dbo].[Q2] ON [dbo].[AMGR_opportunity_Tbl] AFTER INSERT
AS
BEGIN
    DECLARE @NewNumber varchar(250);
    SELECT @NewNumber = MAX(NextQno) FROM dbo.NextEXP1; -- I guess that's what you want
    insert into O_Quote(Client_Id, Contact_Number, Type_Id, Code_Id, [O_Quote])
    select Client_Id, Contact_Number, Type_Id, Code_Id, @NewNumber + row_num
    FROM (
        SELECT Client_Id, -- once again, do not know where this value is taken from
               0 Contact_Number,
               15 Type_Id,
               0 AS Code_Id,
               ROW_NUMBER() OVER(order by client_id) row_num
        FROM INSERTED WHERE Opp_Type = 0 --I guess that's the right column name
    ) a;
    SELECT @NewNumber = MAX(O_Quote) FROM O_Quote;
    UPDATE [dbo].[NextEXP1] SET NextQNo = @NewNumber;
END
Looking into this, I think your logic is a bit off:
Select @Opp_Type = 0 from inserted
This will always evaluate as 0.
You want to use:
SELECT @Opp_Type = i.Opp_Type
FROM inserted AS i;
Where i.Opp_Type is your column name.
Secondly, you declare a bunch of variables, but never set them:
DECLARE @Client_Id varchar(24)
DECLARE @Contact_Number int
DECLARE @NewNumber varchar(250)
DECLARE @NextQNo float
DECLARE @UDFName varchar(50)
DECLARE @GeneratorPrefix varchar(10)
DECLARE @GeneratorLength float
DECLARE @Opptype int
DECLARE @Type_id int
DECLARE @Oppid varchar (24)
And then go on to insert them. You need to set these, if you want to use them later. Should this be:
insert into O_Quote(Client_Id, Contact_Number, Type_Id, Code_Id, [O_Quote])
values (@Client_Id,0,15,0,@NewNumber)
Or do you need to set #NextQNo to be:
SELECT @NextQNo = NextQno from dbo.NextEXP1;
SELECT @NewNumber = @NextQNo + 1;
And lastly, the way this is written will cause you issues if you insert more than one record at a time. You would need to think about a loop to get that max ID, which isn't ideal. Can you look at using IDENTITY columns instead?
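If IDENTITY doesn't fit here (the number ends up in a user field, not a key), a SEQUENCE object (SQL Server 2012+) is another option worth considering: it hands out numbers atomically, which removes both the loop and the concurrency problem. A sketch, assuming the column names from the question:
-- Hypothetical replacement for the NextEXP1 "last number used" table.
CREATE SEQUENCE dbo.QuoteNumber START WITH 1 INCREMENT BY 1;
GO
-- Inside the trigger, one set-based statement numbers every inserted row:
INSERT INTO O_Quote (Client_Id, Contact_Number, Type_Id, Code_Id, [O_Quote])
SELECT i.Client_Id, 0, 15, 0, NEXT VALUE FOR dbo.QuoteNumber
FROM inserted AS i
WHERE i.Opp_Type = 0;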

Resume a WHILE loop from where it stopped SQL

I have a while loop query that I only want to run until 11 PM every day - I'm aware this can be achieved with a WAITFOR statement, and then just ending the query.
However, on the following day, once I re-run my query, I want it to continue from where it stopped on the last run. So I'm thinking of creating a log table that will contain the last processed ID.
How can I achieve this?
DECLARE @MAX_Value BIGINT = ( SELECT MAX(ID) FROM dbo.TableA )
DECLARE @MIN_Value BIGINT = ( SELECT MIN(ID) FROM dbo.TableA )
WHILE (@MIN_Value < @MAX_Value)
BEGIN
    INSERT INTO dbo.MyResults
    /* Do some processing */
    ….
    ….
    ….
    SET @MIN_Value = @MIN_Value + 1
    /* I only want the above processing to run until 11PM */
    /* Once it's 11PM, I want to save the last used @MIN_Value
       into my LoggingTable (dbo.Logging) and kill the above processing. */
    /* Once I re-run the query I want my processing to restart from the
       @MIN_Value recorded in dbo.Logging */
END
Disclaimer: I do not recommend using WHILE loops in SQL Server, but since you asked for a solution in SQL, here you go.
First of all, I strongly recommend a different way of assigning the variable values (a single SELECT, as below), to avoid the variables staying NULL when the table is empty.
Also, if something started running at 10:59:59, this lets the processing for that value finish instead of simply rolling back at 11:
CREATE TABLE dbo.ProcessingValueLog (
    LogEntryId BIGINT IDENTITY(1,1) NOT NULL,
    LastUsedValue BIGINT NOT NULL,
    LastUsedDateTime DATETIME NOT NULL DEFAULT(GETDATE()),
    CompletedProcessing BIT NOT NULL DEFAULT(0)
)

DECLARE @MAX_Value BIGINT = 0;
DECLARE @MIN_Value BIGINT = 0;
SELECT
    @MIN_Value = MIN(ID),
    @MAX_Value = MAX(ID)
FROM
    dbo.TableA

-- Resume just after the most recently completed value, if any
SELECT TOP 1
    @MIN_Value = LastUsedValue + 1
FROM
    dbo.ProcessingValueLog
WHERE
    CompletedProcessing = 1
ORDER BY
    LastUsedDateTime DESC

DECLARE @CurrentHour TINYINT = DATEPART(HOUR, GETDATE());
DECLARE @LogEntryID BIGINT;

WHILE (@MIN_Value < @MAX_Value AND @CurrentHour < 23)
BEGIN
    INSERT INTO dbo.ProcessingValueLog (LastUsedValue)
    VALUES (@MIN_Value)

    SELECT @LogEntryID = SCOPE_IDENTITY()

    -- Do some processing...

    SET @MIN_Value = @MIN_Value + 1;

    UPDATE dbo.ProcessingValueLog
    SET CompletedProcessing = 1
    WHERE LogEntryId = @LogEntryID

    SET @CurrentHour = DATEPART(HOUR, GETDATE())
END

Why is the natural ID generation in this SQL Stored Proc creating duplicates?

I am incrementing an alphanumeric product id by 1 in a stored procedure. The procedure increments correctly up to the 10th record, say PRD0010, but then it stops incrementing; every subsequent SP call repeats the same value, PRD0010.
What could be the cause of this?
create table tblProduct
(
    id varchar(15)
)
insert into tblProduct(id) values('PRD00')

create procedure spInsertInProduct
AS
Begin
    DECLARE @PId VARCHAR(15)
    DECLARE @NId INT
    DECLARE @COUNTER INT
    SET @PId = 'PRD00'
    SET @COUNTER = 0
    SELECT @NId = cast(substring(MAX(id), 4, len(MAX(id))) as int)
    FROM tblProduct group by left(id, 3) order by left(id, 3)
    --here increase the numeric id by 1
    SET @NId = @NId + 1
    --GENERATE ACTUAL ALPHANUMERIC ID HERE
    SET @PId = @PId + cast(@NId AS VARCHAR)
    INSERT INTO tblProduct(id) values (@PId)
END
Change
SELECT @NId = cast(substring(MAX(id), 4, len(MAX(id))) as int)
FROM tblProduct group by left(id, 3) order by left(id, 3)
To
SELECT TOP 1
       @NId = cast(substring(id, 4, len(id)) as int)
FROM tblProduct order by LEN(id) DESC, ID DESC
You have to remember that
PRD009
is always greater than
PRD0010
or
PRD001
All in all, I think your approach is incorrect.
Your values will be
PRD00
PRD001
...
PRD009
PRD0010
PRD0011
...
PRD0099
PRD00100
This will make sorting a complete nightmare.
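If a single varchar column must stay, one way to avoid that nightmare (my suggestion, not part of the original answer) is to zero-pad the numeric part to a fixed width so that string order matches numeric order:
-- Fixed-width ids (PRD00001, PRD00123, ...) sort correctly as strings.
DECLARE @NId INT = 123
DECLARE @PId VARCHAR(15) = 'PRD' + RIGHT('00000' + CAST(@NId AS VARCHAR(5)), 5)
SELECT @PId -- PRD00123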
In addition to astander's analysis, you also have a concurrency issue.
The simple fix would be to add this at the beginning of your proc:
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE
BEGIN TRANSACTION
And add a COMMIT at the end. Otherwise, two callers of this stored proc will get the same MAX/TOP 1 value from your table, and insert the same value.
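Putting astander's TOP 1 fix and the transaction together, the proc might look like this sketch:
ALTER PROCEDURE spInsertInProduct
AS
BEGIN
    SET TRANSACTION ISOLATION LEVEL SERIALIZABLE
    BEGIN TRANSACTION

    -- Read the current maximum while holding a range lock...
    DECLARE @NId INT
    SELECT TOP 1 @NId = cast(substring(id, 4, len(id)) as int)
    FROM tblProduct ORDER BY LEN(id) DESC, id DESC

    -- ...so no concurrent caller can grab the same number before we insert.
    INSERT INTO tblProduct(id) VALUES ('PRD00' + cast(@NId + 1 AS VARCHAR))

    COMMIT
END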
Also, you can and should prevent these duplicates from existing by adding a key to your table, for this column. If you already have a PRIMARY KEY on this table, you can add an additional key using a UNIQUE constraint. This will prevent duplicates occurring in the future, no matter what programming errors occur. E.g.
ALTER TABLE tblProduct ADD CONSTRAINT UQ_Product_ID UNIQUE (ID)

Is there a way to make a TSQL variable constant?

Is there a way to make a TSQL variable constant?
No, but you can create a function and hardcode it in there and use that.
Here is an example:
CREATE FUNCTION fnConstant()
RETURNS INT
AS
BEGIN
RETURN 2
END
GO
SELECT dbo.fnConstant()
One solution, offered by Jared Ko is to use pseudo-constants.
As explained in SQL Server: Variables, Parameters or Literals? Or… Constants?:
Pseudo-Constants are not variables or parameters. Instead, they're simply views with one row, and enough columns to support your constants. With these simple rules, the SQL Engine completely ignores the value of the view but still builds an execution plan based on its value. The execution plan doesn't even show a join to the view!
Create like this:
CREATE SCHEMA ShipMethod
GO
-- Each view can only have one row.
-- Create one column for each desired constant.
-- Each column is restricted to a single value.
CREATE VIEW ShipMethod.ShipMethodID AS
SELECT CAST(1 AS INT) AS [XRQ - TRUCK GROUND]
,CAST(2 AS INT) AS [ZY - EXPRESS]
,CAST(3 AS INT) AS [OVERSEAS - DELUXE]
,CAST(4 AS INT) AS [OVERNIGHT J-FAST]
,CAST(5 AS INT) AS [CARGO TRANSPORT 5]
Then use like this:
SELECT h.*
FROM Sales.SalesOrderHeader h
JOIN ShipMethod.ShipMethodID const
ON h.ShipMethodID = const.[OVERNIGHT J-FAST]
Or like this:
SELECT h.*
FROM Sales.SalesOrderHeader h
WHERE h.ShipMethodID = (SELECT TOP 1 [OVERNIGHT J-FAST] FROM ShipMethod.ShipMethodID)
My workaround for missing constants is to give the optimizer hints about the value.
DECLARE @Constant INT = 123;
SELECT *
FROM [some_relation]
WHERE [some_attribute] = @Constant
OPTION( OPTIMIZE FOR (@Constant = 123))
This tells the query compiler to treat the variable as if it were a constant when creating the execution plan. The downside is that you have to define the value twice.
No, but good old naming conventions should be used.
declare @MY_VALUE as int
There is no built-in support for constants in T-SQL. You could use SQLMenace's approach to simulate it (though you can never be sure whether someone else has overwritten the function to return something else…), or possibly create a table containing constants, as suggested over here. Perhaps write a trigger that rolls back any changes to the ConstantValue column?
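A minimal sketch of that constants-table idea, with a rollback trigger (the table and trigger names are mine):
-- One row per named constant.
CREATE TABLE dbo.Constants (
    ConstantName sysname PRIMARY KEY,
    ConstantValue int NOT NULL
);
INSERT INTO dbo.Constants VALUES (N'MaxRetries', 5);
GO
-- Any later attempt to change the table is undone.
CREATE TRIGGER trConstantsAreReadOnly ON dbo.Constants
AFTER INSERT, UPDATE, DELETE
AS
BEGIN
    ROLLBACK TRANSACTION;
    RAISERROR('dbo.Constants is read-only.', 16, 1);
END
GO
-- Usage:
SELECT ConstantValue FROM dbo.Constants WHERE ConstantName = N'MaxRetries';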
Prior to using a SQL function run the following script to see the differences in performance:
IF OBJECT_ID('fnFalse') IS NOT NULL
    DROP FUNCTION fnFalse
GO
IF OBJECT_ID('fnTrue') IS NOT NULL
    DROP FUNCTION fnTrue
GO
CREATE FUNCTION fnTrue() RETURNS INT WITH SCHEMABINDING
AS
BEGIN
    RETURN 1
END
GO
CREATE FUNCTION fnFalse() RETURNS INT WITH SCHEMABINDING
AS
BEGIN
    RETURN ~ dbo.fnTrue()
END
GO
DECLARE @TimeStart DATETIME = GETDATE()
DECLARE @Count INT = 100000
WHILE @Count > 0 BEGIN
    SET @Count -= 1
    DECLARE @Value BIT
    SELECT @Value = dbo.fnTrue()
    IF @Value = 1
        SELECT @Value = dbo.fnFalse()
END
DECLARE @TimeEnd DATETIME = GETDATE()
PRINT CAST(DATEDIFF(ms, @TimeStart, @TimeEnd) AS VARCHAR) + ' elapsed, using function'
GO
DECLARE @TimeStart DATETIME = GETDATE()
DECLARE @Count INT = 100000
DECLARE @FALSE AS BIT = 0
DECLARE @TRUE AS BIT = ~ @FALSE
WHILE @Count > 0 BEGIN
    SET @Count -= 1
    DECLARE @Value BIT
    SELECT @Value = @TRUE
    IF @Value = 1
        SELECT @Value = @FALSE
END
DECLARE @TimeEnd DATETIME = GETDATE()
PRINT CAST(DATEDIFF(ms, @TimeStart, @TimeEnd) AS VARCHAR) + ' elapsed, using local variable'
GO
DECLARE @TimeStart DATETIME = GETDATE()
DECLARE @Count INT = 100000
WHILE @Count > 0 BEGIN
    SET @Count -= 1
    DECLARE @Value BIT
    SELECT @Value = 1
    IF @Value = 1
        SELECT @Value = 0
END
DECLARE @TimeEnd DATETIME = GETDATE()
PRINT CAST(DATEDIFF(ms, @TimeStart, @TimeEnd) AS VARCHAR) + ' elapsed, using hard coded values'
GO
If you are interested in getting the optimal execution plan for a value in a variable, you can use dynamic SQL. It makes the variable a constant:
DECLARE @var varchar(100) = 'some text'
DECLARE @sql varchar(MAX)
SET @sql = 'SELECT * FROM table WHERE col = '''+@var+''''
EXEC (@sql)
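One caveat: the value is pasted into the statement text, so every distinct value compiles its own plan, and a string variable concatenated like this must be escaped (doubling embedded quotes, or building the literal with QUOTENAME(@var, '''')) before it is safe against SQL injection.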
For enums or simple constants, a view with a single row has great performance and compile-time checking / dependency tracking (because each constant is a column name).
See Jared Ko's blog post https://blogs.msdn.microsoft.com/sql_server_appendix_z/2013/09/16/sql-server-variables-parameters-or-literals-or-constants/
create the view
CREATE VIEW ShipMethods AS
SELECT CAST(1 AS INT) AS [XRQ - TRUCK GROUND]
      ,CAST(2 AS INT) AS [ZY - EXPRESS]
      ,CAST(3 AS INT) AS [OVERSEAS - DELUXE]
      ,CAST(4 AS INT) AS [OVERNIGHT J-FAST]
      ,CAST(5 AS INT) AS [CARGO TRANSPORT 5]
use the view
SELECT h.*
FROM Sales.SalesOrderHeader h
WHERE h.ShipMethodID = ( select [OVERNIGHT J-FAST] from ShipMethods )
Okay, let's see:
Constants are immutable values which are known at compile time and do not change for the life of the program.
That means you can never have a constant in SQL Server:
declare @myvalue as int
set @myvalue = 5
set @myvalue = 10 --oops we just changed it
The value just changed.
Since there is no built-in support for constants, my solution is very simple.
Since this is not supported:
Declare Constant @supplement int = 240
SELECT price + @supplement
FROM what_does_it_cost
I would simply convert it to
SELECT price + 240/*CONSTANT:supplement*/
FROM what_does_it_cost
Obviously, this relies on the whole thing (the value without trailing space and the comment) to be unique. Changing it is possible with a global search and replace.
There is no such thing as "creating a constant" in database literature. Constants exist as they are, and are often simply called values. One can declare a variable and assign a value (constant) to it. From a scholastic view:
DECLARE @two INT
SET @two = 2
Here @two is a variable and 2 is a value/constant.
SQL Server 2022 (currently only available as a preview) is now able to inline the function proposed by SQLMenace, which should prevent the performance hit described in some of the comments.
CREATE FUNCTION fnConstant()
RETURNS INT
AS
BEGIN
    RETURN 2
END
GO
SELECT is_inlineable FROM sys.sql_modules WHERE [object_id]=OBJECT_ID('dbo.fnConstant');
is_inlineable
-------------
1
SELECT dbo.fnConstant()
(screenshot: execution plan showing the constant inlined)
To test whether it also uses the value coming from the function, I added a second function returning the value 1:
CREATE FUNCTION fnConstant1()
RETURNS INT
AS
BEGIN
RETURN 1
END
GO
Create a temp table with about 500k rows with value 1 and 4 rows with value 2:
DROP TABLE IF EXISTS #temp;
create table #temp (value_int INT)
DECLARE @counter INT;
SET @counter = 0
WHILE @counter <= 500000
BEGIN
    INSERT INTO #temp VALUES (1);
    SET @counter = @counter + 1
END
SET @counter = 0
WHILE @counter <= 3
BEGIN
    INSERT INTO #temp VALUES (2);
    SET @counter = @counter + 1
END
create index i_temp on #temp (value_int);
Using the estimated plan, we can see that the optimizer expects 500k rows for
select * from #temp where value_int = dbo.fnConstant1(); --Returns 500001 rows
(screenshot: estimated plan for fnConstant1)
and 4 rows for
select * from #temp where value_int = dbo.fnConstant(); --Returns 4 rows
(screenshot: estimated plan for fnConstant)
Robert's performance test is interesting, and even in late 2022 the scalar functions are much slower (by an order of magnitude) than variables or literals. A view (as mbobka suggested) is somewhere in between when used for this same test.
That said, using a loop like that in SQL Server is not something I'd ever do, because I'd normally be operating on a whole set.
In SQL 2019, if you use schema-bound functions in a set operation, the difference is much less noticeable.
I created and populated a test table:
create table #testTable (id int identity(1, 1) primary key, value tinyint);
And changed the test so that instead of looping and changing a variable, it queries the test table and returns true or false depending on the value in the test table, e.g.:
insert #testTable(value)
select case when value > 127
            then @FALSE
            else @TRUE
       end
from #testTable with(nolock)
I tested 5 scenarios:
hard-coded values
local variables
scalar functions
a view
a table-valued function
Running the test 10 times yielded the following results:
scenario                 min   max   avg
---------------------    ---   ---   ---
scalar functions         233   259   240
hard-coded values        236   265   243
local variables          235   278   245
table-valued function    243   272   253
view                     244   267   254
Suggesting to me that for set-based work in (at least) 2019 and better, there's not much in it.
set nocount on;
go

-- create test data table
drop table if exists #testTable;
create table #testTable (id int identity(1, 1) primary key, value tinyint);

-- populate test data
insert #testTable (value)
select top (1000000) convert(binary (1), newid())
from sys.all_objects a
   , sys.all_objects b
go

-- scalar function for True
drop function if exists fnTrue;
go
create function dbo.fnTrue() returns bit with schemabinding as
begin
    return 1
end
go

-- scalar function for False
drop function if exists fnFalse;
go
create function dbo.fnFalse () returns bit with schemabinding as
begin
    return 0
end
go

-- table-valued function for booleans
drop function if exists dbo.tvfBoolean;
go
create function tvfBoolean() returns table with schemabinding as
return
    select convert(bit, 1) as true, convert(bit, 0) as false
go

-- view for booleans
drop view if exists dbo.viewBoolean;
go
create view dbo.viewBoolean with schemabinding as
select convert(bit, 1) as true, convert(bit, 0) as false
go

-- create table for results
drop table if exists #testResults
create table #testResults (id int identity(1,1), test int, elapsed bigint, message varchar(1000));

-- define tests
declare @tests table(testNumber int, description nvarchar(100), sql nvarchar(max))
insert @tests values
(1, N'hard-coded values', N'
    declare @testTable table (id int, value bit);
    insert @testTable(id, value)
    select id, case when t.value > 127
                    then 0
                    else 1
               end
    from #testTable t')
, (2, N'local variables', N'
    declare @FALSE as bit = 0
    declare @TRUE as bit = 1
    declare @testTable table (id int, value bit);
    insert @testTable(id, value)
    select id, case when t.value > 127
                    then @FALSE
                    else @TRUE
               end
    from #testTable t')
, (3, N'scalar functions', N'
    declare @testTable table (id int, value bit);
    insert @testTable(id, value)
    select id, case when t.value > 127
                    then dbo.fnFalse()
                    else dbo.fnTrue()
               end
    from #testTable t')
, (4, N'view', N'
    declare @testTable table (id int, value bit);
    insert @testTable(id, value)
    select id, case when value > 127
                    then b.false
                    else b.true
               end
    from #testTable t with(nolock), viewBoolean b')
, (5, N'table-valued function', N'
    declare @testTable table (id int, value bit);
    insert @testTable(id, value)
    select id, case when value > 127
                    then b.false
                    else b.true
               end
    from #testTable with(nolock), dbo.tvfBoolean() b')
;
declare @testNumber int, @description varchar(100), @sql nvarchar(max)
declare @testRuns int = 10;

-- execute tests
while @testRuns > 0 begin
    set @testRuns -= 1
    declare testCursor cursor for select testNumber, description, sql from @tests;
    open testCursor
    fetch next from testCursor into @testNumber, @description, @sql
    while @@FETCH_STATUS = 0 begin
        declare @TimeStart datetime2(7) = sysdatetime();
        execute sp_executesql @sql;
        declare @TimeEnd datetime2(7) = sysdatetime()
        insert #testResults(test, elapsed, message)
        select @testNumber, datediff_big(ms, @TimeStart, @TimeEnd), @description
        fetch next from testCursor into @testNumber, @description, @sql
    end
    close testCursor
    deallocate testCursor
end

-- display results
select test, message, count(*) runs, min(elapsed) as min, max(elapsed) as max, avg(elapsed) as avg
from #testResults
group by test, message
order by avg(elapsed);
SQLMenace's answer is the best approach if the requirement is a temporary constant usable within scripts, i.e. across multiple GO statements/batches.
Just create the function in tempdb, so you have no impact on the target database.
One practical example of this is a database create script which writes a control value at the end containing the logical schema version. At the top of the file are some comments with the change history, etc., but in practice most developers forget to scroll down and update the schema version at the bottom of the file.
Using the code below allows a visible schema-version constant to be defined at the top, before the database script (copied from the Generate Scripts feature of SSMS) creates the database, but used at the end. This puts it right in the developer's face, next to the change history and other comments, so they are very likely to update it.
For example:
use tempdb
go
create function dbo.MySchemaVersion()
returns int
as
begin
    return 123
end
go
use master
go
-- Big long database create script with multiple batches...
print 'Creating database schema version ' + CAST(tempdb.dbo.MySchemaVersion() as NVARCHAR) + '...'
go
-- ...
go
-- ...
go
use MyDatabase
go
-- Update schema version with constant at end (not normally possible as GO puts
-- local #variables out of scope)
insert MyConfigTable values ('SchemaVersion', tempdb.dbo.MySchemaVersion())
go
-- Clean-up
use tempdb
drop function MySchemaVersion
go
