Looking for a General "Minimum" User Defined Function - sql-server

I created the following function to simplify a piece of particularly complex code.
CREATE FUNCTION [dbo].[DSGetMinimumInt] (@First INT, @Second INT)
RETURNS INT
AS
BEGIN
    IF @First < @Second
        RETURN @First
    RETURN @Second
END
However, it only works for the INT datatype. I know I could create one for numeric and possibly for Varchar and Datetime.
Is it possible to create one master "Minimum" function to deal with them all? Has anyone done this?
I've Googled it, but come up empty.

Here is a basic one you can work with. I'd be careful using this in queries, as it will slow them down in proportion to the number of rows it is used on:
CREATE FUNCTION [dbo].[DSGetMinimum] (@First sql_variant, @Second sql_variant)
RETURNS varchar(8000)
AS
BEGIN
    DECLARE @Value varchar(8000)
    IF SQL_VARIANT_PROPERTY(@First,'BaseType')=SQL_VARIANT_PROPERTY(@Second,'BaseType')
        OR @First IS NULL OR @Second IS NULL
    BEGIN
        IF SQL_VARIANT_PROPERTY(@First,'BaseType')='datetime'
        BEGIN
            IF CONVERT(datetime,@First)<CONVERT(datetime,@Second)
            BEGIN
                SET @Value=CONVERT(char(23),@First,121)
            END
            ELSE
            BEGIN
                SET @Value=CONVERT(char(23),@Second,121)
            END
        END --IF datetime
        ELSE
        BEGIN
            IF @First < @Second
                SET @Value=CONVERT(varchar(8000),@First)
            ELSE
                SET @Value=CONVERT(varchar(8000),@Second)
        END
    END --IF types the same
    RETURN @Value
END
GO
EDIT
Test Code:
DECLARE @D1 datetime , @D2 datetime
DECLARE @I1 int , @I2 int
DECLARE @V1 varchar(5) , @V2 varchar(5)
SELECT @D1='1/1/2010', @D2='1/2/2010'
    ,@I1=5 , @I2=999
    ,@V1='abc' , @V2='xyz'
PRINT dbo.DSGetMinimum(@D1,@D2)
PRINT dbo.DSGetMinimum(@I1,@I2)
PRINT dbo.DSGetMinimum(@V1,@V2)
Test Output:
2010-01-01 00:00:00.000
5
abc
If you are going to use this in a query, I would just use an inline CASE expression, which would be MUCH faster than the UDF:
CASE
    WHEN @valueAnyType1<@ValueAnyType2 THEN @valueAnyType1
    ELSE @ValueAnyType2
END
You can add protection for NULLs if necessary:
CASE
    WHEN @valueAnyType1<=ISNULL(@ValueAnyType2,@valueAnyType1) THEN @valueAnyType1
    ELSE @ValueAnyType2
END

All major databases except SQL Server support LEAST and GREATEST which do what you want.
In SQL Server, you can emulate it this way:
WITH q (col1, col2) AS
(
SELECT 'test1', 'test2'
UNION ALL
SELECT 'test3', 'test4'
)
SELECT (
SELECT MIN(col)
FROM (
SELECT col1 AS col
UNION ALL
SELECT col2
) qa
)
FROM q
This will be a little bit less efficient than a UDF, though.

Azure SQL DB (and SQL Server 2022 and later) now supports GREATEST and LEAST.
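For example, on a platform where these functions are available (this sketch assumes Azure SQL Database or SQL Server 2022 and later; the variable names are made up):
DECLARE @First INT = 5, @Second INT = 999;
SELECT LEAST(@First, @Second) AS MinValue,     -- 5
       GREATEST(@First, @Second) AS MaxValue   -- 999
Both take a list of expressions and return the smallest/largest value, so the two-argument "minimum" above becomes a one-liner.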

Related

SQL server Function - Take column name as input parameter

I need to write a SQL function to return column-specific values, so I am passing the column name as a parameter to the SQL function to return its corresponding value. Here is the sample function:
CREATE FUNCTION GETDATETIME(@columnName VARCHAR(100))
RETURNS DATETIME
AS
BEGIN
    RETURN (SELECT TOP 1 @columnName FROM TEST_TABLE)
END
GO
The above function seems straightforward, but it is not working as expected.
And when I execute the function
SELECT dbo.GETDATETIME('DATETIMECOLUMNNAME')
I am getting this error:
Conversion failed when converting date and/or time from character string.
Can someone help me to identify the issue?
For that you would need to write dynamic SQL, but functions do not support EXECUTE statements. So you need to write multiple IF conditions, one for each column.
CREATE FUNCTION GETDATETIME(@columnName VARCHAR(100))
RETURNS DATETIME
AS
BEGIN
    DECLARE @RESULT DATETIME;
    IF (@columnName = 'ABC')
    BEGIN
        SELECT TOP 1 @RESULT = [ABC] FROM TEST_TABLE
    END
    ELSE IF (@columnName = 'DEF')
    BEGIN
        SELECT TOP 1 @RESULT = [DEF] FROM TEST_TABLE
    END
    ELSE IF (@columnName = 'GHI')
    BEGIN
        SELECT TOP 1 @RESULT = [GHI] FROM TEST_TABLE
    END
    RETURN @RESULT
END
GO
Edit 2:
If your columns always return DATETIME, then you can do it like below.
CREATE TABLE A_DUM (ID INT, STARTDATE DATETIME, ENDDATE DATETIME, MIDDLEDATE DATETIME)
INSERT INTO A_DUM
SELECT 1, '2019-07-24 11:35:58.910', '2019-07-28 11:35:58.910', '2019-07-26 11:35:58.910'
UNION ALL
SELECT 2, '2019-07-29 11:35:58.910', '2019-08-01 11:35:58.910', '2019-07-24 11:35:58.910'
And your function would look like below:
CREATE FUNCTION GETDATETIME(@columnName VARCHAR(100))
RETURNS DATETIME
AS
BEGIN
    DECLARE @RESULT DATETIME;
    SELECT TOP 1 @RESULT = CAST(PROP AS DATETIME)
    FROM A_DUM
    UNPIVOT
    (
        PROP FOR VAL IN (STARTDATE, ENDDATE, MIDDLEDATE)
    ) UP
    WHERE VAL = @columnName
    RETURN @RESULT
END
GO
There's a workaround to this, similar to @Shakeer's answer - if you are attempting to GROUP BY or perform a WHERE on a column name, then you can just use a CASE expression to create a clause that matches on specific column names (if you know them).
Obviously this doesn't work very well if you have many columns to hard-code, but at least it's a way to achieve the general idea.
E.g. with WHERE clause:
WHERE
    (CASE
        WHEN @columnname = 'FirstColumn' THEN FirstColumnCondition
        WHEN @columnname = 'SecondColumn' THEN SecondColumnCondition
        ELSE SomeOtherColumnCondition
    END)
Or with GROUP BY:
GROUP BY
    (CASE
        WHEN @columnname = 'FirstColumn' THEN FirstColumnGroup
        WHEN @columnname = 'SecondColumn' THEN SecondColumnGroup
        ELSE SomeOtherColumnGroup
    END)
No, you cannot use dynamic SQL inside functions in SQL Server. So it is not possible to achieve this with a function; you can, however, use a stored procedure with an output parameter for the same purpose.
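As a rough sketch of that approach (the procedure name is made up, TEST_TABLE is taken from the question, and this is an outline rather than a drop-in solution):
CREATE PROCEDURE GETDATETIMEPROC
    @columnName SYSNAME,
    @result DATETIME OUTPUT
AS
BEGIN
    DECLARE @sql NVARCHAR(MAX);
    -- QUOTENAME guards against injection through the column name
    SET @sql = N'SELECT TOP 1 @r = ' + QUOTENAME(@columnName) + N' FROM TEST_TABLE';
    EXEC sp_executesql @sql, N'@r DATETIME OUTPUT', @r = @result OUTPUT;
END
GO
-- Usage:
DECLARE @dt DATETIME;
EXEC GETDATETIMEPROC 'STARTDATE', @dt OUTPUT;
SELECT @dt;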

Query uses Index Scan instead of Seek when OR @param = 1 added to WHERE clause

I'm going through stored procedures to make them sargable and I noticed something unexpected about how the index was used.
There's a non-clustered index on DateColumn, and a clustered index on the table (not directly referenced in the query).
The following uses an index seek on the non-clustered index that has DateColumn as an index column:
DECLARE @timestamp as datetime
SET @timestamp = '2014-01-01'
SELECT column1, column2 FROM Table WHERE DateColumn > @timestamp
However, the following uses an index scan:
DECLARE @timestamp as datetime
DECLARE @flag as bit
SET @timestamp = '2014-01-01'
SET @flag = 0
SELECT column1, column2 FROM Table WHERE (DateColumn > @timestamp) OR (@flag = 1)
I put the brackets in just in case, but of course it made no difference.
Because the @flag = 1 predicate has nothing to do with the table, I was expecting a seek in both cases. Out of interest, if I change it to 0 = 1 it uses an index seek again. The @flag value is a parameter for the procedure that tells the query to return all records, so it's not something I can hard-code in reality.
Is there a way to make this use a seek instead of a scan? The only option I can think of is the following, however in reality the queries are much more complex, so duplication like this hurts readability and maintainability:
DECLARE @timestamp as datetime
DECLARE @flag as bit
SET @timestamp = '2014-01-01'
SET @flag = 0
IF @flag = 1
BEGIN
    SELECT column1, column2 FROM Table
END
ELSE
BEGIN
    SELECT column1, column2 FROM Table WHERE DateColumn > @timestamp
END
Try with dynamic SQL like this.
DECLARE @flag BIT,
        @query NVARCHAR(500)
SET @flag=0
SET @query='
SELECT <columnlist>
FROM <tablename>
WHERE columnname = value
or 1=' + CONVERT(NVARCHAR(1), @flag)
EXEC sp_executesql @query
Your dynamic solution is actually better, because you won't get caught out when you pass in @flag=1 the first time and that plan is reused for all subsequent calls. As @RaduGheorghiu says, a scan is better than a seek in these cases.
If it were me I would have 2 procedures, one for "get everything" and one for "get for date". Two procedures, two usages, two query plans. If the repetition bothers you, you can introduce a view.
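A rough sketch of that two-procedure idea (all object names here are illustrative, not from the original system):
CREATE PROCEDURE dbo.GetRows_All
AS
    SELECT column1, column2 FROM dbo.[Table];
GO
CREATE PROCEDURE dbo.GetRows_ForDate
    @timestamp DATETIME
AS
    SELECT column1, column2 FROM dbo.[Table] WHERE DateColumn > @timestamp;
GO
-- Optional thin wrapper that keeps a single entry point for callers
CREATE PROCEDURE dbo.GetRows
    @timestamp DATETIME,
    @flag BIT
AS
    IF @flag = 1
        EXEC dbo.GetRows_All;
    ELSE
        EXEC dbo.GetRows_ForDate @timestamp;
GO
Each inner procedure gets its own cached plan, so the date-filtered version can keep using the index seek no matter how the wrapper is called.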
Just for completeness I'm going to post an option that I just realised works in my specific situation. This is probably what I'm going to use due to simplicity, however this probably won't work for 99.9% of cases, so I don't consider it a better answer than the dynamic SQL.
declare @flag as int
declare @date as datetime
set @flag = 1
set @date = '2015-08-11 09:12:08.553'
set @date = (select case @flag when 1 then '1753-01-01' else @date end)
select Column1, Column2
from Table_1
where DateColumn > @date
The above works because the DateColumn stores the modified date for the record (I'm returning deltas). It has a default value of getdate() and is set to getdate() on updates. This means in this specific case I know that for a value of @date = '1753-01-01' all records will be returned.

How to compare multiple values in one column against a delimited string in a stored procedure

Ok, first question, and you guys scare me with how much you know, so please be gentle...
I am trying to pass in a delimited string and convert it to an array in a stored procedure and then use the array to check against values in a column. Here's the catch. I'm trying to take the preexisting table, that checks for one association and expand it to allow for multiple associations.
So the column annAssociations might have three ids, 4,16,32, but I need to check if it belongs to the groupIds being queried, 6,12,32. Since one of the values matched, it should return that row.
Here is the procedure as it exists.
CREATE PROCEDURE [dbo].[sp_annList]
    -- Date Range of Announcements.
    @dateBegin datetime,
    @dateEnd datetime,
    -- Announcement type and associations.
    @annType varchar(50),
    @annAssociation varchar(255)
AS
BEGIN
    -- Set the SELECT statement for the announcements.
    SET NOCOUNT ON;
    -- See if this should be a limited query
    IF @annAssociation <> ''
    BEGIN
        SELECT *
        FROM announcements
        WHERE (date1 <= @dateEnd AND date2 >= @dateBegin) -- See if the announcement falls in the date range.
            AND annType = @annType -- See if the announcement is the right type.
            AND annAssociations LIKE (select SplitText from dbo.fnSplit(@annAssociation, ','))
        ORDER BY title
    END
    ELSE
    BEGIN
        SELECT *
        FROM announcements
        WHERE (date1 <= @dateEnd AND date2 >= @dateBegin)
            AND annType = @annType
        ORDER BY title
    END
END
And here is the method I'm using to convert the delimited string and store it in a temporary table.
CREATE FUNCTION [dbo].[fnSplit](@text text, @delimitor nchar(1))
RETURNS
@table TABLE
(
    [Index] int Identity(0,1),
    [SplitText] varchar(10)
)
AS
BEGIN
    declare @current varchar(10)
    declare @endIndex int
    declare @textlength int
    declare @startIndex int
    set @startIndex = 1
    if(@text is not null)
    begin
        set @textLength = datalength(@text)
        while(1=1)
        begin
            set @endIndex = charindex(@delimitor, @text, @startIndex)
            if(@endIndex != 0)
            begin
                set @current = substring(@text, @startIndex, @endIndex - @startIndex)
                insert into @table ([SplitText]) values(@current)
                set @startIndex = @endIndex + 1
            end
            else
            begin
                set @current = substring(@text, @startIndex, datalength(@text) - @startIndex + 1)
                insert into @table ([SplitText]) values(@current)
                break
            end
        end
    end
    return
END
Sorry for the long question. I just wanted to get all the info out there. I've been researching for days, and I either don't know where to look or am missing something obvious.
You probably won't get much better performance than this approach (well you might see better performance with a CLR split function but at 3 or 4 items you won't see much difference):
SELECT *
FROM announcements AS a
WHERE ...
AND EXISTS (SELECT 1 FROM dbo.fnSplit(@annAssociation, ',') AS n
            WHERE ',' + a.annAssociations + ',' LIKE '%,' + n.SplitText + ',%');
The key here is that you only need to split up one of the lists.
You really should stop storing multiple values in the annAssocations column. Each id is a separate piece of data and should be stored separately (in addition to better conforming to normalization, it will make queries like this simpler).
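As a sketch of what that could look like (the child table, its columns, and the id key on announcements are assumptions, not the poster's actual schema), the associations move to their own rows and the check becomes a join:
CREATE TABLE dbo.AnnouncementAssociations
(
    announcementId INT NOT NULL,  -- FK to announcements
    associationId  INT NOT NULL,
    PRIMARY KEY (announcementId, associationId)
);
-- The filter is then a join instead of string matching:
SELECT DISTINCT a.*
FROM announcements AS a
JOIN dbo.AnnouncementAssociations AS aa
    ON aa.announcementId = a.id
JOIN dbo.fnSplit(@annAssociation, ',') AS n
    ON aa.associationId = n.SplitText   -- SplitText is varchar, implicitly converted to int
WHERE a.date1 <= @dateEnd AND a.date2 >= @dateBegin
  AND a.annType = @annType;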

Splitting A String into Multiple Values in SQL Server 2000

I'm asking the question on behalf of someone that works for my client that asked me this. I'm actually more familiar with mySQL than SQL Server, but unfortunately, SQL Server is what the client has used for years.
The question basically this: Is there a way in SQL Server to split a string into multiple values (e.g. array?) that can be used in a WHERE statement.
Here's a PHP example of what I'm talking about.
<?php
$string = "10,11,12,13";
$explode = explode(",", $string);
?>
$explode would be equal to array(10,11,12,13). What I need to do is something like this:
SELECT {long field list} FROM {tables} WHERE hour IN SPLIT(",", "10,11,12,13")
With SPLIT being my pseudo-code function that performs the splitting
The reason why I'm not doing this in, let's say, PHP, is because the query is being constructed by reporting software where we can't perform logic (such as my PHP code) before sending it to the database, and the multiple values are being returned by the software as a single string separated by pipes (|).
Unfortunately I do not have access to the reporting software (I think he said it was called Logi or LogiReports or something) or the query my associate was drafting up, but all that is really important for this question is the WHERE clause.
Any ideas?
Dynamic SQL can be used:
declare @in varchar(10)
set @in = '10,11,12,13'
exec ('SELECT {long field list} FROM {tables} WHERE hour IN (' + @in + ')')
Several methods are described here: Arrays and Lists in SQL Server.
For short strings, I prefer a numbers table.
I could copy/paste from there, but it really is worth reading.
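A minimal sketch of the numbers-table idea, which works on older versions as well (it assumes a dbo.Numbers table with an n column covering 1 up to the maximum list length):
DECLARE @list VARCHAR(8000)
SET @list = ',' + '10,11,12,13' + ','
SELECT SUBSTRING(@list, n + 1, CHARINDEX(',', @list, n + 1) - n - 1) AS value
FROM dbo.Numbers
WHERE n < LEN(@list)
  AND SUBSTRING(@list, n, 1) = ','
The result can then be used directly, e.g. WHERE hour IN (SELECT value FROM ...).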
You could use a function which receives a string containing the IDs separated by a delimiter and returns them as a table, which you can then use in a subquery, like this:
SELECT {long field list} FROM {tables} WHERE hour IN
(SELECT OrderID from dbo.SplitOrderIDs('2001,2002'))
ALTER FUNCTION [dbo].[SplitOrderIDs]
(
    @OrderList varchar(500)
)
RETURNS
@ParsedList table
(
    OrderID int
)
AS
BEGIN
    DECLARE @OrderID varchar(10), @Pos int
    SET @OrderList = LTRIM(RTRIM(@OrderList)) + ','
    SET @Pos = CHARINDEX(',', @OrderList, 1)
    IF REPLACE(@OrderList, ',', '') <> ''
    BEGIN
        WHILE @Pos > 0
        BEGIN
            SET @OrderID = LTRIM(RTRIM(LEFT(@OrderList, @Pos - 1)))
            IF @OrderID <> ''
            BEGIN
                INSERT INTO @ParsedList (OrderID)
                VALUES (CAST(@OrderID AS int)) --Use appropriate conversion
            END
            SET @OrderList = RIGHT(@OrderList, LEN(@OrderList) - @Pos)
            SET @Pos = CHARINDEX(',', @OrderList, 1)
        END
    END
    RETURN
END
You can use this stored procedure; I hope it is useful for you.
CREATE PROCEDURE SP_STRING_SPLIT (@String varchar(8000), @Separator char(10), @pos_select int = 0)
AS
BEGIN
    SET NOCOUNT ON
    DECLARE @Caracter varchar(8000)
    DECLARE @Pos int
    SET @Pos = 1
    SET @Caracter = ''
    CREATE TABLE #ARRAY
    (   String varchar(8000) NOT NULL,
        Pos int NOT NULL IDENTITY (1, 1)
    )
    WHILE (@Pos <= len(@String))
    BEGIN
        IF substring(@String, @Pos, 1) = Ltrim(Rtrim(@Separator))
        BEGIN
            INSERT INTO #ARRAY SELECT @Caracter
            SET @Caracter = ''
        END
        ELSE
        BEGIN
            -- build up the current word
            SET @Caracter = @Caracter + substring(@String, @Pos, 1)
        END
        IF @Pos = len(@String)
        BEGIN
            INSERT INTO #ARRAY SELECT @Caracter
        END
        SET @Pos = @Pos + 1
    END
    SELECT Pos, String FROM #ARRAY WHERE (Pos = @pos_select OR @pos_select = 0)
END
GO
exec SP_STRING_SPLIT 'HELLO, HOW ARE YOU?',',',0

Is there a way to make a TSQL variable constant?

Is there a way to make a TSQL variable constant?
No, but you can create a function and hardcode it in there and use that.
Here is an example:
CREATE FUNCTION fnConstant()
RETURNS INT
AS
BEGIN
RETURN 2
END
GO
SELECT dbo.fnConstant()
One solution, offered by Jared Ko is to use pseudo-constants.
As explained in SQL Server: Variables, Parameters or Literals? Or… Constants?:
Pseudo-Constants are not variables or parameters. Instead, they're simply views with one row, and enough columns to support your constants. With these simple rules, the SQL Engine completely ignores the value of the view but still builds an execution plan based on its value. The execution plan doesn't even show a join to the view!
Create like this:
CREATE SCHEMA ShipMethod
GO
-- Each view can only have one row.
-- Create one column for each desired constant.
-- Each column is restricted to a single value.
CREATE VIEW ShipMethod.ShipMethodID AS
SELECT CAST(1 AS INT) AS [XRQ - TRUCK GROUND]
,CAST(2 AS INT) AS [ZY - EXPRESS]
,CAST(3 AS INT) AS [OVERSEAS - DELUXE]
,CAST(4 AS INT) AS [OVERNIGHT J-FAST]
,CAST(5 AS INT) AS [CARGO TRANSPORT 5]
Then use like this:
SELECT h.*
FROM Sales.SalesOrderHeader h
JOIN ShipMethod.ShipMethodID const
ON h.ShipMethodID = const.[OVERNIGHT J-FAST]
Or like this:
SELECT h.*
FROM Sales.SalesOrderHeader h
WHERE h.ShipMethodID = (SELECT TOP 1 [OVERNIGHT J-FAST] FROM ShipMethod.ShipMethodID)
My workaround for missing constants is to give hints about the value to the optimizer.
DECLARE @Constant INT = 123;
SELECT *
FROM [some_relation]
WHERE [some_attribute] = @Constant
OPTION (OPTIMIZE FOR (@Constant = 123))
This tells the query compiler to treat the variable as if it was a constant when creating the execution plan. The down side is that you have to define the value twice.
No, but good old naming conventions should be used.
declare @MY_VALUE as int
There is no built-in support for constants in T-SQL. You could use SQLMenace's approach to simulate it (though you can never be sure whether someone else has overwritten the function to return something else…), or possibly write a table containing constants, as suggested over here. Perhaps write a trigger that rolls back any changes to the ConstantValue column?
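A rough sketch of that constants-table idea (all names are illustrative; the trigger simply rejects changes):
CREATE TABLE dbo.Constants
(
    ConstantName  SYSNAME PRIMARY KEY,
    ConstantValue INT NOT NULL
);
INSERT INTO dbo.Constants (ConstantName, ConstantValue) VALUES ('MaxRetries', 3);
GO
CREATE TRIGGER dbo.trg_Constants_ReadOnly
ON dbo.Constants
AFTER UPDATE, DELETE
AS
BEGIN
    RAISERROR('Constants are read-only.', 16, 1);
    ROLLBACK TRANSACTION;
END
GO
-- Usage:
SELECT ConstantValue FROM dbo.Constants WHERE ConstantName = 'MaxRetries';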
Prior to using a SQL function run the following script to see the differences in performance:
IF OBJECT_ID('fnFalse') IS NOT NULL
DROP FUNCTION fnFalse
GO
IF OBJECT_ID('fnTrue') IS NOT NULL
DROP FUNCTION fnTrue
GO
CREATE FUNCTION fnTrue() RETURNS INT WITH SCHEMABINDING
AS
BEGIN
RETURN 1
END
GO
CREATE FUNCTION fnFalse() RETURNS INT WITH SCHEMABINDING
AS
BEGIN
RETURN ~ dbo.fnTrue()
END
GO
DECLARE @TimeStart DATETIME = GETDATE()
DECLARE @Count INT = 100000
WHILE @Count > 0 BEGIN
    SET @Count -= 1
    DECLARE @Value BIT
    SELECT @Value = dbo.fnTrue()
    IF @Value = 1
        SELECT @Value = dbo.fnFalse()
END
DECLARE @TimeEnd DATETIME = GETDATE()
PRINT CAST(DATEDIFF(ms, @TimeStart, @TimeEnd) AS VARCHAR) + ' elapsed, using function'
GO
DECLARE @TimeStart DATETIME = GETDATE()
DECLARE @Count INT = 100000
DECLARE @FALSE AS BIT = 0
DECLARE @TRUE AS BIT = ~ @FALSE
WHILE @Count > 0 BEGIN
    SET @Count -= 1
    DECLARE @Value BIT
    SELECT @Value = @TRUE
    IF @Value = 1
        SELECT @Value = @FALSE
END
DECLARE @TimeEnd DATETIME = GETDATE()
PRINT CAST(DATEDIFF(ms, @TimeStart, @TimeEnd) AS VARCHAR) + ' elapsed, using local variable'
GO
DECLARE @TimeStart DATETIME = GETDATE()
DECLARE @Count INT = 100000
WHILE @Count > 0 BEGIN
    SET @Count -= 1
    DECLARE @Value BIT
    SELECT @Value = 1
    IF @Value = 1
        SELECT @Value = 0
END
DECLARE @TimeEnd DATETIME = GETDATE()
PRINT CAST(DATEDIFF(ms, @TimeStart, @TimeEnd) AS VARCHAR) + ' elapsed, using hard coded values'
GO
If you are interested in getting an optimal execution plan for a value in a variable, you can use dynamic SQL. It effectively makes the variable a constant.
DECLARE @var varchar(100) = 'some text'
DECLARE @sql varchar(MAX)
SET @sql = 'SELECT * FROM table WHERE col = ''' + @var + ''''
EXEC (@sql)
For enums or simple constants, a view with a single row has great performance and compile-time checking / dependency tracking (because it's a column name).
See Jared Ko's blog post https://blogs.msdn.microsoft.com/sql_server_appendix_z/2013/09/16/sql-server-variables-parameters-or-literals-or-constants/
create the view
CREATE VIEW ShipMethods AS
SELECT CAST(1 AS INT) AS [XRQ - TRUCK GROUND]
,CAST(2 AS INT) AS [ZY - EXPRESS]
,CAST(3 AS INT) AS [OVERSEAS - DELUXE]
, CAST(4 AS INT) AS [OVERNIGHT J-FAST]
,CAST(5 AS INT) AS [CARGO TRANSPORT 5]
use the view
SELECT h.*
FROM Sales.SalesOrderHeader h
WHERE h.ShipMethodID = ( select [OVERNIGHT J-FAST] from ShipMethods )
Okay, let's see:
Constants are immutable values which are known at compile time and do not change for the life of the program
that means you can never have a constant in SQL Server
declare @myvalue as int
set @myvalue = 5
set @myvalue = 10 --oops we just changed it
the value just changed
Since there is no built-in support for constants, my solution is very simple.
Since this is not supported:
Declare Constant @supplement int = 240
SELECT price + @supplement
FROM what_does_it_cost
I would simply convert it to
SELECT price + 240/*CONSTANT:supplement*/
FROM what_does_it_cost
Obviously, this relies on the whole thing (the value without trailing space and the comment) being unique. Changing it is possible with a global search and replace.
There is no such thing as "creating a constant" in database literature. Constants exist as they are and are often called values. One can declare a variable and assign a value (constant) to it. From a scholastic view:
DECLARE @two INT
SET @two = 2
Here @two is a variable and 2 is a value/constant.
SQL Server 2022 (currently only available as a preview) is now able to inline the function proposed by SQLMenace, which should prevent the performance hit described in some comments.
CREATE FUNCTION fnConstant() RETURNS INT AS
BEGIN
    RETURN 2
END
GO
SELECT is_inlineable FROM sys.sql_modules WHERE [object_id]=OBJECT_ID('dbo.fnConstant');
is_inlineable
1
SELECT dbo.fnConstant()
(execution plan screenshot)
To test whether it also uses the value coming from the function, I added a second function returning the value 1:
CREATE FUNCTION fnConstant1()
RETURNS INT
AS
BEGIN
RETURN 1
END
GO
Create Temp Table with about 500k rows with Value 1 and 4 rows with Value 2:
DROP TABLE IF EXISTS #temp;
create table #temp (value_int INT)
DECLARE @counter INT;
SET @counter = 0
WHILE @counter <= 500000
BEGIN
    INSERT INTO #temp VALUES (1);
    SET @counter = @counter + 1
END
SET @counter = 0
WHILE @counter <= 3
BEGIN
    INSERT INTO #temp VALUES (2);
    SET @counter = @counter + 1
END
create index i_temp on #temp (value_int);
Using the estimated execution plan we can see that the optimizer expects 500k rows for
select * from #temp where value_int = dbo.fnConstant1(); --Returns 500001 rows
(execution plan screenshot: Constant 1)
and 4 rows for
select * from #temp where value_int = dbo.fnConstant(); --Returns 4 rows
(execution plan screenshot: Constant 2)
Robert's performance test is interesting. And even in late 2022, the scalar functions are much slower (by an order of magnitude) than variables or literals. A view (as suggested by mbobka) is somewhere in-between when used for this same test.
That said, using a loop like that in SQL Server is not something I'd ever do, because I'd normally be operating on a whole set.
In SQL 2019, if you use schema-bound functions in a set operation, the difference is much less noticeable.
I created and populated a test table:
create table #testTable (id int identity(1, 1) primary key, value tinyint);
And changed the test so that instead of looping and changing a variable, it queries the test table and returns true or false depending on the value in the test table, e.g.:
insert #testTable(value)
select case when value > 127
then @FALSE
else @TRUE
end
from #testTable with(nolock)
I tested 5 scenarios:
hard-coded values
local variables
scalar functions
a view
a table-valued function
Running the test 10 times yielded the following results:
scenario                  min   max   avg
scalar functions          233   259   240
hard-coded values         236   265   243
local variables           235   278   245
table-valued function     243   272   253
view                      244   267   254
This suggests to me that, for set-based work in (at least) 2019 and later, there's not much in it.
set nocount on;
go
-- create test data table
drop table if exists #testTable;
create table #testTable (id int identity(1, 1) primary key, value tinyint);
-- populate test data
insert #testTable (value)
select top (1000000) convert(binary (1), newid())
from sys.all_objects a
, sys.all_objects b
go
-- scalar function for True
drop function if exists fnTrue;
go
create function dbo.fnTrue() returns bit with schemabinding as
begin
return 1
end
go
-- scalar function for False
drop function if exists fnFalse;
go
create function dbo.fnFalse () returns bit with schemabinding as
begin
return 0
end
go
-- table-valued function for booleans
drop function if exists dbo.tvfBoolean;
go
create function tvfBoolean() returns table with schemabinding as
return
select convert(bit, 1) as true, convert(bit, 0) as false
go
-- view for booleans
drop view if exists dbo.viewBoolean;
go
create view dbo.viewBoolean with schemabinding as
select convert(bit, 1) as true, convert(bit, 0) as false
go
-- create table for results
drop table if exists #testResults
create table #testResults (id int identity(1,1), test int, elapsed bigint, message varchar(1000));
-- define tests
declare @tests table(testNumber int, description nvarchar(100), sql nvarchar(max))
insert @tests values
(1, N'hard-coded values', N'
declare @testTable table (id int, value bit);
insert @testTable(id, value)
select id, case when t.value > 127
then 0
else 1
end
from #testTable t')
, (2, N'local variables', N'
declare @FALSE as bit = 0
declare @TRUE as bit = 1
declare @testTable table (id int, value bit);
insert @testTable(id, value)
select id, case when t.value > 127
then @FALSE
else @TRUE
end
from #testTable t'),
(3, N'scalar functions', N'
declare @testTable table (id int, value bit);
insert @testTable(id, value)
select id, case when t.value > 127
then dbo.fnFalse()
else dbo.fnTrue()
end
from #testTable t'),
(4, N'view', N'
declare @testTable table (id int, value bit);
insert @testTable(id, value)
select id, case when value > 127
then b.false
else b.true
end
from #testTable t with(nolock), viewBoolean b'),
(5, N'table-valued function', N'
declare @testTable table (id int, value bit);
insert @testTable(id, value)
select id, case when value > 127
then b.false
else b.true
end
from #testTable with(nolock), dbo.tvfBoolean() b')
;
declare @testNumber int, @description varchar(100), @sql nvarchar(max)
declare @testRuns int = 10;
-- execute tests
while @testRuns > 0 begin
set @testRuns -= 1
declare testCursor cursor for select testNumber, description, sql from @tests;
open testCursor
fetch next from testCursor into @testNumber, @description, @sql
while @@FETCH_STATUS = 0 begin
declare @TimeStart datetime2(7) = sysdatetime();
execute sp_executesql @sql;
declare @TimeEnd datetime2(7) = sysdatetime()
insert #testResults(test, elapsed, message)
select @testNumber, datediff_big(ms, @TimeStart, @TimeEnd), @description
fetch next from testCursor into @testNumber, @description, @sql
end
close testCursor
deallocate testCursor
end
-- display results
select test, message, count(*) runs, min(elapsed) as min, max(elapsed) as max, avg(elapsed) as avg
from #testResults
group by test, message
order by avg(elapsed);
The best answer is SQLMenace's, if the requirement is to create a temporary constant for use within scripts, i.e. across multiple GO statements/batches.
Just create the procedure in the tempdb then you have no impact on the target database.
One practical example of this is a database create script which writes a control value at the end of the script containing the logical schema version. At the top of the file are some comments with change history etc... But in practice most developers will forget to scroll down and update the schema version at the bottom of the file.
Using the above code allows a visible schema version constant to be defined at the top before the database script (copied from the generate scripts feature of SSMS) creates the database but used at the end. This is right in the face of the developer next to the change history and other comments, so they are very likely to update it.
For example:
use tempdb
go
create function dbo.MySchemaVersion()
returns int
as
begin
return 123
end
go
use master
go
-- Big long database create script with multiple batches...
print 'Creating database schema version ' + CAST(tempdb.dbo.MySchemaVersion() as NVARCHAR) + '...'
go
-- ...
go
-- ...
go
use MyDatabase
go
-- Update schema version with constant at end (not normally possible as GO puts
-- local @variables out of scope)
insert MyConfigTable values ('SchemaVersion', tempdb.dbo.MySchemaVersion())
go
-- Clean-up
use tempdb
drop function MySchemaVersion
go
