I have a database with 2 tables (Installment and InstallmentPlan).
The Installment table has a number of columns. I want to add one new computed column named SurchargeCalculated (SC). The calculation for SC is this:
SC = (Installment.Amount * Installment.days * (InstallmentPlan.InstPercentage / 365 / 100))
I have created a user-defined function SurchargeCal with 3 parameters: Amount, days and InstPercentage. The problem is that when I add the computed column to the Installment table and call the scalar function from there, the scalar function needs the InstPercentage parameter from the 2nd table (InstallmentPlan).
I know the recommended way is to use a view, but that would complicate my problem as I am using the Installment table in C#.
Any help will be extremely appreciated.
My scalar function is:
USE [myDB]
GO
/****** Object: UserDefinedFunction [dbo].[SurchargeCal] Script Date: 17/02/2020 2:21:15 PM ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
ALTER FUNCTION [dbo].[SurchargeCal]
(
    @days AS int,
    @amount AS money,
    @surchargePerc AS decimal(18, 6)  -- bare "decimal" defaults to (18,0) and would truncate the percentage
)
RETURNS decimal(18, 6)
AS
BEGIN
    DECLARE @result AS decimal(18, 6) = 0;

    IF @days = 0
        SET @result = 0
    ELSE IF (@days > 0 AND @amount > 0)
        SET @result = (@days * @amount * (@surchargePerc / 365 / 100))
    ELSE
        SET @result = 0

    RETURN @result
END
Then in the ALTER TABLE command below, XXXX is the problem:
USE [myDB]
GO
ALTER TABLE dbo.Installment
ADD SurchargeCalculated AS dbo.SurchargeCal(days, Amount, XXXX) -- where XXXX should be InstPercentage
GO
Since you can't use a subquery directly in a computed column, you will need to do it in the scalar function itself:
CREATE FUNCTION SurchargeCal(@days AS int, @amount AS money, @PlanKey AS int)
RETURNS decimal(18, 6)
AS
BEGIN
    DECLARE @result AS decimal(18, 6) = 0;

    SELECT @result = @amount * @days * InstPercentage / 365 / 100
    FROM InstallmentPlan
    WHERE PlanKey = @PlanKey

    RETURN @result
END
Now you can create the computed column, passing the PlanKey instead of its InstPercentage.
ALTER TABLE dbo.Installment
ADD SurchargeCalculated AS dbo.SurchargeCal(days, Amount, PlanKey)
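To sanity-check the result, a quick comparison query like the following can be run; this is a hedged sketch that assumes Installment carries the PlanKey column used above:
SELECT i.Amount,
       i.days,
       p.InstPercentage,
       i.SurchargeCalculated,
       -- manual recalculation for comparison with the computed column
       i.Amount * i.days * (p.InstPercentage / 365 / 100) AS SurchargeManual
FROM dbo.Installment i
JOIN dbo.InstallmentPlan p ON p.PlanKey = i.PlanKey;
Note that because the function reads from another table, the computed column cannot be persisted or indexed; it is recalculated whenever it is queried.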
I have an application with a requirement that the user can "copy" a record. This will duplicate the record in the table, and the associated records in any child tables.
I will use a trigger to execute a stored procedure to do the copy. The issue I am facing is that I want to increment the ID field for the copied record, which is also the FK in the child tables. The ID field is not a standard format, so using an increment won't work. In order to make this future-proof, I was going to use dynamic SQL to pull the columns for each table so that I don't need to modify the code if I add a new field to one of the tables. The client's system admin can also add columns to the table via the GUI, but they have no access to the SQL backend so they would need to contact us to modify the code (not ideal).
Example:
Declare @ColumnNames varchar(2000)
Declare @BLDGCODE char(4)
set @BLDGCODE = '001'
select @ColumnNames = COALESCE(@ColumnNames + ', ', '') + COLUMN_NAME
from
    INFORMATION_SCHEMA.COLUMNS
where
    TABLE_NAME = 'FMB0'
Declare @DynSqlStatement varchar(max);
set @DynSqlStatement = 'Insert into dbo.FMB0(' + @ColumnNames + ')
select * from dbo.FMB0 where BLDGCODE = ''' + cast(@BLDGCODE as char(4)) + '''';
print(@DynSqlStatement);
This solves the issue of a new column being added to one of the tables. However, how can I increment the ID (BLDGCODE in this example)? Is my only option to script out the columns by name so I can increment the ID, or is there a function I am overlooking?
Hopefully this makes sense. I am an intermediate SQL user at best, so forgive the naivete if there's an obvious solution.
UPDATE
So I've decided to use #temp tables to hold the record that was changed, modify the ID there, and then insert back into the main table from the #temp table. This is working pretty well, with one exception. I get the following error:
The column "FLOORID_" cannot be modified because it is either a computed column or is the result of a UNION operator.
Below is my stored procedure. I've investigated a STUFF approach (see Using STUFF with calculated column), but I'm not sure where to insert that code. I am now back to thinking I need to call out the columns specifically for the one table with the computed column, and if we add a new field, I just need to modify this stored procedure. Does anyone have any other ideas?
ALTER PROCEDURE [dbo].[LDAC_BLDGCOPY]
    @BLDGCODE CHAR(4)
AS
BEGIN
    SET NOCOUNT ON;

    --COPY BUILDING RECORD
    BEGIN
        DECLARE @ColumnNamesB0 VARCHAR(2000);

        SELECT *
        INTO #TEMPB0
        FROM FMB0
        WHERE BLDGCODE = @BLDGCODE;

        UPDATE #TEMPB0
        SET BLDGCODE = CONVERT(CHAR(4), (CAST((@BLDGCODE) AS INT) + 100)),
            auto_key = dbo.GetAutoKey(),
            BLDGCOPY = 0;

        SELECT @ColumnNamesB0 = COALESCE(@ColumnNamesB0 + ', ', '') + COLUMN_NAME
        FROM INFORMATION_SCHEMA.COLUMNS
        WHERE TABLE_NAME = 'FMB0';

        DECLARE @DynSqlStatementB0 VARCHAR(MAX);
        SET @DynSqlStatementB0 = 'Insert into dbo.FMB0(' + @ColumnNamesB0 + ')
            select * from #TEMPB0';
        EXEC (@DynSqlStatementB0);
    END;

    --COPY FLOOR RECORDS
    BEGIN
        DECLARE @ColumnNamesL0 VARCHAR(2000);
        --DECLARE @Val INT = RTRIM(CONVERT(CHAR(10), CAST(LEFT(@BL_KEY, LEN(RTRIM(@BL_KEY)) - 2) + 1 AS INT))) + '01'

        SELECT *
        INTO #TEMPL0
        FROM FML0
        WHERE BLDGCODE = @BLDGCODE;

        UPDATE #TEMPL0
        SET BLDGCODE = CONVERT(CHAR(4), (CAST((@BLDGCODE) AS INT) + 100)),
            auto_key = dbo.GetAutoKey();

        UPDATE #TEMPL0
        SET FLOORID_ = auto_key;

        SELECT @ColumnNamesL0 = COALESCE(@ColumnNamesL0 + ', ', '') + COLUMN_NAME
        FROM INFORMATION_SCHEMA.COLUMNS
        WHERE TABLE_NAME = 'FML0';

        DECLARE @DynSqlStatementL0 VARCHAR(MAX);
        SET @DynSqlStatementL0 = 'Insert into dbo.FML0(' + @ColumnNamesL0 + ')
            select * from #TEMPL0';
        EXEC (@DynSqlStatementL0);
    END;

    --COPY ROOM RECORDS
    BEGIN
        DECLARE @ColumnNamesA0 VARCHAR(2000);
        --DECLARE @Val INT = RTRIM(CONVERT(CHAR(10), CAST(LEFT(@BL_KEY, LEN(RTRIM(@BL_KEY)) - 2) + 1 AS INT))) + '01'

        SELECT *
        INTO #TEMPA0
        FROM FMA0
        WHERE BLDGCODE = @BLDGCODE;

        UPDATE #TEMPA0
        SET BLDGCODE = CONVERT(CHAR(4), (CAST((@BLDGCODE) AS INT) + 100)),
            auto_key = dbo.GetAutoKey(),
            FLOORID = #TEMPL0.FLOORID_
        FROM #TEMPA0
        INNER JOIN #TEMPL0 ON CONVERT(CHAR(4), (CAST((@BLDGCODE) AS INT) + 100)) = #TEMPL0.BLDGCODE;

        SELECT @ColumnNamesA0 = COALESCE(@ColumnNamesA0 + ', ', '') + COLUMN_NAME
        FROM INFORMATION_SCHEMA.COLUMNS
        WHERE TABLE_NAME = 'FMA0';

        DECLARE @DynSqlStatementA0 VARCHAR(MAX);
        SET @DynSqlStatementA0 = 'Insert into dbo.FMA0(' + @ColumnNamesA0 + ')
            select * from #TEMPA0';
        EXEC (@DynSqlStatementA0);

        DROP TABLE #TEMPB0;
        DROP TABLE #TEMPL0;
        DROP TABLE #TEMPA0;
    END;
END;
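One way around the FLOORID_ error is to keep the dynamic approach but build the column list excluding computed columns, so the generated INSERT never references them. A hedged sketch, using FML0 as the example and assuming the computed column can be detected via SQL Server's COLUMNPROPERTY metadata function:
DECLARE @InsertCols VARCHAR(2000);

SELECT @InsertCols = COALESCE(@InsertCols + ', ', '') + c.COLUMN_NAME
FROM INFORMATION_SCHEMA.COLUMNS c
WHERE c.TABLE_NAME = 'FML0'
  AND COLUMNPROPERTY(OBJECT_ID(c.TABLE_SCHEMA + '.' + c.TABLE_NAME),
                     c.COLUMN_NAME, 'IsComputed') = 0;  -- skip computed columns such as FLOORID_

-- Use the same filtered list on both sides so SELECT * never drags the computed column along:
DECLARE @DynSql VARCHAR(MAX) =
    'Insert into dbo.FML0 (' + @InsertCols + ') select ' + @InsertCols + ' from #TEMPL0';
EXEC (@DynSql);
Because the list is rebuilt from metadata on every run, columns added later are still picked up automatically without touching the procedure.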
I need help to create an insert stored procedure with parameters: if I pass a value for the ID parameter it should insert or update in the database, else it should use a default of zero.
My problem is that when I execute the code below using SamplesID = 0, it works... any other SamplesID does nothing.
Thanks in advance.
Execution of the stored procedure:
SET NOCOUNT OFF
EXEC SP_Samples '0', '3', 'Pink'
GO
Stored procedure code:
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
ALTER PROCEDURE [dbo].[SP_Samples]
    (@SamplesID int,
     @SamplesCategoriesID int,
     @SamplesName nvarchar(50)
    )
AS
BEGIN
    SET NOCOUNT ON;

    IF (@SamplesID = 0)
    BEGIN
        INSERT INTO tblSamples ([SamplesName], [SamplesCategoriesID])
        VALUES (@SamplesName, @SamplesCategoriesID)
    END
    ELSE
    BEGIN
        UPDATE [tblSamples]
        SET SamplesCategoriesID = @SamplesCategoriesID,
            SamplesName = @SamplesName
        WHERE SamplesID = @SamplesID
    END
END
First, you have to learn how to debug a stored procedure.
Just temporarily insert the following code into your procedure:
SELECT @SamplesID, @SamplesCategoriesID, @SamplesName;
That is how you will make sure the values are being passed correctly.
Second, for the non-zero case, add the following statement:
SELECT COUNT(*) FROM tblSamples WHERE SamplesID = @SamplesID;
That statement should not return zero.
Third, to be 500% sure you are doing it right, pass the parameters by name, like this:
EXEC SP_Samples @SamplesID = 0, @SamplesCategoriesID = 3, @SamplesName = 'Pink';
GO
EXEC SP_Samples @SamplesID = 1, @SamplesCategoriesID = 4, @SamplesName = 'Blue';
GO
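Finally, if that COUNT(*) check comes back as zero for the IDs that "do nothing", the UPDATE branch simply has no matching row to change. A hedged sketch of an existence check that falls back to an insert (table and column names taken from the question; adjust if SamplesID is an identity column):
IF EXISTS (SELECT 1 FROM tblSamples WHERE SamplesID = @SamplesID)
BEGIN
    UPDATE tblSamples
    SET SamplesCategoriesID = @SamplesCategoriesID,
        SamplesName = @SamplesName
    WHERE SamplesID = @SamplesID
END
ELSE
BEGIN
    -- no row with that ID yet, so insert instead of silently updating nothing
    INSERT INTO tblSamples (SamplesName, SamplesCategoriesID)
    VALUES (@SamplesName, @SamplesCategoriesID)
END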
I am having a little trouble trying to create a function that I can call to generate a 32-character random ID. I have it working in the query editor, but it is my understanding that to generate this across multiple rows of results, accompanied by existing data from tables, I have to create a function and call it. Here is my code:
CREATE FUNCTION [dbo].[Func_Gen_OID] (@NewOID varchar(32))
RETURNS VARCHAR(32) AS
BEGIN
    DECLARE @Length int = 32
    DECLARE @Output varchar(32)
    DECLARE @counter smallint
    DECLARE @RandomNumber float
    DECLARE @RandomNumberInt tinyint
    DECLARE @CurrentCharacter varchar(1)
    DECLARE @ValidCharacters varchar(255)
    SET @ValidCharacters = 'ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789'
    DECLARE @ValidCharactersLength int
    SET @ValidCharactersLength = len(@ValidCharacters)
    SET @CurrentCharacter = ''
    SET @RandomNumber = 0
    SET @RandomNumberInt = 0
    SET @Output = ''
    SET NOCOUNT ON
    SET @counter = 1
    WHILE @counter < (@Length + 1)
    BEGIN
        SET @RandomNumber = Rand()
        SET @RandomNumberInt = Convert(tinyint, ((@ValidCharactersLength - 1) * @RandomNumber + 1))
        SELECT @CurrentCharacter = SUBSTRING(@ValidCharacters, @RandomNumberInt, 1)
        SET @counter = @counter + 1
        SET @Output = @Output + @CurrentCharacter
        RETURN @Output
    END
Thanks for any and all help!
Instead of the RAND() function, use this:
create view ViewRandomNumbers
as
select rand() as Number
go
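Putting that view to work inside the original function gives something like the following hedged sketch; it assumes the ViewRandomNumbers view above exists in dbo and keeps the RETURN outside the loop:
CREATE FUNCTION [dbo].[Func_Gen_OID] ()
RETURNS varchar(32)
AS
BEGIN
    DECLARE @ValidCharacters varchar(255) = 'ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789'
    DECLARE @Length int = 32
    DECLARE @counter int = 1
    DECLARE @Output varchar(32) = ''
    DECLARE @RandomNumber float

    WHILE @counter <= @Length
    BEGIN
        -- the view hides the side-effecting RAND() call from the function
        SELECT @RandomNumber = Number FROM dbo.ViewRandomNumbers
        SET @Output = @Output
            + SUBSTRING(@ValidCharacters,
                        CONVERT(int, FLOOR(@RandomNumber * LEN(@ValidCharacters))) + 1, 1)
        SET @counter = @counter + 1
    END

    RETURN @Output  -- returned once, after the loop completes
END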
I'll guess you're seeing this error:
Invalid use of a side-effecting operator 'rand' within a function.
You can't use RAND() inside a user-defined function directly (you'd need a view as an intermediary). And in this case, RAND would return the same value for every row of your query anyway as you haven't specified a varying seed (different for each call to the function).
If you want to encapsulate the rest of the logic inside the function, you will need to pass it a randomly-generated value from elsewhere, possibly generated via CHECKSUM() and NEWID().
There are some possibilities mentioned in this SQL Server Central thread
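Alternatively, the randomness can be kept outside the function entirely: the caller supplies random bytes per row (NEWID() is evaluated per row, so it varies), and the function only maps bytes to characters. A hedged sketch; the function name, the two-NEWID concatenation and the modulo mapping are illustrative, not a vetted scheme:
CREATE FUNCTION dbo.Func_Gen_OID_FromBytes (@random binary(32))
RETURNS varchar(32)
AS
BEGIN
    DECLARE @ValidCharacters varchar(36) = 'ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789'
    DECLARE @Output varchar(32) = ''
    DECLARE @i int = 1

    WHILE @i <= 32
    BEGIN
        -- map each input byte (0-255) onto one of the 36 valid characters
        SET @Output = @Output
            + SUBSTRING(@ValidCharacters,
                        CAST(SUBSTRING(@random, @i, 1) AS int) % 36 + 1, 1)
        SET @i = @i + 1
    END

    RETURN @Output
END
GO
-- usage: the random input is generated at the call site, so the function itself stays deterministic
SELECT dbo.Func_Gen_OID_FromBytes(CAST(NEWID() AS binary(16)) + CAST(NEWID() AS binary(16))) AS OID,
       t.*
FROM dbo.SomeTable t  -- SomeTable is a placeholder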
Two issues I see:
You are returning @Output inside your while loop.
You don't need the input parameter, as you aren't using it inside the function.
Is there a way to make a TSQL variable constant?
No, but you can create a function and hardcode it in there and use that.
Here is an example:
CREATE FUNCTION fnConstant()
RETURNS INT
AS
BEGIN
RETURN 2
END
GO
SELECT dbo.fnConstant()
One solution, offered by Jared Ko, is to use pseudo-constants.
As explained in SQL Server: Variables, Parameters or Literals? Or… Constants?:
Pseudo-Constants are not variables or parameters. Instead, they're simply views with one row, and enough columns to support your constants. With these simple rules, the SQL Engine completely ignores the value of the view but still builds an execution plan based on its value. The execution plan doesn't even show a join to the view!
Create like this:
CREATE SCHEMA ShipMethod
GO
-- Each view can only have one row.
-- Create one column for each desired constant.
-- Each column is restricted to a single value.
CREATE VIEW ShipMethod.ShipMethodID AS
SELECT CAST(1 AS INT) AS [XRQ - TRUCK GROUND]
,CAST(2 AS INT) AS [ZY - EXPRESS]
,CAST(3 AS INT) AS [OVERSEAS - DELUXE]
,CAST(4 AS INT) AS [OVERNIGHT J-FAST]
,CAST(5 AS INT) AS [CARGO TRANSPORT 5]
Then use like this:
SELECT h.*
FROM Sales.SalesOrderHeader h
JOIN ShipMethod.ShipMethodID const
ON h.ShipMethodID = const.[OVERNIGHT J-FAST]
Or like this:
SELECT h.*
FROM Sales.SalesOrderHeader h
WHERE h.ShipMethodID = (SELECT TOP 1 [OVERNIGHT J-FAST] FROM ShipMethod.ShipMethodID)
My workaround for missing constants is to give the optimizer hints about the value.
DECLARE @Constant INT = 123;

SELECT *
FROM [some_relation]
WHERE [some_attribute] = @Constant
OPTION (OPTIMIZE FOR (@Constant = 123))
This tells the query compiler to treat the variable as if it was a constant when creating the execution plan. The down side is that you have to define the value twice.
No, but good old naming conventions should be used.
declare @MY_VALUE as int
There is no built-in support for constants in T-SQL. You could use SQLMenace's approach to simulate it (though you can never be sure whether someone else has overwritten the function to return something else…), or possibly create a table containing constants, as suggested over here. Perhaps write a trigger that rolls back any changes to the ConstantValue column?
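A minimal sketch of that constants-table idea (all names here are illustrative, not taken from any existing post):
CREATE TABLE dbo.Constants
(
    ConstantName  sysname PRIMARY KEY,
    ConstantValue sql_variant NOT NULL
);

INSERT dbo.Constants (ConstantName, ConstantValue)
VALUES ('MaxRetries', CAST(3 AS int));
GO

-- roll back any attempt to change or remove a constant once it has been defined
CREATE TRIGGER dbo.trg_Constants_ReadOnly
ON dbo.Constants
AFTER UPDATE, DELETE
AS
BEGIN
    ROLLBACK TRANSACTION;
    RAISERROR('Constants are read-only; change them deliberately by dropping the trigger first.', 16, 1);
END
GO

-- usage
SELECT CAST(ConstantValue AS int)
FROM dbo.Constants
WHERE ConstantName = 'MaxRetries';
The trigger makes accidental changes fail loudly, which is about as close to a constant as a table can get; the trade-off is a table access per lookup.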
Prior to using a SQL function, run the following script to see the differences in performance:
IF OBJECT_ID('fnFalse') IS NOT NULL
    DROP FUNCTION fnFalse
GO
IF OBJECT_ID('fnTrue') IS NOT NULL
    DROP FUNCTION fnTrue
GO
CREATE FUNCTION fnTrue() RETURNS INT WITH SCHEMABINDING
AS
BEGIN
    RETURN 1
END
GO
CREATE FUNCTION fnFalse() RETURNS INT WITH SCHEMABINDING
AS
BEGIN
    RETURN ~ dbo.fnTrue()
END
GO
DECLARE @TimeStart DATETIME = GETDATE()
DECLARE @Count INT = 100000
WHILE @Count > 0 BEGIN
    SET @Count -= 1
    DECLARE @Value BIT
    SELECT @Value = dbo.fnTrue()
    IF @Value = 1
        SELECT @Value = dbo.fnFalse()
END
DECLARE @TimeEnd DATETIME = GETDATE()
PRINT CAST(DATEDIFF(ms, @TimeStart, @TimeEnd) AS VARCHAR) + ' elapsed, using function'
GO
DECLARE @TimeStart DATETIME = GETDATE()
DECLARE @Count INT = 100000
DECLARE @FALSE AS BIT = 0
DECLARE @TRUE AS BIT = ~ @FALSE
WHILE @Count > 0 BEGIN
    SET @Count -= 1
    DECLARE @Value BIT
    SELECT @Value = @TRUE
    IF @Value = 1
        SELECT @Value = @FALSE
END
DECLARE @TimeEnd DATETIME = GETDATE()
PRINT CAST(DATEDIFF(ms, @TimeStart, @TimeEnd) AS VARCHAR) + ' elapsed, using local variable'
GO
DECLARE @TimeStart DATETIME = GETDATE()
DECLARE @Count INT = 100000
WHILE @Count > 0 BEGIN
    SET @Count -= 1
    DECLARE @Value BIT
    SELECT @Value = 1
    IF @Value = 1
        SELECT @Value = 0
END
DECLARE @TimeEnd DATETIME = GETDATE()
PRINT CAST(DATEDIFF(ms, @TimeStart, @TimeEnd) AS VARCHAR) + ' elapsed, using hard coded values'
GO
If you are interested in getting an optimal execution plan for a value in the variable, you can use dynamic SQL. It makes the variable constant.
DECLARE @var varchar(100) = 'some text'
DECLARE @sql varchar(MAX)
SET @sql = 'SELECT * FROM table WHERE col = ''' + @var + ''''
EXEC (@sql)
For enums or simple constants, a view with a single row has great performance and compile-time checking / dependency tracking (because it's a column name).
See Jared Ko's blog post https://blogs.msdn.microsoft.com/sql_server_appendix_z/2013/09/16/sql-server-variables-parameters-or-literals-or-constants/
create the view
CREATE VIEW ShipMethods AS
SELECT CAST(1 AS INT) AS [XRQ - TRUCK GROUND]
,CAST(2 AS INT) AS [ZY - EXPRESS]
,CAST(3 AS INT) AS [OVERSEAS - DELUXE]
, CAST(4 AS INT) AS [OVERNIGHT J-FAST]
,CAST(5 AS INT) AS [CARGO TRANSPORT 5]
use the view
SELECT h.*
FROM Sales.SalesOrderHeader h
WHERE h.ShipMethodID = (select [OVERNIGHT J-FAST] from ShipMethods)
Okay, let's see:
Constants are immutable values which are known at compile time and do not change for the life of the program.
That means you can never have a constant in SQL Server.
declare @myvalue as int
set @myvalue = 5
set @myvalue = 10 -- oops, we just changed it
The value just changed.
Since there is no built-in support for constants, my solution is very simple.
Since this is not supported:
Declare Constant @supplement int = 240
SELECT price + #supplement
FROM what_does_it_cost
I would simply convert it to
SELECT price + 240/*CONSTANT:supplement*/
FROM what_does_it_cost
Obviously, this relies on the whole thing (the value without trailing space plus the comment) being unique. Changing it is possible with a global search and replace.
There is no such thing as "creating a constant" in the database literature. Constants exist as they are and are often called values. One can declare a variable and assign a value (constant) to it. From a scholastic view:
DECLARE @two INT
SET @two = 2
Here @two is a variable and 2 is a value/constant.
SQL Server 2022 (currently available only as a preview) is now able to inline the function proposed by SQLMenace, which should prevent the performance hit described in some comments.
CREATE FUNCTION fnConstant() RETURNS INT AS BEGIN RETURN 2 END
GO
SELECT is_inlineable FROM sys.sql_modules WHERE [object_id] = OBJECT_ID('dbo.fnConstant');
-- is_inlineable: 1
SELECT dbo.fnConstant()
(execution plan screenshot omitted)
To test whether it also uses the value coming from the function, I added a second function returning the value 1:
CREATE FUNCTION fnConstant1()
RETURNS INT
AS
BEGIN
RETURN 1
END
GO
Create a temp table with about 500k rows with value 1 and 4 rows with value 2:
DROP TABLE IF EXISTS #temp;
create table #temp (value_int INT)
DECLARE @counter INT;
SET @counter = 0
WHILE @counter <= 500000
BEGIN
    INSERT INTO #temp VALUES (1);
    SET @counter = @counter + 1
END
SET @counter = 0
WHILE @counter <= 3
BEGIN
    INSERT INTO #temp VALUES (2);
    SET @counter = @counter + 1
END
create index i_temp on #temp (value_int);
Looking at the estimated plan, we can see that the optimizer expects 500k rows for
select * from #temp where value_int = dbo.fnConstant1(); -- returns 500001 rows
(estimated plan screenshot: Constant 1)
and 4 rows for
select * from #temp where value_int = dbo.fnConstant(); -- returns 4 rows
(estimated plan screenshot: Constant 2)
Robert's performance test is interesting. And even in late 2022, scalar functions are much slower (by an order of magnitude) than variables or literals. A view (as suggested by mbobka) is somewhere in between when used for this same test.
That said, using a loop like that in SQL Server is not something I'd ever do, because I'd normally be operating on a whole set.
In SQL Server 2019, if you use schema-bound functions in a set operation, the difference is much less noticeable.
I created and populated a test table:
create table #testTable (id int identity(1, 1) primary key, value tinyint);
And changed the test so that instead of looping and changing a variable, it queries the test table and returns true or false depending on the value in the test table, e.g.:
insert @testTable(value)
select case when value > 127
            then @FALSE
            else @TRUE
       end
from #testTable with(nolock)
I tested 5 scenarios:
hard-coded values
local variables
scalar functions
a view
a table-valued function
Running the test 10 times yielded the following results:
scenario                 min   max   avg
scalar functions         233   259   240
hard-coded values        236   265   243
local variables          235   278   245
table-valued function    243   272   253
view                     244   267   254
This suggests to me that for set-based work in (at least) 2019 and later, there's not much in it.
set nocount on;
go
-- create test data table
drop table if exists #testTable;
create table #testTable (id int identity(1, 1) primary key, value tinyint);
-- populate test data
insert #testTable (value)
select top (1000000) convert(binary (1), newid())
from sys.all_objects a
   , sys.all_objects b
go
-- scalar function for True
drop function if exists fnTrue;
go
create function dbo.fnTrue() returns bit with schemabinding as
begin
    return 1
end
go
-- scalar function for False
drop function if exists fnFalse;
go
create function dbo.fnFalse () returns bit with schemabinding as
begin
    return 0
end
go
-- table-valued function for booleans
drop function if exists dbo.tvfBoolean;
go
create function tvfBoolean() returns table with schemabinding as
return
    select convert(bit, 1) as true, convert(bit, 0) as false
go
-- view for booleans
drop view if exists dbo.viewBoolean;
go
create view dbo.viewBoolean with schemabinding as
    select convert(bit, 1) as true, convert(bit, 0) as false
go
-- create table for results
drop table if exists #testResults
create table #testResults (id int identity(1,1), test int, elapsed bigint, message varchar(1000));
-- define tests
declare @tests table(testNumber int, description nvarchar(100), sql nvarchar(max))
insert @tests values
(1, N'hard-coded values', N'
    declare @testTable table (id int, value bit);
    insert @testTable(id, value)
    select id, case when t.value > 127
                    then 0
                    else 1
               end
    from #testTable t')
, (2, N'local variables', N'
    declare @FALSE as bit = 0
    declare @TRUE as bit = 1
    declare @testTable table (id int, value bit);
    insert @testTable(id, value)
    select id, case when t.value > 127
                    then @FALSE
                    else @TRUE
               end
    from #testTable t'),
(3, N'scalar functions', N'
    declare @testTable table (id int, value bit);
    insert @testTable(id, value)
    select id, case when t.value > 127
                    then dbo.fnFalse()
                    else dbo.fnTrue()
               end
    from #testTable t'),
(4, N'view', N'
    declare @testTable table (id int, value bit);
    insert @testTable(id, value)
    select id, case when value > 127
                    then b.false
                    else b.true
               end
    from #testTable t with(nolock), viewBoolean b'),
(5, N'table-valued function', N'
    declare @testTable table (id int, value bit);
    insert @testTable(id, value)
    select id, case when value > 127
                    then b.false
                    else b.true
               end
    from #testTable with(nolock), dbo.tvfBoolean() b')
;
declare @testNumber int, @description varchar(100), @sql nvarchar(max)
declare @testRuns int = 10;
-- execute tests
while @testRuns > 0 begin
    set @testRuns -= 1
    declare testCursor cursor for select testNumber, description, sql from @tests;
    open testCursor
    fetch next from testCursor into @testNumber, @description, @sql
    while @@FETCH_STATUS = 0 begin
        declare @TimeStart datetime2(7) = sysdatetime();
        execute sp_executesql @sql;
        declare @TimeEnd datetime2(7) = sysdatetime()
        insert #testResults(test, elapsed, message)
        select @testNumber, datediff_big(ms, @TimeStart, @TimeEnd), @description
        fetch next from testCursor into @testNumber, @description, @sql
    end
    close testCursor
    deallocate testCursor
end
-- display results
select test, message, count(*) runs, min(elapsed) as min, max(elapsed) as max, avg(elapsed) as avg
from #testResults
group by test, message
order by avg(elapsed);
The best answer is SQLMenace's, if the requirement is to create a temporary constant for use within scripts, i.e. across multiple GO statements/batches.
Just create the function in tempdb, then you have no impact on the target database.
One practical example of this is a database create script which writes a control value at the end of the script containing the logical schema version. At the top of the file are some comments with the change history etc., but in practice most developers will forget to scroll down and update the schema version at the bottom of the file.
Using the above code allows a visible schema version constant to be defined at the top, before the database script (copied from the generate-scripts feature of SSMS) creates the database, but used at the end. This sits right in the face of the developer, next to the change history and other comments, so they are very likely to update it.
For example:
use tempdb
go
create function dbo.MySchemaVersion()
returns int
as
begin
return 123
end
go
use master
go
-- Big long database create script with multiple batches...
print 'Creating database schema version ' + CAST(tempdb.dbo.MySchemaVersion() as NVARCHAR) + '...'
go
-- ...
go
-- ...
go
use MyDatabase
go
-- Update schema version with constant at end (not normally possible as GO puts
-- local @variables out of scope)
insert MyConfigTable values ('SchemaVersion', tempdb.dbo.MySchemaVersion())
go
-- Clean-up
use tempdb
drop function MySchemaVersion
go