The ALTER DATABASE statement is not allowed within a trigger (SQL Server)

I want to create the filegroup dynamically when a user wants to insert data into the table, but SQL Server throws an exception.
I know that I can handle this with SQL Server Agent, but if my approach isn't correct, please tell me the correct way.
Kind regards.
ALTER TRIGGER [AuditTrigger]
ON [Audit]
INSTEAD OF INSERT
AS
BEGIN
    DECLARE @DateInserted DATETIME = (SELECT DateInserted FROM inserted);
    DECLARE @NextRange DATETIME;
    DECLARE @currentFileGroup NVARCHAR(MAX) = ('APP_PT_' + CAST(YEAR(@DateInserted) AS NVARCHAR(4)) + '_' + CAST(MONTH(@DateInserted) AS NVARCHAR(2)))
    --print @currentFileGroup;
    DECLARE @fileExists BIT = (SELECT (CASE WHEN EXISTS (SELECT NULL AS [EMPTY] FROM sys.filegroups WHERE name LIKE @currentFileGroup) THEN 1 ELSE 0 END))
    IF @fileExists = 0
    BEGIN
        SET @NextRange = (SELECT REPLACE(CONVERT(VARCHAR(10), @DateInserted, 111), '/', '-'))
        DECLARE @filefullname VARCHAR(MAX) = (SELECT physical_name FROM sys.database_files WHERE name = 'DB_Test')
        DECLARE @fgFullName VARCHAR(MAX) = (SELECT (LEFT(@filefullname, LEN(@filefullname) - CHARINDEX('\', REVERSE(@filefullname))) + '.ndf'))
        -- The exception occurs here --
        ALTER DATABASE DB_TEST
        ADD FILE (NAME = [@currentFileGroup],
                  FILENAME = [@fgFullName],
                  SIZE = 5MB,
                  MAXSIZE = 100MB,
                  FILEGROWTH = 1MB)
        TO FILEGROUP Audit_2017
        ALTER PARTITION FUNCTION [PF]()
        SPLIT RANGE (@NextRange);
        ALTER PARTITION SCHEME [PS]
        NEXT USED [@currentFileGroup];
    END
    INSERT INTO LogTable VALUES (@currentFileGroup)
    INSERT INTO [Audit]
    SELECT DateInserted, Title
    FROM inserted;
END
Result:
Msg 287, Level 16, State 2, Procedure AuditTrigger, Line 24
The ALTER DATABASE statement is not allowed within a trigger.

Instead of a trigger, you could use a stored procedure for the Audit table inserts and include the filegroup/file/partition maintenance code there. Note that this trigger will fail on multi-row inserts because of the scalar subquery against inserted.
That said, I think the scheduled daily job approach to partition maintenance is cleaner. I'm not sure why you're bothering to create a new file and filegroup for each partition; unless you have a special use case, you could simply place every partition on the same filegroup. Make sure the partition function is RANGE RIGHT to avoid excessive data movement and logging during SPLIT. A sketch of such a job is below.
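For what it's worth, here is a minimal sketch of what that daily job could run, assuming monthly partitions, the RANGE RIGHT function [PF] and scheme [PS] from the question, and a single [PRIMARY] filegroup for every partition; the procedure name is illustrative:
-- Hedged sketch: run from a daily SQL Server Agent job instead of a trigger.
CREATE PROCEDURE dbo.MaintainAuditPartitions
AS
BEGIN
    SET NOCOUNT ON;
    -- First day of next month as the next boundary (monthly partitions assumed).
    DECLARE @NextRange DATETIME = DATEADD(MONTH, DATEDIFF(MONTH, 0, GETDATE()) + 1, 0);

    -- Only split if the boundary does not already exist.
    IF NOT EXISTS (SELECT 1
                   FROM sys.partition_range_values v
                   INNER JOIN sys.partition_functions f
                       ON f.function_id = v.function_id
                   WHERE f.name = N'PF'
                     AND CAST(v.value AS DATETIME) = @NextRange)
    BEGIN
        -- All partitions stay on one filegroup, so no new file is needed.
        ALTER PARTITION SCHEME [PS] NEXT USED [PRIMARY];
        ALTER PARTITION FUNCTION [PF]() SPLIT RANGE (@NextRange);
    END
END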

Related

Method to bulk modify triggers in SQL Server database

I have a database that uses Insert, Update, and Delete Triggers for almost all tables. They log the host and program performing the operation in a separate auditing table. The triggers all include this select statement to set variables that get inserted into the auditing table:
select @HostName = HostName, @ProgramName = Program_Name
from master..sysprocesses where SPID = @@SPID
We are now looking to migrate to Azure SQL Database, which does not support the master..sysprocesses syntax. The table also appears to be deprecated: https://learn.microsoft.com/en-us/sql/relational-databases/system-compatibility-views/sys-sysprocesses-transact-sql?view=sql-server-ver15
What we need to do is update the triggers to use this instead:
select @HostName = [host_name], @ProgramName = [program_name]
from sys.dm_exec_sessions where session_id = @@SPID
However, the database has hundreds of tables and each table has three triggers that need updating. The text-replacement for each trigger is identical. Is there a feasible way to script out something to perform this update on all triggers in the database?
OK, I just tested this by jamming your string into a few triggers (as a comment, of course) and then running it. I am not advocating this as the correct way to do it; this link will help you with the correct way to do dynamic SQL: https://dba.stackexchange.com/questions/165149/exec-vs-sp-executesql-performance
However, this does work and will help you understand how you would piece these things together to get to that point.
Note: any formatting difference between your triggers may cause this to miss some, so you'll want to verify that on your own.
DECLARE @string VARCHAR(8000) = 'select @HostName = HostName, @ProgramName = Program_Name
from master..sysprocesses where SPID = @@SPID'
    , @counter INT = 1
    , @Max INT
    , @Sql VARCHAR(MAX)
;
IF OBJECT_ID('TempDB..#TrigUpdate') IS NOT NULL DROP TABLE #TrigUpdate;
CREATE TABLE #TrigUpdate
(
    SqlVar VARCHAR(MAX)
    , RowID INT
)
;
INSERT INTO #TrigUpdate
SELECT REPLACE(REPLACE(t.definition, @string, ''), 'CREATE TRIGGER', 'ALTER TRIGGER')
    , ROW_NUMBER() OVER (ORDER BY t.definition ASC) AS RowID
FROM sys.objects o
INNER JOIN sys.sql_modules t ON o.object_id = t.object_id
WHERE o.type_desc = 'SQL_TRIGGER'
    AND CHARINDEX(@string, t.definition, 1) > 0
;
SET @Max = (SELECT COUNT(*) FROM #TrigUpdate);
WHILE @counter <= @Max
BEGIN
    SET @Sql = (SELECT SqlVar FROM #TrigUpdate WHERE RowID = @counter);
    EXEC (@Sql);
    SET @counter = @counter + 1;
END
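For comparison, here is a hedged sketch of the same loop rewritten with sp_executesql, per the link above; note that sp_executesql requires NVARCHAR, so the staged statement is cast accordingly:
-- Replacement for the WHILE loop above, using sp_executesql instead of EXEC().
DECLARE @stmt NVARCHAR(MAX);
WHILE @counter <= @Max
BEGIN
    SELECT @stmt = CAST(SqlVar AS NVARCHAR(MAX))
    FROM #TrigUpdate
    WHERE RowID = @counter;
    EXEC sys.sp_executesql @stmt;
    SET @counter = @counter + 1;
END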
It could be done with OBJECT_DEFINITION and REPLACE:
Create Table #Triggers_new (TriggerName sysname, QueryText VarChar(max))

Declare @string_pattern VarChar(max), @string_replacement VarChar(max)
Select @string_pattern = '<string_pattern>'
Select @string_replacement = '<string_replacement>'

Insert Into #Triggers_new (TriggerName, QueryText)
Select [name], Replace(Object_Definition(object_id), @string_pattern, @string_replacement)
From sys.objects
Where [type] = 'TR'
Order by [name]

-- Update #Triggers_new Set QueryText = Replace(QueryText, 'Create Trigger ', 'Alter Trigger ')
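Note that this only stages the rewritten definitions in #Triggers_new; to apply them you would still need to execute each QueryText. A minimal sketch (assuming the commented-out CREATE-to-ALTER replacement above has been run first):
-- Execute each rewritten trigger definition staged in #Triggers_new.
Declare @qt VarChar(max)
Declare trig_cur Cursor Local Fast_Forward For
    Select QueryText From #Triggers_new
Open trig_cur
Fetch Next From trig_cur Into @qt
While @@Fetch_Status = 0
Begin
    Exec (@qt)
    Fetch Next From trig_cur Into @qt
End
Close trig_cur
Deallocate trig_cur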
Why use such a heavy query on a system table/view that can be changed without notice?
Can't you simplify this by using metadata functions like:
SELECT HOST_NAME(), PROGRAM_NAME()
which will give the requested values?
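For example, the lookup inside each trigger could simply become (a sketch using the variable names from the question):
-- Metadata functions return the current session's host and program directly.
SELECT @HostName = HOST_NAME(), @ProgramName = PROGRAM_NAME();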

Why doesn't this alter after insert statement work?

I have a stored procedure with dynamic SQL that I have embedded as below:
delete from #temp_table
begin tran
set @sql = 'select * into #temp_table from sometable'
exec (@sql)
commit tran
begin
set @sql = 'alter table #temp_table add column1 float'
exec(@sql)
end
update #temp_table
set column1 = column1*100
select *
into Primary_Table
from #temp_table
However, I noticed that all the statements work but the ALTER does not. When I run the procedure, I get an error message: "Invalid column name 'column1'"
What am I doing wrong here?
EDIT: I realized I didn't mention that the first insert is dynamic SQL as well. Updated it.
Alternate approach tried but throws same error:
delete from #temp_table
begin tran
set @sql = 'select * into #temp_table from sometable'
exec (@sql)
commit tran
alter table #temp_table add column1 float
update #temp_table set column1 = column1*100
Local temporary tables exhibit something like dynamic scope. When you create a local temporary table inside a call to EXEC, it goes out of scope (and out of existence) on the return from EXEC:
EXEC (N'create table #x (c int)')
GO
SELECT * FROM #x
Msg 208, Level 16, State 0, Line 4
Invalid object name '#x'.
The SELECT is parsed after the dynamic SQL that creates #x has run, but #x is no longer there because it was dropped on exit from EXEC.
Update
Depending on the situation there are different ways to work around the issue.
Put everything into the same string:
DECLARE @Sql NVARCHAR(MAX) = N'SELECT 1 AS source INTO #table_name;
ALTER TABLE #table_name ADD target FLOAT;
UPDATE #table_name SET target = 100 * source;';
EXEC (@Sql);
Create the table ahead of the dynamic sql that populates it.
CREATE TABLE #table_name (source INT);
EXEC (N'insert into #table_name (source) select 1;');
ALTER TABLE #table_name ADD target FLOAT;
UPDATE #table_name SET target = 100 * source;
In this option, the ALTER TABLE statement can be removed by adding the additional column to the CREATE TABLE statement. Note also that the ALTER TABLE and UPDATE statements could be in separate invocations of dynamic SQL, if that were beneficial in your context.
1) It should be ALTER TABLE #temp..., not ALTER #temp.
2) Even if #1 weren't an issue, you're adding column1 as a NULLable column with no default value and, in the next statement, setting its value to itself * 100...
NULL * 100 = NULL
3) Why are you using dynamic SQL to alter the #temp table? It can just as easily be done with a regular ALTER TABLE statement... or, better yet, the column can be included in the original table definition, as sketched below.
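A minimal sketch of that last option, with illustrative column names since the shape of sometable isn't shown:
-- Define column1 up front instead of altering the temp table afterwards.
CREATE TABLE #temp_table (id INT, some_value FLOAT, column1 FLOAT)  -- illustrative columns
INSERT INTO #temp_table (id, some_value)
SELECT id, some_value FROM sometable
UPDATE #temp_table SET column1 = some_value * 100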
This is because the #temp_table reference in the outer batch is a different temp table than the one created in dynamic SQL. Consider:
use tempdb
drop table if exists sometable
drop table if exists #temp_table
go
create table sometable(id int, a int)
create table #temp_table(id int, b int)
exec( 'select * into #temp_table from sometable; select * from #temp_table;' )
select * from #temp_table
Outputs
id a
----------- -----------
(0 rows affected)
id b
----------- -----------
(0 rows affected)
A temp table created in a nested batch is scoped to the nested batch and automatically dropped afterwards. A "nested batch" is either a dynamic SQL query or a stored procedure. This behavior is explained in the CREATE TABLE documentation, though it only mentions stored procedures; dynamic SQL behaves the same.
If you create the temp table in a top-level batch, you can access it in dynamic SQL; you just can't create a new temp table in dynamic SQL and see it in the outer batch or in subsequent same-level dynamic SQL. So try to use INSERT INTO instead of SELECT INTO.
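Continuing the example above, a sketch of the INSERT INTO approach:
create table #temp_table(id int, a int)
exec( 'insert into #temp_table(id, a) select id, a from sometable;' )
select * from #temp_table  -- the rows inserted by the dynamic SQL are visible here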

Copy entire SQL table to another and truncate original table

I am writing a stored procedure that will copy the entire contents of a table called "CS_Consolidation" into a backup table called "CS_ConsolidationBackup2016". All fields are exactly the same; each day's new data must simply be appended, after which the original table must be truncated.
I am, however, having a problem with my procedure and how it is written, if anyone can help:
CREATE PROCEDURE BackUpData2
AS
BEGIN
SET NOCOUNT ON;
SELECT *
INTO [dbo].[CS_ConsolidationBackUp]
FROM [dbo].[CS_Consolidation]
TRUNCATE TABLE [dbo].[CS_Consolidation]
GO
Why do you want to copy the data and then delete the original? This is entirely more complicated and stressful to the system than you need; there is no need to create a second copy of the data just so you can turn around and drop the first copy.
A much easier path would be to rename the current table and then create your new primary table.
EXEC sp_rename 'CS_Consolidation', 'CS_ConsolidationBackUp';
GO
select *
into CS_Consolidation
from CS_ConsolidationBackUp
where 1 = 0; --this ensures no rows but the entire structure is copied.
If you are looking to create one backup table daily, would something like this work?
DECLARE @BackupTableName nvarchar(250)
SELECT @BackupTableName = 'CS_ConsolidationBackUp' + CAST(CONVERT(date, getdate()) as varchar(250))
IF EXISTS (SELECT TABLE_NAME FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_NAME = @BackupTableName)
BEGIN
    EXEC('DROP TABLE [' + @BackupTableName + ']')
END
EXEC('SELECT * INTO [dbo].[' + @BackupTableName + '] FROM [dbo].[CS_Consolidation]')
TRUNCATE TABLE [dbo].[CS_Consolidation]
You are missing an "END" statement before "GO". This is the corrected code:
CREATE PROCEDURE BackUpData2
AS
BEGIN
SET NOCOUNT ON;
SELECT *
INTO [dbo].[CS_ConsolidationBackUp]
FROM [dbo].[CS_Consolidation]
TRUNCATE TABLE [dbo].[CS_Consolidation]
end
GO

How can I create an UPDATE statement from a large XML data type?

I am working with two databases that are not accessible at the same time. One of the standard methods of dealing with this that I've seen on here is to create dynamic SQL for loading one from the other.
I created a stored procedure that generates UPDATE statements from an existing database. My issue is what happens when the XML is too large to be held in a VARCHAR(MAX).
Here is a relevant snippet from my attempt, where field2 is actually of an XML data type:
DECLARE @field1Col VARCHAR(50)
DECLARE @field2Col VARCHAR(max)
DECLARE @vsSQL VARCHAR(max)
DECLARE curUpdates CURSOR FOR
    -- field1 is varchar(50), not null
    -- field2 is XML, null
    SELECT
        t.field1
        ,REPLACE(CAST(t.[field2] AS VARCHAR(max)), '''', '''''')
    FROM
        myTable t
    WHERE
        t.criteria = 0
OPEN curUpdates
FETCH NEXT FROM curUpdates INTO @field1Col, @field2Col
WHILE @@FETCH_STATUS = 0
BEGIN
    SET @vsSQL = 'UPDATE dbo.myTable SET [field1] = ''' + @field1Col + ''' WHERE [field2] = ''' + @field2Col + ''''
    INSERT INTO #tmp ( SQLText ) VALUES ( @vsSQL )
    FETCH NEXT FROM curUpdates INTO @field1Col, @field2Col
END
CLOSE curUpdates
DEALLOCATE curUpdates
SET NOCOUNT OFF;
SELECT * FROM #tmp
The issue I have is that, even using VARCHAR(MAX), the XML will sometimes overrun the size; the end product just stops when it reaches a certain number of characters (the max size of a VARCHAR?).
Is there another approach for working with large XML (splitting into chunks, avoiding the cast, etc.) where I can build a string of update statements from it?
I do not have access to database B. I'd like to (one time run) update
a few tables in database B
The one time run could point to something like this:
CREATE DATABASE MyOneTimeRun;
GO
USE MyOneTimeRun;
GO
SELECT * INTO MyCopy FROM YourDatabase.dbo.YourTable;
GO
BACKUP DATABASE [MyOneTimeRun] TO DISK = N'C:\Path\MyOneTimeRun.bak' WITH NOFORMAT, NOINIT
,NAME = N'MyOneTimeRun-Copy of MyTable'
,SKIP, NOREWIND, NOUNLOAD, STATS = 10
GO
USE master;
GO
EXEC msdb.dbo.sp_delete_database_backuphistory @database_name = N'MyOneTimeRun'
GO
USE [master]
GO
ALTER DATABASE [MyOneTimeRun] SET SINGLE_USER WITH ROLLBACK IMMEDIATE
GO
USE [master]
GO
DROP DATABASE [MyOneTimeRun]
GO
Now you have a BAK-file with the content you need which you can restore on your other server.
There you can use the appropriate scripts to move your data, type-safe and clean, from the copy into your target.
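On the target server, the restore itself could look like this (a sketch; the logical file names and paths are illustrative and can be checked with RESTORE FILELISTONLY):
-- Inspect the logical file names in the backup first.
RESTORE FILELISTONLY FROM DISK = N'C:\Path\MyOneTimeRun.bak';
GO
-- Restore under illustrative paths; adjust the MOVE targets to the server's layout.
RESTORE DATABASE [MyOneTimeRun]
FROM DISK = N'C:\Path\MyOneTimeRun.bak'
WITH MOVE N'MyOneTimeRun' TO N'C:\Data\MyOneTimeRun.mdf',
     MOVE N'MyOneTimeRun_log' TO N'C:\Data\MyOneTimeRun_log.ldf',
     RECOVERY;
GO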

Locally scoped begin-end declares (altering multiple triggers in a single transaction)

Goal
I need to alter a number of almost identical triggers on a number of tables (and a number of databases).
Therefore I want to make one big script and perform all the changes in one succeed-or-fail transaction.
My first attempt (that doesn't work)
---First alter trigger
ALTER TRIGGER [dbo].[trg_UserGarbleValue] ON [dbo].[users]
FOR INSERT
AS
Begin
    DECLARE @GarbleValue NVARCHAR(200)
    DECLARE @NewID NVARCHAR(20)
    SET @NewID = (SELECT TOP 1 usr_id FROM users ORDER BY usr_id DESC)
    SET @GarbleValue = dbo.fn_GetRandomString(4) + @NewID + dbo.fn_GetRandomString(4)
    UPDATE users SET usr_garble_value = @GarbleValue WHERE usr_id = @NewID
End
Go

--Subsequent alter trigger (there would be many more in the real-world usage)
ALTER TRIGGER [dbo].[trg_SegmentGarbleValue] ON [dbo].[segment]
FOR INSERT
AS
Begin
    DECLARE @GarbleValue NVARCHAR(200)
    DECLARE @NewID NVARCHAR(20)
    SET @NewID = (SELECT TOP 1 seg_id FROM segment ORDER BY seg_id DESC)
    SET @GarbleValue = dbo.fn_GetRandomString(4) + @NewID + dbo.fn_GetRandomString(4)
    UPDATE segment SET seg_garble_value = @GarbleValue WHERE seg_id = @NewID
End
Go
Running each of the ALTER TRIGGER statements by itself works fine, but when both of them are run in the same transaction, the DECLAREs fail in the second ALTER because the variable names already exist.
How do I accomplish this? Is there any way to declare a variable locally within a BEGIN-END scope, or do I need to rethink it completely?
(I'm aware that the "top 1" for fetching new records is probably not very clever, but that is another matter)
I think you've confused GO (the batch separator) and transactions. It shouldn't complain about the variable names being redeclared, provided the batch separators are still present:
BEGIN TRANSACTION
GO
ALTER TRIGGER [dbo].[trg_UserGarbleValue] ON [dbo].[users]
FOR INSERT
AS
---Etc
Go
ALTER TRIGGER [dbo].[trg_SegmentGarbleValue] ON [dbo].[segment]
FOR INSERT
AS
---Etc
Go
COMMIT
As to your note about TOP 1, it's worse than you think - a trigger runs once per statement, not once per row - it could be running in response to multiple rows having been inserted. And, happily, there is a pseudo-table available (called inserted) that contains exactly those rows which caused the trigger to fire - there's no need for you to go searching for those row(s) in the base table.
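A hedged sketch of the first trigger rewritten set-based against inserted, assuming usr_id identifies the inserted rows and reusing the question's fn_GetRandomString function:
-- Set-based rewrite: handles multi-row inserts by joining to the inserted pseudo-table.
ALTER TRIGGER [dbo].[trg_UserGarbleValue] ON [dbo].[users]
FOR INSERT
AS
Begin
    UPDATE u
    SET usr_garble_value = dbo.fn_GetRandomString(4)
                           + CAST(i.usr_id AS NVARCHAR(20))
                           + dbo.fn_GetRandomString(4)
    FROM users u
    INNER JOIN inserted i ON i.usr_id = u.usr_id
End
Go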