Goal
I need to alter a number of almost identical triggers on a number of tables (and a number of databases).
Therefore I want to make one big script and perform all the changes in one succeed-or-fail transaction.
My first attempt (that doesn't work)
---First alter trigger
ALTER TRIGGER [dbo].[trg_UserGarbleValue] ON [dbo].[users]
FOR INSERT
AS
Begin
DECLARE @GarbleValue NVARCHAR(200)
DECLARE @NewID NVARCHAR(20)
SET @NewID = (SELECT TOP 1 usr_id FROM users ORDER BY usr_id DESC)
SET @GarbleValue = dbo.fn_GetRandomString(4) + @NewID + dbo.fn_GetRandomString(4)
UPDATE users SET usr_garble_value = @GarbleValue WHERE usr_id = @NewID
End
Go
--Subsequent alter trigger (there would be many more in the real-world usage)
ALTER TRIGGER [dbo].[trg_SegmentGarbleValue] ON [dbo].[segment]
FOR INSERT
AS
Begin
DECLARE @GarbleValue NVARCHAR(200)
DECLARE @NewID NVARCHAR(20)
SET @NewID = (SELECT TOP 1 seg_id FROM segment ORDER BY seg_id DESC)
SET @GarbleValue = dbo.fn_GetRandomString(4) + @NewID + dbo.fn_GetRandomString(4)
UPDATE segment SET seg_garble_value = @GarbleValue WHERE seg_id = @NewID
End
Go
Running each of the ALTER TRIGGER statements by itself works fine. But when both of them are run in the same transaction, the DECLAREs crash in the second ALTER because the variable names already exist.
How do I accomplish this? Is there any way to declare a variable locally within a begin-end scope, or do I need to rethink it completely?
(I'm aware that the "top 1" for fetching new records is probably not very clever, but that is another matter)
I think you've confused GO (the batch separator) and transactions. It shouldn't complain about the variable names being redeclared, provided the batch separators are still present:
BEGIN TRANSACTION
GO
ALTER TRIGGER [dbo].[trg_UserGarbleValue] ON [dbo].[users]
FOR INSERT
AS
---Etc
Go
ALTER TRIGGER [dbo].[trg_SegmentGarbleValue] ON [dbo].[segment]
FOR INSERT
AS
---Etc
Go
COMMIT
As to your note about TOP 1, it's worse than you think - a trigger runs once per statement, not once per row - it could be running in response to multiple rows having been inserted. And, happily, there is a pseudo-table available (called inserted) that contains exactly those rows which caused the trigger to fire - there's no need for you to go searching for those row(s) in the base table.
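For illustration, here is a set-based sketch of the first trigger that uses inserted; it assumes the same column names and the dbo.fn_GetRandomString function from the question:
ALTER TRIGGER [dbo].[trg_UserGarbleValue] ON [dbo].[users]
FOR INSERT
AS
BEGIN
    -- Join to the inserted pseudo-table so every new row is handled,
    -- however many rows the triggering statement inserted
    UPDATE u
    SET usr_garble_value = dbo.fn_GetRandomString(4) + CAST(i.usr_id AS NVARCHAR(20)) + dbo.fn_GetRandomString(4)
    FROM users u
    INNER JOIN inserted i ON i.usr_id = u.usr_id
END
GO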
Related
Why is there no RECOMPILE option for triggers?
Suddenly the performance of one of our procedures (multiple SELECTs over multiple tables, plus an insert into a table) went from returning data in around 1 second to 10-30 seconds.
After adding various debugging and logging, we noticed that performance would return from the slow 10-30 seconds back to sub-second speeds after we ran ALTER TRIGGER on one of the tables.
Just to clarify, the sequence of events:
Slow performance of the insert
ALTER TRIGGER on the table
Fast performance of the insert
I think the slow performance is associated with a bad cached plan. Before the insert command in the procedure I print the datetime, and at the beginning of the trigger body I added another datetime print. When the procedure is called before the ALTER TRIGGER, the time difference between the two prints is 20 seconds, but after altering the trigger it is back to sub-second speeds. It should be noted that the commands in the trigger are not complicated.
So I need to add a recompile option to the trigger, like a procedure has.
Here is a sample of the trigger script:
create trigger t_test on tbl AFTER insert
as
begin
begin try
declare @yearid int,
@id int
select @id = id, @yearid = yearid
from inserted
if exists(select * from FinancialYear where id = @yearid and flag = 0)
begin
raiserror('year not correct',16,1)
end
DECLARE @PublicNo BIGINT = (SELECT ISNULL(MAX(PublicNo),0)+1 FROM tbl)
update tbl
set PublicNo = @PublicNo
where id = @id
insert into tbl2
values (...)
end try
begin catch
print error_message()
end catch
end
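For context, CREATE TRIGGER has no WITH RECOMPILE clause the way procedures do. The nearest equivalents, sketched here against the sample trigger's table, are a statement-level hint inside the trigger body or marking the table so everything referencing it is recompiled:
-- Option 1: force a fresh plan for one statement inside the trigger
update tbl
set PublicNo = @PublicNo
where id = @id
option (recompile)
-- Option 2: mark the table; all triggers and procedures that reference
-- it are recompiled the next time they run
EXEC sp_recompile N'tbl';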
I have a transaction that calls a stored procedure which creates a temp table. I need to be able to access this temp table outside of the stored procedure after it has been ran. Note: for what I am trying to do, I cannot use global temp tables.
Example:
Here is an example of the stored procedure:
CREATE PROCEDURE [dbo].[GetChangeID]
AS
BEGIN
IF OBJECT_ID('tempdb..#CurrentChangeID') IS NOT NULL
DROP TABLE #CurrentChangeID
SELECT '00000000-0000-0000-0000-000000000000' AS ChangeID INTO #CurrentChangeID
END
GO
Here is an example of the transaction:
BEGIN TRANSACTION
DECLARE @changeID uniqueidentifier
EXEC dbo.GetChangeID
DECLARE @test uniqueidentifier
SET @test = (SELECT ChangeID FROM #CurrentChangeID)
COMMIT TRANSACTION
GO
The issue is that it cannot find a table named #CurrentChangeID.
How can I make it to where it can see this table without declaring it as a global temp table such as ##CurrentChangeID?
------UPDATE------
So let me give more context to my question, because that was just a simplified example. What I am ultimately trying to do is this:
1. Begin a transaction.
2. Call the stored procedure that generates the GUID.
3. Update a row in a given view that has a trigger.
4. Within that trigger, get the GUID that was generated within the stored procedure.
5. Commit.
First of all, you can't access a local temp table defined in a stored procedure from outside that procedure. It will always be out of scope.
Second, you probably don't even need a temp table. In your example:
SET @test = (SELECT ChangeID FROM #CurrentChangeID)
it looks like you want only one value.
I propose using an output parameter.
CREATE PROCEDURE [dbo].[GetChangeID](
@test UNIQUEIDENTIFIER OUTPUT
)
AS
BEGIN
-- ...
SET @test = '00000000-0000-0000-0000-000000000000';
END;
And call:
DECLARE @changeID uniqueidentifier
EXEC dbo.GetChangeID @changeID OUTPUT;
SELECT @changeID;
Thank you lad2025 and Dan Guzman for your input. The way I was originally trying to do this was definitely incorrect.
I did, however, figure out a way to accomplish this task.
Modified Stored Procedure:
CREATE PROCEDURE [dbo].[GetChangeID]
AS
BEGIN
DECLARE @ChangeID uniqueidentifier
...
Code that generates the uniqueidentifier, @ChangeID.
...
--This can be seen within the context of this batch.
SET CONTEXT_INFO @ChangeID
END
GO
Then anywhere within this transaction that you would like to access the changeID, you just have to use the following query:
SELECT CONTEXT_INFO as changeID
FROM sys.dm_exec_requests
WHERE session_id = @@SPID AND request_id = CURRENT_REQUEST_ID()
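If the GUID is needed inside a trigger later in the same session (step 4 of the update above), the CONTEXT_INFO() function is a shorter way to read it back. CONTEXT_INFO is a fixed varbinary(128), so the value has to be converted; a sketch, assuming the GUID was stored as in the modified procedure:
-- The GUID occupies the first 16 bytes of the session's context info
DECLARE @changeID uniqueidentifier = CAST(CONTEXT_INFO() AS uniqueidentifier);
SELECT @changeID AS changeID;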
I created a simple trigger:
ALTER TRIGGER [dbo].[idlist_update] ON [dbo].[Store]
FOR INSERT, UPDATE
AS
BEGIN
DECLARE @brand varchar(50);
DECLARE @model varchar(50);
DECLARE @category varchar(100);
DECLARE @part varchar(100);
DECLARE @count int;
SELECT @count = COUNT(*) FROM inserted;
SELECT @brand = Brand, @model = Model, @category = AClass, @part = Descript FROM inserted;
EXECUTE GenerateId_Part @brand, @model, @category, @part;
END
For rows modified by our users (they use a special application) it works fine, but I need to apply it to all rows in the table (more than 200,000). I tried:
UPDATE Store SET lastupd={fn NOW()};
But it does not work.
I believe the syntax you want is:
UPDATE dbo.Store SET lastupd = CURRENT_TIMESTAMP;
However if you are updating the entire table every time you insert or update a single row, this seems quite wasteful to me. Why not store that fact once, somewhere else, instead of storing it redundantly 200,000 times? It seems to be a property of the store itself, not the products in it.
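A minimal sketch of that idea, using a hypothetical one-row table named StoreInfo:
-- Hypothetical one-row table holding store-level facts
CREATE TABLE dbo.StoreInfo (lastupd datetime NOT NULL);
-- Update a single row instead of rewriting 200,000 rows
UPDATE dbo.StoreInfo SET lastupd = CURRENT_TIMESTAMP;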
Also note that your trigger won't properly handle multi-row operations. You can't do this "assign a variable from inserted" trick because if two rows are inserted by a single statement, you'll only affect one arbitrary row. Unlike some platforms, in SQL Server triggers fire per statement, not per row. We can help fix this if you show what the stored procedure GenerateId_Part does.
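Since GenerateId_Part hasn't been shown, only the shape can be sketched. If the procedure cannot be rewritten set-based, a cursor over inserted at least processes every affected row instead of one arbitrary row:
ALTER TRIGGER [dbo].[idlist_update] ON [dbo].[Store]
FOR INSERT, UPDATE
AS
BEGIN
    DECLARE @brand varchar(50), @model varchar(50), @category varchar(100), @part varchar(100);
    -- Iterate over every row the statement touched
    DECLARE c CURSOR LOCAL FAST_FORWARD FOR
        SELECT Brand, Model, AClass, Descript FROM inserted;
    OPEN c;
    FETCH NEXT FROM c INTO @brand, @model, @category, @part;
    WHILE @@FETCH_STATUS = 0
    BEGIN
        EXECUTE GenerateId_Part @brand, @model, @category, @part;
        FETCH NEXT FROM c INTO @brand, @model, @category, @part;
    END
    CLOSE c;
    DEALLOCATE c;
END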
I have three tables and three stored procedures respectively to insert/update records in these tables. The first table has a primary key column, RecNo, which is auto-generated.
Around 1000 users are entering records in these tables simultaneously from different geographic locations. I am noticing that sometimes inserts into the second or third table get missed, even though the inserts appeared to complete successfully and no warning was generated.
I want to know how the auto-generated primary key column handles concurrent issues. Do I need to set isolation level to SERIALIZABLE on top of each stored procedure?
I am using SQL Server 2008 R2 Express with default isolation level, i.e., READ COMMITTED.
One of my stored procedures looks like:
ALTER PROCEDURE [dbo].[pheSch_CreateOrUpdateTubewellDetails]
-- Add the parameters for the stored procedure here
@TwTaskFlag nvarchar(6),
@TwParameterID bigint,
@SerialNumber bigint,
@TotalNum int,
@TwType nvarchar(50),
@Depth nvarchar(60),
@Diameter nvarchar(60),
@WaterCapacity nvarchar(60),
@PS nvarchar(15),
@PSNum int,
@PSType nvarchar(60),
@Remarks nvarchar(80)
AS
BEGIN
-- SET NOCOUNT ON added to prevent extra result sets from
-- interfering with SELECT statements.
SET NOCOUNT ON;
-- Insert statements for procedure here
BEGIN
UPDATE tw_details
SET
TotalNum = @TotalNum,
TwType = @TwType,
Depth = @Depth,
Diameter = @Diameter,
WaterCapacity = @WaterCapacity,
PS = @PS,
PSNum = @PSNum,
PSType = @PSType,
Remarks = @Remarks
WHERE twpid = @TwParameterID;
END
END
You need not change the isolation level; the identity column is well suited for concurrent inserts.
If you have any triggers attached to these tables, then show all the details.
Also, you mention INSERTs, but I do not see any of them here: the procedure you posted only performs an UPDATE.
USE [ddb]
GO
SET ANSI_NULLS OFF
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE TRIGGER [dbo].[requeststrigger]
ON [dbo].[requests]
AFTER INSERT,UPDATE
AS
BEGIN
DECLARE @email VARCHAR(400);
DECLARE @firstname VARCHAR(400);
DECLARE @requestno VARCHAR(400);
DECLARE @lastname VARCHAR(400);
DECLARE @statusid INT;
DECLARE thecursor CURSOR FOR
SELECT inserted.requestno, contacts.firstname, contacts.lastname, contacts.email
FROM request_contacts, contacts, inserted
WHERE request_contacts.requestid = inserted.requestid
AND contacts.contactid = request_contacts.contactid
AND request_contacts.notification = 1
AND contacts.notification = 1;
SET @statusid = (SELECT statusid FROM inserted);
IF @statusid = 4 AND @statusid <> (SELECT statusid FROM deleted)
BEGIN
SET NOCOUNT ON
SET ARITHABORT ON
OPEN thecursor;
FETCH NEXT FROM thecursor
INTO @requestno, @firstname, @lastname, @email
WHILE @@FETCH_STATUS = 0
BEGIN
EXEC MAIL_SEND @email, @firstname, @requestno, @lastname;
FETCH NEXT FROM thecursor
INTO @requestno, @firstname, @lastname, @email
END
CLOSE thecursor;
DEALLOCATE thecursor
SET NOCOUNT OFF
END
END
This simply makes the whole UPDATE/INSERT not work. When I remove the cursor declaration, it works. The cursor is just selecting fields from a table called "contacts" that exists in the same database. What is wrong?
Are you prepared to consider amending your design? There appear to be a couple of issues with what you're attempting here.
A trigger isn't necessarily the best place to be doing this kind of row-by-row operation since it executes in-line with the changes to the source table, and will affect the performance of the system negatively.
Also, your existing code evaluates statusid only for a single row in the batch, although logically it could be set to more than one value in a single batch of updates.
A more robust approach might be to insert the rows which should generate a MAIL_SEND operation to a queuing table, from which a scheduled job can pick rows up and execute MAIL_SEND, setting a flag so that each operation is carried out only once.
This would simplify your trigger to an insert - no cursor would be required there (although you will still need a loop of some kind in the scheduled job).
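A sketch of that design, with a hypothetical MailQueue table (the trigger mirrors the existing status check, which in practice only fires on updates, since deleted is empty for inserts):
-- Hypothetical queue table the trigger writes to
CREATE TABLE dbo.MailQueue (
    queueid   int IDENTITY PRIMARY KEY,
    requestid int NOT NULL,
    processed bit NOT NULL DEFAULT 0
);
GO
ALTER TRIGGER [dbo].[requeststrigger] ON [dbo].[requests]
AFTER INSERT, UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    -- Queue every request whose status changed to 4 in this statement
    INSERT INTO dbo.MailQueue (requestid)
    SELECT i.requestid
    FROM inserted i
    INNER JOIN deleted d ON d.requestid = i.requestid
    WHERE i.statusid = 4 AND d.statusid <> 4;
END
The scheduled job would then read rows with processed = 0, join to request_contacts and contacts for the e-mail details, call MAIL_SEND, and set processed = 1 for each row it handles.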