Validating UPDATE and INSERT statements against an entire table - sql-server

I'm looking for the best way to go about adding a constraint to a table that is effectively a unique index on the relationship between the record and the rest of the records in that table.
Imagine the following table describing the patrols of various guards (from the previous watchman scenario):
PK PatrolID Integer
FK GuardID Integer
Starts DateTime
Ends DateTime
We start with a constraint specifying that the start and end times must be logical:
Ends >= Starts
However I want to add another logical constraint: a specific guard (GuardID) cannot be in two places at the same time, meaning that for any record the period specified by Starts/Ends should not overlap with the period defined for any other patrol by the same guard.
I can think of two ways of trying to approach this:
Create an INSTEAD OF INSERT trigger. This trigger would then use cursors to go through the INSERTED table, checking each record. If any record conflicted with an existing record, an error would be raised. The two problems I have with this approach are: I dislike using cursors in a modern version of SQL Server, and I'm not sure how to go about implementing the same logic for UPDATEs. There may also be the complexity of records within INSERTED conflicting with each other.
The second, seemingly better, approach would be to create a CONSTRAINT that calls a user defined function, passing the PatrolID, GuardID, Starts and Ends. The function would then do a WHERE EXISTS query checking for any records that overlap the GuardID/Starts/Ends parameters that are not the original PatrolID record. However I'm not sure of what potential side effects this approach might have.
Is the second approach better? Does anyone see any pitfalls, such as when inserting/updating multiple rows at once (here I'm concerned because rows within that group could conflict, meaning the order they are "inserted" in makes a difference)? Is there a better way of doing this (such as some fancy INDEX trick)?
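For reference, the second approach the question describes could be sketched along these lines. The function and constraint names here are made up for illustration, and a UDF-backed CHECK constraint has subtle behavior under multi-row updates, so treat this as a sketch rather than a recommendation:

```sql
-- Hypothetical UDF + CHECK constraint sketch for approach 2
CREATE FUNCTION dbo.fn_PatrolOverlaps
    (@PatrolID int, @GuardID int, @Starts datetime, @Ends datetime)
RETURNS bit
AS
BEGIN
    DECLARE @overlaps bit = 0;
    IF EXISTS (SELECT *
               FROM dbo.Patrol
               WHERE GuardID = @GuardID
                 AND PatrolID <> @PatrolID   -- ignore the row being checked
                 AND Starts < @Ends
                 AND @Starts < Ends)         -- standard interval-overlap test
        SET @overlaps = 1;
    RETURN @overlaps;
END
GO
ALTER TABLE dbo.Patrol ADD CONSTRAINT CK_Patrol_NoOverlap
    CHECK (dbo.fn_PatrolOverlaps(PatrolID, GuardID, Starts, Ends) = 0);
```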

Use an after trigger to check that the overlap constraint has not been violated:
create trigger Patrol_NoOverlap_AIU on Patrol for insert, update as
begin
    if exists (select *
               from inserted i
               inner join Patrol p
                   on i.GuardId = p.GuardId
                   and i.PatrolId <> p.PatrolId
               -- intervals overlap, endpoints inclusive; this symmetric test
               -- also catches one patrol fully containing the other
               where i.Starts <= p.Ends
                 and p.Starts <= i.Ends)
        rollback transaction
end
NOTE: Rolling back a transaction within a trigger will terminate the batch. Unlike a normal constraint violation, you will not be able to catch the error.
You may want a different where clause depending on how you define the time range and overlap. For instance, if you want to be able to say Guard #1 is at X from 6:00 to 7:00 and then at Y from 7:00 to 8:00, the above would not allow it. You would want instead:
create trigger Patrol_NoOverlap_AIU on Patrol for insert, update as
begin
    if exists (select *
               from inserted i
               inner join Patrol p
                   on i.GuardId = p.GuardId
                   and i.PatrolId <> p.PatrolId
               -- half-open intervals [Starts, Ends): touching endpoints are allowed
               where i.Starts < p.Ends
                 and p.Starts < i.Ends)
        rollback transaction
end
Where Starts is the time the guarding starts and Ends is the infinitesimal moment after guarding ends.

The simplest way would be to use a stored procedure for the inserts. The stored procedure can do the insert in a single statement:
insert into YourTable
    (GuardID, Starts, Ends)
select @GuardID, @Starts, @Ends
where not exists (
    select *
    from YourTable
    where GuardID = @GuardID
      and Starts <= @Ends
      and Ends >= @Starts
)

if @@rowcount <> 1
    return -1 -- Failure
In my experience triggers and constraints with UDF's tend to become very complex. They have side effects that can require a lot of debugging to figure out.
Stored procedures just work, and they have the added advantage that you can deny INSERT permissions to clients, giving you fine-grained control over what enters your database.
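Wrapped into a complete procedure, the idea might look like this (the procedure name is a placeholder; under heavy concurrency you may also want UPDLOCK/HOLDLOCK hints on the existence check):

```sql
CREATE PROCEDURE dbo.InsertPatrol
    @GuardID int,
    @Starts  datetime,
    @Ends    datetime
AS
BEGIN
    SET NOCOUNT ON;

    -- insert only if no overlapping patrol exists for this guard
    INSERT INTO YourTable (GuardID, Starts, Ends)
    SELECT @GuardID, @Starts, @Ends
    WHERE NOT EXISTS (
        SELECT *
        FROM YourTable
        WHERE GuardID = @GuardID
          AND Starts <= @Ends
          AND Ends >= @Starts);

    IF @@ROWCOUNT <> 1
        RETURN -1; -- failure: an overlapping patrol already exists

    RETURN 0;
END
```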

CREATE TRIGGER [dbo].[emaill] ON [dbo].[email]
FOR INSERT
AS
BEGIN
    -- check every inserted row, not just one (inserted can hold multiple rows)
    IF EXISTS (SELECT * FROM inserted i WHERE i.email NOT LIKE '%_@%_.__%')
    BEGIN
        PRINT 'Trigger Fired';
        PRINT 'Invalid Email....';
        ROLLBACK TRANSACTION
    END
END

Can be done with constraints too:
http://www2.sqlblog.com/blogs/alexander_kuznetsov/archive/2009/03/08/storing-intervals-of-time-with-no-overlaps.aspx
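The linked technique can be sketched roughly as follows (column and constraint names are illustrative): each row carries the end of the guard's previous patrol, a CHECK forbids overlap with it, and a self-referencing foreign key keeps the chain honest.

```sql
CREATE TABLE dbo.Patrol (
    GuardID      int      NOT NULL,
    Starts       datetime NOT NULL,
    Ends         datetime NOT NULL,
    PreviousEnds datetime NULL,  -- NULL only for the guard's first patrol
    CONSTRAINT PK_Patrol       PRIMARY KEY (GuardID, Starts),
    CONSTRAINT UQ_Patrol_Ends  UNIQUE (GuardID, Ends),
    CONSTRAINT CK_Patrol_Valid CHECK (Ends > Starts),
    CONSTRAINT CK_Patrol_Order CHECK (PreviousEnds <= Starts),
    CONSTRAINT FK_Patrol_Chain FOREIGN KEY (GuardID, PreviousEnds)
        REFERENCES dbo.Patrol (GuardID, Ends)
);
```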

Related

Use of inserted and deleted tables for logging - is my concept sound?

I have a table with a simple identity column primary key. I have written a 'For Update' trigger that, among other things, is supposed to log the changes of certain columns to a log table. Needless to say, this is the first time I've tried this.
Essentially as follows:
Declare Cursor1 Cursor for
select a.*, b.*
from inserted a
inner join deleted b on a.OrderItemId = b.OrderItemId
(where OrderItemId is the actual name of the primary identity key).
I then do the usual open the cursor and go into a fetch next loop. With the columns I want to test, I do:
if Update(Field1)
begin
..... do some logging
end
The columns include varchars, bits, and datetimes. It works, sometimes. The problem is that the log function writes the a (inserted) and b (deleted) values of the field to a log, and in some cases the before and after values appear to be identical.
I have a few questions:
Am I using the Update function correctly?
Am I accessing the before and after values correctly?
Is there a better way?
If you are using SQL Server 2016 or higher, I would recommend skipping this trigger entirely and instead using system-versioned temporal tables.
Not only will it eliminate the need for (and performance issues around) the trigger, it'll be easier to query the historical data.
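A minimal sketch of a system-versioned temporal table, using made-up table and column names:

```sql
CREATE TABLE dbo.OrderItem (
    OrderItemId int IDENTITY PRIMARY KEY,
    Field1      varchar(50) NULL,
    ValidFrom   datetime2 GENERATED ALWAYS AS ROW START NOT NULL,
    ValidTo     datetime2 GENERATED ALWAYS AS ROW END   NOT NULL,
    PERIOD FOR SYSTEM_TIME (ValidFrom, ValidTo)
)
WITH (SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.OrderItemHistory));
```

Every UPDATE then writes the old row to dbo.OrderItemHistory automatically, and you can query the past with FOR SYSTEM_TIME AS OF '...' instead of joining a hand-rolled log table.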

SQL server GetDate in trigger called sequentially has the same value

I have a trigger on a table for insert, delete, and update that, on its first line, gets the current date with GetDate().
The trigger compares the deleted and inserted tables to determine which field has been changed, and stores the id, datetime, and changed field in another table. This combination must be unique.
A stored procedure does an insert and an update sequentially on the table. Sometimes I get a primary key violation, and I suspect that GetDate() returns the same value both times.
How can I make GetDate() return different values in the trigger?
EDIT
Here is the code of the trigger
CREATE TRIGGER dbo.TR
ON table
FOR DELETE, INSERT, UPDATE
AS
BEGIN
    SET NOCOUNT ON

    DECLARE @dt datetime
    SELECT @dt = GetDate()

    insert tableLog (id, date, field, old, new)
    select I.id, @dt, 'field', D.field, I.field
    from INSERTED I LEFT JOIN DELETED D ON I.id = D.id
    where IsNull(I.field, -1) <> IsNull(D.field, -1)
END
and the code of the calls
...
insert into table (anotherfield)
values (@anotherfield)

if @@rowcount = 1 SET @ID = @@Identity
...
update table
set field = @field
where Id = @ID
...
Sometimes the GetDate() value between the two calls (insert and update) differs by 7 milliseconds, and sometimes it is identical.
That's not exactly a full solution, but try using SYSDATETIME() instead, and of course make sure that the target table stores datetime2 with enough precision (down to 100 nanoseconds).
Note that you can't force different datetime values regardless of precision (unless you start counting ticks yourself), as things can simply happen at the same time within a given precision.
If stretching to higher precision doesn't solve the issue on a practical level, I think you will have to either redesign this logging schema (perhaps add an identity column on top of what you have) or resort to some dirty trick, like doing the insert in a TRY/CATCH block and adding a microsecond in a loop until the insert succeeds. Definitely not something I would recommend.
Look at this answer: SQL Server: intrigued by GETDATE()
If you are inserting multiple rows, they will all use the same value of GetDate(), so you can try wrapping it in a UDF to get unique values. But as I said, this is just a guess without seeing the code of your trigger and what you are actually doing.
It sounds like you're trying to create an audit trail - but now you want to forge some of the entries?
I'd suggest instead adding a rowversion column to the table and including that in your uniqueness criteria - either instead of or as well as the datetime value that is being recorded.
In this way, even if two rows are inserted with identical date/time data, you can still tell the actual insertion order.
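Sketched against the log table from the question (the constraint name is made up):

```sql
-- rowversion values are database-wide unique and strictly increasing,
-- so ties on GetDate() no longer violate the key
ALTER TABLE tableLog ADD RowVer rowversion;

ALTER TABLE tableLog ADD CONSTRAINT UQ_tableLog_Entry
    UNIQUE (id, field, RowVer);
```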

SQL Server, double insert at the exact same time, unicity Bug

I am facing trouble when the following code is called two times almost at the same time.
DECLARE @membershipIdReturn as uniqueidentifier = null

SELECT @membershipIdReturn = MembershipId
FROM [Loyalty].[Membership]
WITH (NOLOCK)
WHERE ContactId = @customerIdFront
  AND IsDeleted = 0

IF (@membershipIdReturn IS NULL)
    -- InsertStatementHere
The calls are so close together (about three thousandths of a second apart) that the second call also enters the if statement. Then a uniqueness violation is raised, because this is not supposed to happen.
Is the bug because of the (NOLOCK)? I need it for transaction reasons.
Is there any workaround to correct this behavior?
Thanks Al
Two options:
1. Use a unique constraint, then put your insert statement in a TRY/CATCH block:
ALTER TABLE [Loyalty].[Membership]
ADD CONSTRAINT uc_ContactId_IsDeleted UNIQUE(ContactId, IsDeleted)
2. Use MERGE with a SERIALIZABLE hint, so there is no gap between the select and the insert:
MERGE [Loyalty].[Membership] WITH (SERIALIZABLE) AS T
USING (SELECT @customerIdFront AS ContactId) AS S
    ON T.ContactId = S.ContactId
   AND T.IsDeleted = 0
WHEN NOT MATCHED THEN
    INSERT (ContactId, MemberName, MemberTel) VALUES (S.ContactId, '', '');
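Option 1 might be sketched like this, reusing the variables from the question (the column list is illustrative):

```sql
BEGIN TRY
    INSERT INTO [Loyalty].[Membership] (ContactId, IsDeleted, MemberName, MemberTel)
    VALUES (@customerIdFront, 0, '', '');
END TRY
BEGIN CATCH
    -- 2601/2627: duplicate key / unique constraint violation,
    -- i.e. another caller inserted the row first
    IF ERROR_NUMBER() NOT IN (2601, 2627)
        THROW;
END CATCH
```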

Any Performance Advantages to Consolidating SQL Server Triggers?

I have a single SQL Server table with 10 different triggers that fire on the same INSERT and UPDATE.
Is there any performance advantage to consolidating the SQL inside the 10 triggers into a single trigger?
The single consolidated trigger would do the same thing as the 10 different triggers, but only one trigger would get fired instead of 10.
Thanks!
Update: Thanks for the great feedback. I updated my question to indicate that I am wondering about performance advantages. I didn't think of the "absolute control over order" advantage. I am aware that these various triggers should be refactored at some point, but am wondering more about performance of one versus many triggers.
The primary advantage I see is that you'll have full control over the order of execution, which you don't have now.
I might suggest a hybrid approach: 1 trigger that calls 10 stored procedures, each encapsulating the logic of one of the existing triggers.
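That hybrid could be sketched like this (trigger and procedure names are hypothetical). Since stored procedures cannot read the inserted/deleted pseudo-tables directly, the trigger copies them into temp tables, which are visible to procedures it calls:

```sql
CREATE TRIGGER BigTable_AllChecks ON dbo.BigTable
AFTER INSERT, UPDATE
AS
BEGIN
    SELECT * INTO #inserted FROM inserted;
    SELECT * INTO #deleted  FROM deleted;

    -- each proc reads #inserted/#deleted and encapsulates
    -- one of the original triggers' logic
    EXEC dbo.Check01;
    EXEC dbo.Check02;
    -- ... and so on for the remaining eight
END
```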
Consolidating triggers can make a huge performance boost. For example consider the following table, two triggers, and an update:
CREATE TABLE dbo.TriggerTest(ID INT NOT NULL PRIMARY KEY,
    s INT NOT NULL)
GO
INSERT INTO dbo.TriggerTest(ID, s)
SELECT n, 1 FROM dbo.Numbers
WHERE n BETWEEN 1 AND 100000;
GO
CREATE TRIGGER TriggerTestNoSignChange
ON dbo.TriggerTest
AFTER UPDATE
AS
BEGIN
    IF EXISTS(SELECT * FROM INSERTED AS i JOIN DELETED AS d
              ON i.id = d.id WHERE SIGN(i.s)*SIGN(d.s) < 0)
    BEGIN
        RAISERROR('s cannot change sign', 16, 1);
        ROLLBACK;
    END
END
GO
CREATE TRIGGER TriggerTestNoBigChange
ON dbo.TriggerTest
AFTER UPDATE
AS
BEGIN
    IF EXISTS(SELECT * FROM INSERTED AS i JOIN DELETED AS d
              ON i.id = d.id WHERE ABS(i.s - d.s) > 5)
    BEGIN
        RAISERROR('s cannot change by more than 5', 16, 1);
        ROLLBACK;
    END
END
GO
UPDATE dbo.TriggerTest SET s = s + 1
WHERE ID BETWEEN 1 AND 1000;
This update uses 1671 ms CPU and 4M reads. Let's consolidate two triggers and rerun the update:
DROP TRIGGER TriggerTestNoSignChange;
DROP TRIGGER TriggerTestNoBigChange;
GO
CREATE TRIGGER TriggerTestNoBigChangeOrSignChange
ON dbo.TriggerTest
AFTER UPDATE
AS
BEGIN
    IF EXISTS(SELECT * FROM INSERTED AS i JOIN DELETED AS d
              ON i.id = d.id WHERE SIGN(i.s)*SIGN(d.s) < 0 OR ABS(i.s - d.s) > 5)
    BEGIN
        RAISERROR('s cannot change sign or change by more than 5', 16, 1);
        ROLLBACK;
    END
END
GO
UPDATE dbo.TriggerTest SET s = s + 1
WHERE ID BETWEEN 1 AND 1000;
The same update now runs twice as fast. Big surprise. ;)
I agree with Joe. By combining into one trigger you get:
full control over order
better visibility into all of the operations in one place
With all of the logic in one bigger piece of code, you'll also possibly see more clearly where some of it can be combined, and find the motivation to clean it up. 10 triggers sounds like an awful lot. What is the trend for the operations that are being performed in each of the triggers? Is it auditing, updating totals somewhere, etc.? I am sure there are some things that could be handled by better up-front logic (e.g. stored procedures that handle the initial insert/update operations, computed columns, even indexed views to prevent the need to perform after-DML computations).
There'd be a lot more benefit in having SPs do the insert and update and getting rid of the triggers; other than that I agree with @Joe Stefanelli. I can't see why you'd get a noticeable performance improvement either way.
10 triggers, shudder...

Best way to get totally sequential int values in SQL Server

I have a business requirement that the InvoiceNumber field in my Invoices table be totally sequential - no gaps or the auditors might think our accountants are up to something fishy!
My first thought was to simply use the primary key (identity) but if a transaction is rolled back a gap appears in the sequence.
So my second thought is to use a trigger which, at the point of insert, looks for the highest InvoiceNumber value in the table, adds 1 to it, and uses it as the InvoiceNumber for the new row. Easy to implement.
Are there potential issues with near-simultaneous inserts? For example, might two near simultaneous inserts running the trigger at the same time get the same 'currently highest InvoiceNumber' value and therefore insert rows with the same InvoiceNumber?
Are there other issues I might be missing? Would another approach be better?
Create a table which keeps track of 'counters'.
For your invoices, you can add some record to that table which keeps track of the next integer that must be used.
When creating an invoice, you should use that value, and increase it. When your transaction is rolled back, the update to that counter will be rollbacked as well. (Make sure that you put a lock on that table, to be sure that no other process can use the same value).
This is much more reliable than looking at the highest current counter that is being used in your invoice table.
You may still get gaps if data gets deleted from the table. But if data only goes in and not out, then with proper use of transactions on an external sequence table, it should be possible to do this nicely. Don't use MAX()+1 because it can have timing issues, or you may have to lock more of the table (page/table) than required.
Have a sequence table that has only a single record and column. Retrieve numbers from the table atomically, wrapping the retrieval and usage in a single transaction.
begin tran

declare @next int
update seqn_for_invoice set @next = next = next + 1

insert invoice (invoicenumber, ...) values (@next, ....)

commit
The UPDATE statement is atomic and cannot be interrupted, and the double assignment makes the value of @next atomic. It is equivalent to using an OUTPUT clause in SQL Server 2005+ to return the updated value. If you need a range of numbers in one go, it is easier to use the PRE-update value rather than the POST-update value, i.e.
begin tran

declare @next int
update seqn_for_invoice set @next = next, next = next + 3 -- 3 in one go

insert invoice (invoicenumber, ...) values (@next, ....)
insert invoice (invoicenumber, ...) values (@next + 1, ....)
insert invoice (invoicenumber, ...) values (@next + 2, ....)

commit
Reference for SQL Server UPDATE statement
SET @variable = column = expression sets the variable to the same value as the column. This differs from SET @variable = column, column = expression, which sets the variable to the pre-update value of the column.
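For comparison, the OUTPUT-clause equivalent mentioned above might look like this, using the same hypothetical seqn_for_invoice table:

```sql
DECLARE @t TABLE (next int);

UPDATE seqn_for_invoice
SET next = next + 1
OUTPUT inserted.next INTO @t;  -- captures the post-update value

DECLARE @next int = (SELECT next FROM @t);
```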
CREATE TABLE dbo.Sequence(
    val int
)
Insert a row with an initial seed. Then to allocate a range of sufficient size for your insert (call it in the same transaction obviously)
CREATE PROC dbo.GetSequence
    @val AS int OUTPUT,
    @n AS int = 1
AS
UPDATE dbo.Sequence
SET @val = val = val + @n;

SET @val = @val - @n + 1;
This will block other concurrent attempts to increment the sequence until the first transaction commits.
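Usage might look like this (the seed value and block size are arbitrary):

```sql
-- one-time seed
INSERT INTO dbo.Sequence (val) VALUES (0);

-- allocate a block of 3 numbers inside the same transaction as the inserts
BEGIN TRAN;
DECLARE @first int;
EXEC dbo.GetSequence @val = @first OUTPUT, @n = 3;
-- @first, @first + 1 and @first + 2 are reserved until COMMIT/ROLLBACK
COMMIT;
```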