I am performing some MSSQL exercises and am trying to create a trigger. The solution I have seems theoretically correct to me, but it is not working.
The aim is to create a trigger for a table that has only two columns. One column is the primary key: an IDENTITY column that does not allow NULL values. The other column ALLOWS NULL values, but only for A SINGLE ROW in the entire table. Basically, the trigger should fire on any insert/update against this table that attempts to set the column to NULL when the table already contains a row with a NULL in that column.
This condition I capture in my trigger code as follows:
CREATE TRIGGER trgOneNullOnly -- trigger name illustrative; the original post omitted the header
ON TestUniqueNulls
AFTER INSERT, UPDATE
AS
SET ANSI_WARNINGS OFF
IF ( (SELECT COUNT(NoDupName) FROM TestUniqueNulls WHERE NoDupName IS NULL) > 1 )
BEGIN
    PRINT 'There already is a row that contains a NULL value, transaction aborted';
    ROLLBACK TRAN
END
However, the transaction completes nonetheless, and the trigger does not seem to do anything. I also used SET ANSI_WARNINGS OFF at the start of the trigger. Can anybody enlighten me as to what I am missing here?
count(col) only counts non-null values, so count(NoDupName) ... where NoDupName is null will always be zero. You need to check count(*) instead.
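A minimal corrected sketch of the trigger, assuming the table and column names from the question (the trigger name is illustrative):

CREATE TRIGGER trgOneNullOnly
ON TestUniqueNulls
AFTER INSERT, UPDATE
AS
BEGIN
    -- COUNT(*) counts rows regardless of NULLs, unlike COUNT(NoDupName)
    IF (SELECT COUNT(*) FROM TestUniqueNulls WHERE NoDupName IS NULL) > 1
    BEGIN
        PRINT 'There already is a row that contains a NULL value, transaction aborted';
        ROLLBACK TRAN;
    END
END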
I realise this is just a practice exercise but an indexed view might be a better mechanism for this.
CREATE VIEW dbo.NoMoreThanOneNull
WITH SCHEMABINDING
AS
SELECT NoDupName
FROM dbo.TestUniqueNulls
WHERE NoDupName IS NULL
GO
CREATE UNIQUE CLUSTERED INDEX ix ON dbo.NoMoreThanOneNull(NoDupName)
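With that index in place, a second NULL row is rejected with a duplicate key error; a quick demo, assuming the table from the question:

INSERT INTO dbo.TestUniqueNulls (NoDupName) VALUES (NULL); -- succeeds: first NULL row
INSERT INTO dbo.TestUniqueNulls (NoDupName) VALUES (NULL); -- fails: duplicate key in the view's unique index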
Yeah, that's a gotcha. The expression inside the parentheses of COUNT has to evaluate to non-null, otherwise the row is not counted. So it is safer to use *, 1, or any non-nullable column in the expression. The most commonly encountered expression is '*', although you will come across '1' as well. There is no difference between these expressions in terms of performance. However, if you use an expression that can evaluate to NULL (like a nullable column), your counts and other aggregations can be completely off.
create table nulltest(a int null)
go
insert nulltest(a) values (1), (null), (2)
go
select * from nulltest          -- 3 rows: 1, NULL, 2
select COUNT(*) from nulltest   -- 3
select COUNT(1) from nulltest   -- 3
select COUNT(a) from nulltest   -- 2: the NULL row is not counted
go
drop table nulltest
I am running into a problem where my check constraints correctly stop commands from executing, but my identity column value still increases. I guess this is because the check occurs after the statement runs and the transaction gets rolled back when the check fails, which leaves the identity value incremented by 1.
Is there a way to run the constraint check before the SQL statement gets executed?
CREATE TABLE TestTable
(
Id INT IDENTITY(1,1) PRIMARY KEY,
Name VARCHAR(100)
)
INSERT INTO TestTable VALUES ('Type-1'),('Type-2'),('Type-55'),('Type-009')
--Add a check constraint so nobody can edit this without doing serious work
ALTER TABLE TestTable WITH NOCHECK ADD CONSTRAINT [CHECK_TestTable_READONLY] CHECK(1=0)
--This fails with the constraint as expected
INSERT INTO TestTable VALUES('This will Fail')
INSERT INTO TestTable VALUES('This will again....')
--Check the Id, it was incremented...
SELECT (IDENT_CURRENT( 'TestTable' ) ) As CurrentIdentity
When I had to do the same thing in the past, I created a trigger that just threw an exception on insert, update, and delete. This has several advantages; most importantly, it prevents updates and deletes, and you can give a custom exception message explaining what you did there and why. It is an extremely bad habit to just put in illogical constraints and hope that three months from now people will understand what is going on and know they should ask you about it. It also prevents the identity counter from being incremented, if that is important. And if it is that important, I would also not use auto-increment and would just set the ID number manually, since even with these triggers you could always have an accidental syntax error or some other error after you disabled them and tried to add a value.
create trigger PreventChanges
on TestTable
FOR INSERT, UPDATE, DELETE
as
begin
    throw 51000, 'DO NOT change anything in this table unless you really have to! In order to do so, please talk to GER (or just disable and re-enable this trigger).', 1;
end
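When a legitimate change really is needed, the trigger can be disabled and re-enabled around the statement, as the message itself suggests; a sketch:

DISABLE TRIGGER PreventChanges ON TestTable;
-- perform the intended, approved change here
ENABLE TRIGGER PreventChanges ON TestTable;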
It sounds like you're intending to use the identity column for something it's not meant for. But to answer your question: could you not just manually code up some SQL Server IF statements to test your data before the insert happens (perhaps in a stored procedure)? I wouldn't know how to make this dynamic enough to fit all constraints on any table, but the process would do what you want: prevent the INSERT from firing. Though if your constraints change, you would have to change the procedure too.
e.g.,
IF 1 = 0 -- or use any of your constraints here...
BEGIN
-- nest more IFs if you have multiple check-constraints...
INSERT INTO TestTable
VALUES ('This will not increase your identity number since 1 does not equal 0')
END
We occasionally get EMPTY records in our table/column below when multiple records are inserted in one shot. While technically this is allowed, since the column is nullable, the default constraint should apply to every row inserted.
ALTER TABLE [dbo].[JOB] ADD [DATE_CREATED] [nvarchar](35) NULL CONSTRAINT [DF_JOB_DATE_CREATED] DEFAULT (sysdatetime())
The one possible reason I could think of is that the default only applies if you don't insert explicitly into that column. But I couldn't find any code that does that, and I'm still looking. Any other possible reasons?
We are on SQL Server 2012. The purpose of the column is to capture created date and time for processing. We can't have this column Non-nullable as this is a reporting column which shouldn't have a business impact.
Thank you for your advice.
Make the column NOT NULL. At the very least, do that so you can capture what application/query is explicitly inserting NULLs - which really just shouldn't be allowed.
Short of that, create a trigger:
CREATE TRIGGER trg_JOB_CreateDate
ON dbo.JOB
AFTER INSERT
AS
BEGIN
UPDATE j
SET DATE_CREATED = SYSDATETIME() -- matches the column's default; consider a UTC variant
FROM dbo.JOB j
INNER JOIN inserted i
    ON i.PrimaryKeyName = j.PrimaryKeyName -- substitute the table's actual key column
END
However, this adds some transactional overhead, and it won't stop someone from updating the column to NULL. But again, if a NULL there breaks something, then you really should just make the column NOT NULL.
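For reference, converting the column to NOT NULL might look like the sketch below; it keeps the question's nvarchar(35) type, and any existing NULLs must be backfilled first or the ALTER will fail:

UPDATE dbo.JOB SET DATE_CREATED = SYSDATETIME() WHERE DATE_CREATED IS NULL;
ALTER TABLE dbo.JOB ALTER COLUMN DATE_CREATED nvarchar(35) NOT NULL;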
This is on Azure.
I have a supertype entity and several subtype entities, the latter of which need to obtain their foreign keys from the primary key of the supertype entity on each insert. In Oracle, I use a BEFORE INSERT trigger to accomplish this. How would one accomplish this in SQL Server / T-SQL?
DDL
CREATE TABLE super (
super_id int IDENTITY(1,1)
,subtype_discriminator char(4) CHECK (subtype_discriminator IN ('SUB1', 'SUB2'))
,CONSTRAINT super_id_pk PRIMARY KEY (super_id)
);
CREATE TABLE sub1 (
sub_id int IDENTITY(1,1)
,super_id int NOT NULL
,CONSTRAINT sub_id_pk PRIMARY KEY (sub_id)
,CONSTRAINT sub_super_id_fk FOREIGN KEY (super_id) REFERENCES super (super_id)
);
I want an insert into sub1 to fire a trigger that actually inserts a row into super and uses the generated super_id in sub1.
In Oracle, this would be accomplished by the following:
CREATE TRIGGER sub_trg
BEFORE INSERT ON sub1
FOR EACH ROW
DECLARE
v_super_id int; -- Ignore the fact that I could have used super_id_seq.CURRVAL
BEGIN
INSERT INTO super (super_id, subtype_discriminator)
VALUES (super_id_seq.NEXTVAL, 'SUB1')
RETURNING super_id INTO v_super_id;
:NEW.super_id := v_super_id;
END;
Please advise on how I would simulate this in T-SQL, given that T-SQL lacks BEFORE INSERT triggers.
Sometimes a BEFORE trigger can be replaced with an AFTER one, but this doesn't appear to be the case in your situation, for you clearly need to provide a value before the insert takes place. So, for that purpose, the closest functionality would seem to be the INSTEAD OF trigger one, as @marc_s has suggested in his comment.
Note, however, that, as the names of these two trigger types suggest, there is a fundamental difference between a BEFORE trigger and an INSTEAD OF one. While in both cases the trigger executes before the action of the invoking statement has taken place, with an INSTEAD OF trigger the action is never supposed to take place at all: whatever real action you need done must be performed by the trigger itself. This is very unlike the BEFORE trigger functionality, where the statement is always due to execute unless, of course, you explicitly roll it back.
But there's one other issue to address actually. As your Oracle script reveals, the trigger you need to convert uses another feature unsupported by SQL Server, which is that of FOR EACH ROW. There are no per-row triggers in SQL Server either, only per-statement ones. That means that you need to always keep in mind that the inserted data are a row set, not just a single row. That adds more complexity, although that'll probably conclude the list of things you need to account for.
So, it's really two things to solve then:
replace the BEFORE functionality;
replace the FOR EACH ROW functionality.
My attempt at solving these is below:
CREATE TRIGGER sub_trg
ON sub1
INSTEAD OF INSERT
AS
BEGIN
    DECLARE @new_super TABLE (
        super_id int
    );
    INSERT INTO super (subtype_discriminator)
    OUTPUT INSERTED.super_id INTO @new_super (super_id)
    SELECT 'SUB1' FROM INSERTED;

    INSERT INTO sub1 (super_id)
    SELECT super_id FROM @new_super;
END;
This is how the above works:
The same number of rows as are being inserted into sub1 is first added to super. The generated super_id values are stored in temporary storage (a table variable called @new_super).
The newly inserted super_ids are now inserted into sub1.
Nothing too difficult really, but the above will only work if you have no other columns in sub1 than those you've specified in your question. If there are other columns, the above trigger will need to be a bit more complex.
The problem is to assign the new super_ids to every inserted row individually. One way to implement the mapping could be like below:
CREATE TRIGGER sub_trg
ON sub1
INSTEAD OF INSERT
AS
BEGIN
    DECLARE @new_super TABLE (
        rownum int IDENTITY (1, 1),
        super_id int
    );
    INSERT INTO super (subtype_discriminator)
    OUTPUT INSERTED.super_id INTO @new_super (super_id)
    SELECT 'SUB1' FROM INSERTED;

    WITH enumerated AS (
        SELECT *, ROW_NUMBER() OVER (ORDER BY (SELECT 1)) AS rownum
        FROM inserted
    )
    INSERT INTO sub1 (super_id /*, other columns */)
    SELECT n.super_id /*, i.other columns */
    FROM enumerated AS i
    INNER JOIN @new_super AS n
        ON i.rownum = n.rownum;
END;
As you can see, an IDENTITY(1,1) column is added to @new_super, so the temporarily stored super_id values are additionally enumerated starting from 1. To provide the mapping between the new super_ids and the new data rows, the ROW_NUMBER function is used to enumerate the INSERTED rows as well. As a result, every row in the INSERTED set can now be linked to a single super_id and thus complemented to a full data row to be inserted into sub1.
Note that the order in which the new super_ids are inserted may not match the order in which they are assigned. I considered that a no-issue. All the new super rows generated are identical save for the IDs. So, all you need here is just to take one new super_id per new sub1 row.
If, however, the logic of inserting into super is more complex and for some reason you need to remember precisely which new super_id has been generated for which new sub row, you'll probably want to consider the mapping method discussed in this Stack Overflow question:
Using merge..output to get mapping between source.id and target.id
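For completeness, that technique relies on the fact that MERGE (unlike INSERT) lets the OUTPUT clause reference source columns, so you can capture the source key alongside the generated identity. A sketch under the assumption that the incoming rows carry some natural key, here called nat_key (hypothetical name):

DECLARE @map TABLE (nat_key int, super_id int);

MERGE INTO super AS t
USING (SELECT nat_key FROM inserted) AS s
    ON 1 = 0  -- never matches, so every source row is inserted
WHEN NOT MATCHED THEN
    INSERT (subtype_discriminator) VALUES ('SUB1')
OUTPUT s.nat_key, INSERTED.super_id INTO @map (nat_key, super_id);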
While Andriy's proposal will work well for INSERTs of a small number of records, full table scans will be done on the final join, as both enumerated and @new_super are unindexed, resulting in poor performance for large inserts.
This can be resolved by specifying a primary key on the #new_super table, as follows:
DECLARE @new_super TABLE (
    rownum INT IDENTITY(1,1) PRIMARY KEY CLUSTERED,
    super_id int
);
This will result in the SQL optimizer scanning through the enumerated table but doing an indexed join on @new_super to get the new key.
I have a bit IsDefault column. Only one row of data within the table may have this bit column set to 1, all the others must be 0.
How can I enforce this?
All versions:
Trigger
Indexed view
Stored proc (eg test on write)
SQL Server 2008: a filtered index
CREATE UNIQUE INDEX IX_foo ON bar (MyBitCol) WHERE MyBitCol = 1
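On versions before 2008, an indexed view can emulate the filtered index (a sketch, assuming the bar table from above):

CREATE VIEW dbo.OnlyOneDefault
WITH SCHEMABINDING
AS
SELECT MyBitCol
FROM dbo.bar
WHERE MyBitCol = 1
GO
CREATE UNIQUE CLUSTERED INDEX IX_OnlyOneDefault ON dbo.OnlyOneDefault (MyBitCol)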
Assuming your PK is a single, numeric column, you could add a computed column to your table:
ALTER TABLE YourTable
ADD IsDefaultCheck AS CASE IsDefault
WHEN 1 THEN -1
WHEN 0 THEN YourPK
END
Then create a unique index on the computed column.
CREATE UNIQUE INDEX IX_DefaultCheck ON YourTable(IsDefaultCheck)
I think the trigger is the best idea if you want to change the old default record to 0 when you insert/update a new one, and if you want to make sure one record always has the value (i.e., if you delete the record with the value, you would assign it to a different record). You would have to decide on the rules for doing so. These triggers can be tricky because you have to account for multiple records in the inserted and deleted tables. So if three records in a batch try to update to become the default record, which one wins?
If you want to make sure the one default record never changes when someone else tries to change it, the filtered index is a good idea.
Different approaches can be taken here, but I think only two are correct. Let's go through it step by step.
We have a Hierarchy table with a Root column. This column tells us which row is currently the starting point. As the question asks, we want to have only one starting point.
We think that we can do it with:
Constraint
Indexed View
Trigger
Different table and relation
Constraint
In this approach we first need to create a function which will do the job.
CREATE FUNCTION [gt].[fnOnlyOneRoot]()
RETURNS BIT
BEGIN
DECLARE @rootAmount TINYINT
DECLARE @result BIT
SELECT @rootAmount=COUNT(1) FROM [gt].[Hierarchy] WHERE [Root]=1
IF @rootAmount=1
    set @result=1
ELSE
    set @result=0
RETURN @result
END
GO
And then the constraint:
ALTER TABLE [gt].[Hierarchy] WITH CHECK ADD CONSTRAINT [ckOnlyOneRoot] CHECK (([gt].[fnOnlyOneRoot]()=(1)))
Unfortunately, this approach is wrong, as the constraint won't allow us to change any values in the table: it requires exactly one root to be marked at all times (an insert with Root=1 will throw an exception, and so will an update setting Root=0).
We could change fnOnlyOneRoot to allow zero selected roots, but that is not what we wanted.
Index
A filtered index excludes the rows matched by its WHERE clause and enforces a unique constraint on the remaining data. We have different options here:
- Root can be nullable, and we can filter on Root != 0 AND Root IS NOT NULL
- Root must have a value, and we can filter on Root != 0 alone
- and different combinations
CREATE UNIQUE INDEX ix_OnyOneRoot ON [gt].[Hierarchy](Root) WHERE Root !=0 and Root is not null
This approach is also not perfect: at most one Root is enforced, but not at least one. To change the root, we first need to set the previous row to NULL or 0.
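In practice, moving the root then takes two statements in one transaction, for example (with @newRootId standing in for the chosen row):

BEGIN TRAN;
UPDATE [gt].[Hierarchy] SET [Root] = 0 WHERE [Root] = 1;                 -- demote the old root first
UPDATE [gt].[Hierarchy] SET [Root] = 1 WHERE [HierarchyId] = @newRootId; -- then promote the new one
COMMIT;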
Trigger
We can write two kinds of triggers, which behave differently:
- a Prevent trigger, which won't allow us to put in wrong data
- a DoTheJob trigger, which updates the data for us in the background
Prevent trigger
This is basically the same as the constraint: if we want to force exactly one root, then we cannot update or insert.
CREATE TRIGGER tOnlyOneRoot
ON [gt].[Hierarchy]
AFTER INSERT, UPDATE
AS
DECLARE @rootAmount TINYINT
DECLARE @result BIT
SELECT @rootAmount=COUNT(1) FROM [gt].[Hierarchy] WHERE [Root]=1
IF @rootAmount=1
    set @result=1
ELSE
    set @result=0
IF @result=0
BEGIN
RAISERROR ('Only one root',0,0);
ROLLBACK TRANSACTION
RETURN
END
GO
DoTheJob trigger
This trigger checks all inserted/updated rows, and if more than one Root is passed in, it throws an exception. Otherwise, if exactly one new Root is inserted or updated, the trigger allows the operation and afterwards sets the Root value of every other row to 0.
CREATE TRIGGER tOnlyOneRootDoTheJob
ON [gt].[Hierarchy]
AFTER INSERT, UPDATE
AS
DECLARE @insertedCount TINYINT
SELECT @insertedCount = COUNT(1) FROM inserted WHERE [Root]=1
IF (@insertedCount > 1)
BEGIN
    RAISERROR ('Only one root',0,0);
    ROLLBACK TRANSACTION
    RETURN
END
DECLARE @newRootId INT
SELECT @newRootId = [HierarchyId] FROM inserted WHERE [Root]=1
UPDATE [gt].[Hierarchy] SET [Root]=0 WHERE [HierarchyId] <> @newRootId
GO
This is the solution we tried to achieve: the only-one-root rule is always met. (An additional trigger for DELETE should still be written.)
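That DELETE trigger could be a simple guard that refuses to remove the current root; a sketch (not part of the original answer):

CREATE TRIGGER tOnlyOneRootDelete
ON [gt].[Hierarchy]
AFTER DELETE
AS
IF EXISTS (SELECT 1 FROM deleted WHERE [Root] = 1)
BEGIN
    RAISERROR ('The root row cannot be deleted', 16, 1);
    ROLLBACK TRANSACTION
    RETURN
END
GO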
Different table and relation
This is, let's say, the more normalized way. We create a new table that is allowed to hold only one row (using the options described above) and we join to it.
CREATE TABLE [gt].[HierarchyDefault](
[HierarchyId] INT PRIMARY KEY NOT NULL,
CONSTRAINT FK_HierarchyDefault_Hierarchy FOREIGN KEY (HierarchyId) REFERENCES [gt].[Hierarchy](HierarchyId)
)
Will it hit performance?
With one column
SET STATISTICS TIME ON;
SELECT [HierarchyId],[ParentHierarchyId],[Root]
FROM [gt].[Hierarchy] WHERE [root]=1
SET STATISTICS TIME OFF;
Result
CPU time = 0 ms, elapsed time = 0 ms.
With join:
SET STATISTICS TIME ON;
SELECT h.[HierarchyId],[ParentHierarchyId],[Root]
FROM [gt].[Hierarchy] h
INNER JOIN [gt].[HierarchyDefault] hd on h.[HierarchyId]=hd.[HierarchyId]
WHERE [root]=1
SET STATISTICS TIME OFF;
Result
CPU time = 0 ms, elapsed time = 0 ms.
Summary
I will use the trigger. It puts some magic in the table, but it does all the job under the hood.
Easy table creation:
CREATE TABLE [gt].[Hierarchy](
[HierarchyId] INT PRIMARY KEY IDENTITY(1,1),
[ParentHierarchyId] INT NULL,
[Root] BIT,
CONSTRAINT FK_Hierarchy_Hierarchy FOREIGN KEY (ParentHierarchyId)
REFERENCES [gt].[Hierarchy](HierarchyId)
)
You could apply an Instead of Insert trigger and check the value as it's coming in.
Create Trigger TRG_MyTrigger
on MyTable
Instead of Insert
as
Begin
    --If an incoming row is marked as default, demote the existing default row....
    If Exists(Select * from inserted where IsDefault = 1)
    Begin
        Update MyTable Set IsDefault = 0 where IsDefault = 1;
    End
    insert into MyTable (Columns) -- substitute the actual column list
    select Columns from inserted
End
Alternatively you could enforce it with a filtered unique index on the column, as shown above.
The accepted answer to the below question is both interesting and relevant:
Constraint for only one record marked as default
"But the serious relational folks will tell you this information
should just be in another table."
Have a separate 1 row table that tells you which record is 'default'. Anon touched on this in his comment.
I think this is the best approach: simple, clean, and it doesn't require a 'clever' esoteric solution prone to errors or later misunderstanding. You can even drop the IsDefault column.
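A sketch of that one-row table (names hypothetical), using a fixed, constrained key column to guarantee the table can never hold more than one row:

CREATE TABLE dbo.DefaultRecord (
    [Lock] char(1) NOT NULL DEFAULT 'X',
    RecordId int NOT NULL REFERENCES dbo.YourTable (Id),
    CONSTRAINT PK_DefaultRecord PRIMARY KEY ([Lock]),
    CONSTRAINT CK_DefaultRecord_OneRow CHECK ([Lock] = 'X')
);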
I have created an INSTEAD OF INSERT trigger on a view in my database. I want to know which columns are included in the column list of the INSERT statement on the view.
If you read the MSDN documentation for triggers, the UPDATE() and COLUMNS_UPDATED() functions should satisfy this requirement. However, during my testing I found that, regardless of which columns are in the INSERT column list, the UPDATE() and COLUMNS_UPDATED() functions always report all columns of the view.
CREATE VIEW dbo.MyView (BatchId, [Status], OrderNumber, WhenClosed) AS
SELECT bth.BatchId, bth.[Status], bth.OrderNumber,
Private.ufxAdjustDateTime(bth.WhenClosed, bth.WhenClosedUtcOffset)
FROM Private.Batch AS bth
GO
CREATE TRIGGER dbo.[MyView-Insert] ON dbo.MyView INSTEAD OF INSERT AS
BEGIN
    SET NOCOUNT ON

    DECLARE @batchIdIsSet BIT
    SELECT @batchIdIsSet = 0
    IF UPDATE(BatchId)
        SELECT @batchIdIsSet = 1

    INSERT INTO Private.Batch
        (BatchId, [Status], OrderNumber)
    SELECT CASE @batchIdIsSet
               WHEN 1 THEN ins.BatchId
               ELSE NEWID()
           END, ins.[Status], ins.OrderNumber
    FROM inserted AS ins
END
The reason I want to do this is that I need to modify an existing table, and I have loads of legacy code that relies on it. So what I've done is create a new table, change the old table into a view, and create triggers on the view to allow INSERT, UPDATE and DELETE statements.
Now, the old table had defaults for certain columns; if the insert into the view relies on such a default, I want to use a default in my insert into the new table too. To do this I have to be able to figure out which columns had values explicitly supplied in the INSERT.
Checking whether the column is NULL is not enough, because the INSERT statement can explicitly set the field value to NULL, and that is perfectly acceptable.
Hmmm, I hope this is clear.
Kep.
On an INSERT statement, every column is affected. It either gets NULL, or the value you're specifying.
Checking for NULL would be the best option, but as you can't do that, I'm thinking you might be a bit stuck. Can you work out scenarios which might need to handle NULL explicitly?
For INSERT, everything is a change because it's a new row.
If you had an AFTER trigger, you could test whether the inserted value is the default value. But if the default is NULL (e.g., the column is nullable with no default), then how can you distinguish an explicitly inserted NULL in any trigger?
In a BEFORE trigger, I don't know if you can trap the default. Of course, if the default is NEWID() this still won't help you.
On the face of it, this can't be done in a trigger.