I'm building a database for a university assignment. At a certain point, we have to create a trigger/function that limits the insertion of records, taking different foreign keys into account (SQL Server, via Management Studio 17). For example, the table may hold 100 records in total, but only 50 records for the same foreign key.
We have this constraint:
ALTER TABLE [dbo].[DiaFerias] WITH CHECK
ADD CONSTRAINT [Verificar22DiasFerias]
CHECK (([dbo].[verificarDiasFerias]((22)) = 'True'))
With the help of this function:
ALTER FUNCTION [dbo].[verificarDiasFerias] (@contagem INT)
RETURNS VARCHAR(5)
AS
BEGIN
    IF EXISTS (SELECT idFuncionario, idDiaFerias
               FROM DiaFerias
               GROUP BY idFuncionario, idDiaFerias
               HAVING COUNT(*) <= @contagem)
        RETURN 'True'
    RETURN 'False'
END
Would I use a trigger here? Reluctantly, yes (See Enforce maximum number of child rows for how to do it without triggers, in a way I'd never recommend). Would I use that function? No.
I'd create an indexed view:
CREATE VIEW dbo.DiaFerias_Counts
WITH SCHEMABINDING
AS
SELECT idFuncionario, idDiaFerias, COUNT_BIG(*) as Cnt
FROM dbo.DiaFerias
GROUP BY idFuncionario, idDiaFerias
GO
CREATE UNIQUE CLUSTERED INDEX PK_DiaFerias_Counts on
dbo.DiaFerias_Counts (idFuncionario, idDiaFerias)
Why do this? So that SQL Server maintains these counts for us automatically and we don't have to write a wide-ranging query in the trigger. We can now write the trigger, something like:
CREATE TRIGGER T_DiaFerias
ON dbo.DiaFerias
AFTER INSERT, UPDATE
AS
SET NOCOUNT ON;
IF EXISTS (
    SELECT *
    FROM dbo.DiaFerias_Counts dfc
    WHERE dfc.Cnt > 22
    AND (EXISTS (SELECT * FROM inserted i
                 WHERE i.idFuncionario = dfc.idFuncionario
                   AND i.idDiaFerias = dfc.idDiaFerias)
         OR EXISTS (SELECT * FROM deleted d
                    WHERE d.idFuncionario = dfc.idFuncionario
                      AND d.idDiaFerias = dfc.idDiaFerias))
)
BEGIN
    RAISERROR('Constraint violation', 16, 1)
    ROLLBACK -- an AFTER trigger must roll back explicitly; RAISERROR alone doesn't undo the change
END
Hopefully you can see how it's meant to work: we only want to check counts for rows that may have been affected by whatever statement fired the trigger, so we use inserted and deleted to limit the search.
And we reject the change if any count is greater than 22, unlike your function, which only starts rejecting rows once every count is greater than 22.
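As a quick smoke test (hypothetical data; this assumes idFuncionario and idDiaFerias are INTs and the table has no other required columns):

-- 22 rows for the same (idFuncionario, idDiaFerias) pair: accepted
INSERT INTO dbo.DiaFerias (idFuncionario, idDiaFerias)
SELECT TOP (22) 1, 100 FROM sys.all_objects;

-- A 23rd row pushes Cnt to 23: the trigger raises 'Constraint violation' and rolls back
INSERT INTO dbo.DiaFerias (idFuncionario, idDiaFerias)
VALUES (1, 100);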
I am trying to constrain a SQL Server Database by a Start Date and End Date such that I can never double book a resource (i.e. no overlapping or duplicate reservations).
Assume my resources are numbered such that the table looks like
ResourceId, StartDate, EndDate, Status
So let's say I have resource #1. I want to make sure that I cannot have a reservation for 1/8/2017 through 1/16/2017 and a separate reservation for 1/10/2017 through 1/18/2017 for the same resource.
A couple more complications: a StartDate for a resource can be the same as the EndDate for that resource. So 1/8/2017 through 1/16/2017 and 1/16/2017 through 1/20/2017 is OK (i.e., one person can check in on the same day another person checks out).
Furthermore, the Status field indicates whether the booking of the resource is Active or Cancelled. So we can ignore all cancelled reservations.
We have protected against these overlapping or double-booked reservations in code (stored procs and C#) when saving, but we are hoping to add an extra layer of protection with a DB constraint.
Is this possible in SQL Server?
Thanks in Advance
You can use a CHECK constraint to make sure startdate is on or before EndDate easily enough:
CONSTRAINT [CK_Tablename_ValidDates] CHECK ([EndDate] >= [StartDate])
A constraint won't help with preventing an overlapping date range. You can instead use a TRIGGER to enforce this by creating a FOR INSERT, UPDATE trigger that rolls back the transaction if it detects a duplicate:
CREATE TRIGGER [TR_Tablename_NoOverlappingDates]
ON [MyTable] FOR INSERT, UPDATE AS
IF EXISTS(SELECT * FROM inserted INNER JOIN [MyTable] ON blah blah blah ...) BEGIN
    ROLLBACK TRANSACTION;
    RAISERROR('hey, no overlapping date ranges here, buddy', 16, 1);
    RETURN;
END
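For concreteness, here is a hedged sketch of what that join might look like, assuming a table [MyTable](Id, ResourceId, StartDate, EndDate, Status); the Id key column is my assumption, used to avoid comparing a row with itself:

CREATE TRIGGER [TR_MyTable_NoOverlappingDates]
ON [MyTable] FOR INSERT, UPDATE AS
IF EXISTS(
    SELECT *
    FROM inserted i
    INNER JOIN [MyTable] t
        ON  t.ResourceId = i.ResourceId
        AND t.Id <> i.Id                -- assumed key; don't compare a row with itself
        AND t.[Status] = 'Active'       -- ignore cancelled reservations
        AND i.[Status] = 'Active'
        AND t.StartDate < i.EndDate     -- strict < allows checkout day = next check-in day
        AND i.StartDate < t.EndDate
) BEGIN
    ROLLBACK TRANSACTION;
    RAISERROR('Overlapping reservation for this resource', 16, 1);
    RETURN;
END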
Another option is to create an indexed view that finds duplicates, with a unique index on that view that is violated as soon as more than one record exists. This is usually accomplished with a dummy table of two rows cartesian-joined to an aggregate view that selects the duplicate id; a duplicate then yields two view rows with the same fake id value, which the unique index rejects.
I've done both; I like the trigger approach better.
Drawing from this answer here: Date range overlapping check constraint.
First, check to make sure there are not existing overlaps:
select *
from dbo.Reservation as r
where exists (
select 1
from dbo.Reservation i
where i.PersonId = r.PersonId
and i.ReservationId != r.ReservationId
and isnull(i.EndDate,'20990101') > r.StartDate
and isnull(r.EndDate,'20990101') > i.StartDate
);
go
If it is all clear, then create your function.
There are a couple of different ways to write the function, e.g. we could skip the StartDate and EndDate and use something based only on ReservationId like the query above, but I will use this as the example:
create function dbo.udf_chk_Overlapping_StartDate_EndDate (
    @ResourceId int
    , @StartDate date
    , @EndDate date
) returns bit as
begin
    declare @r bit = 1;
    if not exists (
        select 1
        from dbo.Reservation as r
        where r.ResourceId = @ResourceId
            and isnull(@EndDate, '20991231') > r.StartDate
            and isnull(r.EndDate, '20991231') > @StartDate
            and r.[Status] = 'Active'
        group by r.ResourceId
        having count(*) > 1
    )
        set @r = 0;
    return @r;
end;
go
Then add your constraint:
alter table dbo.Reservation
add constraint chk_Overlapping_StartDate_EndDate
check (dbo.udf_chk_Overlapping_StartDate_EndDate(ResourceId,StartDate,EndDate)=0);
go
Last: Test it.
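For example (hypothetical rows, assuming the remaining columns are optional or have defaults; each INSERT is its own statement, so the rejected one does not stop the rest of the script):

INSERT INTO dbo.Reservation (ResourceId, StartDate, EndDate, [Status])
VALUES (1, '20170108', '20170116', 'Active');   -- accepted

INSERT INTO dbo.Reservation (ResourceId, StartDate, EndDate, [Status])
VALUES (1, '20170110', '20170118', 'Active');   -- overlaps: the CHECK constraint rejects it

INSERT INTO dbo.Reservation (ResourceId, StartDate, EndDate, [Status])
VALUES (1, '20170116', '20170120', 'Active');   -- back-to-back on the checkout day: accepted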
I've been trying to find a solution to a very simple problem, but I just can't figure out how to do it. I have two tables, Transactions and Credit_Card.
Transactions
transid (PK), ccid (FK: to credit_card > ccid), amount, timestamp
Credit_Card
ccid (PK), Balance, creditlimit
I want to create a trigger so that, before someone inserts a transaction, it checks that the transaction amount plus the balance of the credit card does not go over the credit limit, and rejects the insert if it does.
"EDIT" The following code fixed my issue, big thanks to Dan Guzman for his contribution!
CREATE TRIGGER TR_transactions
ON transactions FOR INSERT, UPDATE
AS
IF EXISTS(
SELECT 1
FROM (
SELECT t.ccid, SUM(t.amount) AS amount
FROM inserted AS t
GROUP BY t.ccid) AS t
JOIN Credit_Card AS cc ON
cc.ccid = t.ccid
WHERE cc.creditlimit <= (t.amount + cc.balance)
)
BEGIN
RAISERROR('Credit limit exceeded', 16, 1);
ROLLBACK;
END;
If I understand correctly, you just need to check the credit limit against newly inserted/updated transactions. Keep in mind that a SQL Server trigger fires once per statement, and a statement may affect multiple rows. The virtual inserted table will have images of the affected rows; you can use it to limit the credit check to only the credit cards touched by the related transactions.
CREATE TRIGGER TR_transactions
ON transactions FOR INSERT, UPDATE
AS
IF EXISTS(
SELECT 1
FROM (
SELECT inserted.ccid, SUM(inserted.amount) AS amount
FROM inserted
GROUP BY inserted.ccid) AS t
JOIN Credit_Card AS cc ON
cc.ccid = t.ccid
WHERE cc.creditlimit <= (t.amount + cc.balance)
)
BEGIN
RAISERROR('Credit limit exceeded', 16, 1);
ROLLBACK;
END;
EDIT
I removed the t alias from the inserted table and qualified the columns with inserted instead to better indicate the source of the data. It's generally a good practice to qualify column names with the table name or alias in multi-table queries to avoid ambiguity.
The integers 16 and 1 in the RAISERROR statement specify the severity and state of the raised error. See the SQL Server Books Online reference for details. Severity 11 and greater raise an error, with severities in the 11 through 16 range indicating a user-correctable error.
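A quick hypothetical test (made-up rows; assumes a card with a 1000 credit limit and a 900 balance):

INSERT INTO Credit_Card (ccid, Balance, creditlimit) VALUES (1, 900, 1000);

INSERT INTO Transactions (transid, ccid, amount, [timestamp])
VALUES (1, 1, 50, GETDATE());    -- accepted: 900 + 50 is under the 1000 limit

INSERT INTO Transactions (transid, ccid, amount, [timestamp])
VALUES (2, 1, 200, GETDATE());   -- rejected: 900 + 200 reaches the limit, trigger rolls back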
You can try this.

ALTER TRIGGER [dbo].[TrigerOnInsertPonches]
ON [dbo].[CHECKINOUT]
AFTER INSERT
AS
BEGIN
    DECLARE @ccid int
          , @amount money

You have to tell SQL Server that the trigger runs after insert; then you can declare the variables you will use.

SELECT @ccid = o.ccid FROM inserted o;

I think this is the correct way to catch the id.

Then you can make the SELECT filter on this value.

I hope this can be useful.
We are using the technique outlined here to generate random record IDs without collisions. In short, we create a randomly-ordered table of every possible ID, and mark each record as 'Taken' as it is used.
I use the following Stored Procedure to obtain an ID:
ALTER PROCEDURE spc_GetId @retVal BIGINT OUTPUT
AS
DECLARE @curUpdate TABLE (Id BIGINT);
SET NOCOUNT ON;
UPDATE IdMasterList SET Taken = 1
OUTPUT DELETED.Id INTO @curUpdate
WHERE ID = (SELECT TOP 1 ID FROM IdMasterList WITH (INDEX(IX_Taken)) WHERE Taken IS NULL ORDER BY SeqNo);
SELECT TOP 1 @retVal = Id FROM @curUpdate;
RETURN;
The retrieval of the ID must be an atomic operation, as simultaneous inserts are possible.
For large inserts (10+ million rows), the process is quite slow, as I must pass through the table to be inserted via a cursor.
The IdMasterList has a schema:
SeqNo (BIGINT, NOT NULL) (PK) -- sequence of ordered numbers
Id (BIGINT) -- sequence of random numbers
Taken (BIT, NULL) -- 1 if taken, NULL if not
The IX_Taken index is:
CREATE NONCLUSTERED INDEX IX_Taken ON IdMasterList (Taken ASC)
I generally populate a table with Ids in this manner:
DECLARE @recNo BIGINT;
DECLARE @newId BIGINT;
DECLARE newAdds CURSOR FOR SELECT recNo FROM Adds;
OPEN newAdds;
FETCH NEXT FROM newAdds INTO @recNo;
WHILE @@FETCH_STATUS = 0 BEGIN
    EXEC spc_GetId @newId OUTPUT;
    UPDATE Adds SET id = @newId WHERE recNo = @recNo;
    FETCH NEXT FROM newAdds INTO @recNo;
END;
CLOSE newAdds;
DEALLOCATE newAdds;
Questions:
Is there any way I can improve the SP to extract Ids faster?
Would a conditional (filtered) index improve performance? (I've yet to test, as IdMasterList is very big.)
Is there a better way to populate a table with these Ids?
As with most things in SQL Server, if you are using cursors, you are doing it wrong.
Since you are using SQL Server 2012, you can use a SEQUENCE to keep track of what random value you already used and effectively replace the Taken column.
CREATE SEQUENCE SeqNoSequence
AS bigint
START WITH 1 -- Start with the first SeqNo that is not taken yet
CACHE 1000; -- Increase the cache size if you regularly need large blocks
Usage:
CREATE TABLE #tmp
(
recNo bigint,
SeqNo bigint
)
INSERT INTO #tmp (recNo, SeqNo)
SELECT recNo,
NEXT VALUE FOR SeqNoSequence
FROM Adds
UPDATE a
SET a.id = m.Id
FROM Adds a
INNER JOIN #tmp tmp ON a.recNo = tmp.recNo
INNER JOIN IdMasterList m ON tmp.SeqNo = m.SeqNo
SEQUENCE is atomic: subsequent calls to NEXT VALUE FOR SeqNoSequence are guaranteed to return unique values, even for parallel processes. Note that there can be gaps in SeqNo, but that's a very small trade-off for the huge speed increase.
Put a BIGINT identity PK on each table. Then either update in bulk:

insert into [user] (name)
values (...)

update u
set u.ID = i.ID
from [user] u
left join id i on i.PK = u.PK
where u.ID is null;

or one row at a time:

insert into [user] (name) values ('justsaynotocursor');
declare @PK bigint = SCOPE_IDENTITY();
update [user] set ID = (select ID from id where PK = @PK);
A few ideas that came to my mind:

Try whether removing the TOP and the inner SELECT helps the performance of the ID fetching (look at statistics IO and the query plan):

UPDATE TOP (1) IdMasterList
SET @retVal = Id, Taken = 1
WHERE Taken IS NULL

Change the index to be a filtered index, since you never need to fetch numbers that are already taken. A filtered index can target WHERE Taken IS NULL directly; alternatively, change Taken to a NOT NULL 0/1 flag and filter on Taken = 0.

What actually is your problem: fetching single IDs, or 10+ million IDs? Is the CPU / I/O load caused by the cursor and the ID-fetching logic, or are the parallel processes being blocked by other processes?

Use a sequence object to get the SeqNo, and then fetch the Id from IdMasterList using the value it returns. This works if you don't have gaps in the IdMasterList sequence.

Using the READPAST hint could help with blocking; for CPU / I/O issues, you should try to optimize the SQL (see the sketch after this list).

If the cause is purely the table being a hotspot, and no other easy solution seems to help, split it into several tables and use some simple logic (even @@SPID, RAND() or something similar) to decide which table the ID should be fetched from. You would need extra checking that all the tables still have free numbers, but it shouldn't be that bad.
Create different procedures (or even tables) to handle fetching of single ID, hundreds of IDs and millions of IDs.
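A hedged sketch of the filtered-index-plus-READPAST combination (IdMasterList schema as in the question; the index and alias names are made up):

-- Filtered index covering only the free IDs
CREATE NONCLUSTERED INDEX IX_IdMasterList_Free
ON IdMasterList (SeqNo)
INCLUDE (Id)
WHERE Taken IS NULL;

-- Grab one free ID, skipping rows that other sessions currently hold locked
DECLARE @retVal BIGINT;

UPDATE TOP (1) m
SET @retVal = m.Id, m.Taken = 1
FROM IdMasterList m WITH (READPAST)
WHERE m.Taken IS NULL;

SELECT @retVal AS NextId;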
I am struggling to find a SQL Server replacement for select for update that works.
I have a master table that contains a column which holds the next order number. The application does a SELECT ... FOR UPDATE on this row, reads the current value (while locked), adds one to the value, updates the row, and then uses the number it received. This process works perfectly on every database I've tried, except SQL Server, which does not seem to have any syntax for selecting data for exclusive use.
How do I do a locked read and update of something like a next order number from a sequence table in SQL Server?
BTW, I know I can use things like IDENTITY columns to do this, but in this case I must read from this existing column: get the value and increment it, in a safe, locked manner, to avoid two users getting the same value.
UPDATE: Thank you, that works for this case :)
DECLARE @Output char(30)
UPDATE scheme.sysdirm
SET @Output = key_value = cast(key_value as int) + 1
WHERE system_key = 'OPLASTORD'
SELECT @Output
I have one other place where I do something similar: I read and lock a stock record too.
SELECT STOCK
FROM PRODUCT
WHERE ID = ? FOR UPDATE.
I then do some validation and then do
UPDATE PRODUCT SET STOCK = ?
WHERE ID=?
I can't just use your method above here, as the value I write back depends on what I do with the stock I read. But I need to ensure no one else can touch the stock while I do this. Again, easy on other DBs with SELECT ... FOR UPDATE; is there a SQL Server workaround? :)
You can simply do an UPDATE that also reads the new value into a SQL Server variable:
DECLARE @Output INT
UPDATE dbo.YourTable
SET @Output = YourColumn = YourColumn + 1
WHERE ID = ????
SELECT @Output
Since it's an atomic UPDATE statement, it's safe against concurrency issues (only one connection can hold an update lock at any given time). A second session that wants the incremented value at the same time has to wait until the first one completes, and thus gets the next value from the table.
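Wrapped up as a procedure, a minimal sketch built on this pattern (the table and column names come from the question's example; the procedure name is made up):

-- Sketch: atomic read-and-increment of the next order number
CREATE PROCEDURE dbo.GetNextOrderNumber
    @NextOrder VARCHAR(30) OUTPUT
AS
BEGIN
    SET NOCOUNT ON;
    -- One atomic statement: increment and read back under a single update lock
    UPDATE scheme.sysdirm
    SET @NextOrder = key_value = CAST(key_value AS INT) + 1
    WHERE system_key = 'OPLASTORD';
END

Called like:

DECLARE @n VARCHAR(30);
EXEC dbo.GetNextOrderNumber @NextOrder = @n OUTPUT;
SELECT @n AS NextOrderNumber;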
As an alternative you can use the OUTPUT clause of the UPDATE statement, although this will insert into a table variable.
Create table YourTable
(
ID int,
YourColumn int
)
GO
INSERT INTO YourTable VALUES (1, 1)
GO
DECLARE #Output TABLE
(
YourColumn int
)
UPDATE YourTable
SET YourColumn = YourColumn + 1
OUTPUT inserted.YourColumn INTO #Output
WHERE ID = 1
SELECT TOP 1 YourColumn
FROM #Output
EDIT
If you want to ensure that no one can change the data after you have read it, you can use a repeatable read. Be aware that any rows you read will stay locked until the transaction ends (pessimistic locking), which may cause deadlocking. You can also use a SELECT ... FROM <table> WITH (UPDLOCK) hint within a transaction.
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ
BEGIN TRANSACTION
SELECT STOCK
FROM PRODUCT
WHERE ID = ?
.....
...
UPDATE Product
SET Stock = nnn
WHERE ID = ?
COMMIT TRANSACTION
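A sketch of the UPDLOCK variant mentioned above (same hypothetical PRODUCT table; the literal values are placeholders):

BEGIN TRANSACTION

-- UPDLOCK takes an update lock on the row as we read it, and HOLDLOCK keeps it
-- until COMMIT, so no other session can sneak in between our read and our write
SELECT STOCK
FROM PRODUCT WITH (UPDLOCK, HOLDLOCK)
WHERE ID = 42

-- ... validation logic here ...

UPDATE PRODUCT
SET STOCK = 10
WHERE ID = 42

COMMIT TRANSACTION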
I have the following SQL Server query in a stored procedure, and I am running this service from a Windows application. I populate the table variable with 30 million records and then compare them with the previous day's records in tbl_ref_test_main, to add and delete the differing records. There is a trigger on tbl_ref_test_main for insert and delete; the trigger writes the same record to another table. Because of the comparison of 30 million records it takes ages to produce the result, and it throws an error saying "A severe error occurred on the current command. The results, if any, should be discarded."
Any suggestions please.
Thanks in advance.
-- Declare a table variable to store the records from the CRM database
DECLARE @recordsToUpload TABLE(ClassId NVARCHAR(100), Test_OrdID NVARCHAR(100), Test_RefId NVARCHAR(100), RefCode NVARCHAR(100));
-- Populate the table variable
INSERT INTO @recordsToUpload
SELECT
class.classid AS ClassId,
class.Test_OrdID AS Test_OrdID ,
CAST(ref.test_RefId AS VARCHAR(100)) AS Test_RefId,
ref.ecr_RefCode AS RefCode
FROM Dev_MSCRM.dbo.Class AS class
LEFT JOIN Dev_MSCRM.dbo.test_ref_class refClass ON refClass.classid = class.classid
LEFT JOIN Dev_MSCRM.dbo.test_ref ref ON refClass.test_RefId = ref.test_RefId
WHERE class.StateCode = 0
AND (ref.ecr_RefCode IS NULL OR (ref.statecode = 0 AND LEN(ref.ecr_RefCode )<= 18 ))
AND LEN(class.Test_OrdID )= 12
AND ((ref.ecr_RefCode IS NULL AND ref.test_RefId IS NULL)
OR (ref.ecr_RefCode IS NOT NULL AND ref.test_RefId IS NOT NULL));
-- Insert new records to Main table
INSERT INTO dbo.tbl_ref_test_main
SELECT * FROM @recordsToUpload
EXCEPT
SELECT * FROM dbo.tbl_ref_test_main;
-- Delete records from the main table where matching records do not exist in the table variable
DELETE P FROM dbo.tbl_ref_test_main AS P
WHERE EXISTS
(SELECT P.*
EXCEPT
SELECT * FROM @recordsToUpload);
-- Select and return the records to upload
SELECT Test_OrdID,
CASE
WHEN RefCode IS NULL THEN 'NA'
ELSE RefCode
END,
Operation AS 'Operation'
FROM tbl_daily_upload_records
ORDER BY Test_OrdID, Operation, RefCode;
My suggestion would be that 30 million rows is too large for a table variable: try creating a temporary table, populating it with the data, and performing the analysis there.
If this isn't possible or suitable, perhaps create a permanent table and truncate it between uses.
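A minimal sketch of that change (the columns are copied from the question's table variable; the clustered index is an assumption, added to help the EXCEPT comparisons):

-- Temp table instead of the 30M-row table variable
CREATE TABLE #recordsToUpload (
    ClassId    NVARCHAR(100),
    Test_OrdID NVARCHAR(100),
    Test_RefId NVARCHAR(100),
    RefCode    NVARCHAR(100)
);

INSERT INTO #recordsToUpload (ClassId, Test_OrdID, Test_RefId, RefCode)
SELECT
    class.classid,
    class.Test_OrdID,
    CAST(ref.test_RefId AS VARCHAR(100)),
    ref.ecr_RefCode
FROM Dev_MSCRM.dbo.Class AS class
LEFT JOIN Dev_MSCRM.dbo.test_ref_class refClass ON refClass.classid = class.classid
LEFT JOIN Dev_MSCRM.dbo.test_ref ref ON refClass.test_RefId = ref.test_RefId
WHERE class.StateCode = 0
  AND (ref.ecr_RefCode IS NULL OR (ref.statecode = 0 AND LEN(ref.ecr_RefCode) <= 18))
  AND LEN(class.Test_OrdID) = 12
  AND ((ref.ecr_RefCode IS NULL AND ref.test_RefId IS NULL)
    OR (ref.ecr_RefCode IS NOT NULL AND ref.test_RefId IS NOT NULL));

-- Assumed index; pick the key to match how tbl_ref_test_main is compared
CREATE CLUSTERED INDEX IX_recordsToUpload
ON #recordsToUpload (Test_OrdID, Test_RefId);

-- ...then run the same INSERT ... EXCEPT and DELETE logic against #recordsToUpload,
-- and DROP TABLE #recordsToUpload (or truncate a permanent staging table) when done.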