Check constraints in MS SQL Server 2005 - sql-server

I am trying to add a check constraint that verifies, after an update, that the new value being inserted is greater than the old value already stored in the table.
For example, I have a "price" column that already stores the value 100. If an update comes with 101, that is OK; if 99 comes, my constraint should reject the update. Can this behavior be achieved using check constraints, or should I try to use triggers or functions?
Please advise me regarding this.
Thanks,
Mircea

Check constraints can't access the previous value of the column. You would need to use a trigger for this.
An example of such a trigger would be
CREATE TRIGGER DisallowPriceDecrease
ON Products
AFTER UPDATE
AS
    IF NOT UPDATE(price)
        RETURN
    IF EXISTS(SELECT * FROM inserted i
              JOIN deleted d
                ON i.primarykey = d.primarykey
               AND i.price < d.price)
    BEGIN
        ROLLBACK TRANSACTION
        RAISERROR('Prices may not be decreased', 16, 1)
    END

Triggers start as a quick fix, and end with a maintenance nightmare. The two big problems with triggers are:
It's hard to see when a trigger is called. You can easily write an update statement without being aware that a trigger will run.
When triggers start triggering other triggers, it becomes hard to tell what will happen.
As an alternative, wrap access to the table in a stored procedure. For example:
create table TestTable (productId int, price numeric(6,2))
insert into TestTable (productId, price) values (1, 5.0)
go
create procedure dbo.IncreasePrice(
    @productId int,
    @newPrice numeric(6,2))
with execute as owner
as
begin
    update dbo.TestTable
    set price = @newPrice
    where productId = @productId
    and price <= @newPrice
    return @@ROWCOUNT
end
go
Now if you try to decrease the price, the update will not match any row and the procedure will return 0:
exec IncreasePrice 1, 4.0
select * from TestTable --> 1, 5.00
exec IncreasePrice 1, 6.0
select * from TestTable --> 1, 6.00
Stored procedures are pretty easy to read. Compared to triggers, they'll cause you far fewer headaches. You can enforce the use of stored procedures by not granting any user the right to UPDATE tables directly; that's good practice anyway.
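For example, a minimal sketch of that permission setup, assuming an application role named AppUsers (the role name is illustrative):
DENY UPDATE ON dbo.TestTable TO AppUsers
GRANT EXECUTE ON dbo.IncreasePrice TO AppUsers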

Related

SQL server GetDate in trigger called sequentially has the same value

I have a trigger on a table for insert, delete, and update that on its first line gets the current date with the GetDate() method.
The trigger compares the deleted and inserted tables to determine which field has changed and stores in another table the id, the datetime, and the field changed. This combination must be unique.
A stored procedure does an insert and an update sequentially on the table. Sometimes I get a primary key violation, and I suspect that GetDate() returns the same value both times.
How can I make GetDate() return different values in the trigger?
EDIT
Here is the code of the trigger
CREATE TRIGGER dbo.TR
ON table
FOR DELETE, INSERT, UPDATE
AS
BEGIN
    SET NOCOUNT ON
    DECLARE @dt Datetime
    SELECT @dt = GetDate()

    insert tableLog (id, date, field, old, new)
    select I.id, @dt, 'field', D.field, I.field
    from INSERTED I LEFT JOIN DELETED D ON I.id = D.id
    where IsNull(I.field, -1) <> IsNull(D.field, -1)
END
and the code of the calls
...
insert into table (anotherfield)
values (@anotherfield)
if @@rowcount = 1 SET @ID = @@Identity
...
update table
set field = @field
where Id = @ID
...
Sometimes GetDate() between the two calls (insert and update) differs by 7 milliseconds, and sometimes it returns the same value.
That's not exactly a full solution, but try using SYSDATETIME instead, and of course make sure that the target table can store datetime2 values with microsecond precision.
Note that you can't force different datetime values regardless of precision (unless you start counting ticks), as things can simply happen at the same time within a given precision.
If stretching to microseconds won't solve the issue on a practical level, I think you will have to either redesign this logging schema (perhaps add an identity column on top of what you have) or add some dirty trick, like doing the insert in a try/catch block and adding a microsecond (nanosecond?) in a loop until the insert succeeds. Definitely not something I would recommend.
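For example, a rough sketch of that change against the trigger posted above (table and column names are taken from the question; datetime2(7) precision is an assumption):
ALTER TABLE tableLog ALTER COLUMN [date] datetime2(7)

-- and inside the trigger:
DECLARE @dt datetime2(7)
SELECT @dt = SYSDATETIME()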
Look at this answer: SQL Server: intrigued by GETDATE()
If you are inserting multiple rows, they will all use the same value of GetDate(), so you can try wrapping it in a UDF to get unique values. But as I said, this is just a guess without seeing the code of your trigger and what you are actually doing.
It sounds like you're trying to create an audit trail - but now you want to forge some of the entries?
I'd suggest instead adding a rowversion column to the table and including that in your uniqueness criteria - either instead of or as well as the datetime value that is being recorded.
In this way, even if two rows are inserted with identical date/time data, you can still tell the actual insertion order.
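A sketch of that idea, assuming the uniqueness constraint lives on the log table (tableLog) from the question and using rv as an illustrative column name:
ALTER TABLE tableLog ADD rv rowversion

-- rv is filled in automatically on every write and is unique within the database,
-- so including it in the key disambiguates rows even when GetDate() repeats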

SQL Server select for update

I am struggling to find a SQL Server replacement for select for update that works.
I have a master table that contains a column which holds the next order number. The application does a SELECT FOR UPDATE on this row, reads the current value (while it is locked), adds one to this value, updates the row, and then uses the number it received. This process works perfectly on every database I've tried, except SQL Server, which does not seem to have any mechanism for selecting data for exclusive use.
How do I do a locked read and update of something like a next order number from a sequence table in SQL Server?
BTW, I know I can use things like IDENTITY columns to do this, but in this case I must read from this existing column, get the value and increment it, and do it in a safe, locked manner to avoid two users getting the same value.
UPDATE:
Thank you, that works for this case :)
DECLARE @Output char(30)

UPDATE scheme.sysdirm
SET @Output = key_value = cast(key_value as int) + 1
WHERE system_key = 'OPLASTORD'

SELECT @Output
I have one other place I do something similar. I read and lock a stock record too.
SELECT STOCK
FROM PRODUCT
WHERE ID = ? FOR UPDATE.
I then do some validation and then do
UPDATE PRODUCT SET STOCK = ?
WHERE ID=?
I can't just use your above method here, as the value I update is based on things I do with the stock I read. But I need to ensure no one else can mess with the stock while I do this. Again, this is easy on other DBs with SELECT FOR UPDATE... is there a SQL Server workaround? :)
You can simply do an UPDATE that also reads out the new value into a SQL Server variable:
DECLARE @Output INT

UPDATE dbo.YourTable
SET @Output = YourColumn = YourColumn + 1
WHERE ID = ????

SELECT @Output
Since it's an atomic UPDATE statement, it's safe against concurrency issues (only one connection can hold an update lock at any given time). A second session that wants to get the incremented value at the same time will have to wait until the first one completes, and will thus get the next value from the table.
As an alternative you can use the OUTPUT clause of the UPDATE statement, although this will insert into a table variable.
Create table YourTable
(
    ID int,
    YourColumn int
)
GO

INSERT INTO YourTable VALUES (1, 1)
GO

DECLARE @Output TABLE
(
    YourColumn int
)

UPDATE YourTable
SET YourColumn = YourColumn + 1
OUTPUT inserted.YourColumn INTO @Output
WHERE ID = 1

SELECT TOP 1 YourColumn
FROM @Output
EDIT
If you want to ensure that no one can change the data after you have read it, you can use a repeatable read. You should be aware that any reads you do on any tables will be locked for update (pessimistic locking) and may cause deadlocking. You can also use the SELECT ... FROM table WITH (UPDLOCK) hint within a transaction (sketched after the example below).
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ
BEGIN TRANSACTION
SELECT STOCK
FROM PRODUCT
WHERE ID = ?
.....
...
UPDATE Product
SET Stock = nnn
WHERE ID = ?
COMMIT TRANSACTION
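A minimal sketch of the UPDLOCK variant mentioned above, using the PRODUCT/STOCK names from the question (@Id and @NewStock are placeholders):
BEGIN TRANSACTION

SELECT STOCK
FROM PRODUCT WITH (UPDLOCK)
WHERE ID = @Id

-- validation / business logic here

UPDATE PRODUCT
SET STOCK = @NewStock
WHERE ID = @Id

COMMIT TRANSACTION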

SQL Server decrementing inventory

I am using SQL Server 2008. I have a table where orders with SKUs are recorded, a table for inventory that has counts, and a table where the relationship between the SKUs sold and the inventory items is recorded.
In the end, I get a report like this:
Inventory   CurrentQuantity   OpenedOrder
SKU1        300               50
SKU2        100               10
Each order will be processed individually. How can I have the database automatically update the inventory table after each order is processed?
i.e. if an order containing 2 of SKU1 is processed, the inventory table will automatically show 298.
Thanks
I would use a Stored Procedure, and perform the order insert and quantity update in one hit:
CREATE PROC dbo.ProcessOrder
    @ItemID int,
    @Quantity int
AS
BEGIN
    --Update order table here
    INSERT INTO dbo.Orders (ItemID, Quantity)
    VALUES (@ItemID, @Quantity)

    --Update Inventory here
    UPDATE dbo.Inventory
    SET CurrentQuantity = CurrentQuantity - @Quantity
    WHERE ItemID = @ItemID
END
I think what you are looking for is a trigger.
Basically, set up a trigger that will update the appropriate columns using the inserted/updated data. Without the full schema, that is the best answer I can give at this time; a rough sketch is below.
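A rough sketch of such a trigger, assuming the Orders and Inventory tables used in the stored-procedure answer above (the names are assumptions, since the full schema wasn't posted):
CREATE TRIGGER dbo.trg_Orders_DecrementInventory
ON dbo.Orders
AFTER INSERT
AS
BEGIN
    SET NOCOUNT ON

    -- subtract each ordered quantity from the matching inventory row
    UPDATE inv
    SET CurrentQuantity = inv.CurrentQuantity - i.Quantity
    FROM dbo.Inventory inv
    INNER JOIN inserted i ON i.ItemID = inv.ItemID
END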
I wouldn't be looking at a trigger myself for this.
My check-out process:
Start a transaction
Check the stock level
If OK, (optional validation / authorisation)
Add a check-out record
Reduce the stock
Possibly add a record to invoice the recipient, etc.
Commit the transaction
While you could do it with triggers, I simply fail to see the point; a nice, simple, clear and all-in-one-place SP_CheckOut stored procedure (sketched below) is where I'd be going.
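A minimal sketch of what such a procedure could look like, reusing the Orders/Inventory names from the other answers (all names are illustrative):
CREATE PROC dbo.SP_CheckOut
    @ItemID int,
    @Quantity int
AS
BEGIN
    SET NOCOUNT ON

    BEGIN TRANSACTION

    -- check the stock level
    IF NOT EXISTS (SELECT 1 FROM dbo.Inventory
                   WHERE ItemID = @ItemID AND CurrentQuantity >= @Quantity)
    BEGIN
        ROLLBACK TRANSACTION
        RAISERROR('Insufficient stock', 16, 1)
        RETURN
    END

    -- add a check-out record
    INSERT INTO dbo.Orders (ItemID, Quantity)
    VALUES (@ItemID, @Quantity)

    -- reduce the stock
    UPDATE dbo.Inventory
    SET CurrentQuantity = CurrentQuantity - @Quantity
    WHERE ItemID = @ItemID

    COMMIT TRANSACTION
END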
I would normally advise using a trigger, but stock manipulation is the kind of operation that is performed a lot, sometimes in batches, and honestly that is not the best scenario for triggers.
I think PKG's idea is very good, but you should never forget to add transaction control to it, otherwise you can end up with non-matching stock:
CREATE PROC dbo.ProcessOrder
    @ItemID int,
    @Quantity int
AS
BEGIN
    begin transaction my_tran
    begin try
        --Update order table here
        INSERT INTO dbo.Orders (ItemID, Quantity)
        VALUES (@ItemID, @Quantity)

        --Update Inventory here
        UPDATE dbo.Inventory
        SET CurrentQuantity = CurrentQuantity - @Quantity
        WHERE ItemID = @ItemID

        commit transaction
    end try
    begin catch
        rollback transaction
        --raise error if necessary
    end catch
END
You can use a trigger or a procedure, following the specific steps above; if you go with a procedure, you may need to enable the auto-exec feature in the master DB.

Linq to SQL with INSTEAD OF Trigger and an Identity Column

I need to use the clock on my SQL Server to write a time to one of my tables, so I thought I'd just use GETDATE(). The problem is that I'm getting an error because of my INSTEAD OF trigger. Is there a way to set one column to GETDATE() when another column is an identity column?
This is the Linq-to-SQL:
internal void LogProcessPoint(WorkflowCreated workflowCreated, int processCode)
{
ProcessLoggingRecord processLoggingRecord = new ProcessLoggingRecord()
{
ProcessCode = processCode,
SubId = workflowCreated.SubId,
EventTime = DateTime.Now // I don't care what this is. SQL Server will use GETDATE() instead.
};
this.Database.Add<ProcessLoggingRecord>(processLoggingRecord);
}
This is the table. EventTime is what I want to have as GETDATE(). I don't want the column to be null.
And here is the trigger:
ALTER TRIGGER [Master].[ProcessLoggingEventTimeTrigger]
ON [Master].[ProcessLogging]
INSTEAD OF INSERT
AS
BEGIN
SET NOCOUNT ON;
SET IDENTITY_INSERT [Master].[ProcessLogging] ON;
INSERT INTO ProcessLogging (ProcessLoggingId, ProcessCode, SubId, EventTime, LastModifiedUser)
SELECT ProcessLoggingId, ProcessCode, SubId, GETDATE(), LastModifiedUser FROM inserted
SET IDENTITY_INSERT [Master].[ProcessLogging] OFF;
END
Without getting into all of the variations I've tried, this last attempt produces this error:
InvalidOperationException
Member AutoSync failure. For members to be AutoSynced after insert, the type must either have an auto-generated identity, or a key that is not modified by the database after insert.
I could remove EventTime from my entity, but I don't want to do that. If it was gone though, then it would be NULL during the INSERT and GETDATE() would be used.
Is there a way that I can simply use GETDATE() on the EventTime column for INSERTs?
Note: I do not want to use C#'s DateTime.Now for two reasons:
1. One of these inserts is generated by SQL Server itself (from another stored procedure)
2. Times can be different on different machines, and I'd like to know exactly how fast my processes are happening.
Bob,
It seems you are attempting to solve two different problems here: one has to do with an L2S error with an INSTEAD OF trigger, and another with using the date on the SQL Server box for your column. I think you might have problems with INSTEAD OF triggers and L2S. You might want to try an approach that uses an AFTER trigger, like this. I think this will solve both your problems.
ALTER TRIGGER [Master].[ProcessLoggingEventTimeTrigger]
ON [Master].[ProcessLogging]
AFTER INSERT
AS
BEGIN
UPDATE [Master].[ProcessLogging] SET EventTime = GETDATE() WHERE ProcessLoggingId IN (SELECT ProcessLoggingId FROM inserted)
END
Don't use a trigger, use a default:
create table X
(id int identity primary key,
value varchar(20),
eventdate datetime default(getdate()))
insert into x(value) values('Try')
insert into x(value) values('this')
select * from X
It's much better.
Have you tried using a default value of (getdate()) for the EventTime column?
You wouldn't then need to set the value in the trigger, it would be set automatically.
A default value is used when you don't explicitly supply a value, e.g.
INSERT INTO ProcessLogging (ProcessLoggingId, ProcessCode, SubId, LastModifiedUser)
SELECT ProcessLoggingId, ProcessCode, SubId, LastModifiedUser FROM inserted
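To add that default to the existing table, something along these lines should work (the constraint name is illustrative):
ALTER TABLE [Master].[ProcessLogging]
ADD CONSTRAINT DF_ProcessLogging_EventTime DEFAULT (GETDATE()) FOR EventTime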
Bob,
It is better not to use triggers in SQL Server; they have a lot of disadvantages and are not recommended if you care about database performance. Please check the SQL Authority blog for more information about the problems with triggers.
You can achieve what you want without triggers using the following steps:
Change the EventTime column to allow null.
Set the EventTime column's default value to GetDate(), so it will always get the current time on insertion.
Don't set the EventTime value to DateTime.Now in your LINQ to SQL code, so it will take the default value in SQL Server.

SQL Server Trigger. Need Help

I have a table with these columns:
debt
paid
remained
Whenever the paid column is updated, I need to recalculate remained as debt minus paid.
Could someone help me achieve this?
You could consider a computed column instead.
This article has the syntax for creating from scratch or adding to an existing schema, along the lines of
ALTER TABLE yourtable ADD remainder AS debt - paid
Given table
CREATE TABLE [MyTable]
(
MyTablePK int,
debt numeric(10,2),
paid numeric(10,2),
remainder numeric(10,2)
)
The following trigger will recalculate field Remainder
CREATE TRIGGER tMyTable ON [MyTable] FOR INSERT, UPDATE
AS
BEGIN
SET NOCOUNT ON
UPDATE mt
Set mt.Remainder = mt.Debt - mt.Paid
FROM [MyTable] mt INNER JOIN Inserted i
on mt.MyTablePK = i.MyTablePK
END
You could also define Remainder as a Computed persisted column, which would have a similar effect without the side effects of triggers
Why perform a calculation in a trigger when SQL can do it for you, and you don't have to worry about triggers being disabled, etc:
CREATE TABLE T (
/* Other columns */
Debt decimal (18,4) not null,
Paid decimal (18,4) not null,
Remained as Debt-Paid
)
This is called a computed column
create trigger DebtPaid
on DebtTable
after insert, update
as
if update(paid)
begin
    update d
    set remained = i.debt - i.paid
    from DebtTable d
    inner join inserted i on d.customerId = i.customerId
end
http://msdn.microsoft.com/en-us/library/ms189799.aspx
http://benreichelt.net/blog/2005/12/13/making-a-trigger-fire-on-column-change/
Computed columns can be good, but they are calculated on the fly and aren't stored anywhere (unless you mark them PERSISTED); for big queries that perform long calculations, having a physical denormalized value in remained maintained by a trigger can be better than a computed column.
In your trigger, remember to only update the rows that were actually updated; you can access those through the virtual tables Inserted and Deleted available in triggers.
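For comparison, a persisted variant of the computed-column approach from the earlier answers, sketched against the MyTable definition above (this assumes the plain remainder column is dropped first):
ALTER TABLE [MyTable] DROP COLUMN remainder
ALTER TABLE [MyTable] ADD remainder AS (debt - paid) PERSISTED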
