I am using SQL Server 2008. I have a table where orders with SKUs are recorded, an inventory table that has counts, and a table where the relationship between the SKUs sold and the inventory items is recorded.
In the end, I get a report like this:
Inventory    CurrentQuantity    OpenedOrder
SKU1         300                50
SKU2         100                10
Each order will be processed individually. How can I have the database automatically update the inventory table after each order is processed?
i.e. if an order containing 2 units of SKU1 is processed, the inventory table will automatically show 298.
Thanks
I would use a Stored Procedure, and perform the order insert and quantity update in one hit:
CREATE PROC dbo.ProcessOrder
    @ItemID int,
    @Quantity int
AS
BEGIN
    --Insert the order here
    INSERT INTO dbo.Orders (ItemID, Quantity)
    VALUES (@ItemID, @Quantity)

    --Update Inventory here
    UPDATE dbo.Inventory
    SET CurrentQuantity = CurrentQuantity - @Quantity
    WHERE ItemID = @ItemID
END
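For example, processing an order of 2 units of item 1 (the parameter values are illustrative) would be:

EXEC dbo.ProcessOrder @ItemID = 1, @Quantity = 2;

After this call, an Inventory row that started at 300 would show 298, as in the question.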
I think what you are looking for is a trigger.
Basically, set up a trigger that will update the appropriate columns using the inserted/updated data. Without a full schema, that is the best answer I can give at this time.
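For illustration, a minimal sketch of such a trigger, borrowing the dbo.Orders and dbo.Inventory names used in the other answers here (adjust to your actual schema):

CREATE TRIGGER TR_Orders_UpdateInventory
ON dbo.Orders
AFTER INSERT
AS
BEGIN
    SET NOCOUNT ON;

    -- inserted can hold many rows, so aggregate the ordered
    -- quantity per item before decrementing the stock
    UPDATE inv
    SET inv.CurrentQuantity = inv.CurrentQuantity - i.TotalQuantity
    FROM dbo.Inventory AS inv
    JOIN (SELECT ItemID, SUM(Quantity) AS TotalQuantity
          FROM inserted
          GROUP BY ItemID) AS i
        ON inv.ItemID = i.ItemID;
END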
I wouldn't be looking at a trigger myself for this.
My check out process
Start a transaction
Check stock level.
If OK, (optional validation / authorisation)
Add a check out record
Reduce the stock
Possibly add some record to invoice the recipient, etc.
Commit the transaction
While you could do it with triggers, I simply fail to see the point; a nice, simple, clear, all-in-one-place SP_CheckOut stored procedure is where I'd be going.
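To make that concrete, here is a rough sketch of such a procedure, reusing the dbo.Orders / dbo.Inventory names from the other answers and leaving the optional steps as comments:

CREATE PROC dbo.SP_CheckOut
    @ItemID int,
    @Quantity int
AS
BEGIN
    BEGIN TRANSACTION

    -- Check stock level; UPDLOCK/HOLDLOCK keeps the row stable
    -- between this check and the update below
    IF NOT EXISTS (SELECT 1 FROM dbo.Inventory WITH (UPDLOCK, HOLDLOCK)
                   WHERE ItemID = @ItemID AND CurrentQuantity >= @Quantity)
    BEGIN
        ROLLBACK TRANSACTION
        RAISERROR('Insufficient stock', 16, 1)
        RETURN
    END

    -- Add a check out record
    INSERT INTO dbo.Orders (ItemID, Quantity)
    VALUES (@ItemID, @Quantity)

    -- Reduce the stock
    UPDATE dbo.Inventory
    SET CurrentQuantity = CurrentQuantity - @Quantity
    WHERE ItemID = @ItemID

    -- (optional validation / invoicing of the recipient goes here)

    COMMIT TRANSACTION
END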
I would normally advise using a trigger, but stock manipulation is the kind of operation that is done very often, sometimes in batches, and that is honestly not the best scenario for triggers.
I think PKG's idea is very good, but you should never forget to add transaction control to it; otherwise you can end up with non-matching stocks:
CREATE PROC dbo.ProcessOrder
    @ItemID int,
    @Quantity int
AS
BEGIN
    BEGIN TRANSACTION my_tran

    BEGIN TRY
        --Insert the order here
        INSERT INTO dbo.Orders (ItemID, Quantity)
        VALUES (@ItemID, @Quantity)

        --Update Inventory here
        UPDATE dbo.Inventory
        SET CurrentQuantity = CurrentQuantity - @Quantity
        WHERE ItemID = @ItemID

        COMMIT TRANSACTION
    END TRY
    BEGIN CATCH
        ROLLBACK TRANSACTION
        --raise the error if necessary
    END CATCH
END
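If the caller needs to know the order failed, the catch block can re-raise the caught error; a minimal sketch for the "--raise the error if necessary" placeholder above:

BEGIN CATCH
    ROLLBACK TRANSACTION

    -- re-raise the original message so the caller sees the failure
    DECLARE @msg nvarchar(2048) = ERROR_MESSAGE();
    RAISERROR(@msg, 16, 1);
END CATCH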
You can use a trigger, or you can use a stored procedure following the specific steps above; to have the procedure run automatically you need to enable the auto-exec feature in the master DB.
I've been trying to find a solution to a very simple problem, but I just can't find out how to do it. I have two tables, Transactions and Credit_Card.
Transactions
transid (PK), ccid (FK: to credit_card > ccid), amount, timestamp
Credit_Card
ccid (PK), Balance, creditlimit
I want to create a trigger so that, before someone inserts a transaction, it checks that the transaction amount plus the balance of the credit card does not go over the credit limit, and rejects the insert if it does.
EDIT: The following code fixed my issue. Big thanks to Dan Guzman for his contribution!
CREATE TRIGGER TR_transactions
ON transactions FOR INSERT, UPDATE
AS
IF EXISTS(
SELECT 1
FROM (
SELECT t.ccid, SUM(t.amount) AS amount
FROM inserted AS t
GROUP BY t.ccid) AS t
JOIN Credit_Card AS cc ON
cc.ccid = t.ccid
WHERE cc.creditlimit <= (t.amount + cc.balance)
)
BEGIN
RAISERROR('Credit limit exceeded', 16, 1);
ROLLBACK;
END;
If I understand correctly, you just need to check the credit limit against newly inserted/updated transactions. Keep in mind that a SQL Server trigger fires once per statement and a statement may affect multiple rows. The virtual inserted table will have images of the affected rows. You can use this to limit the credit check to only the credit cards affected by the related transactions.
CREATE TRIGGER TR_transactions
ON transactions FOR INSERT, UPDATE
AS
IF EXISTS(
SELECT 1
FROM (
SELECT inserted.ccid, SUM(inserted.amount) AS amount
FROM inserted
GROUP BY inserted.ccid) AS t
JOIN Credit_Card AS cc ON
cc.ccid = t.ccid
WHERE cc.creditlimit <= (t.amount + cc.balance)
)
BEGIN
RAISERROR('Credit limit exceeded', 16, 1);
ROLLBACK;
END;
EDIT
I removed the t alias from the inserted table and qualified the columns with inserted instead to better indicate the source of the data. It's generally a good practice to qualify column names with the table name or alias in multi-table queries to avoid ambiguity.
The integers 16 and 1 in the RAISERROR statement specify the severity and state of the raised error. See the SQL Server Books Online reference for details. Severity 11 and greater raise an error, with severities in the 11 through 16 range indicating a user-correctable error.
You can try this:
ALTER TRIGGER [dbo].[TrigerOnInsertPonches]
ON [dbo].[CHECKINOUT]
AFTER INSERT
AS
BEGIN
    DECLARE @ccid int,
            @amount money;
You have to tell SQL Server that the trigger fires after insert; then you can declare the variables you are going to use.
    SELECT @ccid = o.ccid FROM inserted o;
I think that is the correct way to catch the id. Then you can make the select, filtering on this value.
I hope this can be useful.
We are using the technique outlined here to generate random record IDs without collisions. In short, we create a randomly-ordered table of every possible ID, and mark each record as 'Taken' as it is used.
I use the following Stored Procedure to obtain an ID:
ALTER PROCEDURE spc_GetId @retVal BIGINT OUTPUT
AS
DECLARE @curUpdate TABLE (Id BIGINT);
SET NOCOUNT ON;
UPDATE IdMasterList SET Taken = 1
OUTPUT DELETED.Id INTO @curUpdate
WHERE ID = (SELECT TOP 1 ID FROM IdMasterList WITH (INDEX(IX_Taken)) WHERE Taken IS NULL ORDER BY SeqNo);
SELECT TOP 1 @retVal = Id FROM @curUpdate;
RETURN;
The retrieval of the ID must be an atomic operation, as simultaneous inserts are possible.
For large inserts (10+ million), the process is quite slow, as I must pass through the table to be inserted via a cursor.
The IdMasterList has a schema:
SeqNo (BIGINT, NOT NULL) (PK) -- sequence of ordered numbers
Id (BIGINT) -- sequence of random numbers
Taken (BIT, NULL) -- 1 if taken, NULL if not
The IX_Taken index is:
CREATE NONCLUSTERED INDEX IX_Taken ON IdMasterList (Taken ASC)
I generally populate a table with Ids in this manner:
DECLARE @recNo BIGINT;
DECLARE @newId BIGINT;
DECLARE newAdds CURSOR FOR SELECT recNo FROM Adds;
OPEN newAdds;
FETCH NEXT FROM newAdds INTO @recNo;
WHILE @@FETCH_STATUS = 0 BEGIN
    EXEC spc_GetId @newId OUTPUT;
    UPDATE Adds SET id = @newId WHERE recNo = @recNo;
    FETCH NEXT FROM newAdds INTO @recNo;
END;
CLOSE newAdds;
DEALLOCATE newAdds;
Questions:
Is there any way I can improve the SP to extract Ids faster?
Would a conditional index improve performance (I've yet to test, as IdMasterList is very big)?
Is there a better way to populate a table with these Ids?
As with most things in SQL Server, if you are using cursors, you are doing it wrong.
Since you are using SQL Server 2012, you can use a SEQUENCE to keep track of what random value you already used and effectively replace the Taken column.
CREATE SEQUENCE SeqNoSequence
AS bigint
START WITH 1 -- Start with the first SeqNo that is not taken yet
CACHE 1000; -- Increase the cache size if you regularly need large blocks
Usage:
CREATE TABLE #tmp
(
recNo bigint,
SeqNo bigint
)
INSERT INTO #tmp (recNo, SeqNo)
SELECT recNo,
NEXT VALUE FOR SeqNoSequence
FROM Adds
UPDATE a
SET id = m.Id
FROM Adds a
INNER JOIN #tmp tmp ON a.recNo = tmp.recNo
INNER JOIN IdMasterList m ON tmp.SeqNo = m.SeqNo
SEQUENCE is atomic. Subsequent calls to NEXT VALUE FOR SeqNoSequence are guaranteed to return unique values, even for parallel processes. Note that there can be gaps in SeqNo, but that's a very small trade-off for the huge speed increase.
Put a BIGINT PK index on each table.

insert into [user] (name)
values (...)

update [user]
set ID = id.ID
from [user]
left join id
  on [user].PK = id.PK
where [user].ID is null;

Or one row at a time:

declare @PK bigint;

insert into [user] (name) values ('justsaynotocursor');
set @PK = SCOPE_IDENTITY();
update [user] set ID = (select ID from id where PK = @PK);
A few ideas that came to mind:
Try removing the TOP and the inner SELECT, etc., and see if it improves the performance of the ID fetching (look at statistics io & the query plan):
UPDATE TOP (1) IdMasterList
SET @retVal = Id, Taken = 1
WHERE Taken IS NULL
Change the index to be a filtered index, since I assume you don't need to fetch numbers that are taken. If I remember correctly, you can't do this for NULL values, so you would need to change the Taken to be 0/1.
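A sketch of what that filtered index could look like, assuming Taken has been converted to a NOT NULL bit column where 0 means free:

CREATE NONCLUSTERED INDEX IX_Taken_Free
ON IdMasterList (SeqNo)
INCLUDE (Id)
WHERE Taken = 0;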
What actually is your problem? Fetching single IDs or 10+ million IDs? Is the problem CPU / I/O etc. caused by the cursor & ID fetching logic, or are the parallel processes being blocked by other processes?
Use a sequence object to get the SeqNo, and then fetch the Id from IdMasterList using the value it returns. This could work if you don't have gaps in the IdMasterList sequences.
Using the READPAST hint could help with blocking; for CPU / I/O issues, you should try to optimize the SQL.
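For example, the UPDATE inside spc_GetId could skip rows locked by concurrent callers instead of waiting on them (same query shape as the original, with READPAST added):

UPDATE IdMasterList SET Taken = 1
OUTPUT DELETED.Id INTO @curUpdate
WHERE ID = (SELECT TOP 1 ID
            FROM IdMasterList WITH (INDEX(IX_Taken), READPAST)
            WHERE Taken IS NULL
            ORDER BY SeqNo);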
If the cause is purely the table being a hotspot, and no other easy solutions seem to help, split it into several tables and use some kind of simple logic (even @@SPID, RAND() or something similar) to decide which table the ID should be fetched from. You would need extra checking to make sure all the tables still have free numbers, but it shouldn't be that bad.
Create different procedures (or even tables) to handle fetching of single ID, hundreds of IDs and millions of IDs.
I am struggling to find a SQL Server replacement for select for update that works.
I have a master table that contains a column which is used for the next order number. The application does a select for update on this row, reads the current value (while locked), adds one to this value, updates the row, and then uses the number it received. This process works perfectly on every database I've tried, except SQL Server, which does not seem to have any mechanism for selecting data for exclusive use.
How do I do a locked read and update of something like a next order number from a sequence table in SQL Server?
BTW, I know I can use things like IDENTITY columns to do this, but in this case I must read from this existing column, get the value and increment it, and do it in a safe, locked manner to avoid two users getting the same value.
UPDATE:
Thank you, that works for this case :)
DECLARE @Output char(30)

UPDATE scheme.sysdirm
SET @Output = key_value = cast(key_value as int)+1
WHERE system_key='OPLASTORD'

SELECT @Output
I have one other place I do something similar. I read and lock a stock record too.
SELECT STOCK
FROM PRODUCT
WHERE ID = ? FOR UPDATE
I then do some validation and then do
UPDATE PRODUCT SET STOCK = ?
WHERE ID=?
I can't just use your method above here, as the value I update is based on things I do with the stock I read. But I need to ensure no one else can mess with the stock while I do this. Again, easy on other DBs with SELECT FOR UPDATE... is there a SQL Server workaround? :)
You can simply do an UPDATE that also reads the new value out into a SQL Server variable:
DECLARE @Output INT

UPDATE dbo.YourTable
SET @Output = YourColumn = YourColumn + 1
WHERE ID = ????

SELECT @Output
Since it's an atomic UPDATE statement, it's safe against concurrency issues (since only one connection can get an update lock at any given time). A second session that wants to get the incremented value at the same time will have to wait until the first one completes, thus getting the next value from the table.
As an alternative you can use the OUTPUT clause of the UPDATE statement, although this will insert into a table variable.
Create table YourTable
(
ID int,
YourColumn int
)
GO
INSERT INTO YourTable VALUES (1, 1)
GO
DECLARE @Output TABLE
(
    YourColumn int
)

UPDATE YourTable
SET YourColumn = YourColumn + 1
OUTPUT inserted.YourColumn INTO @Output
WHERE ID = 1

SELECT TOP 1 YourColumn
FROM @Output
EDIT
If you want to ensure that no one can change the data after you have read it, you can use the REPEATABLE READ isolation level. You should be aware that any reads of any tables you do will be locked for update (pessimistic locking) and may cause deadlocking. You can also use the SELECT ... FROM table WITH (UPDLOCK) hint within a transaction.
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ
BEGIN TRANSACTION
SELECT STOCK
FROM PRODUCT
WHERE ID = ?
.....
...
UPDATE Product
SET Stock = nnn
WHERE ID = ?
COMMIT TRANSACTION
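And a sketch of the UPDLOCK alternative mentioned above, which locks just the rows you read instead of raising the isolation level for the whole transaction:

BEGIN TRANSACTION

SELECT STOCK
FROM PRODUCT WITH (UPDLOCK)
WHERE ID = ?

-- validation ...

UPDATE PRODUCT
SET STOCK = ?
WHERE ID = ?

COMMIT TRANSACTION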
I executed this query that deletes 16 000 000 rows :
delete from [table_name]
After 3 minutes of execution with no results, I cancelled the query.
Took a while to cancel, but in the end it said "Query Cancelled"
Does that mean that any of the 16 000 000 are deleted? Or are they still all there?
This is not the real query, I just used it for the example.
The rows should be there. A delete is rolled back if you cancel the statement.
Once it completes, though, the changes are committed.
So, first off: if you want to be sure you can cancel a query successfully, you need to use transactions. If you were using transactions and that transaction was rolled back, none of the rows would be deleted. Management Studio will only use implicit transactions if you have that flag set (see here).
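For example, with an explicit transaction you stay in control of the outcome (the table name is taken from the question):

BEGIN TRANSACTION

DELETE FROM [table_name]

-- inspect the result, then run exactly one of these:
ROLLBACK TRANSACTION   -- undo the delete
-- COMMIT TRANSACTION  -- keep the delete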
That said, I was really intrigued, because I figured that a delete operation should be an "atomic" operation, so I wrote the following code to test that:
create database test_database
use test_database
go
create table sample (
ID BIGINT IDENTITY(1,1) PRIMARY KEY,
value VARCHAR(25)
)
declare @id int
select @id = 1

-- up the 90000 if this doesn't create enough rows
while @id >= 1 and @id <= 90000
begin
    insert into sample (value) values ('abcdefg');
    select @id = @id + 1
end
-- run, but cancel in the middle
delete from sample
-- check, are there 90000 records now?
select count(id) from sample
-- clean up
drop table sample
Turns out, if you click cancel, it does treat this as an "atomic" statement, meaning all the rows are still there. Your environment might be different, so test it out!
I am trying to add a check constraint which verifies that the new value being written by an update is greater than the old value already stored in the table.
For example, I have a "price" column which already stores the value 100. If an update comes with 101, it is ok; if 99 comes, then my constraint should reject the update. Can this behavior be achieved using check constraints, or should I try to use triggers or functions?
Please advise me regarding this...
Thanks,
Mircea
Check constraints can't access the previous value of the column. You would need to use a trigger for this.
An example of such a trigger would be
CREATE TRIGGER DisallowPriceDecrease
ON Products
AFTER UPDATE
AS
IF NOT UPDATE(price)
RETURN
IF EXISTS(SELECT * FROM inserted i
          JOIN deleted d
            ON i.primarykey = d.primarykey
           AND i.price < d.price)
BEGIN
ROLLBACK TRANSACTION
RAISERROR('Prices may not be decreased', 16, 1)
END
Triggers start as a quick fix and end as a maintenance nightmare. The two big problems with triggers are:
It's hard to see when a trigger is called. You can easily write an update statement without being aware that a trigger will run.
When triggers start triggering other triggers, it becomes hard to tell what will happen.
As an alternative, wrap access to the table in a stored procedure. For example:
create table TestTable (productId int, price numeric(6,2))
insert into TestTable (productId, price) values (1,5.0)
go
create procedure dbo.IncreasePrice(
    @productId int,
    @newPrice numeric(6,2))
with execute as owner
as
begin
    update dbo.TestTable
    set price = @newPrice
    where productId = @productId
    and price <= @newPrice

    return @@ROWCOUNT
end
go
go
Now if you try to decrease the price, the procedure will fail and return 0:
exec IncreasePrice 1, 4.0
select * from TestTable --> 1, 5.00
exec IncreasePrice 1, 6.0
select * from TestTable --> 1, 6.00
Stored procedures are pretty easy to read, and compared to triggers they'll cause you far fewer headaches. You can enforce their use by not giving any user the right to UPDATE tables directly. That's good practice anyway.
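A sketch of that permission setup, assuming a hypothetical AppUsers database role:

-- AppUsers is a made-up role name; substitute your own
DENY UPDATE ON dbo.TestTable TO AppUsers;
GRANT EXECUTE ON dbo.IncreasePrice TO AppUsers;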