I have a customer machine that inserts data into my SQL Server database at midnight. In SQL Server there is a trigger that deletes old records upon insert.
The problem is that the customer issues a bunch of single insert commands (instead of using a bulk insert), actually hundreds of them. I cannot control that. What I did was record the last trigger time, and if the datediff is less than one hour, do nothing. I read that using WAITFOR in triggers is a bad idea because it locks the table until the trigger execution is done. Is there any other way? Cheers.
BEGIN
    SET NOCOUNT ON;

    IF DATEDIFF(HOUR, (SELECT TOP 1 TimeStamp
                       FROM HouseKeepingStats
                       ORDER BY TimeStamp DESC), GETDATE()) > 1
    BEGIN
        EXEC HouseKeeping;
    END
END
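For this check to throttle anything, HouseKeeping has to record each run in HouseKeepingStats. A minimal sketch, assuming the table has just a TimeStamp column, with the actual cleanup elided:

```sql
-- Sketch only: assumes HouseKeepingStats has a single TimeStamp column;
-- the real delete logic is elided.
CREATE PROCEDURE HouseKeeping
AS
BEGIN
    SET NOCOUNT ON;

    -- Record this run first, so inserts arriving moments later
    -- already see the fresh timestamp and skip the next hour.
    INSERT INTO HouseKeepingStats (TimeStamp)
    VALUES (GETDATE());

    -- ... delete old records here ...
END
```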
I currently have a lot of triggers running on most of my tables. I'm using insert, update and delete triggers on all of them. They log into a separate table. However, the processing time to use the software has increased because of this. It is barely noticeable for smaller changes, but for big changes it can go from 10-15 min to 1 hr.
I would like to change my triggers to stop inserting new log records after, say, 250 log records in 1 minute (a bulk action), delete the newly created logs, and create 1 record mentioning the bulk action and the query used. The problem is that I can't seem to get the trigger to stop when activated.
I have already created the conditions needed for this:
CREATE TRIGGER AUDIT_LOGGING_INSERT_ACTUALISERINGSCOEFFICIENT ON ACTUALISERINGSCOEFFICIENT FOR INSERT AS
BEGIN
    SET NOCOUNT ON

    DECLARE @Group_ID INT = (SELECT COALESCE(MAX(Audit_Trail_Group_ID), 0) FROM NST.dbo.Audit_Trail) + 1
    DECLARE @BulkCount INT = (SELECT COUNT(*) FROM NST.dbo.Audit_Trail WHERE Audit_Trail_User = CONCAT('7090-LOCAL-', UPPER(SUSER_SNAME())) AND GETDATE() >= DATEADD(MINUTE, -1, GETDATE()))

    IF @BulkCount < 250
    BEGIN
        INSERT ...
    END
    ELSE
    BEGIN
        DECLARE @BulkRecordCount INT = (SELECT COUNT(*) FROM NST.dbo.Audit_Trail WHERE Audit_Trail_User = CONCAT('7090-LOCAL-', UPPER(SUSER_SNAME())) AND GETDATE() >= DATEADD(MINUTE, -60, GETDATE()) AND Audit_Trail_Action LIKE '%BULK%')

        IF @BulkRecordCount = 0
        BEGIN
            INSERT ...
        END
    END
END
However, when I execute a query that changes 10000+ records, the trigger still inserts all 10000 log records. When I execute it again right after, it inserts 10000 BULK records. Probably because the first time it fires, it goes through the check 10000 times?
Also, as you can see, this would only work if at most 1 bulk operation happens in the last 60 min.
Any ideas for handling bulk changes are welcome.
I didn't get it to work by logging the first 250 records.
Instead I did the following:
Created a new table with 'Action' and 'User' columns
I add a record every time a bulk action starts and delete it when it ends
Changed the trigger so that if a record is found for the user in the new table that it only writes 1 bulk record in the log table
Problems associated:
The problem with this is that I also had to manually go through the biggest bulk functions and implement the add and delete.
There is an extra point of failure if the record gets added but an exception occurs that doesn't delete the record again. -> Implemented a TRY...CATCH where needed.
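A rough sketch of that flag-table approach (the table, action, and procedure names here are hypothetical; the original only mentions 'Action' and 'User' columns):

```sql
-- Hypothetical names throughout; only the Action/User columns come from the text.
CREATE TABLE Bulk_In_Progress (
    [Action] VARCHAR(255) NOT NULL,
    [User]   VARCHAR(128) NOT NULL
);

-- Around each known bulk operation:
INSERT INTO Bulk_In_Progress ([Action], [User])
VALUES ('SomeBulkAction', SUSER_SNAME());
BEGIN TRY
    EXEC dbo.SomeBulkProcedure;  -- the bulk work
END TRY
BEGIN CATCH
    -- handle/log the error; fall through so the flag row is still removed
END CATCH
DELETE FROM Bulk_In_Progress WHERE [User] = SUSER_SNAME();

-- Inside the audit trigger:
-- IF EXISTS (SELECT 1 FROM Bulk_In_Progress WHERE [User] = SUSER_SNAME())
--     ... write a single BULK record instead of one row per change ...
```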
I have a procedure that runs every midnight to delete expired rows from the database. During the job my application gets a deadlock error when it tries to perform an insert or update on the table. What is the best way to avoid the deadlock?
I have millions of records to delete every midnight, which I am deleting in batches of 1000 rows at a time.
DECLARE @batch INT = 1000
DECLARE @rowCount INT = 1

WHILE @rowCount > 0
BEGIN TRY
    DELETE TOP (@batch) FROM application
    WHERE job_id IN (SELECT ID FROM JOB
                     WHERE process_time < @expiryDate)
       OR prev_job_id IN (SELECT ID FROM [dbo].JOB
                          WHERE process_time < @expiryDate)

    SELECT @rowCount = @@ROWCOUNT
END TRY
BEGIN CATCH
    -- some code here...
I'm having issues with timeouts on a table of mine.
Example table:
Id BIGINT,
Token uniqueidentifier,
status smallint,
createdate datetime,
updatedate datetime
I'm inserting data into this table from 2 different stored procedures that are wrapped in a transaction (with specific escalation), and also from 1 job that executes once every 30 secs.
I'm getting timeouts from only 1 of them, and the weird thing is that it's the simple one:
BEGIN TRY
BEGIN TRAN
INSERT INTO [dbo].[TempTable](Id, AppToken, [Status], [CreateDate], [UpdateDate])
VALUES(#Id, NEWID(), #Status, GETUTCDATE(), GETUTCDATE() )
COMMIT TRAN
END TRY
BEGIN CATCH
IF ##TRANCOUNT > 0
ROLLBACK TRAN;
END CATCH
When there is some traffic on this table (TempTable), this procedure keeps timing out.
I checked the execution plans and it seems I haven't missed any indexes in either stored procedure.
Also, the only index on TempTable is the clustered PK on Id.
Any ideas?
If more information is needed, do tell.
The 2nd stored procedure using this table isn't causing any big IO or anything.
The job, however, runs an atomic UPDATE on this table and DELETEs from it at the end, but when I checked during high IO on this table, the job took no longer than 3 secs.
Thanks.
It is most probably because some other process is blocking your insert operation. It could be another insert, delete, or update, a trigger, or any other SQL statement.
To find out who is blocking your operation you can use one of the easily available stored procedures, such as:
sp_who2
sp_WhoIsActive (my preferred)
While your insert statement is being executed/hung up, execute one of these procedures and see who is blocking you.
In sp_who2 you will see a column named BlkBy; get the SPID from that column and execute the following query:
DBCC INPUTBUFFER(71);
GO
This will return the last query executed by that process id. The SQL statement is not very well formatted, though; the whole query will be on one single line and you will need to format it in SSMS to actually be able to read it.
sp_WhoIsActive, on the other hand, will only return the queries that are blocking other processes, and it shows each query formatted just as the user executed it. It can also give you the execution plan for that query.
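For example, assuming Adam Machanic's sp_WhoIsActive is installed (it is not built into SQL Server), a call like this surfaces blocking chains; the parameter names come from that procedure's own documentation:

```sql
-- sp_WhoIsActive must be installed separately; it is not a built-in procedure.
EXEC sp_WhoIsActive
    @find_block_leaders = 1,  -- identify sessions at the head of a blocking chain
    @get_plans = 1;           -- include each session's query plan
```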
I want to call a stored procedure from a trigger.
How can I execute that stored procedure x minutes later?
I'm looking for something other than WAITFOR DELAY.
Thanks
Have a SQL Agent job that runs regularly and pulls stored procedure parameters from a table - the rows should also indicate when their run of the stored procedure should occur, so the SQL Agent job will only pick rows that are due/slightly overdue. It should delete the rows or mark them after calling the stored procedure.
Then, in the trigger, just insert a new row into this same table.
You do not want to be putting anything in a trigger that will affect the execution of the original transaction in any way - you definitely don't want to be causing any delays, or interacting with anything outside of the same database.
E.g., if the stored procedure is
CREATE PROCEDURE DoMagic
    @Name varchar(20),
    @Thing int
AS
...
Then we'd create a table:
CREATE TABLE MagicDue (
MagicID int IDENTITY(1,1) not null, --May not be needed if other columns uniquely identify
Name varchar(20) not null,
Thing int not null,
DoMagicAt datetime not null
)
And the SQL Agent job would do:
WHILE EXISTS(SELECT * FROM MagicDue WHERE DoMagicAt < CURRENT_TIMESTAMP)
BEGIN
    DECLARE @Name varchar(20)
    DECLARE @Thing int
    DECLARE @MagicID int

    SELECT TOP 1 @Name = Name, @Thing = Thing, @MagicID = MagicID
    FROM MagicDue WHERE DoMagicAt < CURRENT_TIMESTAMP

    EXEC DoMagic @Name, @Thing

    DELETE FROM MagicDue WHERE MagicID = @MagicID
END
And the trigger would just have:
CREATE TRIGGER Xyz ON TabY after insert
AS
/*Do stuff, maybe calculate some values, or just a direct insert?*/
insert into MagicDue (Name,Thing,DoMagicAt)
select YName,YThing+1,DATEADD(minute,30,CURRENT_TIMESTAMP) from inserted
If you're running an edition that doesn't support Agent, you may have to fake it. What I've done in the past is create a stored procedure that contains the "poor man's agent jobs", something like:
CREATE PROCEDURE DoBackgroundTask
AS
WHILE 1=1
BEGIN
/* Add whatever SQL you would have put in an agent job here */
WAITFOR DELAY '00:05:00'
END
Then, create a second stored procedure, this time in the master database, which waits 30 seconds and then calls the first procedure:
CREATE PROCEDURE BootstrapBackgroundTask
AS
WAITFOR DELAY '00:00:30'
EXEC YourDB..DoBackgroundTask
And then, mark this procedure as a startup procedure, using sp_procoption:
EXEC sp_procoption N'BootstrapBackgroundTask', 'startup', 'on'
And restart the service - you'll now have a continuously running query.
I had kind of a similar situation where, before I processed the records inserted into the table with the trigger, I wanted to make sure all the relevant related data in the relational tables was also there.
My solution was to create a scratch table which was populated by the insert trigger on the first table.
The scratch table had an 'updated' flag (default set to 0), an insert date field defaulting to GETDATE(), and the relevant identifier from the main table.
I then created a scheduled process to loop over the scratch table and perform whatever process I wanted against each record individually, updating the 'updated' flag as each record was processed.
BUT, here is where I was a wee bit clever: in the loop-over process looking for records in the scratch table with update flag = 0, I also added the AND clause AND DATEDIFF(mi, Updated_Date, GETDATE()) > 5. So a record would not actually be processed until 5 minutes AFTER it was inserted into the scratch table.
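A sketch of that scratch-table pattern (all names here are hypothetical; the answer only describes the columns in prose):

```sql
-- Hypothetical names; the answer describes this table only in prose.
CREATE TABLE ScratchQueue (
    MainTableId  INT      NOT NULL,                   -- identifier from the main table
    UpdatedFlag  BIT      NOT NULL DEFAULT 0,         -- 0 = not yet processed
    InsertedDate DATETIME NOT NULL DEFAULT GETDATE()
);

-- The scheduled process only picks up rows at least 5 minutes old:
SELECT MainTableId
FROM ScratchQueue
WHERE UpdatedFlag = 0
  AND DATEDIFF(mi, InsertedDate, GETDATE()) > 5;
```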
This one will take some explaining. What I've done is create a specific custom message queue in SQL Server 2005. I have a table with messages that contain timestamps for both acknowledgment and completion. The stored procedure that callers execute to obtain the next message in their queue also acknowledges the message. So far so good. Well, if the system is experiencing a massive number of transactions (thousands per minute), isn't it possible for a message to be acknowledged by one execution of the stored procedure while another is preparing to do the same? Let me help by showing my SQL code in the stored proc:
--Grab the next message id
declare @MessageId uniqueidentifier
set @MessageId = (select top(1) ActionMessageId from UnacknowledgedDemands);
--Acknowledge the message
update ActionMessages
set AcknowledgedTime = getdate()
where ActionMessageId = @MessageId
--Select the entire message
...
...
In the above code, couldn't another stored procedure running at the same time obtain the same id and attempt to acknowledge it at the same time? Could I (or should I) implement some sort of locking to prevent another stored proc from acknowledging messages that another stored proc is querying?
Wow, did any of this even make sense? It's a bit difficult to put to words...
Something like this:
--Grab the next message id
begin tran
declare @MessageId uniqueidentifier
select top 1 @MessageId = ActionMessageId from UnacknowledgedDemands with(holdlock, updlock);
--Acknowledge the message
update ActionMessages
set AcknowledgedTime = getdate()
where ActionMessageId = @MessageId
-- some error checking
commit tran
--Select the entire message
...
...
This seems like the kind of situation where OUTPUT can be useful:
-- Acknowledge and grab the next message
declare @message table (
    -- ...your `ActionMessages` columns here...
)

update ActionMessages
set AcknowledgedTime = getdate()
output INSERTED.* into @message
where ActionMessageId in (select top(1) ActionMessageId from UnacknowledgedDemands)
and AcknowledgedTime is null

-- Use the data in @message, which will have zero or one rows assuming
-- `ActionMessageId` uniquely identifies a row (strongly implied in your question)
...
...
There, we update and grab the row in the same operation, which tells the query optimizer exactly what we're doing, allowing it to choose the most granular lock it can and maintain it for the briefest possible time. (Although the column prefix is INSERTED, OUTPUT is like triggers, expressed in terms of the UPDATE being like deleting the row and inserting the new one.)
I'd need more information about your ActionMessages and UnacknowledgedDemands tables (views/TVFs/whatever), not to mention a greater knowledge of SQL Server's automatic locking, to say whether that and AcknowledgedTime is null clause is necessary. It's there to defend against a race condition between the sub-select and the update. I'm certain it wouldn't be necessary if we were selecting from ActionMessages itself (e.g., where AcknowledgedTime is null with a top on the update, instead of the sub-select on UnacknowledgedDemands). I expect even if it's unnecessary, it's harmless.
Note that OUTPUT is in SQL Server 2005 and above. That's what you said you were using, but if compatibility with geriatric SQL Server 2000 installs were required, you'd want to go another way.
@Kilhoffer:
The whole SQL batch is parsed before execution, so SQL knows that you're going to do an update to the table as well as select from it.
Edit: Also, SQL Server will not necessarily lock the whole table - it could just lock the necessary rows. See here for an overview of locking in SQL Server.
Instead of explicit locking, which is often escalated by SQL Server to higher granularity than desired, why not just try this approach:
declare @MessageId uniqueidentifier

select top 1 @MessageId = ActionMessageId from UnacknowledgedDemands

update ActionMessages
set AcknowledgedTime = getdate()
where ActionMessageId = @MessageId and AcknowledgedTime is null

if @@rowcount > 0
    /* acknowledge succeeded */
else
    /* concurrent query acknowledged the message before us,
       go back and try another one */
The less you lock, the more concurrency you have.
Should you really be processing things one by one? Shouldn't you just have SQL Server acknowledge all unacknowledged messages with today's date and return them? (all in a transaction as well, of course)
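That set-based variant might look like the sketch below, reusing the question's table names; whether acknowledging everything at once fits the workload is for the asker to decide:

```sql
-- Sketch: acknowledge every unacknowledged message in one statement
-- and return the acknowledged rows (table names from the question).
update ActionMessages
set AcknowledgedTime = getdate()
output INSERTED.*
where ActionMessageId in (select ActionMessageId from UnacknowledgedDemands)
  and AcknowledgedTime is null
```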
Read more about SQL Server select locking here and here. SQL Server has the ability to take a table lock on a select. Nothing will happen to the table during the transaction; when the transaction completes, any inserts or updates will then resolve themselves.
You want to wrap your code in a transaction; then SQL Server will handle locking the appropriate rows or tables.
begin transaction
--Grab the next message id
declare @MessageId uniqueidentifier
set @MessageId = (select top(1) ActionMessageId from UnacknowledgedDemands);
--Acknowledge the message
update ActionMessages
set AcknowledgedTime = getdate()
where ActionMessageId = #MessageId
commit transaction
--Select the entire message
...