I am doing something like this:
exec up_sp1 -- executing this stored procedure which populates the table sqlcmds
---Check the count of the table sqlcmds,
---if the count is zero then execute up_sp2,
----otherwise wait till the count becomes zero,then exec up_sp2
IF NOT EXISTS ( SELECT 1 FROM [YesMailReplication].[dbo].[SQLCmds])
BEGIN
exec up_sp2
END
What would the correct t-sql look like?
T-SQL has no WAITFOR semantics except for Service Broker queues. So, short of using Service Broker, all you can do is poll periodically and see if the table was populated. For small scale this works fine, but at high scale it breaks down: the right balance between wait time and poll frequency is difficult to achieve, and is even harder to make adapt to spikes and lulls.
But if you are willing to use Service Broker, then you can build a much more elegant and scalable solution by leveraging activation: up_sp1 drops a message into a queue, and this message activates the queue procedure, which in turn launches up_sp2 after up_sp1 has committed. This is a reliable mechanism that handles server restarts, mirroring and clustering failover, and even rebuilding of the server from backups. See Asynchronous procedure execution for an example of achieving something very similar.
The Service Broker solution is surely the best - but there is a WAITFOR solution as well:
exec up_sp1;
while exists (select * from [YesMailReplication].[dbo].[SQLCmds]) begin
waitfor delay '00:00:10'; -- wait for 10 seconds
end;
exec up_sp2;
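If an unbounded wait is a concern, the same polling loop can carry a cap; a hypothetical sketch (the 10-minute limit and the error text are assumptions for illustration, not from the original):

```sql
exec up_sp1;

declare @waited int = 0;      -- seconds waited so far (illustrative name)
while exists (select * from [YesMailReplication].[dbo].[SQLCmds])
begin
    if @waited >= 600         -- give up after 10 minutes (arbitrary cap)
    begin
        raiserror('SQLCmds did not drain in time', 16, 1);
        return;
    end;
    waitfor delay '00:00:10'; -- wait 10 seconds between checks
    set @waited = @waited + 10;
end;

exec up_sp2;
```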
Try this:
DECLARE @Count int
SELECT @Count = COUNT(*) FROM [YesMailReplication].[dbo].[SQLCmds]
IF @Count = 0 BEGIN
exec up_sp2
END
Why not keep it simple and self-documenting?
DECLARE @Count int;
SELECT @Count = Count(*) FROM [YesMailReplication].[dbo].[SQLCmds]
IF @Count = 0 EXEC up_sp2
(The reasons I need the following are unimportant.)
What I'd like to do is adjust the following so that it executes the stored procedure, which usually takes 30 minutes, but stops it after a set time of 60 seconds - effectively the same as if I were running the procedure in SSMS and pressed the cancel button after 60 seconds.
I don't want to reconfigure the whole db so that every other long running stored procedure times out after 30 seconds - only the specific procedure TESTexecuteLongRunningProc.
Here is the test procedure being called:
CREATE PROCEDURE [dbo].[TESTlongRunningProc]
AS
BEGIN
--placeholder that represents the long-running proc
WAITFOR DELAY '00:30:00';
END;
This is the proc I would like to adjust so it cancels itself after 60 seconds:
CREATE PROCEDURE [dbo].[TESTexecuteLongRunningProc]
AS
BEGIN
EXECUTE WH.dbo.TESTlongRunningProc;
-->>here I would like some code that cancels TESTexecuteLongRunningProc after 60 seconds
END;
Essentially you can create a separate process to watch the background for specific tasks and metrics and kill them if necessary. Let's start by implanting a tracking device into the code you wish to track. I used a comment block with the key phrase "Kill Me". You can place something similar in your original code:
CREATE PROCEDURE TrackedToKill
-- EXEC TrackedToKill
/* Comment Block tracking device: Kill Me*/
AS
BEGIN
DECLARE @Counter bigint = 0
WHILE 1 = 1
BEGIN
SET @Counter = @Counter + 1
WAITFOR DELAY '00:00:30'
END
END
Then let's see if we can find the running sessions:
SELECT session_id,
command,database_id,user_id,
wait_type,wait_resource,wait_time,
percent_complete,estimated_completion_time,
total_elapsed_time,reads,writes,text
FROM sys.dm_exec_requests
CROSS APPLY sys.dm_exec_sql_text (sys.dm_exec_requests.sql_handle)
WHERE text LIKE '%Kill Me%'
AND session_id <> @@SPID
OK, great: this should return sessions with your tracking device. We can then turn this into another stored procedure that will kill your processes based on the tracking device and any other criteria you might need. You can launch it manually or perhaps with the SQL Agent at startup. Include as many additional criteria as you need to make sure you limit the scope of what you're killing (e.g. user, database, blocking, or processes that haven't already been rolled back).
CREATE PROCEDURE HunterKiller
-- EXEC HunterKiller
AS
BEGIN
DECLARE @SessionToKill int
DECLARE @SQL nvarchar(3000)
WHILE 1=1
BEGIN
SET @SessionToKill = (SELECT TOP 1 session_id
FROM sys.dm_exec_requests
CROSS APPLY sys.dm_exec_sql_text (sys.dm_exec_requests.sql_handle)
WHERE session_id <> @@SPID
AND text LIKE '%Kill Me%'
AND total_elapsed_time >= 15000)
IF @SessionToKill IS NOT NULL -- guard against no matching session
BEGIN
SET @SQL = 'KILL ' + CONVERT(nvarchar(10), @SessionToKill)
EXEC (@SQL)
END
WAITFOR DELAY '00:00:05'
END
END
Assuming you can use the SQL Server Agent, perhaps using the sp_start_job and sp_stop_job procedures could work for you.
This is untested and without any sort of warranty, and the parameters have been shortened for readability:
-- control procedure
declare @starttime DATETIME = SYSDATETIME()
exec msdb..sp_start_job 'Job' -- The job containing the target procedure that takes 30 minutes
while 1>0
BEGIN
-- Check to see if the job is still running and if it has been running long enough
IF EXISTS(
SELECT TOP 1 b.NAME
FROM msdb..sysjobactivity a
INNER JOIN msdb..sysjobs b
ON a.job_id = b.job_id
WHERE start_execution_date >= @starttime
AND stop_execution_date IS NULL
AND b.NAME IN ('Job')
AND DATEDIFF(second, start_execution_date, SYSDATETIME()) >= 60
)
BEGIN
exec msdb..sp_stop_job 'Job'
END
waitfor delay '00:00:05';
END
I've got a long-running stored procedure on a SQL server database. I don't want it to run more often than once every ten minutes.
Once the stored procedure has run, I want to store the latest result in a LatestResult table, against a time, and have all calls to the procedure return that result for the next ten minutes.
That much is relatively simple, but we've found that, because the procedure checks the LatestResult table and updates it, that large userbases are getting a number of deadlocks, when two users call the procedure at the same time.
In a client-side/threading situation, I would solve this by using a lock, having the first user lock the function, the second user encounters the lock, waiting for the result, the first user finishes their procedure call, updates the LatestResult table, and unlocks the second user, who then picks up the result from the LatestResult table.
Is there any way to accomplish this kind of locking in SQL Server?
EDIT:
This is basically how the code looks without its error checking calls:
DECLARE @LastChecked AS DATETIME
DECLARE @LastResult AS NUMERIC(18,2)
SELECT TOP 1 @LastChecked = LastRunTime, @LastResult = LastResult FROM LastResult
DECLARE @ReturnValue AS NUMERIC(18,2)
IF DATEDIFF(n, @LastChecked, GetDate()) >= 10 OR NOT @LastResult = 0
BEGIN
SELECT @ReturnValue = ABS(ISNULL(SUM(ISNULL(Amount,0)),0)) FROM Transactions WHERE ISNULL(DeletedFlag,0) = 0 GROUP BY GroupID ORDER BY ABS(ISNULL(SUM(ISNULL(Amount,0)),0))
UPDATE LastResult SET LastRunTime = GETDATE(), LastResult = @ReturnValue
SELECT @ReturnValue
END
ELSE
BEGIN
SELECT @LastResult
END
I'm not really sure what's going on with the grouping, but I've found a test system where execution time is coming in around 4 seconds.
I think there's some work scheduled to archive some of these records and boil them down to running totals, which will probably help things given that there's several million rows in that four second table...
This is a valid opportunity to use an Application Lock (see sp_getapplock and sp_releaseapplock), as it is a lock taken out on a concept that you define, not on any particular rows in any given table. The idea is that you open a transaction, then take this arbitrary lock that has an identifier, and other processes will wait to enter that piece of code until the lock is released. This works just like lock() at the app layer. The @Resource parameter is the label of the arbitrary "concept". In more complex situations, you can even concatenate a CustomerID or something in there for more granular locking control.
DECLARE @LastChecked DATETIME,
        @LastResult NUMERIC(18,2);
DECLARE @ReturnValue NUMERIC(18,2);
BEGIN TRANSACTION;
EXEC sp_getapplock @Resource = 'check_timing', @LockMode = 'Exclusive';
SELECT TOP 1 -- not sure if this helps the optimizer on a 1 row table, but seems ok
       @LastChecked = LastRunTime,
       @LastResult = LastResult
FROM LastResult;
IF (DATEDIFF(MINUTE, @LastChecked, GETDATE()) >= 10 OR @LastResult <> 0)
BEGIN
    SELECT @ReturnValue = ABS(ISNULL(SUM(ISNULL(Amount, 0)), 0))
    FROM Transactions
    WHERE DeletedFlag = 0
       OR DeletedFlag IS NULL;
    UPDATE LastResult
    SET LastRunTime = GETDATE(),
        LastResult = @ReturnValue;
END
ELSE
BEGIN
    SET @ReturnValue = @LastResult; -- This is always 0 here
END;
SELECT @ReturnValue AS [ReturnValue];
EXEC sp_releaseapplock @Resource = 'check_timing';
COMMIT TRANSACTION;
You need to manage errors / ROLLBACK yourself (as stated in the linked MSDN documentation), so put in the usual TRY / CATCH. But this does allow you to manage the situation.
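A minimal sketch of that TRY / CATCH wrapping (the re-raise via RAISERROR is one option among several; note that sp_getapplock defaults to transaction-owned locks, so a rollback releases the lock too):

```sql
BEGIN TRY
    BEGIN TRANSACTION;
    EXEC sp_getapplock @Resource = 'check_timing', @LockMode = 'Exclusive';

    -- ... the SELECT / IF / UPDATE logic from above goes here ...

    EXEC sp_releaseapplock @Resource = 'check_timing';
    COMMIT TRANSACTION;
END TRY
BEGIN CATCH
    IF @@TRANCOUNT > 0
        ROLLBACK TRANSACTION;  -- rollback also releases the transaction-owned lock
    DECLARE @msg nvarchar(2048) = ERROR_MESSAGE();
    RAISERROR(@msg, 16, 1);
END CATCH;
```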
If there are any concerns regarding contention on this process, there shouldn't be much: the lookup done right after locking the resource is a SELECT from a single-row table, followed by an IF statement that (ideally) just returns the last known value if the 10-minute timer hasn't elapsed. Hence most calls should process rather quickly.
Please note: sp_getapplock / sp_releaseapplock should be used sparingly; Application Locks can definitely be very handy (such as in cases like this one) but they should only be used when absolutely necessary.
I use SQL Service Broker with internal activation to hand off a list of jobs to the internally activated stored procedure, without keeping the main thread/requestor waiting for the actual individual jobs to finish. Essentially I'm trying to free up the UI thread. The problem is, I passed 2000+ jobs to the Service Broker; the messages reached the queue in about 25 minutes and freed the UI, but even after an hour it has only finished close to 600+ jobs.
I use the query below to count the number of jobs waiting to be completed, and it looks like it's extremely slow:
SELECT COUNT(*)
FROM [HMS_Test].[dbo].[HMSTargetQueueIntAct]
WITH(NOLOCK)
Below is my activation stored procedure for your reference. Can someone please have a look and let me know what's wrong with it? How can I get Service Broker to finish the items on the queue quickly? Thanks in advance :)
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
ALTER PROCEDURE [dbo].[sp_SB_HMSTargetActivProc]
AS
BEGIN
DECLARE @RecvReqDlgHandle UNIQUEIDENTIFIER;
DECLARE @RecvReqMsg NVARCHAR(1000);
DECLARE @RecvReqMsgName sysname;
DECLARE @XMLPtr int
DECLARE @ExecuteSQL nvarchar(1000)
DECLARE @CallBackSP nvarchar(100)
DECLARE @CallBackSQL nvarchar(1000)
DECLARE @SBCaller nvarchar(50)
DECLARE @LogMsg nvarchar(1000)
WHILE (1=1)
BEGIN
BEGIN TRANSACTION;
WAITFOR
( RECEIVE TOP(1)
@RecvReqDlgHandle = conversation_handle,
@RecvReqMsg = message_body,
@RecvReqMsgName = message_type_name
FROM HMSTargetQueueIntAct
), TIMEOUT 5000;
IF (@@ROWCOUNT = 0)
BEGIN
ROLLBACK TRANSACTION;
BREAK;
END
IF @RecvReqMsgName = N'//HMS/InternalAct/RequestMessage'
BEGIN
DECLARE @ReplyMsg NVARCHAR(100);
SELECT @ReplyMsg = N'<ReplyMsg>ACK Message for Initiator service.</ReplyMsg>';
SEND ON CONVERSATION @RecvReqDlgHandle
MESSAGE TYPE
[//HMS/InternalAct/ReplyMessage]
(@ReplyMsg);
EXECUTE sp_xml_preparedocument @XMLPtr OUTPUT, @RecvReqMsg
SELECT @ExecuteSQL = ExecuteSQL
,@CallBackSP = CallBackSP
,@SBCaller = SBCaller
FROM OPENXML(@XMLPtr, 'RequestMsg/CommandParameters', 1)
WITH (ExecuteSQL nvarchar(1000) 'ExecuteSQL'
,CallBackSP nvarchar(1000) 'CallBackSP'
,SBCaller nvarchar(50) 'SBCaller'
)
EXEC sp_xml_removedocument @XMLPtr
IF ((@ExecuteSQL IS NOT NULL) AND (LEN(@ExecuteSQL)>0))
BEGIN
SET @LogMsg='ExecuteSQL:' + @ExecuteSQL
EXECUTE(@ExecuteSQL);
SET @LogMsg='ExecuteSQLSuccess:' + @ExecuteSQL
EXECUTE sp_LogSystemTransaction @SBCaller,@LogMsg,'SBMessage',0,''
END
IF ((@CallBackSP IS NOT NULL) AND (LEN(@CallBackSP)>0))
BEGIN
SET @CallBackSQL = @CallBackSP + ' @Sender=''sp_SB_HMSTargetActivProc'', @Res=''' + @ExecuteSQL + ''''
SET @LogMsg='CallBackSQL:' + @CallBackSQL
EXECUTE sp_LogSystemTransaction @SBCaller,@LogMsg,'SBMessage',0,''
EXECUTE(@CallBackSQL);
END
END
ELSE IF #RecvReqMsgName = N'http://schemas.microsoft.com/SQL/ServiceBroker/EndDialog'
BEGIN
SET @LogMsg='MessageEnd:';
END CONVERSATION #RecvReqDlgHandle WITH CLEANUP;
END
ELSE IF #RecvReqMsgName = N'http://schemas.microsoft.com/SQL/ServiceBroker/Error'
BEGIN
DECLARE @message_body VARBINARY(MAX);
DECLARE @code int;
DECLARE @description NVARCHAR(3000);
DECLARE @xmlMessage XML;
SET @xmlMessage = CAST(@RecvReqMsg AS XML);
SET @code = (
SELECT @xmlMessage.value(
N'declare namespace
brokerns="http://schemas.microsoft.com/SQL/ServiceBroker/Error";
(/brokerns:Error/brokerns:Code)[1]',
'int')
);
SET @description = (
SELECT @xmlMessage.value(
'declare namespace
brokerns="http://schemas.microsoft.com/SQL/ServiceBroker/Error";
(/brokerns:Error/brokerns:Description)[1]',
'nvarchar(3000)')
);
IF (@code = -8462)
BEGIN
SET @LogMsg='MessageEnd:';
--EXECUTE sp_LogSystemTransaction @SBCaller,@LogMsg,'SBMessage',0,'';
END CONVERSATION @RecvReqDlgHandle WITH CLEANUP;
END
ELSE
BEGIN
SET @LogMsg='ERR:' + @description + ' ' + CAST(@code AS VARCHAR(20));
EXECUTE sp_LogSystemTransaction @SBCaller,@LogMsg,'SBError',0,'';
END CONVERSATION @RecvReqDlgHandle;
END
END
END
COMMIT TRANSACTION;
END
END
One thing I noticed is that this thing doesn't seem to do much. Most of the lines of code seem to be crafting a reply message for the service broker dialog.
That said, the existence of service broker means that you don't have to use sp_xml_preparedocument for your xml needs. Take a look at XQuery. In short, something like this should work:
SELECT @ExecuteSQL = @RcvReqMsg.value('(RequestMsg/CommandParameters/ExecuteSQL)[1]', 'nvarchar(1000)')
      ,@CallBackSP = @RcvReqMsg.value('(RequestMsg/CommandParameters/CallBackSP)[1]', 'nvarchar(1000)')
      ,@SBCaller = @RcvReqMsg.value('(RequestMsg/CommandParameters/SBCaller)[1]', 'nvarchar(1000)')
Secondly, it looks like the messages contain SQL to be executed in the context of the database that contains this queue. What is the performance profile of those statements? That is, are they your bottleneck? If they are slow, adding Service Broker to the mix won't magically make things fast.
Thirdly, are you allowing for more than one activation procedure to be active at a time? Check the max_readers column in sys.service_queues to answer this. If it's set to 1 and your process is such that the messages needn't be handled serially, increase that number to run them in parallel.
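A quick way to check this (the queue name is taken from the question):

```sql
SELECT name, max_readers, activation_procedure
FROM sys.service_queues
WHERE name = N'HMSTargetQueueIntAct';
```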
Fourthly, it looks like you've written your activation procedure to process only one message before completing. Check out the example in this tutorial. Notice the while (1=1) loop. That makes the activation procedure go back to the queue for another message once it's finished with the current message
Lastly, why do you care? Service broker is an inherently asynchronous technology. If something/someone is waiting for a given message to be processed, I'd question that.
First and foremost I would recommend treating this as a performance issue and approaching it as any other performance issue: measure. See How to analyse SQL Server performance for a brief introduction and explicit advice on how to measure waits, IO, and CPU (overall, per session, or per statement) and how to identify bottlenecks. Once you know where the bottleneck is, you can consider means to address it.
Now for something more specific to SSB. I would say that your procedure has three components that are interesting for the question:
the queue processing, ie. the RECEIVE, END CONVERSATION
the message parsing (XML shredding)
the execution (EXECUTE(@ExecuteSQL))
For queue processing I recommend Writing Service Broker Procedures for how to speed up things. RECEIVE TOP(1) is the slowest possible approach. Processing in batch is faster, even much faster, if possible. To dequeue a batch you need correlated messages in the queue, which means SEND-ing many messages on a single conversation handle, see Reusing Conversation. This may complicate the application significantly. Therefore I would strongly urge you to measure and determine the bottleneck before doing such drastic changes.
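A rough sketch of batch dequeuing into a table variable instead of RECEIVE TOP(1); this only pays off when the sender reuses conversation handles, and the per-message processing is elided here:

```sql
DECLARE @msgs TABLE (
    conversation_handle UNIQUEIDENTIFIER,
    message_type_name   sysname,
    message_body        VARBINARY(MAX));

WAITFOR (
    RECEIVE conversation_handle, message_type_name, message_body
    FROM HMSTargetQueueIntAct
    INTO @msgs
), TIMEOUT 5000;

-- then iterate (or better, process set-based) over @msgs
```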
For the XML shredding I concur with @BenThul: using XML data type methods is better than using the MSXML procedures.
And finally there is the EXECUTE(@ExecuteSQL). This for us is a black box; only you know what is actually being executed. Not only how expensive/complex the executed SQL is, but also how likely it is to block. Lock contention between this background execution and your front-end code could slow down the queue processing a great deal. Again, measure and you will know. As a side note: from the numbers you posted, I would expect the problem to be here. In my experience an activated procedure that does exactly what you do (RECEIVE TOP(1), XML parsing, SEND a response), without the EXECUTE, should go at a rate of about 100 messages per second and drain your queue of 2000 jobs in about 20 seconds. You observe a much slower rate, which makes me suspect the actual SQL being executed.
Finally, the easy thing to try: bump up MAX_QUEUE_READERS (again, as @BenThul already pointed out):
ALTER QUEUE HMSTargetQueueIntAct WITH ACTIVATION (MAX_QUEUE_READERS = 5)
This will allow parallel processing of requests.
You are missing proper error handling in your procedure, you should have a BEGIN TRY/BEGIN CATCH block. See Error Handling in Service Broker procedures, Error Handling and Activation, Handling exceptions that occur during the RECEIVE statement in activated procedures and Exception handling and nested transactions.
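A hedged skeleton of how the receive loop might be wrapped (the five-failure behavior mentioned in the comment is SQL Server's default poison-message handling; the logging is left as a placeholder):

```sql
WHILE (1=1)
BEGIN
    BEGIN TRY
        BEGIN TRANSACTION;
        -- the RECEIVE / dispatch logic from the procedure goes here
        COMMIT TRANSACTION;
    END TRY
    BEGIN CATCH
        IF @@TRANCOUNT > 0
            ROLLBACK TRANSACTION;
        -- log ERROR_MESSAGE() somewhere durable; after 5 consecutive
        -- rollbacks SQL Server disables the queue (poison message handling)
        BREAK;
    END CATCH
END
```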
I wish to have a stored proc that is called every n seconds, is there a way to do this in SQL Server without depending on a separate process?
Use a timer and activation. No external process, continues to work after a clustering or mirroring failover, continues to work even after a restore on a different machine, and it works on Express too.
-- create a table to store the results of some dummy procedure
create table Activity (
InvokeTime datetime not null default getdate()
, data float not null);
go
-- create a dummy procedure
create procedure createSomeActivity
as
begin
insert into Activity (data) values (rand());
end
go
-- set up the queue for activation
create queue Timers;
create service Timers on queue Timers ([DEFAULT]);
go
-- the activated procedure
create procedure ActivatedTimers
as
begin
declare @mt sysname, @h uniqueidentifier;
begin transaction;
receive top (1)
@mt = message_type_name
, @h = conversation_handle
from Timers;
if @@rowcount = 0
begin
commit transaction;
return;
end
if @mt in (N'http://schemas.microsoft.com/SQL/ServiceBroker/Error'
, N'http://schemas.microsoft.com/SQL/ServiceBroker/EndDialog')
begin
end conversation @h;
end
else if @mt = N'http://schemas.microsoft.com/SQL/ServiceBroker/DialogTimer'
begin
exec createSomeActivity;
-- set a new timer after 2s
begin conversation timer (@h) timeout = 2;
end
commit
end
go
-- attach the activated procedure to the queue
alter queue Timers with activation (
status = on
, max_queue_readers = 1
, execute as owner
, procedure_name = ActivatedTimers);
go
-- seed a conversation to start activating every 2s
declare @h uniqueidentifier;
begin dialog conversation @h
from service [Timers]
to service N'Timers', N'current database'
with encryption = off;
begin conversation timer (@h) timeout = 1;
-- wait 15 seconds
waitfor delay '00:00:15';
-- end the conversation, will stop activating
end conversation @h;
go
-- check that the procedure executed
select * from Activity;
You can set up a SQL Agent job - that's probably the only way to go.
SQL Server Agent is a component of SQL Server - not available in the Express editions, however - which allows you to automate certain tasks, like database maintenance etc. but you can also use it to call stored procs every n seconds.
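A hedged sketch of wiring that up with the msdb procedures (the job, step, and procedure names are placeholders; note that Agent's minimum sub-day interval is 10 seconds):

```sql
USE msdb;
EXEC sp_add_job @job_name = N'RunMyProcEvery10s';
EXEC sp_add_jobstep @job_name = N'RunMyProcEvery10s',
     @step_name = N'run proc',
     @subsystem = N'TSQL',
     @command = N'EXEC dbo.MyProc;';   -- placeholder procedure
EXEC sp_add_jobschedule @job_name = N'RunMyProcEvery10s',
     @name = N'every 10 seconds',
     @freq_type = 4,              -- daily
     @freq_interval = 1,
     @freq_subday_type = 2,       -- units: seconds
     @freq_subday_interval = 10;
EXEC sp_add_jobserver @job_name = N'RunMyProcEvery10s';
```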
I once set up a stored procedure that ran continuously, using a loop with a WAITFOR at the end of it.
The WHILE condition depended upon the value read from a simple configuration table. If the value got set to 0, the loop would be exited and the procedure finished.
I put a WAITFOR DELAY at the end, so that however long it took to process a given iteration, it would wait XX seconds until it ran it again. (XX was also set in and read from the configuration table.)
If it must run at precise intervals (say 0, 15, 30, and 45 seconds past the minute), you could calculate the appropriate WAITFOR TIME value at the end of the loop.
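For example, a sketch of computing the next 15-second boundary (the variable names are made up for illustration):

```sql
DECLARE @now DATETIME = GETDATE();
DECLARE @secs INT = DATEPART(SECOND, @now);
-- seconds until the next 0/15/30/45 boundary
DECLARE @wait INT = 15 - (@secs % 15);
DECLARE @target VARCHAR(8) = CONVERT(VARCHAR(8), DATEADD(SECOND, @wait, @now), 108);
WAITFOR TIME @target;  -- WAITFOR accepts a local variable here
```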
Lastly, I had the procedure called by a SQL Agent job once a minute. The job would always show as "running", indicating that the procedure was running. If the procedure was killed or crashed, the job would restart it within a minute. If the procedure was "turned off", it still gets run, but the WHILE loop containing the processing is not entered, making the overhead nil.
I didn't much like having it in my database, but it fulfilled the business requirements.
WAITFOR
{
DELAY 'time_to_pass'
| TIME 'time_to_execute'
| [ ( receive_statement ) | ( get_conversation_group_statement ) ]
[ , TIMEOUT timeout ]
}
If you want to keep a SSMS query window open:
While 1=1
Begin
exec "Procedure name here" ;
waitfor delay '00:00:15';
End
I have just had a scheduled SQL Server job run for longer than normal, and I could really have done with having set a timeout to stop it after a certain length of time.
I might be being a bit blind on this, but I can't seem to find a way of setting a timeout for a job. Does anyone know the way to do it?
Thanks
We do something like the code below as part of a nightly job processing subsystem - it is more complicated than this actually in reality; for example we are processing multiple interdependent sets of jobs, and read in job names and timeout values from configuration tables - but this captures the idea:
DECLARE @JobToRun NVARCHAR(128) = 'My Agent Job'
DECLARE @dtStart DATETIME = GETDATE(), @dtCurr DATETIME
DECLARE @ExecutionStatus INT, @LastRunOutcome INT, @MaxTimeExceeded BIT = 0
DECLARE @TimeoutMinutes INT = 180
EXEC msdb.dbo.sp_start_job @JobToRun
SET @dtCurr = GETDATE()
WHILE 1=1
BEGIN
WAITFOR DELAY '00:00:10'
SELECT @ExecutionStatus=current_execution_status, @LastRunOutcome=last_run_outcome
FROM OPENQUERY(LocalServer, 'set fmtonly off; exec msdb.dbo.sp_help_job') where [name] = @JobToRun
IF @ExecutionStatus <> 4
BEGIN -- job is running or finishing (not idle)
SET @dtCurr=GETDATE()
IF DATEDIFF(mi, @dtStart, @dtCurr) > @TimeoutMinutes
BEGIN
EXEC msdb.dbo.sp_stop_job @job_name=@JobToRun
-- could log info, raise error, send email etc here
END
ELSE
BEGIN
CONTINUE
END
END
IF @LastRunOutcome = 1 -- the job just finished with success flag
BEGIN
-- job succeeded, do whatever is needed here
print 'job succeeded'
END
END
What kind of job is this? You may want to consider putting the whole job in a T-SQL script within a WHILE loop. The condition to check would obviously be the time difference between the current time and the job start time.
Raj