How to speed up Service Broker with many jobs on the queue? - sql-server

I use SQL Service Broker with internal activation to hand a list of jobs off to the activated stored procedure, so the main thread/requestor doesn't have to wait for the actual individual jobs to finish. Essentially I'm trying to free up the UI thread. The problem is, I passed 2000+ jobs to Service Broker; the messages reached the queue in about 25 minutes and freed the UI, but even after an hour it has only finished close to 600 of the jobs.
I use the query below to count the number of messages waiting to be processed, and progress looks extremely slow:
SELECT COUNT(*)
FROM [HMS_Test].[dbo].[HMSTargetQueueIntAct]
WITH(NOLOCK)
Below is my activation stored procedure for your reference. Can someone please have a look and let me know what's wrong with it? How can I get Service Broker to finish the items on the queue quickly? Thanks in advance :)
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
ALTER PROCEDURE [dbo].[sp_SB_HMSTargetActivProc]
AS
BEGIN
DECLARE @RecvReqDlgHandle UNIQUEIDENTIFIER;
DECLARE @RecvReqMsg NVARCHAR(1000);
DECLARE @RecvReqMsgName sysname;
DECLARE @XMLPtr int
DECLARE @ExecuteSQL nvarchar(1000)
DECLARE @CallBackSP nvarchar(100)
DECLARE @CallBackSQL nvarchar(1000)
DECLARE @SBCaller nvarchar(50)
DECLARE @LogMsg nvarchar(1000)
WHILE (1=1)
BEGIN
BEGIN TRANSACTION;
WAITFOR
( RECEIVE TOP(1)
@RecvReqDlgHandle = conversation_handle,
@RecvReqMsg = message_body,
@RecvReqMsgName = message_type_name
FROM HMSTargetQueueIntAct
), TIMEOUT 5000;
IF (@@ROWCOUNT = 0)
BEGIN
ROLLBACK TRANSACTION;
BREAK;
END
IF @RecvReqMsgName = N'//HMS/InternalAct/RequestMessage'
BEGIN
DECLARE @ReplyMsg NVARCHAR(100);
SELECT @ReplyMsg = N'<ReplyMsg>ACK Message for Initiator service.</ReplyMsg>';
SEND ON CONVERSATION @RecvReqDlgHandle
MESSAGE TYPE
[//HMS/InternalAct/ReplyMessage]
(@ReplyMsg);
EXECUTE sp_xml_preparedocument @XMLPtr OUTPUT, @RecvReqMsg
SELECT @ExecuteSQL = ExecuteSQL
,@CallBackSP = CallBackSP
,@SBCaller = SBCaller
FROM OPENXML(@XMLPtr, 'RequestMsg/CommandParameters', 1)
WITH (ExecuteSQL nvarchar(1000) 'ExecuteSQL'
,CallBackSP nvarchar(1000) 'CallBackSP'
,SBCaller nvarchar(50) 'SBCaller'
)
EXEC sp_xml_removedocument @XMLPtr
IF ((@ExecuteSQL IS NOT NULL) AND (LEN(@ExecuteSQL)>0))
BEGIN
SET @LogMsg='ExecuteSQL:' + @ExecuteSQL
EXECUTE(@ExecuteSQL);
SET @LogMsg='ExecuteSQLSuccess:' + @ExecuteSQL
EXEC sp_LogSystemTransaction @SBCaller,@LogMsg,'SBMessage',0,''
END
IF ((@CallBackSP IS NOT NULL) AND (LEN(@CallBackSP)>0))
BEGIN
SET @CallBackSQL = @CallBackSP + ' @Sender=''sp_SB_HMSTargetActivProc'', @Res=''' + @ExecuteSQL + ''''
SET @LogMsg='CallBackSQL:' + @CallBackSQL
EXEC sp_LogSystemTransaction @SBCaller,@LogMsg,'SBMessage',0,''
EXECUTE(@CallBackSQL);
END
END
ELSE IF @RecvReqMsgName = N'http://schemas.microsoft.com/SQL/ServiceBroker/EndDialog'
BEGIN
SET @LogMsg='MessageEnd:';
END CONVERSATION @RecvReqDlgHandle WITH CLEANUP;
END
ELSE IF @RecvReqMsgName = N'http://schemas.microsoft.com/SQL/ServiceBroker/Error'
BEGIN
DECLARE @message_body VARBINARY(MAX);
DECLARE @code int;
DECLARE @description NVARCHAR(3000);
DECLARE @xmlMessage XML;
SET @xmlMessage = CAST(@RecvReqMsg AS XML);
SET @code = (
SELECT @xmlMessage.value(
N'declare namespace
brokerns="http://schemas.microsoft.com/SQL/ServiceBroker/Error";
(/brokerns:Error/brokerns:Code)[1]',
'int')
);
SET @description = (
SELECT @xmlMessage.value(
'declare namespace
brokerns="http://schemas.microsoft.com/SQL/ServiceBroker/Error";
(/brokerns:Error/brokerns:Description)[1]',
'nvarchar(3000)')
);
IF (@code = -8462)
BEGIN
SET @LogMsg='MessageEnd:';
--EXEC sp_LogSystemTransaction @SBCaller,@LogMsg,'SBMessage',0,'';
END CONVERSATION @RecvReqDlgHandle WITH CLEANUP;
END
ELSE
BEGIN
SET @LogMsg='ERR:' + @description + ' ' + CAST(@code AS VARCHAR(20));
EXEC sp_LogSystemTransaction @SBCaller,@LogMsg,'SBError',0,'';
END CONVERSATION @RecvReqDlgHandle;
END
END
COMMIT TRANSACTION;
END
END

One thing I noticed is that this thing doesn't seem to do much. Most of the lines of code seem to be crafting a reply message for the service broker dialog.
That said, the existence of the XML data type means that you don't have to use sp_xml_preparedocument for your XML needs. Take a look at XQuery. In short, something like this should work (assuming the message body is received into an XML variable, here @RecvReqMsg):
SELECT @ExecuteSQL = @RecvReqMsg.value('(RequestMsg/CommandParameters/ExecuteSQL)[1]', 'nvarchar(1000)')
,@CallBackSP = @RecvReqMsg.value('(RequestMsg/CommandParameters/CallBackSP)[1]', 'nvarchar(1000)')
,@SBCaller = @RecvReqMsg.value('(RequestMsg/CommandParameters/SBCaller)[1]', 'nvarchar(50)')
Secondly, it looks like the messages contain SQL to be executed in the context of the database that contains this queue. What is the performance profile of those statements? That is, are they your bottleneck? If they are slow, adding Service Broker to the mix won't magically make things fast.
Thirdly, are you allowing for more than one activation procedure to be active at a time? Check the max_readers column in sys.service_queues to answer this. If it's set to 1 and your process is such that they needn't be run serially, increase that number to run them in parallel.
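The setting in question can be checked with a query like this (queue name taken from the question; sys.service_queues is the catalog view exposing activation settings):

```sql
-- max_readers reflects the MAX_QUEUE_READERS activation setting
SELECT name, activation_procedure, max_readers, is_activation_enabled
FROM sys.service_queues
WHERE name = N'HMSTargetQueueIntAct';
```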
Fourthly, it looks like you've written your activation procedure to process only one message before completing. Check out the example in this tutorial. Notice the while (1=1) loop. That makes the activation procedure go back to the queue for another message once it's finished with the current message
Lastly, why do you care? Service broker is an inherently asynchronous technology. If something/someone is waiting for a given message to be processed, I'd question that.

First and foremost I would recommend treating this as a performance issue and approaching it as any other performance issue: measure. See How to analyse SQL Server performance for a brief introduction and explicit advice on how to measure waits, IO, and CPU overall, for a session or for a statement, and how to identify bottlenecks. Once you know where the bottleneck is, you can consider ways to address it.
Now for something more specific to SSB. I would say that your procedure has three components that are interesting for the question:
the queue processing, i.e. the RECEIVE and END CONVERSATION
the message parsing (XML shredding)
the execution (EXECUTE(@ExecuteSQL))
For queue processing I recommend Writing Service Broker Procedures for how to speed up things. RECEIVE TOP(1) is the slowest possible approach. Processing in batch is faster, even much faster, if possible. To dequeue a batch you need correlated messages in the queue, which means SEND-ing many messages on a single conversation handle, see Reusing Conversation. This may complicate the application significantly. Therefore I would strongly urge you to measure and determine the bottleneck before doing such drastic changes.
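As a rough sketch of what batch dequeuing could look like (the table variable and its use are illustrative, and this assumes senders reuse conversations so that messages are correlated on a conversation group):

```sql
-- Sketch: receive a whole batch from one conversation group instead of TOP(1)
DECLARE @batch TABLE (
    conversation_handle UNIQUEIDENTIFIER,
    message_type_name   sysname,
    message_body        VARBINARY(MAX)
);

WAITFOR (
    RECEIVE conversation_handle, message_type_name, message_body
    FROM HMSTargetQueueIntAct
    INTO @batch
), TIMEOUT 5000;

-- Process the whole batch here, ideally set-based,
-- and commit once per batch instead of once per message.
```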
For the XML shredding I concur with @BenThul: using XML data type methods is better than using the MSXML procedures.
And finally there is the EXECUTE(@ExecuteSQL). This is a black box for us; only you know what is actually being executed. Not only how expensive/complex the executed SQL is, but also how likely it is to block. Lock contention between this background execution and your front-end code could slow down the queue processing a great deal. Again, measure and you will know. As a side note: from the numbers you posted, I would expect the problem to be here. In my experience an activated procedure that does exactly what yours does (RECEIVE TOP(1), XML parsing, SEND a response), without the EXECUTE, should run at a rate of about 100 messages per second and drain your queue of 2000 jobs in about 20 seconds. You observe a much slower rate, which makes me suspect the SQL actually being executed.
Finally, the easy thing to try: bump up MAX_QUEUE_READERS (again, as @BenThul already pointed out):
ALTER QUEUE HMSTargetQueueIntAct WITH ACTIVATION (MAX_QUEUE_READERS = 5)
This will allow parallel processing of requests.
You are missing proper error handling in your procedure, you should have a BEGIN TRY/BEGIN CATCH block. See Error Handling in Service Broker procedures, Error Handling and Activation, Handling exceptions that occur during the RECEIVE statement in activated procedures and Exception handling and nested transactions.
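A minimal skeleton of that structure (illustrative; the message processing itself is elided):

```sql
WHILE (1 = 1)
BEGIN
    BEGIN TRANSACTION;
    BEGIN TRY
        WAITFOR (
            RECEIVE TOP(1)
                @RecvReqDlgHandle = conversation_handle,
                @RecvReqMsg = message_body,
                @RecvReqMsgName = message_type_name
            FROM HMSTargetQueueIntAct
        ), TIMEOUT 5000;

        IF (@@ROWCOUNT = 0)
        BEGIN
            ROLLBACK TRANSACTION;
            BREAK;
        END

        -- ... process the message as in the original procedure ...

        COMMIT TRANSACTION;
    END TRY
    BEGIN CATCH
        IF XACT_STATE() <> 0
            ROLLBACK TRANSACTION;
        -- Log the error and decide whether to continue;
        -- repeated unhandled rollbacks create poison messages
        -- and can disable the queue.
    END CATCH
END
```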


SQL Server Service Broker - Ways to improve SQL execution framework

Below is an outline of a SQL execution framework design using Service Broker that I have been playing with. I've outlined the process, asked some questions throughout (highlighted using block quotes), and would be interested in hearing any advice on the design.
Overview
I have an ETL operation that needs to take data out of 5 databases and move it into 150 using select/insert statements or stored procedures. The result is about 2,000 individual queries, taking between 1 second and 1 hour each.
Each SQL query inserts data only. There is no need for data to be returned.
The operation can be broken up into 3 steps:
Pre-ETL
ETL
Post-ETL
The queries in each step can be executed in any order, but the steps have to stay in order.
Method
I am using Service Broker for asynchronous/parallel execution.
Any advice on how to tune Service Broker (e.g. any specific options to look at, or guidance for setting the number of queue workers)?
Service Broker Design
Initiator
The initiator sends an XML message containing the SQL query to the Unprocessed queue, with an activation stored procedure called ProcessUnprocessedQueue. This process is wrapped in a try/catch in a transaction, rolling back the transaction when there is an exception.
ProcessUnprocessedQueue
ProcessUnprocessedQueue passes the XML to procedure ExecSql
ExecSql - SQL Execution and Logging
ExecSql then handles the SQL execution and logging:
The XML is parsed, along with any other data about the execution that is going to be logged
Before the execution, a logging entry is inserted
If the transaction is started in the initiator, can I ensure the log entry insert is always committed if the outer transaction in the initiator is rolled back?
Something like SAVE TRANSACTION is not valid here, correct?
Should I not manipulate the transaction here, execute the query in a try/catch and, if it goes to the catch, insert a log entry for the exception and throw the exception since it is in the middle of the transaction?
The query is executed
Alternative Logging Solution?
I need to log:
The SQL query executed
Metadata about the operation
The time it takes for each process to finish
This is why I insert one row at the start and one at the end of the process
Any exceptions, if they exist
Would it be better to have an In-Memory OLTP table that contains the query information? So, I would have INSERT a row before the start of an operation and then do an UPDATE or INSERT to log exceptions and execution times. After the batch is done, I would then archive the data into a table stored to the disk to prevent the table from getting too big.
ProcessUnprocessedQueue - Manually process the results
After the execution, ProcessUnprocessedQueue gets back an updated version of the XML (to determine if the execution was successful, or other data about the transaction, for post-processing) and then sends that message to the ProcessedQueue, which does not have an activation procedure, so it can be manually processed (I need to know when a batch of queries has finished executing).
Processing the Queries
Since the ETL can be broken out into 3 steps, I create 3 XML variables where I will add all of the queries that are needed in the ETL operation, so I will have something like this:
@preEtlQueue xml
200 queries
@etlQueue xml
1500 queries
@postEtlQueue xml
300 queries
Why XML?
The XML queue variable is passed between different stored procedures as an OUTPUT parameter that updates its values and/or adds SQL queries to it. This variable needs to be written and read, so an alternative could be something like a global temp table or a persistent table.
I then process the XML variables:
Use a cursor to loop through the queries and send them to the service broker service.
Each group of queries contained in the XML variable is sent under the same conversation_group_id.
Values such as the to/from service, message type, etc. are all stored in the XML variable.
After the messages are sent to Service Broker, use a while loop to continuously check the ProcessedQueue until all the messages have been processed.
This implements a timeout to avoid an infinite loop
I'm thinking of redesigning this. Should I add an activation procedure on ProcessedQueue and then have that procedure insert the processed results into a physical table? If I do it this way, I wouldn't be able to use RECEIVE instead of a WHILE loop to check for processed items. Does that have any disadvantages?
I haven't built anything as massive as what you are doing now, but I will give you what worked for me, and some general opinions...
My preference is to avoid In-Memory OLTP and write everything to durable tables and keep the message queue as clean as possible
Use fastest possible hard drives in the server, write speed equivalent of NVMe or faster with RAID 10 etc.
I grab every message off the queue as soon as it hits and write it to a table I have named "mqMessagesReceived" (see code below, my all-purpose MQ handler named mqAsyncQueueMessageOnCreate)
I use a trigger in the "mqMessagesReceived" table that does a lookup to find which StoredProcedure to execute to process each unique message (see code below)
Each message has an identifier (in my case, the originating table name that wrote the message to the queue), and this identifier is used as a key for a lookup query run inside the trigger of the mqMessagesReceived table, to figure out which subsequent stored procedure needs to run to process each received message correctly.
Before sending a message on the MQ, you can build a generic identifier from the calling side (e.g. if a trigger is putting messages onto the MQ):
SELECT @tThisTableName = OBJECT_NAME(parent_object_id) FROM sys.objects
WHERE sys.objects.name = OBJECT_NAME(@@PROCID)
AND SCHEMA_NAME(sys.objects.schema_id) = OBJECT_SCHEMA_NAME(@@PROCID);
A configuration table provides the lookup data matching a table name with the stored procedure that needs to run to process the MQ data that arrived and was written to the mqMessagesReceived table.
Here is the definition of that lookup table
CREATE TABLE [dbo].[mqMessagesConfig](
[ID] [int] IDENTITY(1,1) NOT NULL,
[tSourceTableReceived] [nvarchar](128) NOT NULL,
[tTriggeredStoredProcedure] [nvarchar](128) NOT NULL,
CONSTRAINT [PK_mqMessagesConfig] PRIMARY KEY CLUSTERED
(
[ID] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
GO
Here is the activation stored procedure that gets run as a message hits the queue
CREATE PROCEDURE [dbo].[mqAsyncQueueMessageOnCreate]
AS
BEGIN
SET NOCOUNT ON
DECLARE
@h UNIQUEIDENTIFIER,
@t sysname,
@b varbinary(200),
@hand VARCHAR(36),
@body VARCHAR(2000),
@sqlcleanup nvarchar(MAX)
-- Get all of the messages on the queue
-- the WHILE loop is infinite, until BREAK is received when we get a null handle
WHILE 1=1
BEGIN
SET @h = NULL
--Note the semicolon..!
;RECEIVE TOP(1)
@h = conversation_handle,
@t = message_type_name,
@b = message_body
FROM mqAsyncQueue
--No message found (handle is now null)
IF @h IS NULL
BEGIN
-- all messages are now processed, but we still have the @hand variable saved from processing the last message
SET @sqlcleanup = 'EXEC [mqConversationsClearOne] @handle = N' + char(39) + @hand + char(39) + ';';
EXECUTE(@sqlcleanup);
BREAK
END
--mqAsyncMessage message type received
ELSE IF @t = 'mqAsyncMessage'
BEGIN
SET @hand = CONVERT(varchar(36),@h);
SET @body = CONVERT(varchar(2000),@b);
INSERT mqMessagesReceived (tMessageType, tMessageBody, tMessageBinary, tConversationHandle)
VALUES (@t, @body, @b, @hand);
END
--unknown message type was received that we dont understand
ELSE
BEGIN
INSERT mqMessagesReceived (tMessageBody, tMessageBinary)
VALUES ('Unknown message type received', CONVERT(varbinary(MAX), 'Unknown message type received'))
END
END
END
CREATE PROCEDURE [dbo].[mqConversationsClearOne]
@handle varchar(36)
AS
-- Note: you can check the queue by running this query
-- SELECT * FROM sys.conversation_endpoints
-- SELECT * FROM sys.conversation_endpoints WHERE NOT([State] = 'CO')
-- CO = conversing [State]
DECLARE @getid CURSOR
,@sql NVARCHAR(MAX)
,@conv_id NVARCHAR(100)
,@conv_handle NVARCHAR(100)
-- want to create a chain of statements like this, one per conversation
-- END CONVERSATION 'FE851F37-218C-EA11-B698-4CCC6AD00AE9' WITH CLEANUP;
-- END CONVERSATION 'A4B4F603-208C-EA11-B698-4CCC6AD00AE9' WITH CLEANUP;
SET @getid = CURSOR FOR
SELECT [conversation_id], [conversation_handle]
FROM sys.conversation_endpoints
WHERE conversation_handle = @handle;
OPEN @getid
FETCH NEXT
FROM @getid INTO @conv_id, @conv_handle
WHILE @@FETCH_STATUS = 0
BEGIN
SET @sql = 'END CONVERSATION ' + char(39) + @conv_handle + char(39) + ' WITH CLEANUP;'
EXEC sys.sp_executesql @stmt = @sql;
FETCH NEXT
FROM @getid INTO @conv_id, @conv_handle --, @conv_service
END
CLOSE @getid
DEALLOCATE @getid
and the table named "mqMessagesReceived" has this trigger
CREATE TRIGGER [dbo].[mqMessagesReceived_TriggerUpdate]
ON [dbo].[mqMessagesReceived]
AFTER INSERT
AS
BEGIN
DECLARE
@strMessageBody nvarchar(4000),
@strSourceTable nvarchar(128),
@strSourceKey nvarchar(128),
@strConfigStoredProcedure nvarchar(4000),
@sqlRunStoredProcedure nvarchar(4000),
@strErr nvarchar(4000)
SELECT @strMessageBody = ins.tMessageBody FROM INSERTED ins;
SELECT @strSourceTable = (select txt_Value from dbo.fn_ParseText2Table(@strMessageBody,'|') WHERE Position=2);
SELECT @strSourceKey = (select txt_Value from dbo.fn_ParseText2Table(@strMessageBody,'|') WHERE Position=3);
-- look in mqMessagesConfig to find the name of the final stored procedure
-- to run against the SourceTable
-- e.g. @strConfigStoredProcedure = mqProcess-tblLabDaySchedEventsMQ
SELECT @strConfigStoredProcedure =
(select tTriggeredStoredProcedure from dbo.mqMessagesConfig WHERE tSourceTableReceived = @strSourceTable);
SET @sqlRunStoredProcedure = 'EXEC [' + @strConfigStoredProcedure + '] @iKey = ' + @strSourceKey + ';';
EXECUTE(@sqlRunStoredProcedure);
INSERT INTO [mqMessagesProcessed]
(
[tMessageBody],
[tSourceTable],
[tSourceKey],
[tTriggerStoredProcedure]
)
VALUES
(
@strMessageBody,
@strSourceTable,
@strSourceKey,
@sqlRunStoredProcedure
);
END
Also, just some general SQL Server tuning advice that I found I also had to do (for dealing with a busy database)
By default there is just one single TempDB file per SQL Server, and TempDB has initial size of 8MB
However, TempDB gets reset back to its initial 8MB size every time the server reboots, and this company was rebooting the server every weekend via cron/task scheduler.
The problem we saw was a slow database and lots of record locks, but only first thing Monday morning when everyone was hammering the database at once as they began their work week.
When TempDB gets automatically re-sized, it is "locked", and therefore nobody at all can use that single TempDB (which is why the SQL Server was regularly becoming non-responsive).
By Friday the TempDB had grown to over 300MB.
So, following the best-practice recommendation, I created one TempDB file per vCPU (8 TempDB files), distributed them across the two available hard drives on that server and, most importantly, set their initial size to more than we need (I chose 200MB each).
This fixed the problem with the SQL Server slowdown and record locking that was experienced every Monday morning.
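The change can be scripted roughly like this (the logical name tempdev is the default primary tempdb data file; the path and second file name are hypothetical, adjust to your server):

```sql
-- Raise the initial size of the primary tempdb data file
ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, SIZE = 200MB);

-- Add further data files, one per vCPU (path is illustrative)
ALTER DATABASE tempdb ADD FILE
    (NAME = tempdev2, FILENAME = N'D:\TempDB\tempdev2.ndf', SIZE = 200MB);
```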

Service Broker Internal Activation Poisoning - Where?

I am experiencing poison messages and I am not sure why.
My broker setup looks like this:
CREATE MESSAGE TYPE
[//DB/Schema/RequestMessage]
VALIDATION = WELL_FORMED_XML;
CREATE MESSAGE TYPE
[//DB/Schema/ReplyMessage]
VALIDATION = WELL_FORMED_XML;
CREATE CONTRACT [//DB/Schema/Contract](
[//DB/Schema/RequestMessage] SENT BY INITIATOR,
[//DB/Schema/ReplyMessage] SENT BY TARGET
)
CREATE QUEUE Schema.TargetQueue
CREATE SERVICE [//DB/Schema/TargetService]
ON QUEUE Schema.TargetQueue (
[//DB/Schema/Method3Contract]
)
CREATE QUEUE Schema.InitiatorQueue
CREATE SERVICE [//DB/Schema/InitiatorService]
ON QUEUE Schema.InitiatorQueue
Then I have my internal activation procedure:
CREATE PROCEDURE Schema.Import
AS
DECLARE @RequestHandle UNIQUEIDENTIFIER;
DECLARE @RequestMessage VARCHAR(8);
DECLARE @RequestMessageName sysname;
WHILE (1=1)
BEGIN
BEGIN TRANSACTION;
WAITFOR (
RECEIVE TOP(1)
@RequestHandle = conversation_handle,
@RequestMessage = message_body,
@RequestMessageName = message_type_name
FROM
Schema.TargetQueue
), TIMEOUT 5000;
IF (@@ROWCOUNT = 0)
BEGIN
COMMIT TRANSACTION;
BREAK;
END
EXEC Schema.ImportStep1 @ID = @RequestMessage;
--EXEC Schema.ImportStep2 @ID = @RequestMessage;
END CONVERSATION @RequestHandle;
COMMIT TRANSACTION;
END
My activation is enabled by:
ALTER QUEUE Schema.TargetQueue
WITH
STATUS = ON,
ACTIVATION
( STATUS = ON,
PROCEDURE_NAME = Schema.Import,
MAX_QUEUE_READERS = 10,
EXECUTE AS SELF
)
I initiate this process with this stored procedure
CREATE PROCEDURE Schema.ImportStart
AS
BEGIN
DECLARE @ID VARCHAR(8);
DECLARE Cursor CURSOR FOR
SELECT ID FROM OtherDatabase.OtherSchema.ImportTable
EXCEPT
SELECT ID FROM Table
OPEN Cursor;
FETCH NEXT FROM Cursor INTO @ID;
WHILE @@FETCH_STATUS = 0
BEGIN
DECLARE @InitiateHandle UNIQUEIDENTIFIER;
DECLARE @RequestMessage VARCHAR(8);
BEGIN TRANSACTION;
BEGIN DIALOG
@InitiateHandle
FROM SERVICE
[//DB/Schema/InitiatorService]
TO SERVICE
N'//DB/Schema/TargetService'
ON CONTRACT
[//DB/Schema/Contract]
WITH
ENCRYPTION = OFF;
SELECT @RequestMessage = @ID;
SEND ON CONVERSATION
@InitiateHandle
MESSAGE TYPE
[//DB/Schema/RequestMessage]
(@RequestMessage);
COMMIT TRANSACTION;
FETCH NEXT FROM Cursor INTO @ID;
END
CLOSE Cursor;
DEALLOCATE Cursor;
END
So how this should work is:
I execute ImportStart
A message for each ID gets generated
Internal activation makes Import steps execute
Instead, I get poison messaging and the queue becomes disabled.
If, however, I:
set Schema.TargetQueue activation to OFF
EXEC Schema.ImportStart
EXEC Schema.Import manually
it works fine.
Any insights anyone?
Well:
Your message types are defined as WELL_FORMED_XML, yet you send a VARCHAR(8) as the message body. Does that really work?
You use [//DB/Schema/Method3Contract] for the target queue, but do not define it. A misspelling, most likely.
You specify EXECUTE AS SELF in the queue activation. BOL says some mystical thing about this case:
SELF
Specifies that the stored procedure executes as the current user. (The database principal executing this ALTER QUEUE statement.)
I'm not really sure I understand the quoted statement, because it apparently contradicts your experience. If it were your user account, everything should have been fine, because you seem to have all the permissions necessary to do the job.
So, just in case - who is the owner of the Schema schema? What permissions does this principal possess? And, if it's not you, who executes the alter queue statement (and why)?
Without access to logs, it's significantly more difficult to diagnose the problem, but I would start with creating a new user account with permissions identical to yours, setting it the owner of the Schema schema and then slowly working it down, revoking unnecessary permissions until it breaks. Assuming, of course, it will work at all.
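Regarding the first point: if the WELL_FORMED_XML validation turns out to be the problem, one option (a sketch; the element names are made up) is to wrap the ID in an XML fragment before SEND-ing, and unwrap it on the receive side:

```sql
DECLARE @ID VARCHAR(8) = 'A1234567';  -- hypothetical ID value
DECLARE @RequestMessage XML =
    (SELECT @ID AS [Id] FOR XML PATH('Request'), TYPE);
-- SEND ON CONVERSATION ... ( @RequestMessage );

-- Receive side: extract the ID back out of the XML body
DECLARE @Received VARCHAR(8) =
    @RequestMessage.value('(/Request/Id)[1]', 'VARCHAR(8)');
```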

SQL Service Broker and Internal Activation.. Is the infinite loop in the official tutorial correct?

One of the basics of building a distributed application that uses the asynchronous communication can be expressed as Do not wait actively for any event! This way, the natural solution based on SQL Service Broker is to use the activation of a stored procedure by the message that arrived to the queue.
The Lesson 2: Creating an Internal Activation Procedure from the official Microsoft tutorial shows how to bind the stored procedure to the message queue. It also suggests how the stored procedure should be implemented.
(I am new to SQL, but shouldn't there be one more BEGIN after the CREATE PROCEDURE ... AS, and one more END before the GO?)
Do I understand it correctly? See my questions below the code...
CREATE PROCEDURE TargetActivProc
AS
DECLARE @RecvReqDlgHandle UNIQUEIDENTIFIER;
DECLARE @RecvReqMsg NVARCHAR(100);
DECLARE @RecvReqMsgName sysname;
WHILE (1=1)
BEGIN
BEGIN TRANSACTION;
WAITFOR
( RECEIVE TOP(1)
@RecvReqDlgHandle = conversation_handle,
@RecvReqMsg = message_body,
@RecvReqMsgName = message_type_name
FROM TargetQueueIntAct
), TIMEOUT 5000;
IF (@@ROWCOUNT = 0)
BEGIN
ROLLBACK TRANSACTION;
BREAK;
END
IF @RecvReqMsgName =
N'//AWDB/InternalAct/RequestMessage'
BEGIN
DECLARE @ReplyMsg NVARCHAR(100);
SELECT @ReplyMsg =
N'<ReplyMsg>Message for Initiator service.</ReplyMsg>';
SEND ON CONVERSATION @RecvReqDlgHandle
MESSAGE TYPE
[//AWDB/InternalAct/ReplyMessage]
(@ReplyMsg);
END
ELSE IF @RecvReqMsgName =
N'http://schemas.microsoft.com/SQL/ServiceBroker/EndDialog'
BEGIN
END CONVERSATION @RecvReqDlgHandle;
END
ELSE IF @RecvReqMsgName =
N'http://schemas.microsoft.com/SQL/ServiceBroker/Error'
BEGIN
END CONVERSATION @RecvReqDlgHandle;
END
COMMIT TRANSACTION;
END
GO
When the message arrives, the procedure is called and enters the "infinite" loop. Actually, the loop is not infinite, because of the BREAK after the ROLLBACK when no data arrived (after the TIMEOUT).
If data arrived, the BREAK is skipped. If the expected message arrived, the reply is sent back. If the ...EndDialog or ...Error message is received, END CONVERSATION is executed. Can some other kind of message also be observed here?
Once a message has arrived (and been processed), the transaction is committed.
But why the loop? Is the intention to process other messages that got stuck in the queue because of a broken communication line in the past? Or because more messages came at once and could not be processed quickly enough?
What happens when another message is queued while the stored procedure is still running? Is another worker process assigned to process it? Can another stored procedure be launched in parallel? If yes, then why the loop?
Thanks for your help, Petr
Internal activation is not like a trigger. Specifically, the activated procedure does not get launched for each message that arrived. Instead the procedure is launched when there is something to process and is supposed to dequeue messages contiguously (in a loop) while the SSB infrastructure is monitoring the progress and, if necessary, launches a second procedure to help, up to the max specified. See Understanding Queue Monitors.
Having a loop in the activated procedure is not strictly required; things should work fine even without the loop. The loop performs better in a very busy environment. See also this old MSDN discussion.

If not Exists logic

I am doing something like this:
exec up_sp1 -- executing this stored procedure which populates the table sqlcmds
---Check the count of the table sqlcmds,
---if the count is zero then execute up_sp2,
----otherwise wait till the count becomes zero,then exec up_sp2
IF NOT EXISTS ( SELECT 1 FROM [YesMailReplication].[dbo].[SQLCmds])
BEGIN
exec up_sp2
END
What would the correct t-sql look like?
T-SQL has no WAITFOR semantics except for Service Broker queues. So all you can do, short of using Service Broker, is to poll periodically and see if the table was populated. For small scale this works fine, but for high scale it breaks as the right balance between wait time and poll frequency is difficult to achieve, and is even harder to make it adapt to spikes and lows.
But if you are willing to use Service Broker, you can build a much more elegant and scalable solution by leveraging activation: up_sp1 drops a message into a queue, and this message activates the queue procedure, which starts and launches up_sp2 in turn, after up_sp1 has committed. This is a reliable mechanism that handles server restarts, mirroring and clustering failover, and even rebuilding of the server from backups. See Asynchronous procedure execution for an example of achieving something very similar.
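A rough sketch of that shape (all Service Broker object names here are hypothetical, and the one-time setup of the message type, contract, queue, and service is omitted):

```sql
-- At the end of up_sp1, within its transaction, drop a trigger message:
DECLARE @h UNIQUEIDENTIFIER;
BEGIN DIALOG CONVERSATION @h
    FROM SERVICE [//YesMail/RunnerService]
    TO SERVICE N'//YesMail/RunnerService'
    ON CONTRACT [//YesMail/RunnerContract]
    WITH ENCRYPTION = OFF;
SEND ON CONVERSATION @h MESSAGE TYPE [//YesMail/RunSp2] (N'<go/>');

-- The queue's activated procedure RECEIVEs the message, ends the
-- conversation, and then simply runs: EXEC up_sp2;
```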
The Service Broker solution is surely the best - but there is a WAITFOR solution as well:
exec up_sp1;
while exists (select * from [YesMailReplication].[dbo].[SQLCmds]) begin
waitfor delay ('00:00:10'); -- wait for 10 seconds
end;
exec up_sp2;
Try this:
DECLARE @Count int
SELECT @Count = COUNT(*) FROM [YesMailReplication].[dbo].[SQLCmds]
IF @Count = 0 BEGIN
exec up_sp2
END
Why not keep it simple and self-documenting?
DECLARE @Count int;
SELECT @Count = Count(*) FROM [YesMailReplication].[dbo].[SQLCmds]
IF @Count = 0 exec up_sp2

Service Broker : Sys.Conversation_endpoints filling up with CO/CONVERSING messages when using With Cleanup

We recently identified a problem with one of our databases where as a result of a 'fire & forget' setup (i.e: conversations being closed immediately after sending), our sys.conversation_endpoints table was filling up with DI/DISCONNECTED_INBOUND messages. This eventually spilled over into the tempDB, causing it to grow enormously and eat up precious disk space. We eventually resolved this issue by commenting out the line
END CONVERSATION @handle WITH CLEANUP
in our sending SP and closing the conversations in our receiving SP using the same code,
END CONVERSATION @handle WITH CLEANUP
However, we now have a new issue. Since moving servers (and migrating from SQL Server 2005 to SQL Server 2008) we've recently discovered that sys.conversation_endpoints is now filling up with CO/CONVERSING messages, indicating that the conversations are not being closed. The receiving SP is closing them, or at least is running the command to do so, so I don't understand where these messages are coming from.
I've tried going back to ending the conversation at the point of send, but it has no effect. Is it wrong to end conversations on the receiving end using WITH CLEANUP? Or is there some other problem?
This post on TechTarget seems to suggest it's a bug, and that running a job to clean up the leftovers is the only solution...
UPDATE:
Pawel pointed out below that I should be avoiding the Fire & Forget Pattern, and I've added an activated SP to the initiator queue to end any conversations. However, sys.conversation_endpoints is STILL filling up, this time with CD/CLOSED messages. Here's the structure of my queues
Send_SP:
DECLARE @h UNIQUEIDENTIFIER
BEGIN DIALOG CONVERSATION @h
FROM SERVICE 'InitiatorQueue' TO SERVICE 'TargetQueue'
ON CONTRACT 'MyContract' WITH ENCRYPTION = OFF;
SEND ON CONVERSATION @h MESSAGE TYPE 'MyMessage' (@msg)
Receive_SP (Activated SP on TargetQueue)
DECLARE @type SYSNAME, @h UNIQUEIDENTIFIER, @msg XML;
DECLARE @target TABLE (
[message_type_name] SYSNAME,
[message_body] VARBINARY(MAX),
[conversation_handle] UNIQUEIDENTIFIER
)
WHILE(1=1)
BEGIN TRANSACTION
WAITFOR(RECEIVE TOP (1000)
[message_type_name],[message_body],[conversation_handle]
FROM TargetQueue INTO @target), TIMEOUT 2000
IF(@@rowcount!=0)
BEGIN
WHILE((SELECT count(*) FROM @target) > 0)
BEGIN
SELECT TOP (1) @type = [message_type_name],
@msg = [message_body],
@h = [conversation_handle] FROM @target;
-- Handle message here
END CONVERSATION @h;
DELETE TOP (1) FROM @target;
END
END
COMMIT TRANSACTION;
End_SP (Activated SP on InitiatorQueue)
DECLARE @type SYSNAME, @h UNIQUEIDENTIFIER, @msg XML;
DECLARE @init TABLE (
[message_type_name] SYSNAME,
[message_body] VARBINARY(MAX),
[conversation_handle] UNIQUEIDENTIFIER
)
WHILE(1=1)
BEGIN TRANSACTION
WAITFOR(RECEIVE TOP (1000)
[message_type_name],[message_body],[conversation_handle]
FROM InitiatorQueue INTO @init), TIMEOUT 2000
IF(@@rowcount!=0)
BEGIN
WHILE((SELECT count(*) FROM @init) > 0)
BEGIN
SELECT TOP (1) @type = [message_type_name],
@msg = [message_body],
@h = [conversation_handle] FROM @init;
END CONVERSATION @h;
DELETE TOP (1) FROM @init;
END
END
COMMIT TRANSACTION;
COMMIT TRANSACTION;
Using the fire-and-forget pattern will inevitably lead to this and other types of issues. Additionally, it will make any hypothetical errors go unnoticed. Is there any reason why you can't change the message exchange pattern so that the target issues END CONVERSATION (without cleanup!) once it receives a message, and the initiator then calls END CONVERSATION (again, without cleanup) upon receiving the end-conversation message from the target?
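A sketch of that exchange, simplified to a single message (the handle variable names follow the question's code):

```sql
-- Target side: after processing a received request message,
-- close the target's half of the dialog normally
END CONVERSATION @h;   -- note: no WITH CLEANUP

-- Initiator side (activated procedure on InitiatorQueue): when the
-- EndDialog message arrives, close the initiator's half as well
IF @type = N'http://schemas.microsoft.com/SQL/ServiceBroker/EndDialog'
    END CONVERSATION @h;   -- again, no WITH CLEANUP
```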
