Service Broker message flow - sql-server

I have two servers in two different locations and need to exchange data between them asynchronously.
Server A is our data server; we store customer info here.
Server B is our processing server; production is processed here.
Each production operation on server B has a production group. What I need to do is:
1. A sends a message to B asking: what operations are planned for today in this group (GUID)?
2. B answers with an XML list of operations scheduled for today.
3. A answers with an XML list of operations to cancel.
4. B cancels those operations and ends the conversation.
My question is: What is the right way to go about this? Can I do this in just a single dialog using one contract? Should I?
With a contract like this:
CREATE CONTRACT [GetScheduledContract]
AUTHORIZATION [xxx]
(GetScheduledOutCalls SENT BY INITIATOR,
ReturnScheduledOutCalls SENT BY TARGET,
DeleteScheduledOutCalls SENT BY INITIATOR)
Or should I separate the tasks to different contracts and dialogs?

What you have seems good to me as an MVP (i.e. if things go right, it'll work). A couple of things:
Consider adding one more reply from the target saying "operation completed successfully" before closing the conversation. Upon receipt, the initiator can also close their end of it.
What happens if any of those operations is explicitly not able to be completed (e.g. in your step 4, the request is to delete something that's not present or that delete causes a foreign key violation)? I'd add in some sort of error message type (sent by any) that allows either side to tell the other "hey… something went wrong".
What happens if any of those operations is implicitly not able to be completed (e.g. the message never gets delivered)? The other side may not respond for some reason. Build in some way to at least detect and alert on that.
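For illustration, a sketch of what the expanded contract could look like; the two extra message type names (ConfirmCancelledOutCalls and OutCallsError) are mine, not from the question:
CREATE MESSAGE TYPE [ConfirmCancelledOutCalls] VALIDATION = WELL_FORMED_XML;
CREATE MESSAGE TYPE [OutCallsError] VALIDATION = WELL_FORMED_XML;
CREATE CONTRACT [GetScheduledContract]
AUTHORIZATION [xxx]
(GetScheduledOutCalls SENT BY INITIATOR,
ReturnScheduledOutCalls SENT BY TARGET,
DeleteScheduledOutCalls SENT BY INITIATOR,
ConfirmCancelledOutCalls SENT BY TARGET,  -- "operations cancelled successfully"
OutCallsError SENT BY ANY)                -- either side reports "something went wrong"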

Related

SQLWatch - notifications not being sent

I’m wondering if someone with knowledge/experience of SQLWatch could help me out with something.
We have SQLWatch set up on 2 DEV servers and 1 central monitoring server. It's working fine and the data from the 2 DEV servers is coming over to the central server; I can see alerts being recorded in the table [dbo].[sqlwatch_logger_check].
However, our issue is that we are not being notified by any means (email, PowerShell script running).
What's interesting is that if we drop a row into the table [dbo].[sqlwatch_meta_action_queue], then the alert notification does happen.
So our issue seems to be that, for some reason, alerts are being raised but the record is not being inserted into the queue table. I suspect some sort of mapping issue, but as it stands it all looks OK. I use the following to check:
SELECT C.check_id, check_name, check_description, check_enabled, A.action_description, A.action_exec_type, A.action_exec
FROM [dbo].[sqlwatch_config_check] C
LEFT JOIN [dbo].[sqlwatch_config_check_action] CA ON C.check_id = CA.check_id
LEFT JOIN [dbo].[sqlwatch_config_action] A ON CA.action_id = A.action_id
WHERE C.check_id = -1
And it shows the failed job is set to run our PowerShell script, which it does when the row is manually inserted.
Any ideas on what the cause may be here?
Thanks,
Nic
I am the creator of SQLWATCH.
Firstly, just to clarify: the default notifications that come with SQLWATCH only work in a local scope, i.e. they fire on each monitored instance where @@SERVERNAME = sql_instance. If you are expecting the default notifications to fire from the central server for a remote instance, this will not happen. The default notifications on the central server only fire for the central server itself, not for data imported from the remote instances. This is done to avoid a situation where pulls into the central repository are infrequent and notifications would therefore be significantly delayed.
However, there is nothing stopping you from creating Check Rules or Reports to fire on the back of the imported data.
Secondly, the checks are not alerts per se. Checks are just... well, checks... that run periodically and make sure everything is in order. Checks can trigger an action to send an email. For this, as you have worked out, there is an association table that links together checks and actions.
As for your problem: is the actual action enabled? All actions that are not associated with a report are disabled by default, as they need to be configured first.
Add the action_enabled column to your query:
SELECT C.check_id, check_name, check_description, check_enabled, A.action_description, A.action_exec_type, A.action_exec, [action_enabled]
FROM [dbo].[sqlwatch_config_check] C
LEFT JOIN [dbo].[sqlwatch_config_check_action] CA ON C.check_id = CA.check_id
LEFT JOIN [dbo].[sqlwatch_config_action] A ON CA.action_id = A.action_id
WHERE C.check_id = -1
Or, there is already a view that should provide you with the complete mapping:
SELECT *
FROM [dbo].[vw_sqlwatch_report_config_check_action]
WHERE check_id = -1
The application log table [dbo].[sqlwatch_app_log] should also contain valuable information. Did you look in there for anything out of ordinary?
Summarising
In order to enable alerts in a brand new install of SQLWATCH, all that's needed is to set up action_exec with your email details and set action_enabled to 1. If you have made some other changes, it may be easier to reinstall back to the defaults.
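For example, a minimal sketch, assuming action_exec and action_enabled live on [dbo].[sqlwatch_config_action] (as the joins above suggest) and that the action mapped to check_id = -1 is the one you want to enable:
-- enable the notification action mapped to the failing-job check
UPDATE A
SET A.action_exec = '<your PowerShell / send-email command here>',
    A.action_enabled = 1
FROM [dbo].[sqlwatch_config_action] A
INNER JOIN [dbo].[sqlwatch_config_check_action] CA ON CA.action_id = A.action_id
WHERE CA.check_id = -1;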

External Service too fast for database in laravel

I have an application which is connected to an external webservice. The webservice sends messages with an ID to the Laravel application. Within the controller I check whether the ID of the message already exists in the database. If not, I store the message with the ID; if it exists, I skip the message.
Unfortunately, the webservice sometimes sends a message with the same ID multiple times within the same second. It's an external service, so I have no control over it.
The problem is that the messages come in so fast that the database has not finished saving a message before the next one reaches the controller. As a result, the check for an existing ID fails and the application tries to save the same message again. This leads to an exception, because I have a unique constraint on the ID column.
What is the best strategy to handle this? Using a queue is not a good solution, because the messages are time-critical; the queue would be even slower and would lead to a message jam/congestion.
Any idea or help is appreciated a lot! Thanks!
You can send INSERT IGNORE statements to your database:
INSERT IGNORE INTO messages (...) VALUES (...)
or
INSERT INTO messages (...) VALUES (...) ON DUPLICATE KEY UPDATE id=id.
You can try updating on duplicate. That is a way I have used in the past to get around issues like this. Not sure if it's the perfect solution, but it's definitely an option. I assume you are using MySQL.
https://dev.mysql.com/doc/refman/8.0/en/insert-on-duplicate.html

About multiple conversations and/or queues

I'm wondering about the exact definition of a conversation, and the MS docs and tutorials are not quite on point with this.
First... is there a difference between a dialog and a conversation?
Assuming a queue should only contain identical or equivalent messages (i.e. message types being handled by an activated procedure in a way similar to a CASE WHEN / SWITCH scenario):
Does each conversation revolve around a unique queue?
If a procedure A sends a message to a queue activating a procedure B which handle the message then emits an answer, can procedure A wait for the answer or should I use a procedure C? Am I right to assume that I must create two queues operating on the same contract? But how many services? In that scenario how and where would I use END CONVERSATION?
If a procedure A sends a message to a queue activating a procedure B which handles the message then emits another/several message(s) for another/some other procedure(s) C, are all those queues/services/etc. on the same conversation? The same conversation group? (What would I do after GET CONVERSATION GROUP to ensure my conversations are in the same group?) Does that imply passing the same conversation handle when issuing BEGIN TRANSACTION / BEGIN DIALOG, or using
[ WITH
[ { RELATED_CONVERSATION = related_conversation_handle
| RELATED_CONVERSATION_GROUP = related_conversation_group_id } ]
? And... last but not least: if I'm using multiple messages to parallel/fork calls to C with different parameters, in which cases would I want to start totally different conversations/conversation groups doing the same thing, or is it always better to have a single "narration"?
Oh... another thing: is there a best practice for using several messages to kick off some processing steps and then waiting for every one of them to finish before starting another one? Is there a way in which each procedure would receive a message, send an answer, and then the procedure activated by the answers could check/count the previous messages in its queue and go on only if they are all there? Would that need to check the conversation id (or conversation group id) to be sure those messages were all emitted by the same group of answers?
I hope that's not too confusing, but the MS tutorials are... well... a bit simplistic.
First, a dialog is the same as a conversation as far as I can tell. Two names for the same thing.
Queues can contain many different message types. It's up to the thing processing the messages (whether that's an internally activated stored procedure or an external application) to discriminate on the type and do the "right thing" with it. A service can have only one queue, but a queue can have many services (though I haven't actually seen that in practice). A service defines what message types it can both accept and produce through the service contract.
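For reference, here is a minimal sketch of how those pieces fit together; all object names are made up for illustration:
-- message types: the vocabulary
CREATE MESSAGE TYPE [RequestMessage] VALIDATION = WELL_FORMED_XML;
CREATE MESSAGE TYPE [ReplyMessage] VALIDATION = WELL_FORMED_XML;
-- the contract says who may send which message type
CREATE CONTRACT [RequestReplyContract]
    ([RequestMessage] SENT BY INITIATOR,
     [ReplyMessage] SENT BY TARGET);
-- each service is bound to exactly one queue; the target service also lists
-- the contracts it accepts
CREATE QUEUE dbo.RequestQueue;
CREATE SERVICE [RequestService] ON QUEUE dbo.RequestQueue ([RequestReplyContract]);
CREATE QUEUE dbo.ReplyQueue;
CREATE SERVICE [ReplyService] ON QUEUE dbo.ReplyQueue;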
As to your question about whether you want a queue processor to respond on the same conversation or start a new one: that's completely up to you. My suggestion would be to respond on the same conversation unless you know you have a good reason not to. As for how to use the same conversation, you can get the conversation handle when you issue the RECEIVE statement; use that as the conversation handle when you issue the subsequent SEND with your reply.
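A minimal sketch of replying on the same conversation, reusing the hypothetical objects from the sketch above:
DECLARE @handle UNIQUEIDENTIFIER, @msg_type SYSNAME, @body XML;
WAITFOR (
    RECEIVE TOP (1)
        @handle = conversation_handle,
        @msg_type = message_type_name,
        @body = CAST(message_body AS XML)
    FROM dbo.RequestQueue
), TIMEOUT 5000;
IF @msg_type = N'RequestMessage'
    -- reply on the very conversation the request arrived on
    SEND ON CONVERSATION @handle
        MESSAGE TYPE [ReplyMessage]
        (CAST('<reply/>' AS XML));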
The way I think about conversation groups is you may need to talk to different services in regards to the same thing. Here's a contrived example:
Let's say that I have a new hire process. It has the following steps:
Create a login
Create an entry in the payroll system
Register them with your insurance provider
They're all logically for the same event though (i.e. "I hired a new employee"). So, you could bundle all of the conversations in one conversation group and keep track of the individual conversations separately. Something like this:
DECLARE @handle UNIQUEIDENTIFIER, @group UNIQUEIDENTIFIER = NEWID(),
    @message XML = '<employee name="Ben Thul" />';
BEGIN TRAN
BEGIN DIALOG @handle
FROM SERVICE [EmployeeService]
TO SERVICE 'LoginService'
ON CONTRACT [LoginContract]
WITH RELATED_CONVERSATION_GROUP = @group;
SEND ON CONVERSATION (@handle)
MESSAGE TYPE [NewLoginRequest]
(@message);
INSERT INTO [dbo].[OpenRequests]
(
[GroupIdentifier],
[ConversationIdentifier],
[ServiceName],
[Status],
[date_modified]
)
VALUES
(@group, @handle, 'LoginService', 'RequestSent', GETUTCDATE());
BEGIN DIALOG @handle
FROM SERVICE [EmployeeService]
TO SERVICE 'PayrollService'
ON CONTRACT [PayrollContract]
WITH RELATED_CONVERSATION_GROUP = @group;
SEND ON CONVERSATION (@handle)
MESSAGE TYPE [NewPayrollRequest]
(@message);
INSERT INTO [dbo].[OpenRequests]
(
[GroupIdentifier],
[ConversationIdentifier],
[ServiceName],
[Status],
[date_modified]
)
VALUES
(@group, @handle, 'PayrollService', 'RequestSent', GETUTCDATE());
BEGIN DIALOG @handle
FROM SERVICE [EmployeeService]
TO SERVICE 'InsuranceService'
ON CONTRACT [InsuranceContract]
WITH RELATED_CONVERSATION_GROUP = @group;
SEND ON CONVERSATION (@handle)
MESSAGE TYPE [NewInsuranceRequest]
(@message);
INSERT INTO [dbo].[OpenRequests]
(
[GroupIdentifier],
[ConversationIdentifier],
[ServiceName],
[Status],
[date_modified]
)
VALUES
(@group, @handle, 'InsuranceService', 'RequestSent', GETUTCDATE());
COMMIT
Now you have a way to track each of those requests separately and a way to tie them all to the same logical operation. As each service processes its message, it will respond with a success, failure, or "I need something else" message, at which point you can update the OpenRequests table with the current status.
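For example, the initiator-side processing could look roughly like this; the queue name dbo.EmployeeQueue and the reply message type names are hypothetical, while the OpenRequests columns come from the example above:
DECLARE @handle UNIQUEIDENTIFIER, @msg_type SYSNAME;
RECEIVE TOP (1)
    @handle = conversation_handle,
    @msg_type = message_type_name
FROM dbo.EmployeeQueue;
-- map the reply back to the conversation we stored when the request was sent
UPDATE [dbo].[OpenRequests]
SET [Status] = CASE @msg_type
                   WHEN N'RequestSucceeded' THEN 'Done'
                   WHEN N'RequestFailed' THEN 'Error'
                   ELSE [Status]
               END,
    [date_modified] = GETUTCDATE()
WHERE [ConversationIdentifier] = @handle;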
Service broker can be overwhelming. My advice for you is to think about what messages need to be passed from where to where and start designing services, message types, contracts, etc around that. It's unlikely that you're going to use all of the functionality that SB has to offer.

What does SQL Server sys.dm_broker_activated_tasks tell me?

I have a Service Broker application I inherited that abends with a false negative.
I think it is using sys.dm_broker_activated_tasks incorrectly, and I want to validate that my understanding of what that view shows is correct.
Can I assume that this view shows tasks that are being activated, and not so much those that were activated and are now in the process of completing?
The procedure I have monitors for completion of processing by looking for when there are no entries in sys.dm_broker_activated_tasks for that queue.
This appears to work (mostly), except occasionally at the end when processing in the queue is winding down.
The row in that view seems to disappear before the final message in the queue has completed.
And unfortunately, as this uses the fire and forget anti-pattern, I can't really at this time do more than make the polling monitor a bit smarter.
That view doesn't do much apart from:
Returns a row for each stored procedure activated by Service Broker.
https://msdn.microsoft.com/en-us/library/ms175029.aspx
Not sure if you have looked at the code, but I think a better use of it is to combine it with sys.dm_exec_sessions:
SELECT
    at.spid,
    DB_NAME(at.database_id) AS [DatabaseName],
    at.queue_id,
    at.[procedure_name],
    s.[status],
    s.login_time
FROM sys.dm_broker_activated_tasks at
INNER JOIN sys.dm_exec_sessions s
    ON at.spid = s.session_id;
Another good place to troubleshoot Service Broker is sys.transmission_queue. You will see every message sent there until there is an acknowledgement that it was received.
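For example, a quick look at what is still waiting to be delivered and why:
SELECT conversation_handle,
       to_service_name,
       message_type_name,
       enqueue_time,
       transmission_status  -- reason for the most recent delivery failure, if any
FROM sys.transmission_queue
ORDER BY enqueue_time;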

Sql Server Service Broker - thorough, in-use example of externally activated console app

I need some guidance from anyone who has deployed a real-world, in-production application that uses the Sql Server Service Broker external activation mechanism (via the Service Broker External Activator from the Feature Pack).
Current mindset:
My specs are rather simple (or at least I think so), so I'm thinking of the following basic flow:
1. An order-like entity gets inserted into Table_Orders with state "confirmed".
2. SP_BeginOrder gets executed and does the following:
   2.1. begins a TRANSACTION
   2.2. starts a DIALOG from Service_HandleOrderState to Service_PreprocessOrder
   2.3. stores the conversation handle (from now on PreprocessingHandle) in a specific column of the Orders table
   2.4. sends a MESSAGE of type Message_PreprocessOrder containing the order id, using PreprocessingHandle
   2.5. ends the TRANSACTION
   Note that I'm not ending the conversation; I don't want "fire-and-forget".
3. Event notification on Queue_PreprocessOrder activates an instance of PreprocessOrder.exe (max concurrent of 1), which does the following:
   3.1. begins a SqlTransaction
   3.2. receives top 1 MESSAGE from Queue_PreprocessOrder
   3.3. if the message type is Message_PreprocessOrder (format XML):
        3.3.1. sets the order state to "preprocessing" in Table_Orders using the order id in the message body
        3.3.2. loads n collections of data from which it computes an n-ary Cartesian product (via LINQ; AFAIK this is not possible in T-SQL) to determine the order items collection
        3.3.3. inserts the order item rows into Table_OrderItems
        3.3.4. sends a MESSAGE of type Message_PreprocessingDone, containing the same order id, using PreprocessingHandle
        3.3.5. ends the conversation pertaining to PreprocessingHandle
   3.4. commits the SqlTransaction
   3.5. exits with Environment.Exit(0)
4. Internal activation on Queue_HandleOrderState executes an SP (max concurrent of 1) that:
   4.1. begins a TRANSACTION
   4.2. receives top 1 MESSAGE from Queue_InitiatePreprocessOrder
   4.3. if the message type is Message_PreprocessingDone:
        4.3.1. sets the order state to "processing" in Table_Orders using the order id in the message body
        4.3.2. starts a DIALOG from Service_HandleOrderState to Service_ProcessOrderItem
        4.3.3. stores the conversation handle (from now on ProcessOrderItemsHandle) in a specific column of Table_Orders
        4.3.4. creates a cursor over the rows in Table_OrderItems for the current order id and, for each row, sends a MESSAGE of type Message_ProcessOrderItem, containing the order item id, using ProcessOrderItemsHandle
   4.4. if the message type is Message_ProcessingDone:
        4.4.1. sets the order state to "processed" in Table_Orders using the order id in the message body
   4.5. if the message type is http://schemas.microsoft.com/SQL/ServiceBroker/EndDialog (END DIALOG):
        4.5.1. ends the conversation pertaining to the conversation handle of the message
   4.6. ends the TRANSACTION
5. Event notification on Queue_ProcessOrderItem activates an instance of ProcessOrderItem.exe (max concurrent of 1), which does the following:
   5.1. begins a SqlTransaction
   5.2. receives top 1 MESSAGE from Queue_ProcessOrderItem
   5.3. if the message type is Message_ProcessOrderItem (format XML):
        5.3.1. sets the order item state to "processing" in Table_OrderItems using the order item id in the message body, then loads a collection of order item parameters, makes an HttpRequest to a URL using those parameters, and stores the HttpResponse as a PDF on the filesystem
        5.3.2. if any errors occurred in the above substeps, sets the order item state to "error", otherwise "ok"
        5.3.3. performs a lookup in Table_OrderItems to determine whether all order items are processed (state is "ok" or "error")
        5.3.4. if all order items are processed: sends a MESSAGE of type Message_ProcessingDone, containing the order id, using ProcessOrderItemsHandle, and ends the conversation pertaining to ProcessOrderItemsHandle
   5.4. commits the SqlTransaction
   5.5. exits with Environment.Exit(0)
Notes:
specs specify MSSQL compatibility 2005 through 2012, so:
no CONVERSATION GROUPS
no CONVERSATION PRIORITY
no POISON_MESSAGE_HANDLING ( STATUS = OFF )
I am striving to achieve overall flow integrity and continuity, not speed
given that tables and SPs reside in DB1 whilst Service Broker objects (messages, contracts, queues, services) reside in DB2, DB2 is SET TRUSTWORTHY
Questions:
Are there any major design flaws in the described architecture ?
Order completion state tracking doesn't seem right. Is there a better method ? Maybe using QUEUE RETENTION ?
My intuition tells me that in no case whatsoever should the activated external exe terminate with an exit code other than 0, so there should be try{..}catch(Exception e){..} finally{ Environment.Exit(0) } in Main. Is this assumption correct ?
How would you organize error handling in DB code ? Is an error log table enough?
How would you organize error handling in external exe C# code ? Same error logging table ?
I've seen the SQL Server Service Broker Product Samples, but the Service Broker Interface seems overkill for my seemingly simpler case. Any alternatives for a simpler Service Broker object model ?
Any cross-version "portable" admin tool for Service Broker capable of at least draining poison messages ?
Have you any decent code samples for any of the above ?
Q: Are there any major design flaws in the described architecture ?
A: A couple of minor issues:
- Waiting for an HTTP request to complete while holding a transaction open is bad. You can't achieve transactional consistency between a database and HTTP anyway, so don't risk having a transaction stretch for minutes when the HTTP call is slow. The typical pattern is {begin tran / receive / begin conversation timer / commit}, then issue the HTTP call without any DB transaction. If the HTTP call succeeds, then {begin tran / send response / end conversation / commit}. If the HTTP call fails (or the client crashes), let the conversation timer activate you again. You'll get a timer message (no body), and you pick up the item id associated with the handle from your table(s).
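A rough sketch of just those mechanics (this is the T-SQL the external process would execute; it reuses the queue and message type names from the question but simplifies away the per-item bookkeeping):
DECLARE @handle UNIQUEIDENTIFIER, @msg_type SYSNAME, @body XML;
BEGIN TRAN;
    RECEIVE TOP (1)
        @handle = conversation_handle,
        @msg_type = message_type_name,
        @body = CAST(message_body AS XML)
    FROM dbo.Queue_ProcessOrderItem;
    -- if we crash or the HTTP call hangs, this timer message re-activates us later
    BEGIN CONVERSATION TIMER (@handle) TIMEOUT = 120;
COMMIT;

-- ... perform the HTTP request / PDF write here, with no transaction open ...

BEGIN TRAN;
    SEND ON CONVERSATION @handle
        MESSAGE TYPE [Message_ProcessingDone]
        (@body);
    END CONVERSATION @handle;
COMMIT;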
Q: Order completion state tracking doesn't seem right. Is there a better method ? Maybe using QUEUE RETENTION ?
A: My one critique of your state tracking is the dependency on scanning the order items to determine that the currently processed one is the last one (5.3.4). For example, you could record in the item state that this is the "last" item to be processed, so you know, when processing it, that you need to report the completion. RETENTION is only useful for debugging, or when you have logic that requires running a "logical rollback" and compensating actions on conversation errors.
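For instance, a sketch of that flag idea, run once right after the order items are inserted in step 3.3.3 (the is_last_item, order_id and order_item_id columns are hypothetical):
-- mark the final item so the processor that handles it knows to send
-- Message_ProcessingDone without scanning the whole table
UPDATE oi
SET oi.is_last_item = 1
FROM dbo.Table_OrderItems AS oi
WHERE oi.order_id = @order_id
  AND oi.order_item_id = (SELECT MAX(order_item_id)
                          FROM dbo.Table_OrderItems
                          WHERE order_id = @order_id);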
Q: My intuition tells me that in no case whatsoever should the activated external exe terminate with an exit code other than 0, so there should be try{..}catch(Exception e){..} finally{ Environment.Exit(0) } in Main. Is this assumption correct ?
A: The most important thing is for the activated process to issue a RECEIVE statement on the queue. If it fails to do so, the queue monitor may enter the notified state forever. The exit code is, if I remember correctly, irrelevant. As with any background process, it is important to catch and log exceptions; otherwise you'll never even know it has a problem when it starts failing. In addition to disciplined try/catch blocks, hook up Application.ThreadException for UI apps and AppDomain.UnhandledException for both UI and non-UI apps.
Q: How would you organize error handling in DB code ? Is an error log table enough?
A: I will follow up later on this. Error log table is sufficient imho.
Q: How would you organize error handling in external exe C# code ? Same error logging table ?
A: I created bugcollect.com exactly because I had to handle such problems with my own apps. The problem is more than logging: you also want some aggregation and analysis (at least detection of duplicate reports) and suppression of error floods caused by some deployment config mishap in the field. Truth be told, nowadays there are more options, e.g. exceptron.com. And of course I think FogBugz also has logging capabilities.
Q: I've seen the SQL Server Service Broker Product Samples, but the Service Broker Interface seems overkill for my seemingly simpler case. Any alternatives for a simpler Service Broker object model ?
A: Finally, an easy question: yes, it is overkill. There is no simpler model.
Q: Any cross-version "portable" admin tool for Service Broker capable of at least draining poison messages ?
A: The problem with poison messages is that the definition of a poison message changes with your code: the poison message is whatever message breaks the current guards set in place to detect it.
Q: Have you any decent code samples for any of the above ?
A: No
One more point: try to avoid any reference from DB1 to DB2 (e.g. 4.3.4 is activated in DB1 and reads the items table from DB2). This creates cross-database dependencies which break when a) one DB is offline (e.g. for maintenance) or overloaded, or b) you add database mirroring for HA/DR and one DB fails over. Try to make the code work even if DB1 and DB2 are on different machines (and with no linked servers); if necessary, add more info to the message payloads. If you architect it that way, DB2 can live on a different machine, and even multiple DB2 machines can exist to scale out the HTTP/PDF-writing work.
And finally: this design will be very slow. I'm talking low tens of messages per second slow, with so many dialogs/messages involved and everything running with max_queue_readers 1. This may or may not be acceptable for you.
