Sending and Receiving SQL Server Service Broker Messages within Nested Transactions - sql-server

I'd like to use SQL Server 2008 Service Broker to log the progress of a long-running (up to about 30 minutes) transaction that is dynamically created by a stored procedure. I have two goals:
1) To get real-time logging of the dynamically-created statements that make up the transaction so that the progress of the transaction can be monitored remotely,
2) To be able to review the steps that made up the transaction up to a point where a failure may have occurred requiring a rollback.
I cannot simply PRINT (or RAISERROR(msg,0,0)) to the console because I want to log the progress messages to a table (and have that log remain even if the stored procedure rolls back).
But my understanding is that messages cannot be received from the queue until the sending thread commits (the outer transaction). Is this true? If so, what options do I have?

It is true that you cannot read messages from the service queue until the transaction is committed.
You could try some other methods:
use a SQL CLR procedure to send a .NET Remoting message to a .NET app that receives the messages and then logs them.
use a SQL CLR procedure to write a text or other log file to disk.
Some other method...
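Another common workaround is a loopback linked server with transaction promotion disabled, so that logging calls commit in their own transaction even if the outer one later rolls back. A sketch under assumed names (the LOOPBACK server, MyDb, and usp_WriteProgressLog are placeholders you would create yourself):

```sql
-- One-time setup: a linked server pointing back at this instance.
EXEC sp_addlinkedserver @server = N'LOOPBACK', @srvproduct = N'',
     @provider = N'SQLNCLI', @datasrc = @@SERVERNAME;
-- Keep loopback calls out of the outer (distributed) transaction:
EXEC sp_serveroption N'LOOPBACK', N'remote proc transaction promotion', N'false';
GO
-- Inside the long-running procedure: this INSERT commits independently,
-- so the log row survives a rollback of the outer transaction.
EXEC LOOPBACK.MyDb.dbo.usp_WriteProgressLog @step = N'Step 12 complete';
```

The 'remote proc transaction promotion' option requires SQL Server 2008 or later, which matches the version in the question.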
Regards
AJ

Related

Debezium transaction metadata - no `END` event received with SQL Server

I'm trying to get the transaction metadata working with Debezium 1.4.1.Final using a SQL Server database.
It seems to be working to some extent - the dbservername.transaction topic has been created, and when I run a stored proc which contains a transaction, then the "status":"BEGIN" event is received, along with the CDC packets on the table topic.
However, no "status":"END" event is received... until I run the stored proc again.
It may very well be that I'm not closing the transaction in the stored proc correctly (I'm not a MSSQL expert by any means)...
This is the structure I'm using:
CREATE PROCEDURE schema.myproc
AS
BEGIN
BEGIN TRANSACTION
...
COMMIT;
END
GO
Any ideas what I need to do to get the END event at the end of the proc?
Unfortunately, there's no reliable way to emit the END event before the connector has received another transaction BEGIN event. Hence END events are delayed in case you don't have a continuous transaction load on your database.
You may consider running some dummy transactions in a loop to trigger this. We're planning to support this out of the box via Debezium's heartbeat feature (see DBZ-3263); any help with that will surely be welcomed.
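Until that heartbeat support lands, the dummy-transaction workaround can be as simple as a SQL Agent job that periodically commits a trivial change to a CDC-enabled table. A sketch (the DebeziumHeartbeat table is an assumption; it would need CDC enabled like the real tables):

```sql
-- Run on a schedule (e.g. every few seconds via SQL Agent).
-- Committing this transaction gives the connector the next BEGIN event,
-- which lets it emit the END event for the previous transaction.
BEGIN TRANSACTION;
UPDATE dbo.DebeziumHeartbeat
SET LastBeat = SYSUTCDATETIME()
WHERE Id = 1;
COMMIT;
```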

What happens in the case of a Postgres Transaction Rollback command not reaching the database?

Section 3.4 of the Postgres documentation covers transactions.
I thought a transaction worked according to the following rules:
The client sends a BEGIN statement to the Database server on a connection. Call this connection “connection_one”.
The client sends whatever queries they want to the Database server. All of these queries are sent via “connection_one”.
If at any time the connection (in this example “connection_one”) is lost before a COMMIT statement reaches the Database server, the Database server rolls back to before the BEGIN statement.
If a COMMIT statement is issued and received by the Database server, then the changes are saved and then transaction block has completed.
It looks like the above is not the case, though. My confusion is that it looks like I have to actually issue a ROLLBACK command, and have it reach the Database server, in order for partial changes not to be saved. Is this really the case, or am I missing something? If it is the case, is there any way I can get the above behavior, or is there some reason I would not want it? My concern is what happens if the connection is lost before I am able to ROLLBACK.
Thanks.
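For what it's worth, PostgreSQL does abort an open transaction when the connection drops; an explicit ROLLBACK is only needed to abandon work on a connection that stays open. A sketch of the two cases (the accounts table is illustrative):

```sql
-- Case 1: connection lost mid-transaction.
BEGIN;
UPDATE accounts SET balance = balance - 100 WHERE id = 1;
-- (connection drops here: the server aborts the open transaction,
--  so the UPDATE is never visible to other sessions)

-- Case 2: connection stays open and the client changes its mind.
BEGIN;
UPDATE accounts SET balance = balance - 100 WHERE id = 1;
ROLLBACK;  -- explicitly discards the work on a live connection
```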

SignalR SQL Server Broker - Orphaned Service Broker Queue Errors

I am using SQL Server Broker on SQL Server 2008 for Scaleout with SignalR v2.1.2. It was recently discovered that we are producing 50k+ errors per day in our DB logs. After some research, there are 3 orphaned Service Broker queues from December. Error example:
2016-02-27 23:58:01.79 spid30s The activated proc '[dbo].[SqlQueryNotificationStoredProcedure-2ffbddba-6ddc-4ad0-88b4-45a405e975e0]' running on queue 'MY_SIGNALR_DB.dbo.SqlQueryNotificationService-2ffbddba-6ddc-4ad0-88b4-45a405e975e0' output the following: 'Could not find stored procedure 'dbo.SqlQueryNotificationStoredProcedure-2ffbddba-6ddc-4ad0-88b4-45a405e975e0'.'
These queues were created in December and were NOT dropped for some reason. The corresponding SPs were apparently dropped as expected. The DB will produce an error every 5 seconds for this (equates to 50k per day with 3 queues). Each queue DOES contain a message.
Questions:
What can cause this?
Are there additional SignalR settings that can be implemented to ensure these are cleaned up?
Is this a bug in SQL Server Service Broker?
Is there a document which describes SignalR's expected behavior with regards to Queues and their expiration?
Thank you for your time.
These are leftover from SqlDependency. The implementation of the SqlDependency.Start() is to create a just-in-time service, queue and activated procedure (see the reference source). This has some issues, and even a simple Visual Studio debugging session can leave stranded queues/activated procedures.
You can clean up these left-over service/queue/procedures as they happen, or you can choose to use the lower level SqlNotificationRequest class and handle the service/queue deployment on your own. Pick your poison.
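For the first option, a cleanup sketch that finds the stranded objects via their auto-generated SqlDependency names (this only generates DROP statements; review the output before running it against production):

```sql
-- Orphaned auto-created services:
SELECT 'DROP SERVICE [' + name + '];'
FROM sys.services
WHERE name LIKE 'SqlQueryNotificationService-%';

-- Orphaned auto-created queues:
SELECT 'DROP QUEUE [dbo].[' + name + '];'
FROM sys.service_queues
WHERE name LIKE 'SqlQueryNotificationService-%';
```

A service must be dropped before the queue it references.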

Get hostname when reading Service Broker Queue (SQL Server 2005)

I am trying to configure auditing on my SQL Server using Service Broker. I did all the configuration needed to capture the DDL Events (queue, routes, endpoints, event notification). It is working properly except that I am not able to get the hostname of the client from where the DDL event originated from.
Using the service broker's activation procedure, I tried reading the value from the message_body, but there's no xml element that contains the hostname. I can see a value for the SPID but am unable to make use of it. Exec'ing sp_who and querying sys.processes against this SPID doesn't return any value. And running sp_who without parameter shows only one process (I think it's the background process used by the service broker). Is it all because the message was sent asynchronously? But why will it cause the activation context to see different data on sys.processes view?
I am aware that there are DDL triggers that can achieve the same goal, but it seems it is tightly coupled to the command that causes it to fire. So if the triggers fails, the command will also fail.
UPDATE: I managed to retrieve the Hostname by using a combination of xp_cmdshell and sqlcmd (command line app). But I also realized that since the message is asynchronous, it is not always reliable (The SPID who issue the DDL command might have been disconnected already before the message is read from the queue).
I'm not exactly sure what you're trying to implement here, but it's expected that an activated procedure will only see a subset of rows in DMVs. This has to do with the activation context, which often impersonates a different user than the one you use when debugging the procedure. That impersonated user will only see those rows of server-level views and DMVs to which it has permissions. See here and here for more info.
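If the goal is to look up the originating session's host name from inside the activated procedure, one option (a sketch; AuditLogin and @spid are placeholders) is to grant the impersonated context server-level visibility and then query sys.dm_exec_sessions:

```sql
-- Assumed: the activation procedure's execution context maps to AuditLogin,
-- which needs server-wide visibility into sessions.
GRANT VIEW SERVER STATE TO AuditLogin;

-- Inside the activated procedure, given @spid extracted from the message:
SELECT host_name, program_name, login_name
FROM sys.dm_exec_sessions
WHERE session_id = @spid;
```

As the update in the question notes, this is still best-effort: because the message is processed asynchronously, the originating session may already have disconnected by the time the lookup runs.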

Send message from SQL Server trigger

I need to signal a running application (Windows service) when certain things happen in SQL Server (2005). Is there a possibility to send a message from a trigger to an external application on the same system?
You can use a SQL Service Broker queue to do what you want.
The trigger can create a conversation and send a message on the queue.
When it starts, the external process should connect to the database and issue a WAITFOR (RECEIVE) statement on this queue. It will receive the message when the trigger sends it.
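A minimal sketch of the two sides, assuming the message type, contract, services, and queue (all placeholder names here) have already been created:

```sql
-- Trigger side: fire-and-forget a message when rows change.
CREATE TRIGGER dbo.trg_Orders_Notify ON dbo.Orders AFTER INSERT
AS
BEGIN
    DECLARE @h UNIQUEIDENTIFIER;
    BEGIN DIALOG CONVERSATION @h
        FROM SERVICE [//App/SenderService]
        TO SERVICE '//App/ListenerService'
        ON CONTRACT [//App/NotifyContract]
        WITH ENCRYPTION = OFF;
    SEND ON CONVERSATION @h
        MESSAGE TYPE [//App/NotifyMessage] (N'Orders changed');
END
GO

-- External process: block until a message arrives (60-second timeout).
WAITFOR (
    RECEIVE TOP (1) CONVERT(NVARCHAR(MAX), message_body)
    FROM dbo.ListenerQueue
), TIMEOUT 60000;
```

Note that the SEND is transactional: the external process only sees the message once the trigger's transaction commits, which is exactly the decoupling you want here.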
Not sure DBAs would approve of this, but there is a way to run commands using xp_cmdshell:
"Executes a given command string as an operating-system command shell and returns any output as rows of text. Grants nonadministrative users permissions to execute xp_cmdshell."
Example from MS's site:
CREATE PROC shutdown10
AS
EXEC xp_cmdshell 'net send /domain:SQL_USERS ''SQL Server shutting down in 10 minutes. No more connections allowed.''', no_output
EXEC xp_cmdshell 'net pause sqlserver'
WAITFOR DELAY '00:05:00'
EXEC xp_cmdshell 'net send /domain:SQL_USERS ''SQL Server shutting down in 5 minutes.''', no_output
WAITFOR DELAY '00:04:00'
EXEC xp_cmdshell 'net send /domain:SQL_USERS ''SQL Server shutting down in 1 minute. Log off now.''', no_output
WAITFOR DELAY '00:01:00'
EXEC xp_cmdshell 'net stop sqlserver', no_output
Either:
Use RAISERROR (severity 10) to fire a SQL agent alert and job.
Load a separate table that is polled periodically by a separate mail handling process. (as HLGEM suggested)
Use a stored procedure to send the message and write to the table.
Each solution decouples the transactional trigger from a potentially long messaging call.
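For the first option, the error must be written to the Windows application log before a SQL Agent alert can react to it. A sketch (the message number and text are arbitrary):

```sql
-- One-time: define a user message that is always written to the event log.
EXEC sp_addmessage @msgnum = 60001, @severity = 10,
     @msgtext = N'Order event: %d', @with_log = 'true';
GO
-- In the trigger: raise it; a SQL Agent alert defined on message 60001
-- can then launch a job that notifies the external application.
RAISERROR(60001, 10, 1, 42);
```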
You can send an email from a trigger, but it isn't a recommended practice, because if the email system is down, no data changes can be made to the table.
Personally, if you can live with less than real time, I would write information about the event you are interested in to another table (so the real change of data can go through smoothly even if email is down for some reason). Then I would have a job that checks that table every 5-10 minutes for any new entries and emails those out.
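A sketch of that decoupled pattern, with illustrative table, profile, and recipient names:

```sql
-- Trigger side: record the event only; the data change commits
-- even if the mail system is down.
INSERT INTO dbo.PendingNotifications (EventType, EventData, Processed)
VALUES (N'OrderShipped', N'order 123', 0);

-- Job side, run every 5-10 minutes: email the pending rows, then mark them.
EXEC msdb.dbo.sp_send_dbmail
     @profile_name = N'AlertProfile',
     @recipients   = N'ops@example.com',
     @subject      = N'New events',
     @query        = N'SELECT EventType, EventData
                       FROM MyDb.dbo.PendingNotifications
                       WHERE Processed = 0';
UPDATE dbo.PendingNotifications SET Processed = 1 WHERE Processed = 0;
```

As a sketch this ignores the small race between the query and the UPDATE; a production job would mark a captured set of row IDs instead.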
You can use a Database Mail message. It should not slow the trigger down if the mail server is down, because the message is queued and then sent by an external (to SQL Server) process.
The table idea sounds good if the application can access SQL Server.
You could also give access to that same table via SQL Server 2005 Native XML Web Services, which exposes procs over XML:
http://msdn.microsoft.com/en-us/library/ms345123(SQL.90).aspx
Depending on what sort of message you want to send, you could use a CLR stored procedure to connect to a socket on the running process and write the message to that. If you have no control over the process (i.e. can't modify it) you could build a bridge or use a library that can issue a message in a suitable format.
For reliable delivery, you could do something that uses MSMQ to deliver the message.
A reminder that triggers can be problematic for tasks like this because they are embedded in set operations. And because they are attached to tables, they aren't very sensitive to the context in which they fire. The problem is that when a trigger fires on an operation involving multiple rows, it's hard to avoid invoking as many instances of your action as there are rows in the operation. Several hundred emails are not unlikely, for instance.
Hopefully the "things that happen" can be detected in closer association with the context in which they happen (context that can also be hard to backtrack to from within a trigger).