Can I run some SQL in a separate transaction, during a transaction - sql-server

I'm trying to create some standard logging for stored procedures along with our standard error handling. I want to call a stored procedure at the beginning of the sproc to log to a table that it has started, and call a stored procedure at the end to log that it has successfully completed. I also want to call a stored procedure from the error handler to log the failure and the error message/number.
The stored procedure may have its own transaction, or its calling stack may have started a transaction, or there may be no transaction open. I don't want any of my calls to the logging procedures to be rolled back at all, no matter what happens in the main transaction. i.e. the processing of the logging stored procedures needs to be completely separate from that of the main processing.
Is it possible to execute some SQL as though it is in a separate session during a transaction? What's the easiest way to achieve this?
Incidentally, I can't use xp_cmdshell, otherwise I'd shell a call to sqlcmd.
Thanks,
Mark

What you are describing is nested transactions. However, nested transactions in SQL Server are not real: https://www.sqlskills.com/blogs/paul/a-sql-server-dba-myth-a-day-2630-nested-transactions-are-real/ You would have to do your logging in another context (a separate connection). Since you are in a stored procedure, you would have to use dynamic SQL in your logging code for this to work.

Related

Execute a statement within a transaction without enlisting it in that transaction

I have some SQL statements in a batch that I want to profile for performance. To that end, I have created a stored procedure that logs execution times.
However, I also want to be able to roll back the changes the main batch performs, while still retaining the performance logs.
The alternative is to run the batch, copy the performance data to another database, restore the database from backup, re-apply all the changes made that I want to profile, plus any more, and start again. That is rather more time-consuming than not including the act of logging in the transaction.
Let us say we have this situation:
DECLARE @StartTime datetime2

BEGIN TRANSACTION
SET @StartTime = SYSDATETIME()
-- Do stuff here
UPDATE ABC SET x = fn_LongRunningFunction(x)
EXECUTE usp_Log 'Do stuff', @StartTime
SET @StartTime = SYSDATETIME()
-- Do more stuff here
EXEC usp_LongRunningSproc
EXECUTE usp_Log 'Do more stuff', @StartTime
ROLLBACK
How can I persist the results that usp_Log saves to a table without rolling them back along with the changes that take place elsewhere in the transaction?
It seems to me that ideally usp_Log would somehow not enlist itself into the transaction that may be rolled back.
I'm looking for a solution that can be implemented in the most reliable way, with the least coding or work possible, and with the least impact on the performance of the script being profiled.
EDIT
The script that is being profiled is extremely time-consuming - taking from an hour to several days - and I need to be able to see intermediate profiling results before the transaction completes or is rolled back. I cannot afford to wait for the end of the batch before being able to view the logs.
You can use a table variable for this. Table variables, like normal variables, are not affected by ROLLBACK. You would need to insert your performance log data into a table variable, then insert it into a normal table at the end of the procedure, after all COMMIT and ROLLBACK statements.
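A minimal sketch of that pattern (the log table and column names are illustrative):

```sql
-- Table variables keep their rows across ROLLBACK, unlike regular tables.
DECLARE @Log TABLE (Step sysname, StartTime datetime2, EndTime datetime2);
DECLARE @StartTime datetime2 = SYSDATETIME();

BEGIN TRANSACTION;
    -- ... profiled work here ...
    INSERT INTO @Log VALUES (N'Do stuff', @StartTime, SYSDATETIME());
ROLLBACK;  -- undoes the profiled work, but @Log keeps its rows

-- Persist the surviving log rows to a permanent table
INSERT INTO dbo.PerformanceLog (Step, StartTime, EndTime)
SELECT Step, StartTime, EndTime FROM @Log;
```

Note that the rows only reach the permanent table after the batch ends, so this does not satisfy the requirement in the edit above of seeing intermediate results while the transaction is still running.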
It might sound a bit overkill (given the purpose), but you can create a CLR stored procedure which will take over the progress logging, and open a separate connection inside it for writing log data.
Normally, it is recommended to use the context connection within CLR objects whenever possible, as it simplifies many things. In your particular case, however, you wish to disentangle from the context (especially from the current transaction), so a regular connection is the way to go.
Caveat emptor: if you never dabbled with CLR programming within SQL Server before, you may find the learning curve a bit too steep. That, and the amount of server reconfiguration (both the SQL Server instance and the underlying OS) required to make it work might also seem to be prohibitively expensive, and not worth the hassle. Still, I would consider it a viable approach.
So, as Roger mentions above, SQLCLR is one option. However, since SQLCLR is not permitted, you are out of luck.
In SQL Server 2017 there is another option and that is to use the SQL Server extensibility framework and the support for Python.
You can use this to have Python code which calls back into your SQL Server instance and executes the usp_log procedure.
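A hedged sketch of that approach (requires Machine Learning Services with external scripts enabled; the server name, database name, and usp_log signature are assumptions):

```sql
EXEC sp_execute_external_script
    @language = N'Python',
    @script = N'
import pyodbc
# Open a NEW connection back to the instance (not the context connection),
# so the logging call commits independently of the caller''s transaction.
conn = pyodbc.connect(
    "Driver={ODBC Driver 17 for SQL Server};"
    "Server=localhost;Database=MyDb;Trusted_Connection=yes;",
    autocommit=True)
conn.execute("EXEC dbo.usp_log ?", "Do stuff")
conn.close()
';
```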
Another, rather obscure, option is to bind other sessions to the long-running transaction for monitoring.
At the beginning of the transaction call sp_getbindtoken and display the bind token.
Then in another session call sp_bindsession, and you can examine the intermediate state of the transaction.
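Sketched below, with the caveat that sp_getbindtoken and sp_bindsession are deprecated, so this is only viable on versions that still ship them:

```sql
-- Session 1: inside the long-running transaction
DECLARE @Token varchar(255);
EXEC sp_getbindtoken @Token OUTPUT;
PRINT @Token;  -- hand this value to the monitoring session

-- Session 2: bind to the same transaction and inspect its state
EXEC sp_bindsession 'token value printed by session 1';
SELECT * FROM dbo.PerformanceLog;  -- sees the transaction's uncommitted rows
```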
Or you can read the log table with (NOLOCK) from another session.
Or you can use RAISERROR WITH LOG to send debug messages to the client and mirror them to the SQL Log.
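For example (the step name is illustrative; WITH LOG requires ALTER TRACE permission or sysadmin):

```sql
-- Severity 10 is informational; NOWAIT flushes the message to the client
-- immediately, and LOG mirrors it to the SQL Server error log.
RAISERROR (N'Finished step: %s', 10, 1, N'Do stuff') WITH NOWAIT, LOG;
```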
Or you can use custom user-configurable trace events, and monitor them in SQL Trace or XEvents.
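A minimal sketch of firing such an event (the message text is illustrative):

```sql
-- Fires the UserConfigurable:0 event (event IDs 82-91 map to
-- UserConfigurable:0-9); capture it with SQL Trace or the
-- user_event Extended Event.
EXEC sp_trace_generateevent @eventid = 82, @userinfo = N'Finished: Do stuff';
```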
Or you can use a Loopback linked server configured to not propagate the transaction.
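A sketch of the loopback setup (server alias, provider, database, and procedure names are assumptions; adjust for your instance):

```sql
-- One-time setup: a loopback linked server that does NOT promote the
-- caller's transaction to a distributed transaction.
DECLARE @Self sysname = @@SERVERNAME;
EXEC sp_addlinkedserver @server = N'LOOPBACK', @srvproduct = N'',
     @provider = N'SQLNCLI11', @datasrc = @Self;
EXEC sp_serveroption N'LOOPBACK', N'remote proc transaction promotion', N'false';

-- Inside the monitored transaction, this call commits independently:
DECLARE @StartTime datetime2 = SYSDATETIME();
EXEC LOOPBACK.MyDb.dbo.usp_Log N'Do stuff', @StartTime;
```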

Stored procedure and trigger execution blocking

I have SQL Server 2017 Express database that is accessed by up to 6 tablets that connect via an Angular 7 app using REST web services.
I have a stored procedure that allows a new user to be inserted into a specific database table. The insert will always only insert 1 record at a time, but with 6 clients, the stored procedure could be called by each client almost simultaneously.
The last step of the process is to print an agreement form to a specific printer. Initially this was going to be handled on the client side, but the tablets do not have the ability to print to a network printer, so that functionality now needs to reside on the server side.
With this new requirement, the agreement form is an RTF document that is being read, placeholder values replaced with values from the insert statement, written to a temporary file and then printed to the network printer via the application (Wordpad most likely) that is associated with the RTF file format.
There is also an MS Access front-end app that uses linked servers to connect to the database, but doesn't have the ability to create new users, but will need to be able to initiate the "print agreement" operation in the case of an agreement not being printed due to printer issue, network issue, etc.
I have the C# code written to perform the action of reading/modifying/writing/printing of the form that uses the UseShellExecute StartInfo property with the Process.Start method.
Since the file read/modify/write/print process takes a few seconds, I am concerned about having the stored procedure for adding the registration blocking for that length of time.
I am pretty sure that I am going to need a CLR stored procedure so that the MS Access front-end can initiate the print operation, so what I have come up with is either the "Add_Registration" stored procedure (Transact-SQL) will call the CLR stored procedure to do the read/modify/write/print operation, or an insert trigger (either CLR or Transact-SQL) on the table that calls the CLR stored procedure to read/modify/write/print.
I could avoid the call from the trigger to the stored procedure by duplicating the code in both the CLR trigger and the CLR stored procedure if there is a compelling reason to do so, but was trying to avoid having duplicate code if possible.
The solutions that I am currently considering are as follows, but am unsure of how SQL Server handles various scenarios:
A CLR or Transact-SQL Insert trigger on the registration table that calls a CLR stored procedure that does the reading/modifying/writing/printing process.
A CLR stored procedure that does the reading/modifying/writing/printing process, being called from the current add_registration Transact-SQL stored procedure
The questions I keep coming back to are:
How are CLR insert triggers executed if multiple inserts are done at the same or nearly the same time (only 1 per operation)? Are they queued up and then processed synchronously, or are they executed immediately?
Same question as #1 except with a Transact-SQL trigger
How are CLR stored procedures handled if they are called by multiple clients at the same or nearly the same time? Are the calls queued up and then processed synchronously, or is each call to the stored procedure executed immediately?
Same question as #3 except with a Transact-SQL stored procedure
If a CLR stored procedure is called from a Transact-SQL trigger, is the trigger blocked until the stored procedure returns, or is the call to the stored procedure spawned out to its own process (or similar method) with the trigger returning immediately?
Same question as #5 except with a CLR trigger calling the CLR stored procedure
I am looking for any other suggestions and/or clarifications on how SQL Server handles these scenarios.
There is no queuing unless you implement it yourself, and there are a few ways to do that. So for multiple concurrent sessions, they are all acting independently. Of course, when it comes to writing to the DB (INSERT / UPDATE / DELETE / etc) then they clearly operate in the order in which they submit their requests.
I'm not sure why you are including the trigger in any of this, but as it relates to concurrency, triggers execute within a system-generated transaction initiated by the DML statement that fired the trigger. This doesn't mean that you will have single-threaded INSERTs, but if the trigger is calling a SQLCLR stored procedure that takes a second or two to complete (and the trigger cannot continue until the stored procedure completes / exits) then there are locks being held on the table for the duration of that operation. Those locks might not prevent other sessions from inserting at the same time, but you might have other operations attempting to modify that table that require a conflicting lock that will need to wait until the insert + trigger + SQLCLR proc operation completes. Of course, that might only be a problem if you have frequent inserts, but that depends on how often you expect new users, and that might not be frequent enough to worry about.
I'm also not sure why you need to print anything in that moment. It might be much simpler on a few levels if you simply have a flag / BIT column indicating whether or not the agreement has been printed, defaulted to 0. Then, you can have an external console app, scheduled via SQL Server Agent or Windows Scheduled Tasks, executed once every few minutes, that reads from the Users table WHERE HasPrintedAgreement = 0. Each row has the necessary fields for the replacement values, it prints each one, and upon printing, it updates that UserID setting HasPrintedAgreement = 1. You can even schedule this console app to execute once every minute if you always want the agreements immediately.
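A sketch of the polling job's two queries (table and column names are assumptions matching the description above):

```sql
-- Find users whose agreement has not been printed yet
SELECT UserID, FirstName, LastName   -- plus the agreement merge fields
FROM dbo.Users
WHERE HasPrintedAgreement = 0;

-- After each agreement prints successfully, mark that row done
DECLARE @UserID int = 42;            -- the row just printed (illustrative)
UPDATE dbo.Users
SET HasPrintedAgreement = 1
WHERE UserID = @UserID;
```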

What is the fastest method for autonomous transaction in SQL Server?

I am trying to simulate an autonomous transaction in SQL Server, but the problem is that using a CLR procedure (which uses a different session) slows performance down about 5 times.
To be clear:
Let's assume that in one transaction I am calling a procedure for every one of 100k rows in a table, which gives 100k procedure calls in one transaction. If any of these procedure calls fails, I want to roll back the entire transaction (AFTER all the procedure calls), but I need to keep the logs from the procedures that failed (in case of failure, insert into an ErrorLog table).
The problem is that in such a case I open 100k connections, and that costs performance.
Using a table variable is not a solution, because I am not able to control every transaction (some are controlled by the frontend), and using a loopback linked server (to the same server) is not recommended in production (from what I have read), so the remaining option was CLR for the separate session.
Is there any way to create and reuse a single separate session per session, and use it for all of those inserts instead of opening a new connection every time? Or is my understanding of CLR wrong, and must it open a new connection every time? (From what I have read, the context connection uses the same session it was called from, so in case of a rollback it would also remove my logs from the ErrorLog table.)
You can use "SAVE TRAN XXXX" in the procedure and "ROLLBACK TRAN XXXX" where an exception occurs. The insert into the ErrorLog table should come after the "ROLLBACK TRAN XXXX" in the procedure so that it is not rolled back.
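A sketch of that savepoint pattern (object names are illustrative). One caveat: rows inserted into ErrorLog after the savepoint rollback are still part of the outer transaction, so they disappear if the outer transaction is later rolled back in full.

```sql
CREATE PROCEDURE dbo.usp_DoWork
    @Id int
AS
BEGIN
    SAVE TRANSACTION WorkSave;          -- savepoint inside the outer tran
    BEGIN TRY
        UPDATE dbo.ABC SET x = x + 1 WHERE id = @Id;
    END TRY
    BEGIN CATCH
        ROLLBACK TRANSACTION WorkSave;  -- undo only this procedure's work
        INSERT INTO dbo.ErrorLog (ProcName, ErrorMessage)
        VALUES (OBJECT_NAME(@@PROCID), ERROR_MESSAGE());
    END CATCH
END;
```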
Hope this helps...

How to get the stored procedures that get executed to print in the output

I recently learned that SET STATISTICS IO ON shows what tables get referenced and a lot of useful information.
Currently, I'm debugging a stored procedure that branches out and calls a bunch of other stored procedures during the process.
So, my question is: Is there a mechanism similar to STATISTICS IO, which prints out the tables used, that prints out which stored procedures get executed?
If so, what is it?
It sounds like you simply want to show dependencies where the object type is a Stored Procedure.
Different way to find SQL Dependencies
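For example, a dependency query restricted to stored procedures (the parent procedure name is illustrative):

```sql
-- Stored procedures referenced by dbo.usp_Parent
SELECT d.referenced_schema_name, d.referenced_entity_name
FROM sys.sql_expression_dependencies AS d
JOIN sys.objects AS o
  ON o.object_id = d.referenced_id
WHERE d.referencing_id = OBJECT_ID(N'dbo.usp_Parent')
  AND o.type = 'P';   -- P = SQL stored procedure
```

Note this shows static dependencies only; procedures invoked via dynamic SQL will not appear.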
If you open up SQL Server Profiler, and run the Stored Procedure, you can get a trace of the nested SP calls. If you've not used it before, do check it out.

Re-run a stored procedure until it finishes without errors in SQL Server

I have a stored procedure that does a lot of inserts using OPENQUERY. For reasons beyond my influence, the connection between the linked servers often breaks and I need to run the procedure a couple of times. What is the best way (I am guessing a TRY...CATCH block) to automatically re-run the procedure until it finishes without errors?
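A TRY...CATCH retry loop is indeed the usual approach. A sketch (the procedure name, retry cap, and delay are assumptions; the inserts should be idempotent or restartable for this to be safe):

```sql
DECLARE @Attempt int = 0, @MaxAttempts int = 10, @Succeeded bit = 0;

WHILE @Succeeded = 0 AND @Attempt < @MaxAttempts
BEGIN
    SET @Attempt += 1;
    BEGIN TRY
        EXEC dbo.usp_ImportViaOpenquery;   -- the flaky linked-server proc
        SET @Succeeded = 1;                -- reached only if no error
    END TRY
    BEGIN CATCH
        PRINT CONCAT(N'Attempt ', @Attempt, N' failed: ', ERROR_MESSAGE());
        WAITFOR DELAY '00:00:30';          -- back off before retrying
    END CATCH
END;
```

Capping the attempts prevents an infinite loop when the linked server is down for good; a SQL Server Agent job with its own retry settings is an alternative if the runs should be scheduled.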
