This may be a fairly straightforward process and I'm simply missing something.
I have a duplicated database in an archive scenario. Some of the fields in certain tables are encrypted using wrapper functions around SQL Server's EncryptByKey/DecryptByKey.
During the archive process we need to decrypt certain data in the database being offloaded, because it was encrypted using that database's certs/keys, and re-encrypt it with the archive database's infrastructure.
Not being a huge SQL guru myself (and not finding much online about this, at least with whatever I'm searching on, which may be part of the problem), I'm not seeing how I can decrypt, then re-encrypt, the data in my INSERT/SELECT statements.
The decrypt needs to happen using the exporting database's user-defined function (at least that's how the logic works in my head), and the data then needs to be re-encrypted using the UDF on the archiving database. However, I don't see a direct way to invoke the function as server -> schema -> user-defined function; maybe that isn't how it's invoked, which is really why I'm posting this here. So if anybody has input on how I could do this (if it's even possible), I'd appreciate it.
Side note: one limitation of this encryption/decryption method is that the keys can't be opened from within the function itself, so we have to call an OpenKeys sproc to make the keys available to the function. Those keys have a limited session life, and each server will need to open its own.
Here's some sample code, if that helps clarify things.
Roughly what we're trying to do in the archive:
EXEC OpenKeys
EXEC [ArchiveDB].dbo.OpenKeys
INSERT INTO [ArchiveDB].dbo.AuditTrail
SELECT
Action,
Location,
TableName,
TableID,
-- ***
-- obv. incorrect but need to call archive db's encrypt using
-- archive keys etc., and active db's decrypt, with active db keys
-- ***
[ArchiveDB].dbo.EncryptColumn(dbo.DecryptColumn(UserName)),
EventDate
FROM
AuditTrail
WHERE
EventDate < @ArchiveDate
Roughly what we're doing with the key-session sproc:
CREATE PROCEDURE [dbo].[OpenKeys]
AS
BEGIN
BEGIN TRY
OPEN SYMMETRIC KEY MySymmetricKey
DECRYPTION BY CERTIFICATE MyCertificate
END TRY
BEGIN CATCH
-- ...
END CATCH
END
A sample of what the EncryptColumn function does:
CREATE FUNCTION [dbo].[EncryptColumn] -- must call OpenKeys before this (Decrypt similar)
(
@ValueToEncrypt varchar(max)
)
RETURNS varbinary(8000)
AS
BEGIN
-- Declare the return variable here
DECLARE @Result varbinary(8000)
SET @Result = EncryptByKey(Key_GUID('MySymmetricKey'), @ValueToEncrypt)
-- Return the result of the function
RETURN @Result
END
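For reference, the comment above only says the decrypt wrapper is similar, so here is a minimal sketch of what a matching DecryptColumn function might look like (the actual body isn't shown here, so treat this as an assumption):
CREATE FUNCTION [dbo].[DecryptColumn] -- must also call OpenKeys before this
(
@ValueToDecrypt varbinary(8000)
)
RETURNS varchar(max)
AS
BEGIN
-- DecryptByKey uses whichever matching symmetric key is currently open in the session
RETURN CONVERT(varchar(max), DecryptByKey(@ValueToDecrypt))
END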
Hopefully this makes sense.
As it turns out, I had it partially right from the beginning; it was just missing explicit references to both databases:
EXEC OrigDB.dbo.OpenKeys
EXEC ArchiveDB.dbo.OpenKeys
INSERT INTO ArchiveDB.dbo.ArchiveTable
( Col1, Col2, Col3, Col4 )
SELECT
Col1,
Col2,
Col3,
ArchiveDB.dbo.EncryptColumn(OrigDB.dbo.DecryptColumn(Col4))
FROM
ArchiveTable
-- WHERE
-- SOME CONDITION EXISTS
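One follow-up worth noting: since the opened keys are session-scoped (per the OpenKeys note above), it may be worth closing them explicitly once the archive insert has finished. A minimal sketch, assuming the key name from the sample OpenKeys procedure:
-- Close the specific key in the current database context...
CLOSE SYMMETRIC KEY MySymmetricKey
-- ...or simply close everything this session has opened in the current database
CLOSE ALL SYMMETRIC KEYS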
I have a stored procedure where I send in a user-defined type which is a table. I have simplified it to make it easier to read.
CREATE TYPE [dbo].[ProjectTableType] AS TABLE(
[DbId] [uniqueidentifier] NOT NULL,
[DbParentId] [uniqueidentifier] NULL,
[Description] [text] NULL
)
CREATE PROCEDURE [dbo].[udsp_ProjectDUI] (@cmd varchar(10),
@tblProjects ProjectTableType READONLY) AS BEGIN
DECLARE @myNewPKTable TABLE (myNewPK uniqueidentifier)
IF(LOWER(@cmd) = 'insert')
BEGIN
INSERT INTO
dbo.Project
(
DbId,
DbParentId,
Description)
OUTPUT INSERTED.DbId INTO @myNewPKTable
SELECT NEWID(),
DbParentId,
Description
FROM @tblProjects;
SELECT * FROM dbo.Project WHERE dbid IN (SELECT myNewPK FROM @myNewPKTable);
END
This is for a DLL that other applications will use, so we aren't necessarily in charge of validation. I want to mimic BULK INSERT, where if one row fails to insert but the other rows are fine, the correct ones will still be inserted. Is there a way to do this? I want to do it for UPDATE as well, where if one fails, the stored procedure will continue trying to update the others.
The only option I can think of is to do one row at a time (either a loop in the code where the stored proc is called multiple times, or a loop in the stored procedure), but I was wondering what the performance hit would be for that, or whether there is a better solution.
Not sure which errors you're wanting to continue on, but unless you're running into all manner of unexpected errors, I'd try to avoid devolving into RBAR (row-by-agonizing-row) processing just yet.
Check Explicit Violations
The main thing I would expect would be PK violations, which you can avoid by just checking for existence before the insert (and update). If there are other business-logic failure conditions, you can check those here as well.
insert into dbo.Project
(
DbId,
DbParentId,
Description
)
output inserted.DbId
into @myNewPKTable (myNewPK)
select
DbId = newid(),
DbParentId = s.DbParentId,
Description = s.Description
from @tblProjects s -- source
-- LOJ null check makes sure we don't violate PK
-- NOTE: I'm pretending this is an alternate key of the table.
left outer join dbo.Project t -- target
on s.dbParentId = t.dbParentId
where t.dbParentId is null
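Since the question asks about UPDATE as well, the same existence-check idea applies there, just inverted: join to the target and only touch rows that actually exist. A sketch under the same assumptions (again pretending dbParentId is an alternate key):
update t
set Description = s.Description
from @tblProjects s -- source rows passed in
inner join dbo.Project t -- target; the inner join simply skips rows with no match
on s.dbParentId = t.dbParentId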
If at all possible, I'd try to stick with a batch update, and use join predicates to eliminate the possibility of most errors you expect to see. Changing to RBAR processing because you're worried you "might" get a system shutdown failure is probably a waste of time. Then, if you hit a really nasty error you can't recover from, fail the batch, legitimately.
RBAR
Alternatively, if you absolutely need row-by-row granularity of successes or failures, you could do try/catch around each statement and have the catch block do nothing (or log something).
declare
@DBParentId uniqueidentifier,
@Description nvarchar(1000),
@Ident int
declare c cursor local fast_forward for
select
DbParentId = s.DbParentId,
Description = s.Description
from @tblProjects s
open c
fetch next from c into @DBParentId, @Description
while @@fetch_status = 0
begin
begin try
insert into dbo.Project
(
DbId,
DbParentId,
Description
)
output inserted.DbId
into @myNewPKTable (myNewPK)
select
newid(),
@DBParentId,
@Description
end try
begin catch
-- log something if you want
print error_message()
end catch
fetch next from c into @DBParentId, @Description
end
close c
deallocate c
Hybrid
You might be able to get clever and hybridize things. One option might be to make the web-facing insert procedure actually insert into a lightweight, minimally keyed/constrained table (like a queue). Then, every minute or so, have an Agent job run through the logged calls and operate on them in a batch. It doesn't fundamentally change any of the patterns here, but it makes the processing asynchronous so the caller doesn't have to wait, and by batching requests together you can save processing power by piggybacking on what SQL does best: set-based ops.
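If you go the queue route, the staging table can be very plain; something like the following sketch (table and column names here are hypothetical, not from the question):
-- Minimal, loosely constrained staging table the web-facing proc would insert into
create table dbo.ProjectInsertQueue
(
QueueId int identity(1,1) primary key,
DbParentId uniqueidentifier null,
Description nvarchar(max) null,
QueuedAt datetime not null default getdate(),
Processed bit not null default 0
)
-- An Agent job then periodically picks up everything with Processed = 0 and handles it in one set-based pass.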
Another option might be to do the most set-based processing you can (using checks to prevent business-rule or constraint violations). If anything fails, you could then spin off an RBAR process for the remaining rows. If everything succeeds, though, that RBAR process is never hit.
Conclusion
Several ways to go about this. I'd try to use set-based operations as much as possible unless you have a really strong reason for needing row-by-row granularity.
You can avoid most errors just by constructing your insert/update statement correctly.
If you need to, you can use a try/catch with an "empty" catch block so failures don't stop the overall processing.
Depending on random odds and ends of your specific situation, you might want or need to hybridize these two approaches
OK - so the answer is probably 'everything is within a transaction in SQL Server'. But here's my problem.
I have a stored procedure which returns data based on one input parameter. This is being called from an external service which I believe is Java-based.
I have a log table to see how long each hit to this proc is taking. Kind of like this (simplified)...
TABLE log (ID INT PRIMARY KEY IDENTITY(1,1), Param VARCHAR(10), [In] DATETIME, [Out] DATETIME)
PROC troublesome
(@param VARCHAR(10))
BEGIN
--log the 'in' time and keep the ID
DECLARE @logid INT
INSERT log ([In], Param) VALUES (GETDATE(), @param)
SET @logid = SCOPE_IDENTITY()
SELECT <some stuff from another table based on @param>
--set the 'out' time
UPDATE log SET [Out] = GETDATE() WHERE ID = @logid
END
So far, so easily criticised by SQL purists.
Anyway - when I call this, e.g. troublesome 'SOMEPARAM', I get the result of the SELECT back, and in the log table there's a nice new row with SOMEPARAM and the in and out timestamps.
Now - I watch this table, and even though no rows appear to be going in, when I generate a row myself I can see that the ID has skipped many places. I guess this is being caused by the external client code hitting it. They are getting the result of the SELECT, but I am not getting their log data.
This suggests to me they are wrapping their client call in a TRAN and rolling it back. Which is one thing that would cause this behaviour. I want to know...
If there is a way I can FORCE the write of the log even if it is contained within a transaction over which I have no control and which is subsequently rolled back (seems unlikely)
If there is a way I can determine from within this proc if it is executing within a transaction (and perhaps raise an error)
If there are other possible explanations for the ID skipping (and it's skipping a lot, like 1000 places in an hour, so I'm sure it's caused by the client code, as they also report successfully retrieving the results of the SELECT).
Thanks!
For the 'why are rows skipped' part of the question, your assumption sounds plausible to me.
For how to force the write even within a transaction, I don't know.
As for how to discover whether the SP is running inside a transaction, how about the following:
SELECT @@TRANCOUNT
I tested this out in a SQL batch and it seems to work. Maybe it can take you a bit further.
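To build on that, a minimal sketch of how the check could be used at the top of the proc (raising an error is just one option; you could equally just log the fact):
IF @@TRANCOUNT > 0
BEGIN
-- the caller has an open transaction; our log INSERT/UPDATE would vanish if they roll back
RAISERROR('troublesome: called inside an open transaction; log rows may be rolled back.', 16, 1)
-- optionally RETURN here if you want to refuse to run at all
END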
In MS SQL Server, I'm using a global temp table to store session related information passed by the client and then I use that information inside triggers.
Since the same global temp table can be used in different sessions and it may or may not exist when I want to write into it (depending on whether all the previous sessions which used it are closed), I check for the global temp table's existence and create it if necessary before I write into it.
IF OBJECT_ID('tempdb..##VTT_CONTEXT_INFO_USER_TASK') IS NULL
CREATE TABLE ##VTT_CONTEXT_INFO_USER_TASK (
session_id smallint,
login_time datetime,
HstryUserName VDT_USERNAME,
HstryTaskName VDT_TASKNAME
)
MERGE ##VTT_CONTEXT_INFO_USER_TASK As target
USING (SELECT @@SPID, @HstryUserName, @HstryTaskName) as source (session_id, HstryUserName, HstryTaskName)
ON (target.session_id = source.session_id)
WHEN MATCHED THEN
UPDATE SET HstryUserName = source.HstryUserName, HstryTaskName = source.HstryTaskName
WHEN NOT MATCHED THEN
INSERT VALUES (@@SPID, @LoginTime, source.HstryUserName, source.HstryTaskName);
The problem is that between my check for the table's existence and the MERGE statement, SQL Server may drop the temp table if all the sessions which were using it happen to close at that exact instant (this actually happened in my tests).
Is there a best practice on how to avoid this kind of concurrency issues, that a table is not dropped between the check for its existence and its subsequent use?
The notions of "global temporary table" and "trigger" just do not click together. Tables are permanent data stores, as are their attributes -- including triggers. Temporary tables are dropped when the server is restarted. Why would anyone design a system where a permanent block of code (a trigger) depends on a temporary, shared storage mechanism? It seems like a recipe for failure.
Instead of a global temporary table, use a real table. If you like, put a helpful prefix such as temp_ in front of the name. If the table is being shared by databases, then put it in a database where all code has access.
Create the table once and leave it there (deleting the rows is fine) so the trigger code can access it.
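To make that concrete, here is a minimal sketch of such a permanent table, reusing the column names and types from the question (the table name and the temp_ prefix are just placeholders, and it assumes the VDT_USERNAME/VDT_TASKNAME types are available):
CREATE TABLE dbo.temp_ContextInfoUserTask (
session_id smallint NOT NULL PRIMARY KEY,
login_time datetime NULL,
HstryUserName VDT_USERNAME NULL,
HstryTaskName VDT_TASKNAME NULL
)
-- Created once and left in place; the trigger and the MERGE can then target this table
-- instead of ##VTT_CONTEXT_INFO_USER_TASK, keyed by session_id.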
I'll start by saying that, in the long term, I will follow Gordon's advice, i.e. I will take the necessary steps to introduce a normal table into the database to store the client application information which needs to be accessible in the triggers.
But since this was not really possible right now because of time constraints (it takes weeks to get the necessary formal approvals for a new normal table), I came up with a solution to prevent SQL Server from dropping the global temp table between the check for its existence and the MERGE statement.
There is some information out there about when a global temp table is dropped by SQL Server; my own tests showed that SQL Server drops a global temp table the moment the session which created it is closed and any transactions started in other sessions which changed data in that table have finished.
My solution was to fake a data change on the global temp table even before I check for its existence. If the table exists at that moment, SQL Server then knows that it needs to keep it until the current transaction finishes, and it can no longer be dropped after the check for its existence. The code now looks like this (properly commented, since it is kind of a hack):
-- Faking a delete on the table ensures that SQL Server will keep the table until the end of the transaction
-- Since ##VTT_CONTEXT_INFO_USER_TASK may actually not exist, we need to fake the delete inside TRY .. CATCH
-- FUTURE 2016, Feb 03: A cleaner solution would use a real table instead of a global temp table.
BEGIN TRY
-- Because schema errors are detected at compile time they cannot be caught with TRY; wrapping the query in sp_executesql defers them to run time
DECLARE @QueryText NVARCHAR(100) = 'DELETE ##VTT_CONTEXT_INFO_USER_TASK WHERE 0 = 1'
EXEC sp_executesql @QueryText
END TRY
BEGIN CATCH
-- nothing to do here (see comment above)
END CATCH
IF OBJECT_ID('tempdb..##VTT_CONTEXT_INFO_USER_TASK') IS NULL
CREATE TABLE ##VTT_CONTEXT_INFO_USER_TASK (
session_id smallint,
login_time datetime,
HstryUserName VDT_USERNAME,
HstryTaskName VDT_TASKNAME
)
MERGE ##VTT_CONTEXT_INFO_USER_TASK As target
USING (SELECT @@SPID, @HstryUserName, @HstryTaskName) as source (session_id, HstryUserName, HstryTaskName)
ON (target.session_id = source.session_id)
WHEN MATCHED THEN
UPDATE SET HstryUserName = source.HstryUserName, HstryTaskName = source.HstryTaskName
WHEN NOT MATCHED THEN
INSERT VALUES (@@SPID, @LoginTime, source.HstryUserName, source.HstryTaskName);
Although I would call it a "use at your own risk" solution, it does prevent the use of the global temp table in other sessions from affecting its use in the current one, which was the concern that made me start this thread.
Thanks all for your time! (from text formatting edits to replies)
Another developer and I are discussing which type of table would be more appropriate for our task. It's basically going to be a cache that we truncate at the end of the day. Personally, I don't see any reason to use anything other than a normal table for this, but he wants to use a global temp table.
Are there any advantages to one or the other?
Use a normal table in tempdb if this is just transient data that you can afford to lose on service restart, or in a user database if the data is not that transient.
tempdb is slightly more efficient in terms of logging requirements.
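For illustration, "a normal table in tempdb" just means creating it there explicitly; a quick sketch (table and column names assumed):
USE tempdb
GO
CREATE TABLE dbo.DailyCache
(
CacheKey int NOT NULL PRIMARY KEY,
CacheValue nvarchar(max) NULL
)
-- Behaves like any other permanent table, but benefits from tempdb's lighter logging
-- and disappears on service restart, so it has to be recreated then.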
Global temp tables get dropped once the connection that created the table is closed.
Edit: Following @cyberkiwi's edit, BOL does explicitly say:
"Global temporary tables are visible to any user and any connection after they are created, and are deleted when all users that are referencing the table disconnect from the instance of SQL Server."
In my test, however, I wasn't able to reproduce this behaviour.
Connection 1
CREATE TABLE ##T (i int)
INSERT INTO ##T values (1)
SET CONTEXT_INFO 0x01
Connection 2
INSERT INTO ##T VALUES(4)
WAITFOR DELAY '00:01'
INSERT INTO ##T VALUES(5)
Connection 3
SELECT OBJECT_ID('tempdb..##T')
declare @killspid varchar(10) = (select 'kill ' + cast(spid as varchar(5)) from sysprocesses where context_info=0x01)
exec (@killspid)
SELECT OBJECT_ID('tempdb..##T') /* NULL - but connection 2 is still running, let alone disconnected! */
Global temp table
-ve: As soon as the connection that created the table goes out of scope, it takes
the table with it. This is damaging if you use connection pooling which can swap connections constantly and possibly reset it
-ve: You need to keep checking to see if the table already exists (after restart) and create it if not
+ve: Simple logging in tempdb reduces I/O and CPU activity
Normal table
+ve: Normal logging keeps your cache with your main db. If your "cache" is maintained but is still mission critical, this keeps it consistent together with the db
-ve: (follows from the above) more logging
+ve: The table is always around, and for all connections
If the cache is something like a quick-lookup summary for business-critical data, even if it is reset/truncated at the end of the day, I would prefer to keep it as a normal table in the db proper.
I have three stored procedures Sp1, Sp2 and Sp3.
The first one (Sp1) will execute the second one (Sp2) and save the returned data into #tempTB1, and the second one will execute the third one (Sp3) and save its data into #tempTB2.
If I execute Sp2 it works and returns all my data from Sp3, but the problem is in Sp1: when I execute it, it displays this error:
INSERT EXEC statement cannot be nested
I tried to change the place where Sp2 is executed and it displayed another error:
Cannot use the ROLLBACK statement within an INSERT-EXEC statement.
This is a common issue when attempting to 'bubble' data up from a chain of stored procedures. A restriction in SQL Server is that you can only have one INSERT-EXEC active at a time. I recommend looking at How to Share Data Between Stored Procedures, which is a very thorough article on patterns to work around this type of problem.
For example, a workaround could be to turn Sp3 into a table-valued function.
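As a sketch of that workaround (function and column names are assumed, since Sp3's real body isn't shown): if Sp3 is essentially just a SELECT, it can become an inline table-valued function, and Sp2 can then insert from it without any INSERT-EXEC.
CREATE FUNCTION dbo.fn_Sp3Data()
RETURNS TABLE
AS
RETURN
(
SELECT 1 AS ID, 'Data1' AS Data
UNION ALL
SELECT 2, 'Data2'
)
GO
-- Inside Sp2, instead of INSERT INTO #tempTB2 EXEC Sp3:
-- INSERT INTO #tempTB2 (ID, Data) SELECT ID, Data FROM dbo.fn_Sp3Data()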
This is the only "simple" way to do this in SQL Server without some giant convoluted created function or executed sql string call, both of which are terrible solutions:
create a temp table
openrowset your stored procedure data into it
EXAMPLE:
INSERT INTO #YOUR_TEMP_TABLE
SELECT * FROM OPENROWSET ('SQLOLEDB','Server=(local);TRUSTED_CONNECTION=YES;','set fmtonly off EXEC [ServerName].dbo.[StoredProcedureName] 1,2,3')
Note: You MUST use 'set fmtonly off', AND you CANNOT add dynamic SQL inside the OPENROWSET call, either for the string containing your stored procedure parameters or for the table name. That's why you have to use a temp table rather than table variables, which would have been better, as they outperform temp tables in most cases.
OK, encouraged by jimhark, here is an example of the old single hash temp table approach:
CREATE PROCEDURE SP3 as
BEGIN
SELECT 1, 'Data1'
UNION ALL
SELECT 2, 'Data2'
END
go
CREATE PROCEDURE SP2 as
BEGIN
if exists (select * from tempdb.dbo.sysobjects o where o.xtype in ('U') and o.id = object_id(N'tempdb..#tmp1'))
INSERT INTO #tmp1
EXEC SP3
else
EXEC SP3
END
go
CREATE PROCEDURE SP1 as
BEGIN
EXEC SP2
END
GO
/*
--I want some data back from SP3
-- Just run the SP1
EXEC SP1
*/
/*
--I want some data back from SP3 into a table to do something useful
--Try run this - get an error - can't nest Execs
if exists (select * from tempdb.dbo.sysobjects o where o.xtype in ('U') and o.id = object_id(N'tempdb..#tmp1'))
DROP TABLE #tmp1
CREATE TABLE #tmp1 (ID INT, Data VARCHAR(20))
INSERT INTO #tmp1
EXEC SP1
*/
/*
--I want some data back from SP3 into a table to do something useful
--However, if we run this single hash temp table it is in scope anyway so
--no need for the exec insert
if exists (select * from tempdb.dbo.sysobjects o where o.xtype in ('U') and o.id = object_id(N'tempdb..#tmp1'))
DROP TABLE #tmp1
CREATE TABLE #tmp1 (ID INT, Data VARCHAR(20))
EXEC SP1
SELECT * FROM #tmp1
*/
My workaround for this problem has always been to use the principle that single hash temp tables are in scope to any called procs. So, I have an option switch in the proc parameters (defaulted to off). If this is switched on, the called proc will insert its results into the temp table created in the calling proc. I think in the past I have taken it a step further and put some code in the called proc to check whether the single hash table exists in scope; if it does, insert the results, otherwise return the result set. Seems to work well - it's the best way I've found of passing large data sets between procs.
This trick works for me.
You don't have this problem on a remote server, because on a remote server the last INSERT command waits for the result of the previous command to execute. That's not the case on the same server.
Take advantage of that situation for a workaround.
If you have the right permission to create a linked server, do it.
Create the same server as a linked server:
in SSMS, log into your server
go to "Server Object
Right Click on "Linked Servers", then "New Linked Server"
on the dialog, give any name of your linked server : eg: THISSERVER
server type is "Other data source"
Provider : Microsoft OLE DB Provider for SQL server
Data source: your IP; it can also be just a dot (.), because it's localhost
Go to the tab "Security" and choose the 3rd one "Be made using the login's current security context"
You can edit the server options (3rd tab) if you want
Press OK, your linked server is created
Now your SQL command in SP1 is:
insert into #myTempTable
exec THISSERVER.MY_DATABASE_NAME.MY_SCHEMA.SP2
Believe me, it works even if you have a dynamic insert in SP2.
I found that a workaround is to convert one of the procs into a table-valued function. I realize that is not always possible, and it introduces its own limitations. However, I have always been able to find at least one of the procedures that is a good candidate for this. I like this solution because it doesn't introduce any "hacks" to the solution.
I encountered this issue when trying to import the results of a stored proc into a temp table, and that stored proc inserted into a temp table as part of its own operation. The issue is that SQL Server does not allow the same process to write to two different temp tables at the same time.
The accepted OPENROWSET answer works fine, but I needed to avoid using any dynamic SQL or an external OLE provider in my process, so I went a different route.
One easy workaround I found was to change the temporary table in my stored procedure to a table variable. It works exactly the same as it did with a temp table, but it no longer conflicts with my other temp table insert.
Just to head off the comment I know that a few of you are about to write, warning me off Table Variables as performance killers... All I can say to you is that in 2020 it pays dividends not to be afraid of Table Variables. If this was 2008 and my Database was hosted on a server with 16GB RAM and running off 5400RPM HDDs, I might agree with you. But it's 2020 and I have an SSD array as my primary storage and hundreds of gigs of RAM. I could load my entire company's database to a table variable and still have plenty of RAM to spare.
Table Variables are back on the menu!
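To illustrate the change (the names here are hypothetical, not from the original procedure), the only edit inside the called procedure is swapping the CREATE TABLE #... for a DECLARE ... TABLE:
-- Before (the temp table that conflicted with the caller's INSERT ... EXEC):
-- CREATE TABLE #work (ID INT, Data VARCHAR(20))
-- INSERT INTO #work (ID, Data) SELECT ID, Data FROM dbo.SomeSource

-- After: a table variable, which in this scenario no longer conflicted with the outer temp-table insert
DECLARE @work TABLE (ID INT, Data VARCHAR(20))
INSERT INTO @work (ID, Data)
SELECT ID, Data FROM dbo.SomeSource -- hypothetical source
-- ...the rest of the procedure reads from @work exactly as it read from #work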
I recommend reading this entire article. Below is the most relevant section of the article that addresses your question:
Rollback and Error Handling is Difficult
In my articles on Error and Transaction Handling in SQL Server, I suggest that you should always have an error handler like
BEGIN CATCH
IF @@trancount > 0 ROLLBACK TRANSACTION
EXEC error_handler_sp
RETURN 55555
END CATCH
The idea is that even if you do not start a transaction in the procedure, you should always include a ROLLBACK, because if you were not able to fulfil your contract, the transaction is not valid.
Unfortunately, this does not work well with INSERT-EXEC. If the called procedure executes a ROLLBACK statement, this happens:
Msg 3915, Level 16, State 0, Procedure SalesByStore, Line 9 Cannot use the ROLLBACK statement within an INSERT-EXEC statement.
The execution of the stored procedure is aborted. If there is no CATCH handler anywhere, the entire batch is aborted, and the transaction is rolled back. If the INSERT-EXEC is inside TRY-CATCH, that CATCH handler will fire, but the transaction is doomed, that is, you must roll it back. The net effect is that the rollback is achieved as requested, but the original error message that triggered the rollback is lost. That may seem like a small thing, but it makes troubleshooting much more difficult, because when you see this error, all you know is that something went wrong, but you don't know what.
I had the same issue and the same concern over duplicate code in two or more sprocs. I ended up adding an additional parameter for "mode". This allowed common code to exist inside one sproc, with the mode directing the flow and the result set of the sproc.
What about just storing the output in a static table? Like this:
-- SubProcedure: subProcedureName
---------------------------------
-- Save the value
DELETE lastValue_subProcedureName
INSERT INTO lastValue_subProcedureName (Value)
SELECT @Value
-- Return the value
SELECT @Value
-- Procedure
--------------------------------------------
-- get last value of subProcedureName
SELECT Value FROM lastValue_subProcedureName
It's not ideal, but it's simple and you don't need to rewrite everything.
UPDATE:
The previous solution does not work well with parallel queries (async and multi-user access), therefore I'm now using temp tables.
-- A local temporary table created in a stored procedure is dropped automatically when the stored procedure is finished.
-- The table can be referenced by any nested stored procedures executed by the stored procedure that created the table.
-- The table cannot be referenced by the process that called the stored procedure that created the table.
IF OBJECT_ID('tempdb..#lastValue_spGetData') IS NULL
CREATE TABLE #lastValue_spGetData (Value INT)
-- trigger stored procedure with special silent parameter
EXEC dbo.spGetData 1 --silent mode parameter
The nested spGetData stored procedure content:
-- Save the output if temporary table exists.
IF OBJECT_ID('tempdb..#lastValue_spGetData') IS NOT NULL
BEGIN
DELETE #lastValue_spGetData
INSERT INTO #lastValue_spGetData(Value)
SELECT Col1 FROM dbo.Table1
END
-- stored procedure return
IF @silentMode = 0
SELECT Col1 FROM dbo.Table1
Declare an output cursor variable in the inner sp:
@c CURSOR VARYING OUTPUT
Then declare a cursor c for the SELECT you want to return.
Then open the cursor.
Then set the reference:
DECLARE c CURSOR LOCAL FAST_FORWARD READ_ONLY FOR
SELECT ...
OPEN c
SET @c = c
DO NOT close or deallocate.
Now call the inner sp from the outer one, supplying a cursor parameter like:
exec sp_abc a, b, c, @cOUT OUTPUT
Once the inner sp executes, your @cOUT is ready to fetch. Loop, and then close and deallocate.
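Putting those fragments together, a minimal end-to-end sketch (procedure, table, and column names are assumed):
CREATE PROCEDURE dbo.InnerProc
@c CURSOR VARYING OUTPUT
AS
BEGIN
DECLARE c CURSOR LOCAL FAST_FORWARD READ_ONLY FOR
SELECT ID, Data FROM dbo.SomeTable -- hypothetical source
OPEN c
SET @c = c -- hand the open cursor back to the caller; do not close/deallocate here
END
GO
-- Caller:
DECLARE @cOUT CURSOR, @ID INT, @Data VARCHAR(20)
EXEC dbo.InnerProc @c = @cOUT OUTPUT
FETCH NEXT FROM @cOUT INTO @ID, @Data
WHILE @@FETCH_STATUS = 0
BEGIN
-- consume @ID / @Data here
FETCH NEXT FROM @cOUT INTO @ID, @Data
END
CLOSE @cOUT
DEALLOCATE @cOUT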
If you are able to use other associated technologies such as C#, I suggest using the built-in SqlCommand with a transaction parameter.
var sqlCommand = new SqlCommand(commandText, null, transaction);
I've created a simple Console App that demonstrates this ability which can be found here:
https://github.com/hecked12/SQL-Transaction-Using-C-Sharp
In short, C# allows you to overcome this limitation: you can inspect the output of each stored procedure and use that output however you like, for example feeding it to another stored procedure. If the output is OK, you can commit the transaction; otherwise, you can revert the changes using a rollback.
On SQL Server 2008 R2, I had a mismatch in table columns that caused the ROLLBACK error. It went away when I fixed the table variable in my sqlcmd script (populated by the INSERT-EXEC statement) to match what is returned by the stored proc; it was missing org_code. In a Windows cmd file, this loads the result of the stored procedure and selects it:
set SQLTXT= declare @resets as table (org_id nvarchar(9), org_code char(4), ^
tin char(9), old_strt_dt char(10), strt_dt char(10)); ^
insert @resets exec rsp_reset; ^
select * from @resets;
sqlcmd -U user -P pass -d database -S server -Q "%SQLTXT%" -o "OrgReport.txt"