Using a transaction in SQL Server when calling a stored procedure on another server

I want to execute a stored procedure in Server1.DB1. This stored procedure will execute another stored procedure, via dynamic SQL, that lives in Server1.DB2.
I need to use BEGIN/COMMIT TRANSACTION to make sure either everything is executed or everything fails.
The question is: will the transaction work in this case, using dynamic SQL pointed at a different database?
Like this:
BEGIN TRANSACTION
-- Set status to "In Progress"
SET @Qry = N'EXEC ' + @DB2 + N'.[dbo].[StatusUpdate] @Id, @Status'
SET @QryParams = N'@Id INT, @Status INT'
EXEC sp_executesql @Qry,
    @QryParams,
    @Id = @Id,
    @Status = @InProgress
-- INSERT data locally in a table
-- UPDATE data locally in a table
COMMIT TRANSACTION
I'm using SQL Server 2014.

It depends on the REMOTE_PROC_TRANSACTIONS setting:
Specifies that when a local transaction is active, executing a remote
stored procedure starts a Transact-SQL distributed transaction managed
by Microsoft Distributed Transaction Coordinator (MS DTC).
If it's ON:
The instance of SQL Server making the remote stored procedure call is
the transaction originator and controls the completion of the
transaction. When a subsequent COMMIT TRANSACTION or ROLLBACK
TRANSACTION statement is issued for the connection, the controlling
instance requests that MS DTC manage the completion of the distributed
transaction across the computers involved.
Otherwise remote stored procedure calls are not made part of a local transaction.
Several important notes:
Using distributed transactions is risky, so they should be used carefully.
This feature is deprecated.
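For illustration, a minimal sketch of the ON case; the server, database, and procedure names are placeholders, not from the question:
SET REMOTE_PROC_TRANSACTIONS ON;
BEGIN TRANSACTION
    -- with the option ON, this remote call is promoted to an MS DTC distributed transaction
    EXEC Server2.DB2.dbo.StatusUpdate @Id = 1, @Status = 2;
    -- local work participates in the same transaction
    UPDATE dbo.LocalStatus SET Status = 2 WHERE Id = 1;
COMMIT TRANSACTION  -- MS DTC coordinates completion on both servers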

Related

how to get executing procedure name in snowflake in sql way

How can I get the name of a stored procedure from within that procedure while it is executing? The procedure is written in LANGUAGE SQL (not JavaScript).
CREATE OR REPLACE PROCEDURE sp_test()
RETURNS VARCHAR
LANGUAGE SQL
EXECUTE AS OWNER
AS
BEGIN
  -- ...something like OBJECT_NAME(@@PROCID) from MS SQL Server to get 'sp_test' as the name...
  COMMIT;
END;
You can use SHOW TRANSACTIONS; see the documentation.
If procedure_A is running in a transaction, and it calls procedure_B, and procedure_B then runs a query, the SHOW TRANSACTIONS output of that query should show all three levels as active transactions.
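As a sketch in Snowflake Scripting, assuming SHOW is permitted inside the procedure body; the procedure name and return format are illustrative:
CREATE OR REPLACE PROCEDURE sp_show_tx()
RETURNS VARCHAR
LANGUAGE SQL
EXECUTE AS OWNER
AS
BEGIN
  SHOW TRANSACTIONS;
  -- read the output of the SHOW command that just ran
  LET cnt INTEGER := (SELECT COUNT(*) FROM TABLE(RESULT_SCAN(LAST_QUERY_ID())));
  RETURN 'active transactions: ' || TO_VARCHAR(cnt);
END;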

Parallel merge strategy without deadlock

Using SQL Server 2016, I wish to merge data from a SourceTable to a DestinationTable with a simple procedure containing a simple insert/update/delete on the same table.
The SourceTable is filled by several different applications, and they call the MergeOrders stored procedure to merge their uploaded rows from SourceTable into DestinationTable.
There can be several instances of MergeOrders stored procedure running in parallel.
I get a lot of locking, but that's normal; the issue is that sometimes I get "RowGroup deadlocks", which I cannot afford.
What is the best way to execute such a merge operation in this parallel environment?
I am thinking about TABLOCK or SERIALIZABLE hints, or maybe application locks to serialize access, but I am interested in whether there is a better way.
An app lock will serialize sessions attempting to run this procedure. It should look like this:
create or alter procedure ProcWithAppLock
with execute as owner
as
begin
    set xact_abort on;
    set nocount on;
    begin transaction;
    declare @lockName nvarchar(255) = object_name(@@procid) + '-applock';
    exec sp_getapplock @lockName, 'Exclusive', 'Transaction', null, 'dbo';
    --do stuff
    waitfor delay '00:00:10';
    select getdate() dt, object_name(@@procid);
    exec sp_releaseapplock @lockName, 'Transaction', 'dbo';
    commit transaction;
end
There are a couple of subtle things in this template. First, it doesn't have a CATCH block and relies on XACT_ABORT to release the app lock in case of an error. Second, you want to release the app lock explicitly in case this procedure is called in the context of a longer-running transaction. Finally, the principal for the lock is set to dbo so that no non-dbo user can acquire a conflicting lock; this also requires that the procedure run with EXECUTE AS OWNER, as the application user would not normally be dbo.
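For example, here is a hypothetical caller that wraps the procedure in its own transaction; this is the case the explicit release is for:
begin transaction;
exec dbo.ProcWithAppLock;
-- the inner commit only decrements @@TRANCOUNT; without the explicit
-- sp_releaseapplock inside the proc, the app lock would still be held here
-- ... more work in the same transaction ...
commit transaction;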

DTC not working when two transactions are being executed

I'm working on SQL Server 2008 and I'm trying to execute a stored procedure which updates a table and executes another stored procedure on a linked server.
The point is that it works when no update is made, just like this:
[test_DTC] on [Server1]
CREATE PROCEDURE [dbo].[test_DTC]
    @UserId int,
    @Status tinyint
AS
BEGIN
    EXEC [Server2].[Database].[dbo].[test_DTC];
END
GO
[test_DTC] on [Server2]
CREATE PROCEDURE [dbo].[test_DTC]
AS
BEGIN
PRINT 'Done'
END
GO
Execute on Server1:
EXEC [test_DTC]
Result:
Done
But when I include the UPDATE in the Server1 procedure, it fails.
[test_DTC] on [Server1]
CREATE PROCEDURE [dbo].[test_DTC]
    @UserId int,
    @Status tinyint
AS
BEGIN
    UPDATE Users
    SET Status = @Status
    WHERE UserId = @UserId;
    EXEC [Server2].[Database].[dbo].[test_DTC];
END
GO
[test_DTC] on [Server2]
CREATE PROCEDURE [dbo].[test_DTC]
AS
BEGIN
PRINT 'Done'
END
GO
Execute on Server1:
EXEC [test_DTC]
Result
OLE DB provider "SQLNCLI10" for linked server "Server2" returned message "The transaction has already been implicitly or explicitly committed". Msg 7391, Level 16, State 2, Procedure [Server2].[Database].[dbo].[test_DTC], Line 19
The operation could not be performed because OLE DB provider "SQLNCLI10" for linked server "Server2" was unable to begin a distributed transaction.
Thanks for your help
I have found a solution for this in an MSDN blog.
It says:
The reason is that when transactions propagate from one machine to another they include their machine name/DNS name along with it. When it arrives on the other machine, it will use this name to attempt to communicate back to the originator machine. If this communication fails then distributed transactions will not work in the system.
Microsoft has provided a detailed article on the same issue.
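In practice this means the originator's host name must resolve from the remote machine. On Server2, verify that the originating machine's name resolves (for example, ping Server1). If it does not, a hosts-file entry on Server2 is a common workaround (the IP address here is a placeholder):
10.0.0.5    Server1
The same check applies in the opposite direction, since DTC communicates both ways.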

How to store the results of a stored procedure without requiring a distributed transaction?

I have a remote stored procedure that I am running:
EXECUTE Contoso.Frob.dbo.Grobber @StartDate='20140513', @EndDate='20140518'
and this remote stored procedure returns a rowset:
EmployeeID  EmployeeName    StartDateTime            EndDateTime
----------  --------------  -----------------------  -----------------------
619         Guyer, Kirsten  2014-05-13 19:00:00.000  2014-05-13 19:00:00.000
...
Excellent. Perfect. Good. Sweet.
Now that I have these results, I need to store them in a table. Any kind of table. I don't care what kind of table:
physical table
temporary table
global temporary table
table variable
I just need them stored so that I can process them. The problem is that when I try to insert the results into a table, whether it be:
a physical table
INSERT INTO EmployeeSchedule
EXECUTE Contoso.Frob.dbo.Grobber @StartDate='20140513', @EndDate='20140518'
temporary table
INSERT INTO #EmployeeSchedule
EXECUTE Contoso.Frob.dbo.Grobber @StartDate='20140513', @EndDate='20140518'
a global temporary table
INSERT INTO ##EmployeeSchedule
EXECUTE Contoso.Frob.dbo.Grobber @StartDate='20140513', @EndDate='20140518'
a table variable
INSERT INTO @EmployeeSchedule
EXECUTE Contoso.Frob.dbo.Grobber @StartDate='20140513', @EndDate='20140518'
SQL Server insists (nay, demands) that it begin a distributed transaction:
OLE DB provider "SQLNCLI10" for linked server "Contoso" returned message "The partner transaction manager has disabled its support for remote/network transactions.".
Msg 7391, Level 16, State 2, Line 41
The operation could not be performed because OLE DB provider "SQLNCLI10" for linked server "Contoso" was unable to begin a distributed transaction.
Why not just...
Now, making changes to the Contoso server is not an option. Why? Doesn't matter. Pretend that Jack Bauer will make an appearance and Guantanamo anyone who tries to modify Contoso. This means I cannot enable or reconfigure MSDTC on \\Contoso.
Did you try using READ UNCOMMITTED?
Yes.
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
INSERT INTO #EmployeeSchedule
EXECUTE wclnightdb.NGDemo.dbo.tbtGetSchedule @StartDate, @EndDate
The partner transaction manager has disabled its support for remote/network transactions.
And:
INSERT INTO #EmployeeSchedule
WITH (NOLOCK)
EXECUTE wclnightdb.NGDemo.dbo.tbtGetSchedule @StartDate, @EndDate
Sorry. No nolock. Nolock is a no no:
Msg 1065, Level 15, State 1, Line 15
The NOLOCK and READUNCOMMITTED lock hints are not allowed for target tables of INSERT, UPDATE, DELETE or MERGE statements.
I could always give up on SQL Server
If I were doing this in a programming environment, it would be fairly easy to fix:
using (IDataReader rdr = ADOHelper.Execute(conn, "EXECUTE Contoso.Frob.dbo.Grobber @StartDate='20140513', @EndDate='20140518'"))
{
    while (rdr.Read())
    {
        InsertRowIntoTable(conn, rdr);
    }
}
Although that would require me to create a binary, ship it, and schedule it. I'm looking for the option that works with SQL Server (so SQL Agent can schedule the job).
Bonus Reading
SET REMOTE_PROC_TRANSACTIONS (Transact-SQL)
How do I use the results of a stored procedure from within another?
How can one iterate over stored procedure results from within another stored procedure....without cursors?
SQL Server insists (nay, demands) that it begin a distributed transaction:
If you can't configure your servers to use distributed transactions for whatever reason, you can tell it not to.
USE [master]
GO
EXEC master.dbo.sp_serveroption
    @server = N'Contoso',
    @optname = N'remote proc transaction promotion',
    @optvalue = N'false'
GO
Or set it in the SSMS GUI, on the linked server's Server Options page.
I don't know all the implications of turning off this option, but at least now my INSERT ... EXEC [LinkedServer]... works.
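To confirm the option took effect, sys.servers exposes it (assuming SQL Server 2008 or later; the linked server name is from the example above):
SELECT name, is_remote_proc_transaction_promotion_enabled
FROM sys.servers
WHERE name = N'Contoso';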
Two options to try would be:
Since you already have a Linked Server set up, use it with OPENQUERY, as in:
SELECT column1, column2 FROM OPENQUERY(Contoso, 'EXECUTE Frob.dbo.Grobber @StartDate=''20140513'', @EndDate=''20140518''')
If the returned columns will remain consistent, create a SQLCLR table-valued function. This assumes the remote proc is read-only (i.e., SELECT-only). Unlike T-SQL functions, SQLCLR functions can execute stored procedures using the connection string "Context Connection = True;" as long as the stored procedure is SELECT-only (i.e., does not change the state of the DB through DML, DDL, etc.).
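For the first option, a minimal sketch of loading the rows without promoting a distributed transaction; the target table and column list mirror the example output above:
INSERT INTO #EmployeeSchedule (EmployeeID, EmployeeName, StartDateTime, EndDateTime)
SELECT EmployeeID, EmployeeName, StartDateTime, EndDateTime
FROM OPENQUERY(Contoso, 'EXECUTE Frob.dbo.Grobber @StartDate=''20140513'', @EndDate=''20140518''');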
How about this:
Either create a job that runs your remote SQL via a SQLCMD command, or just run something like this:
EXEC master..xp_cmdshell 'SQLCMD -S Server\SQLSERVERDEV2005 -i"c:\DML.sql"'
(It might be easier with a job because you can modify the job step easily via sp_update_jobstep to get the right values in for your parameters.)
Then output the result of the SQLCMD run into a file and load the file into a table via bulk import.
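A minimal end-to-end sketch, assuming xp_cmdshell is enabled; the file path, separator, and staging table name are placeholders:
-- 1. run the remote proc and write its output to a delimited file
EXEC master..xp_cmdshell 'SQLCMD -S Contoso -Q "EXECUTE Frob.dbo.Grobber @StartDate=''20140513'', @EndDate=''20140518''" -o "C:\temp\schedule.txt" -s "|" -W -h -1';
-- 2. bulk-load the file into a local staging table
BULK INSERT dbo.EmployeeScheduleStaging
FROM 'C:\temp\schedule.txt'
WITH (FIELDTERMINATOR = '|', ROWTERMINATOR = '\n');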

Testing linked server connection inside trigger or procedure

I wrote a trigger that updates a local table and a similar table on a linked server.
CREATE TRIGGER myTtableUpdate ON myTable
AFTER UPDATE
AS
IF (COLUMNS_UPDATED() > 0)
BEGIN
    DECLARE @retval int;
    BEGIN TRY
        EXEC @retval = sys.sp_testlinkedserver N'my_linked_server';
    END TRY
    BEGIN CATCH
        SET @retval = SIGN(@@ERROR);
    END CATCH;
    IF (@retval = 0)
    BEGIN
        UPDATE remoteTable
        SET remoteTable.datafield = i.datafield
        FROM my_linked_server.remote_database.dbo.myTable remoteTable
        INNER JOIN inserted i ON (remoteTable.id = i.id)
    END
END -- end of trigger
Unfortunately, when the connection is down I get the error
'Msg 3616, Level 16, State 1, Line 2'
'Transaction doomed in trigger. Batch has been aborted.'
and the locally made update is rolled back.
Is there a way to handle this error and keep the local updates?
Note that I'm using SQL Server 2005 Express Edition on both PCs running Windows XP Pro.
edit1: SQL server is Express Edition
edit2: Both PCs run Windows XP Pro so these aren't servers
Don't write to the remote server in the trigger. Instead:
Create a local table to store rows that need to be pushed to the remote server.
Insert into this new local table in the trigger.
Create a job that runs every N minutes to insert from this local table into the remote server.
This job can run a procedure that tests for the connection and, when it is back up, handles all rows in the new local table. It can process the rows this way:
declare @Batch table (RowID int not null)

-- capture the batch first: OUTPUT cannot target a table variable when the
-- DML target is a remote table (Msg 405), so an OUTPUT...INTO on the remote
-- insert would not work
insert into @Batch (RowID)
select RowID from NewLocalTable

insert into my_linked_server.remote_database.dbo.myTable (...columns...)
select ...columns...
from NewLocalTable n
inner join @Batch b on n.RowID = b.RowID

delete n
from NewLocalTable n
inner join @Batch b on n.RowID = b.RowID
EDIT based on OP comment
After inserting into this new local table, start the job from the trigger (sp_start_job); it will run in its own scope, as shown below. If you can't use SQL Server jobs, use xp_cmdshell to execute the stored procedure (look up SQLCMD, ISQL, or OSQL; I'm not sure what you have). Still schedule the job every N minutes, so it will eventually run when the connection comes up.
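A minimal sketch of starting the job from the trigger (the job name is a placeholder):
-- fire-and-forget: returns as soon as the job has been requested to start
EXEC msdb.dbo.sp_start_job @job_name = N'PushRowsToRemoteServer';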
Is at least one of the servers Workgroup Edition or higher? You can use Service Broker to ship your records instead of linked servers, but it will not work between two Express Editions due to licensing restrictions. It is a solution that relies exclusively on SQL, offers reliability in case of incidents (e.g., one of the servers being unavailable), and your updates will propagate in real time (as soon as they are committed). My site has many examples of how to do this; you can start with this article on how to achieve high message throughput.
