Using SQL Server 2016, I wish to merge data from a SourceTable to a DestinationTable with a simple procedure containing a simple insert/update/delete on the same table.
The SourceTable is filled by several different applications, and they call the MergeOrders stored procedure to merge their uploaded rows from SourceTable into DestinationTable.
There can be several instances of MergeOrders stored procedure running in parallel.
I get a lot of locking, but that's normal; the issue is that sometimes I get "RowGroup deadlocks", which I cannot afford.
What is the best way to execute such a merge operation in this parallel environment?
I am thinking about TABLOCK or SERIALIZABLE hints, or maybe application locks to serialize the access, but I am interested to know if there is a better way.
An app lock will serialize sessions attempting to run this procedure. It should look like this:
create or alter procedure ProcWithAppLock
with execute as owner
as
begin
    set xact_abort on;
    set nocount on;

    begin transaction;

    declare @lockName nvarchar(255) = object_name(@@procid) + '-applock';
    exec sp_getapplock @lockName, 'Exclusive', 'Transaction', null, 'dbo';

    --do stuff
    waitfor delay '00:00:10';
    select getdate() dt, object_name(@@procid);

    exec sp_releaseapplock @lockName, 'Transaction', 'dbo';

    commit transaction;
end
There are a couple of subtle things in this template. First, it doesn't have a catch block and relies on xact_abort to release the app lock in case of an error. You also want to explicitly release the app lock in case this procedure is called in the context of a longer-running transaction. And finally, the principal for the lock is set to dbo so that no non-dbo user can acquire a conflicting lock. This also requires that the procedure be run with execute as owner, as the application user would not normally be dbo.
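If you would rather fail fast than queue behind other callers, a variant of the same template can capture the return code of sp_getapplock and pass a lock timeout; this is only a sketch of how the call in the procedure above could be replaced, and the 5-second timeout is an arbitrary example value:

declare @result int;

exec @result = sp_getapplock @Resource    = @lockName,
                             @LockMode    = 'Exclusive',
                             @LockOwner   = 'Transaction',
                             @LockTimeout = 5000,          -- wait at most 5 seconds
                             @DbPrincipal = 'dbo';

-- 0 or 1 means the lock was granted; -1 timeout, -2 canceled, -3 deadlock victim, -999 error
if @result < 0
begin
    rollback transaction;
    raiserror('Could not acquire the app lock', 16, 1);
    return;
end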
Related
I have what seems like a simple problem but can't find a solution. I have a long-running stored procedure that updates a table at the beginning and at the end of its execution. The problem is, the table is locked during the whole process. Here's a simplified version:
ALTER PROCEDURE [dbo].[Proc_FullRefresh]
AS
BEGIN
UPDATE Settings SET SettingValue = 'true' WHERE SettingName = 'Running'
WAITFOR DELAY '00:00:30'
END
The problem is, I'm unable to select that row from the Settings table while the whole procedure is running. I even tried wrapping in transactions to see if that would help:
BEGIN TRAN
UPDATE Settings SET SettingValue = 'true' WHERE SettingName = 'Running'
COMMIT;
BEGIN TRAN
WAITFOR DELAY '00:00:30'
COMMIT
But that didn't work either. Is there any way to release the lock on the Settings table while the procedure is doing its other stuff?
You are running the stored procedure in a transaction; otherwise the UPDATE statement would complete immediately and be visible from other sessions. When you add additional BEGIN TRAN/COMMIT pairs you're actually creating a "nested transaction". The locks from the UPDATE will be held until the commit of the outer (real) transaction.
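A quick way to see this for yourself is to watch @@TRANCOUNT in a standalone batch (a sketch, using the Settings table from the question): the inner COMMIT only decrements the counter, and nothing is actually committed or released until the outermost COMMIT brings it back to zero.

BEGIN TRAN;                                -- outer (real) transaction
SELECT @@TRANCOUNT AS after_outer_begin;   -- 1

BEGIN TRAN;                                -- "nested" transaction
SELECT @@TRANCOUNT AS after_inner_begin;   -- 2

UPDATE Settings SET SettingValue = 'true' WHERE SettingName = 'Running';

COMMIT;                                    -- only decrements the counter, locks still held
SELECT @@TRANCOUNT AS after_inner_commit;  -- 1

COMMIT;                                    -- the real commit: locks released, change visible
SELECT @@TRANCOUNT AS after_outer_commit;  -- 0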
So just don't run the procedure in a transaction.
Some clients and data access/ORM frameworks start a transaction automatically, but most require you to explicitly start a transaction.
At work, we have production databases on which developers have read permission. When developers have to fix something in the database, they must test their scripts in a copy of the production databases and then ask the DBA team to execute it in production.
Sometimes however, the data that must be fixed is not in the test databases. Developers then ask for a new copy of production databases, and this can take a lot of time.
Of course, we could grant them update permission and ask them to use BEGIN TRANSACTION / ROLLBACK, but it is too risky. Nobody wants that, not even the developers.
My question: is it possible to create a profile on SQL Server - or grant special permission - that would allow developers to execute update and delete commands but would always, no matter what the developer wrote, roll back after a GO or after the last command issued in a session?
This would be really helpful to test scripts before sending them to production.
SOLUTION #1 - SPROCS: you could create a sproc and give devs EXEC access on that sproc only. This is probably the most elegant solution, as you want them to have a simple way to run their query while you keep control of their permissions on the production environment. An example call would be: EXEC [dbo].[usp_rollback_query] 'master', 'INSERT INTO table1 SELECT * FROM table2'
SOLUTION #1
USE [DATABASENAME]
GO

ALTER PROC dbo.usp_rollback_query
(
    @db VARCHAR(128),
    @query NVARCHAR(max)
)
AS
BEGIN
    DECLARE @main_query NVARCHAR(max) = 'USE [' + @db + ']
' + @query;

    BEGIN TRAN
        EXEC sp_executesql @main_query;
    ROLLBACK TRAN
END
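For example, a developer could verify a fix like this (the table and column names are just placeholders); any SELECT inside the batch sees the modified data, but the change is rolled back as soon as the procedure returns:

EXEC dbo.usp_rollback_query
     'DATABASENAME',
     N'UPDATE dbo.Orders SET Status = 5 WHERE OrderId = 42;
       SELECT Status FROM dbo.Orders WHERE OrderId = 42;';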
If you can afford to have a snapshot created and dropped each time, SOLUTION #2 - DB SNAPSHOTS is the best way to go about it. It's super fast; the only two drawbacks are that you need to kick people off the DB before you can restore, and the restore reverts all changes made since the snapshot was created.
SOLUTION #2
-- CREATE SNAPSHOT
CREATE DATABASE [DATABASENAME_SS1]
ON
(
NAME = DATABASENAME,
FILENAME = 'your\path\DATABASENAME_SS1.ss'
) AS SNAPSHOT OF [DATABASENAME];
GO
-- let devs run whatever they want
-- CLOSE CONNECTIONS
USE [master];
GO
ALTER DATABASE [DATABASENAME]
SET SINGLE_USER
WITH ROLLBACK IMMEDIATE;
GO
-- RESTORE DB
RESTORE DATABASE [DATABASENAME]
FROM DATABASE_SNAPSHOT = 'DATABASENAME_SS1';
GO
-- CLEANUP SNAPSHOT COPY
DROP DATABASE [DATABASENAME_SS1];
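One thing worth adding to this script: the database is still in SINGLE_USER mode after the restore, so you would normally switch it back before letting anyone reconnect:

-- ALLOW CONNECTIONS AGAIN
ALTER DATABASE [DATABASENAME]
SET MULTI_USER;
GO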
I don't think a ROLLBACK on each query is a good idea or a good design, but if you have to go that route, you would need to use triggers. The limitation with triggers is that a DATABASE- or SERVER-level trigger can only fire for DDL, not DML. Creating triggers on each TABLE object that you think is being altered is doable; the drawback is that you need to know which tables are being modified, and even then it's quite messy. Regardless, please look at SOLUTION #3 - TABLE TRIGGERS below. To make this better you could create a role and check whether the user is part of that role, then roll back (a sketch of that follows the trigger below).
SOLUTION #3
USE DATABASENAME
GO
ALTER TRIGGER dbo.tr_rollback_devs
ON dbo.table_name
AFTER INSERT, DELETE, UPDATE
AS
BEGIN
SET NOCOUNT ON;
IF SYSTEM_USER IN ('dev1', 'dev2')
ROLLBACK
END
GO
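As mentioned above, a cleaner variant of SOLUTION #3 checks role membership instead of hard-coding logins. This is only a sketch; it assumes you create a database role named, say, devs_readonly_test and add the developers to it:

USE DATABASENAME
GO
ALTER TRIGGER dbo.tr_rollback_devs
ON dbo.table_name
AFTER INSERT, DELETE, UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    -- roll back any DML issued by members of the (hypothetical) devs_readonly_test role
    IF IS_ROLEMEMBER('devs_readonly_test') = 1
        ROLLBACK
END
GO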
tl;dr: what is the alternative to sp_getapplock in a natively compiled stored procedure?
I have a memory-optimized table and a few indexes on it. It is a mission-critical app. I am using a memory-optimized table since it does minimal logging. I am developing an order-matching/trade-matching engine: one order is inserted at a time and matched with open orders; it is not a bulk operation. I tried a regular table, but I was not able to achieve the throughput I require. The memory-optimized table has solved the throughput issue.
I want to restrict SQL Server to not run more than one instance of the stored procedure. In a regular stored procedure, this can be achieved with sp_getapplock. How can I achieve this with a natively compiled stored procedure?
I googled and did not find an answer.
One method is to execute sp_getapplock in an outer stored procedure that wraps the call to the native proc:
CREATE PROC dbo.usp_NativeProcWrapper
AS
BEGIN TRY
    BEGIN TRAN;

    EXEC sp_getapplock 'dbo.usp_NativeProc', 'Exclusive', 'Transaction';
    EXEC dbo.usp_NativeProc;
    EXEC sp_releaseapplock 'dbo.usp_NativeProc', 'Transaction';

    COMMIT;
END TRY
BEGIN CATCH
    IF @@TRANCOUNT > 0 ROLLBACK;
    THROW;
END CATCH;
GO
I want to execute a stored procedure in Server1.DB1; using dynamic SQL, it will execute another stored procedure that lives in Server1.DB2.
I need to use begin/end transaction to make sure either everything is executed or everything fails.
The question is: will the transaction work in this case, using dynamic SQL pointed at a different database?
Like
BEGIN TRANSACTION
    -- Set Status to "In Progress"
    SET @Qry = N'EXEC ' + @DB2 + '.[dbo].[StatusUpdate] @Id, @Status'
    SET @QryParams = N'@Id INT, @Status INT'

    EXEC sp_executesql @Qry,
                       @QryParams,
                       @Id = @Id,
                       @Status = @InProgress

    -- INSERT DATA LOCALLY IN A TABLE
    -- UPDATE DATA LOCALLY IN A TABLE
COMMIT TRANSACTION
I'm using SQL Server 2014.
It depends on the REMOTE_PROC_TRANSACTIONS setting:
Specifies that when a local transaction is active, executing a remote
stored procedure starts a Transact-SQL distributed transaction managed
by Microsoft Distributed Transaction Coordinator (MS DTC).
If it's ON:
The instance of SQL Server making the remote stored procedure call is
the transaction originator and controls the completion of the
transaction. When a subsequent COMMIT TRANSACTION or ROLLBACK
TRANSACTION statement is issued for the connection, the controlling
instance requests that MS DTC manage the completion of the distributed
transaction across the computers involved.
Otherwise remote stored procedure calls are not made part of a local transaction.
Several important notes:
Using distributed transactions is risky, so they should be used carefully.
This feature is deprecated.
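If you do decide to rely on it, the option is set per session before the transaction starts. A minimal sketch follows; the linked server, procedure name and values are only illustrations, and MS DTC must be available:

SET REMOTE_PROC_TRANSACTIONS ON;   -- deprecated setting; requires MS DTC

DECLARE @Id INT = 1, @Status INT = 2;   -- sample values

BEGIN TRANSACTION;
    -- hypothetical call to a procedure on a linked server, enlisted in the transaction
    EXEC [LinkedServer].[DB2].[dbo].[StatusUpdate] @Id, @Status;
    -- local INSERT/UPDATE statements would go here, in the same transaction
COMMIT TRANSACTION;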
EDIT: This question is no longer valid, as the issue was something else. Please see my explanation below in my answer.
I'm not sure of the etiquette, so I'll leave the question in its current state.
I have a stored procedure that writes some data to a table.
I'm using Microsoft Practices Enterprise library for making my stored procedure call.
I invoke the stored procedure using a call to ExecuteNonQuery.
After ExecuteNonQuery returns, I invoke a 3rd party library. It calls back to me on a separate thread in about 100 ms.
I then invoke another stored procedure to pull the data I had just written.
In about 99% of cases the data is returned. Once in a while it returns no rows (i.e. it can't find the data). If I put a conditional breakpoint to detect this condition in the debugger and manually rerun the stored procedure, it always returns my data.
This makes me believe the writing stored procedure is working, just not committing, when it's called.
I'm fairly novice when it comes to SQL, so it's entirely possible that I'm doing something wrong. I would have thought that the writing stored procedure would block until its contents were committed to the DB.
Writing Stored Procedure
ALTER PROCEDURE [dbo].[spWrite]
    @guid varchar(50),
    @data varchar(50)
AS
BEGIN
    -- SET NOCOUNT ON added to prevent extra result sets from
    -- interfering with SELECT statements.
    SET NOCOUNT ON;

    -- see if this guid has already been added to the table
    DECLARE @foundGuid varchar(50);
    SELECT @foundGuid = [guid] from [dbo].[Details] where [guid] = @guid;

    IF @foundGuid IS NULL
        -- first time we've seen this guid
        INSERT INTO [dbo].[Details] ( [guid], data ) VALUES (@guid, @data)
    ELSE
        -- updating or verifying order
        UPDATE [dbo].[Details] SET data = @data WHERE [guid] = @guid
END
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
Reading Stored Procedure
ALTER PROCEDURE [dbo].[spRead]
    @guid varchar(50)
AS
BEGIN
    -- SET NOCOUNT ON added to prevent extra result sets from
    -- interfering with SELECT statements.
    SET NOCOUNT ON;

    SELECT * from [dbo].[Details] where [guid] = @guid;
END
To actually block other transactions and manually commit,
maybe adding
BEGIN TRANSACTION
--place your
--transactions you wish to do here
--if everything was okay
COMMIT TRANSACTION
--or
--ROLLBACK TRANSACTION if something went wrong
could help you?
I’m not familiar with the data access tools you mention, but from your description I would guess that either the process does not wait for the stored procedure to complete execution before proceeding to the next steps, or ye olde “something else” is messing with the data in between your write and read calls.
One way to tell what's going on is to use SQL Profiler. Fire it up, monitor all possible query execution events on the database (including stored procedure and stored procedure statement start/stop events), watch the Text and Started/Ended columns, correlate this with the times you are seeing while tracing the application, and that should help you figure out what's going on there. (SQL Profiler can be complex to use, but there are many sources on the web that explain it, and it is well worth learning how to use it.)
I'll leave my answer below as there are comments on it...
OK, I feel some shame: I had simplified my question too much. What was actually happening is two things:
1) the inserting procedure is actually running on a separate machine (distributed system).
2) the inserting procedure actually inserts data into two tables without a transaction.
This means the reading query can run at the same time and find the tables in a state where one has been written to and the second table hasn't yet had its write committed.
A simple transaction fixes this, as the reading query can handle either case of no write or full write, but couldn't handle the case of one table written to and the other having a pending commit.
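In code, the fix inside the inserting procedure amounts to something like this (the two table names are made up for illustration; @guid and @data are the procedure's parameters). A concurrent reader then sees either both rows or neither:

BEGIN TRANSACTION;
    -- both inserts become visible to readers atomically at COMMIT
    INSERT INTO [dbo].[DetailsHeader] ([guid], data) VALUES (@guid, @data);
    INSERT INTO [dbo].[DetailsBody]   ([guid], data) VALUES (@guid, @data);
COMMIT TRANSACTION;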
Well it turns out that when I created the stored procedure the MSSQLadmin tool added a line to it by default:
SET NOCOUNT ON;
If I turn that to:
SET NOCOUNT OFF;
then my procedure actually commits to the database properly. Strange that this default would actually end up causing problems.
An easy way using TRY...CATCH; like it if you find it useful:
BEGIN TRAN
BEGIN TRY
    INSERT INTO meals
    (
        ...
    )
    VALUES (...)

    COMMIT TRAN
END TRY
BEGIN CATCH
    ROLLBACK TRAN
    SET @resp = CONCAT(CONVERT(varchar(10), ERROR_LINE()), ': ', ERROR_MESSAGE())
END CATCH