Stored procedure with transaction, stops at transaction - sql-server

So I have 5 PCs that run the same stored proc against 1 SQL Server. For the sake of an example, let's say it's a program that runs overnight. A PC executes the stored proc with a userID and, based on it, updates multiple tables in SQL Server. This is done for all 2000 users in the company. However, as we have 5 PCs doing this, we are working with table locks and transactions.
99% of the time, the stored proc works as intended, but every now and then, the SP just seems to stop before the transaction. Here is a breakdown of my SP:
CREATE PROCEDURE dbo.UpdateClient @UserID INT, @PCID VARCHAR(10)
AS
BEGIN
(Declared variables etc)
BEGIN TRY
INSERT INTO dbo.log(event, PC_ID) VALUES ('The stored proc has been started by PC:', @PCID)
-- I have added this logging to my SP to try and find the issue, but with no luck
BEGIN TRANSACTION T1
INSERT INTO dbo.log(event, PC_ID) VALUES ('Transaction has been opened', @PCID)
SELECT TOP 1 * FROM dbo.MasterTable WITH (TABLOCKX, HOLDLOCK)
-- As we have multiple PCs using the same SP at the same time, we decided on
-- a table that is read and locked by whichever PC is "the master" at that moment
INSERT INTO dbo.log(event, PC_ID) VALUES ('Master has been claimed', @PCID)
/**** Here we have a ton of UPDATE TABLE and INSERT INTO statements taking place ****/
INSERT INTO dbo.log(event, PC_ID) VALUES ('Tables have been updated', @PCID)
INSERT INTO dbo.log(event, PC_ID) VALUES ('Master role will be released', @PCID)
COMMIT TRANSACTION T1
END TRY
BEGIN CATCH
INSERT INTO dbo.log(event, PC_ID) VALUES (CONCAT('An Error has occurred: ', ERROR_MESSAGE()), @PCID)
END CATCH
INSERT INTO dbo.log(event, PC_ID) VALUES ('Stored procedure is now ending', @PCID)
END
Having these run by external PCs makes it harder to log, so the dbo.log table is all I have in this case. As said before, sometimes the transaction doesn't get executed, and then I end up with a log like this (let's say I only had PC1 and PC2 running that night):
ID| Event | PC_ID
1 | The stored proc has been started by PC: | PC1
2 | The stored proc has been started by PC: | PC2
3 | Transaction has been opened | PC2
4 | Master has been claimed | PC2
5 | Tables have been updated | PC2
6 | Master role will be released | PC2
7 | Stored procedure is now ending | PC2
8 | The stored proc has been started by PC: | PC1
9 | Transaction has been opened | PC1
etc
This is where I am lost. PC1 starts the stored proc but doesn't get further than the first log line. PC2, which started at the exact same time, did however go through the whole thing. My first thought was that (somehow) the transactions collide. But what I find weird is that PC1's SP doesn't even execute the statements outside the transaction: neither my CATCH block nor the extra log line at the end of the SP is executed.
In the end, PC1 just executes its next SP with a new UserID (which ultimately breaks things further down the line).
Any ideas/suggestions on how to fix this problem? As said before, 99% of the time it works, but that 1% is causing a lot of issues.
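One detail worth noting when reading that log (a sketch of a possible explanation, not a confirmed diagnosis): dbo.log is an ordinary table, so every log row written after BEGIN TRANSACTION is part of transaction T1. If PC1's session is killed, or the client times out and aborts the batch while waiting on the TABLOCKX, the batch stops without the CATCH block ever running, the open transaction is rolled back, and the rollback also removes the 'Transaction has been opened' row. The log would then look exactly as shown even though the transaction did start. A minimal illustration:
-- the first insert autocommits and survives
INSERT INTO dbo.log(event, PC_ID) VALUES ('Before the transaction', 'PC1')
BEGIN TRANSACTION
-- this insert is part of the open transaction
INSERT INTO dbo.log(event, PC_ID) VALUES ('Inside the transaction', 'PC1')
ROLLBACK TRANSACTION -- e.g. what happens when the session is killed mid-batch
-- only the 'Before the transaction' row remains
SELECT event FROM dbo.log WHERE PC_ID = 'PC1'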

Related

SQL Server testing deadlocks on distributed view

I'm trying to test some deadlock cases on a distributed view. I set up three nodes with Docker. All servers are registered as linked servers, and the distributed view is working fine.
Datanode1 has the table movie_33 and holds all movies up to ID 333
Datanode2 has the table movie_66 and holds all movies from 334 up to 666
Datanode3 has the table movie_99 and holds all movies from 667 up to 999
The view connects all tables.
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER OFF
GO
CREATE VIEW [dbo].[movie]
AS
SELECT *
FROM dbo.movie_33
UNION ALL
SELECT *
FROM [172.16.1.3].Sakila.dbo.movie_66
UNION ALL
SELECT *
FROM [172.16.1.4].Sakila.dbo.movie_99
GO
Now I want one transaction to become the deadlock victim. I do that with the following code:
Window 1:
--
-- Example on (distributed) transactions in SQL Server
--
-- Deadlock tracing requires administrative permissions (sysadmin)!
--
-- IDs for additional testing:
-- 321 & 123 are both on mysql1
-- 123 & 456 are on mysql1 and mysql2
-- 456 & 654 are both on mysql2
-- 456 & 789 are on mysql2 and mysql3
--
-- Query Window 1 [TA1]
--
use sakila
-- allow the entire transaction to be aborted, if a sub-transaction fails
set xact_abort on
-- enable tracing for deadlocks and output status
dbcc traceon (1204,-1)
dbcc traceon (1222,-1)
dbcc tracestatus(-1)
-- we want TA1 to become the victim and TA2 to be successful
set deadlock_priority LOW
set transaction isolation level read committed
begin transaction
PRINT 'Start'
update dbo.movie set title='test1' where movie_id = 456 -- obtains lock for this row
-- allow other transaction to acquire locks
waitfor delay '00:00:10'
update dbo.movie set title='test1' where movie_id = 789 -- results in deadlock with TA2
rollback
Window 2:
--
-- Query Window 2 [TA2] (execute immediately after Query Window 1)
--
use sakila
-- allow the entire transaction to be aborted, if a sub-transaction fails
set xact_abort on
-- enable tracing for deadlocks and output status
dbcc traceon (1204,-1)
dbcc traceon (1222,-1)
dbcc tracestatus(-1)
-- we want TA1 to become the victim and TA2 to be successful
set deadlock_priority HIGH
set transaction isolation level read committed
begin transaction
update dbo.movie set title='test2' where movie_id = 789 -- obtains lock for this row
update dbo.movie set title='test2' where movie_id = 456 -- this row should be locked by TA1
-- we do not want to cause permanent changes to the database
rollback
When I use IDs that query rows on the same server, the deadlock test works fine (for example, 321 & 123 are both on mysql1, and 456 & 654 are both on mysql2).
When using IDs that query rows on different servers, I get the following error:
Msg 7399, Level 16, State 1, Line 21
The OLE DB provider "MSOLEDBSQL" for linked server "172.16.1.3" reported an error. Execution terminated by the provider because a resource limit was reached.
Msg 7320, Level 16, State 2, Line 21
Cannot execute the query "UPDATE "Sakila"."dbo"."movie_66" set "title" = 'test2' WHERE "movie_id"=(456)" against OLE DB provider "MSOLEDBSQL" for linked server "172.16.1.3".
Can someone help me with the problem?
Thank you in advance
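A note on what happens under the hood here (an assumption based on the error text, not a verified fix): an update through the view that touches a remote table inside an open local transaction is promoted to a distributed transaction coordinated by MSDTC, so MSDTC has to be running and configured for inbound/outbound transactions on all three nodes before cross-server locking and deadlock detection can work; the resource-limit message is consistent with the remote side giving up while blocked. The implicit promotion amounts to writing:
use sakila
set xact_abort on -- required by SQL Server for remote DML in a distributed transaction
begin distributed transaction -- what the engine does implicitly when the UPDATE goes remote
update dbo.movie set title='test1' where movie_id = 456 -- row on the local node
update dbo.movie set title='test1' where movie_id = 789 -- row on a remote node via the linked server
rollback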

Locking for SQL Server concurrent accessing and modifying one record

I have a table holding a list of completed jobs; each job is inserted into that table after completion. Multiple users can fetch and run the same jobs, but before a job is run it should be checked (against the completed jobs table I've just mentioned) to ensure that it hasn't already been run by anyone else.
In fact, the job is inserted into that table right before it is run; if the job fails, it is removed from the table later. I have a stored procedure to check whether a job exists in the table, but I'm not really sure about the situation where multiple users accidentally run the same job.
Here is the basic logic (for each user's app)
check whether job A already exists in the completed jobs table:
if exists(select * from CompletedJobs where JobId = JobA_Id)
select 1
else select 0
if job A already exists (it is actually running or has already completed), the current user's action should stop here. Otherwise the current user can continue by first inserting job A into the completed jobs table:
insert into CompletedJobs(...) values(...)
and then just continue to actually run the job; if it fails, job A will be deleted from the table.
In multi-threaded code I could use a lock to ensure that no other user's action gets between the check and the insert (a kind of marking of completion), so it would work safely. But in SQL Server I'm not sure how that could be done. For example, what if two users both pass step 1 (and both get a result of 0, meaning the job is free to run)?
I guess both will then continue running the same job, and that should be avoided. Unless, at the point of inserting the job (at the beginning of step 2), I somehow take advantage of a unique or primary key constraint to make SQL Server throw an exception so that only one user can continue successfully, as sketched below. But I feel that's a bit hacky and not a nice solution. Are there better (and more standard) solutions to this issue?
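A minimal sketch of that insert-first idea (@JobA_Id and the StartedAt column are illustrative names, assuming CompletedJobs has a primary key or unique constraint on JobId):
BEGIN TRY
-- claim the job by inserting first; the constraint arbitrates between racers
INSERT INTO CompletedJobs (JobId, StartedAt) VALUES (@JobA_Id, SYSDATETIME())
SELECT 0 -- insert succeeded: this session owns the job and may run it now
END TRY
BEGIN CATCH
-- 2627 = PRIMARY KEY / UNIQUE constraint violation: someone else claimed it first
IF ERROR_NUMBER() = 2627 SELECT 1
ELSE THROW
END CATCH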
I think the primary/unique key approach is a valid one. But there are other options; for example, you can try to lock the completed-job row, and if that succeeds, run the job and insert it into the completed jobs table. You can lock the row even if it doesn't exist yet.
Here is the code:
DECLARE @job_id int = 1
SET LOCK_TIMEOUT 100
BEGIN TRANSACTION
BEGIN TRY
-- it will try to exclusively lock the row. If it success, the
-- lock will be held during the transaction.
-- If the row is locked, it will wait for 100 ms before failing
-- with error 1222
IF EXISTS (SELECT * FROM completed_jobs WITH (ROWLOCK, HOLDLOCK, XLOCK) WHERE job_id = @job_id)
BEGIN
SELECT 1
COMMIT
RETURN
END
SET LOCK_TIMEOUT -1
-- execute the job and insert it into completed_jobs table
SELECT 0;
END TRY
BEGIN CATCH
IF @@TRANCOUNT > 0 ROLLBACK
SET LOCK_TIMEOUT -1
-- 1222: Lock request time out period exceeded.
IF ERROR_NUMBER() = 1222 SELECT 2
ELSE THROW
END CATCH
The script returns:
SELECT 0 if it completes the job
SELECT 1 if the job is already completed
SELECT 2 if the job is being run by someone else.
Two connections can run this script concurrently as long as @job_id is different.
If two connections run this script at the same time with the same @job_id and the job is not completed yet, one of them completes the job and the other one sees it as a completed job (SELECT 1) or as a running job (SELECT 2).
If one connection A executes SELECT * FROM completed_jobs WHERE job_id = @job_id while another connection B is executing this script with the same @job_id, connection A will be blocked until B completes the script. This is true only if A runs under the READ COMMITTED, REPEATABLE READ, or SERIALIZABLE isolation levels. If A runs under READ UNCOMMITTED, READ COMMITTED SNAPSHOT, or SNAPSHOT, it won't be blocked, and it will see the job as not completed.

Column name or number of supplied values does not match table definition, but only with Include Actual Execution Plan on

Currently we are facing this issue only when we execute a stored procedure with Include Actual Execution Plan turned ON. Otherwise the stored procedure executes fine and returns results as expected.
What would be the reason for this kind of behavior?
I have already gone through these links, but the error is different here, as it only occurs when Include Actual Execution Plan is ON.
Link1
Link2
Sample code (PROC1) -
CREATE PROCEDURE PROC1 (blah blah blah)
AS
BEGIN
BEGIN TRY
-------------
code
--------------
-----issue code-----
INSERT INTO #temptable (col1,col2,.....)
EXECUTE PROC2
-------------
code
--------------
END TRY
BEGIN CATCH
---------
RAISERROR(............);
END CATCH
END
Sample code (PROC2) -
CREATE PROCEDURE PROC2
AS
BEGIN
BEGIN TRY
---------------
code
---------------
SELECT COL1,COL2,COL3,..... FROM #innersptemptable
END TRY
BEGIN CATCH
--------------------
RAISERROR();
--------------------
END CATCH
END
Note: PROC2 returns exactly the same number of columns as the column list we insert into #temptable; we have taken care of that.
Do let me know if any further information is required.
Environment -
Microsoft SQL Server 2014 - 12.0.2000.8 (X64)
Feb 20 2014 20:04:26
Copyright (c) Microsoft Corporation
Enterprise Edition (64-bit) on Windows NT 6.1 <X64> (Build 7601: Service Pack 1)
Edit1: When the error occurs in PROC1 and is captured, ERROR_PROCEDURE() returns the value PROC2; yet PROC2 on its own runs fine and gives the expected results both with and without Include Actual Execution Plan turned ON.
Edit2: When we replaced the local temp table with a global temp table inside PROC2 (the temp table I am talking about is the one used to pass the result set out of PROC2), the execution of PROC1 succeeded.
Edit3: When we removed the TRY-CATCH block from the inner SP (PROC2) and executed PROC1 with Include Actual Execution Plan ON, no errors were reported and execution completed successfully.
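To make Edit2 concrete, a minimal sketch of the global-temp-table variant (##Proc2Output is an illustrative name, not from the original code):
-- inside PROC2: write into a global temp table instead of returning a result set
INSERT INTO ##Proc2Output (col1, col2, col3)
SELECT COL1, COL2, COL3 FROM #innersptemptable
-- inside PROC1: the INSERT ... EXECUTE is then no longer needed
EXECUTE PROC2
INSERT INTO #temptable (col1, col2, col3)
SELECT col1, col2, col3 FROM ##Proc2Output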

Sql Server 2008 - Catch execution error thrown by a stored proc running on a linked server

To give some background detail, let's say there's a 'Person' table and a stored proc 'PersonPerformance' that returns the performance of a person given a PersonId. Both of these objects reside on a linked SQL Server 2000 instance.
My goal is to run the PersonPerformance proc for each PersonId and store the results in a table that resides on a SQL Server 2008 instance. If an execution error occurs during the execution of the stored proc, I'd like to note that it 'failed' and send out an email listing all the failed PersonIds when the storage process is complete.
I've tried using a TRY...CATCH block and checking for @@ERROR != 0, but neither has worked. I assume it's because the stored proc continues executing after throwing the error and eventually returns results, even though they aren't valid. The stored procedure itself is one long batch and has no error handling.
Here's the code I have so far:
{#ReturnTable is defined here}
declare cur cursor for
select PersonId from [linked-server-name].DB.dbo.Person
open cur
declare @pId int
fetch next from cur into @pId
while (@@FETCH_STATUS = 0)
begin
begin try
print(@pId)
insert into #ReturnTable exec [linked-server-name].DB.dbo.PersonPerformance @pId
end try
begin catch
print('store person that failed here')
end catch
fetch next from cur into @pId
end
close cur
deallocate cur
I think I understand why it's not working as I expected, for the reason I mentioned earlier (the stored proc continues), but I'm wondering if there's a way to circumvent that issue and catch the execution error even if the proc continues.
Here's a snapshot of the output generated during execution:
101
The statement has been terminated.
Msg 2627, Level 14, State 1, Procedure OSPerfAttribution_ps, Line 626
Violation of PRIMARY KEY constraint 'PK__#SecMaster__2CD2DAF1'. Cannot insert duplicate key in object '#2BDEB6B8'.
The statement has been terminated.
Msg 515, Level 16, State 2, Procedure OSPerfAttribution_ps, Line 1047
Cannot insert the value NULL into column 'BasketBenchmarkId', table '#ReturnTable'; column does not allow nulls. INSERT fails.
(3 row(s) affected)
102
(36 row(s) affected)
106
(32 row(s) affected)
The fact that the error message propagates from the stored proc to the output of the outer SQL makes me believe there must be a way to catch it. I'm just having trouble finding out how, so I'm hoping to get some help here.
As a last resort, I'll probably just save the whole output to a log file and search for errors. That isn't a bad option, but I'd still like to know whether there's a way to catch the errors during execution.
I would use SET XACT_ABORT ON if you want the whole batch rolled back.
http://msdn.microsoft.com/en-us/library/ms188792.aspx
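Applied to the cursor loop above, that suggestion might look like the following sketch (the #FailedIds table is illustrative, and I can't verify the behavior against a linked SQL Server 2000 instance; the idea is that with XACT_ABORT ON a run-time error from the remote proc aborts the INSERT ... EXEC instead of letting it continue):
set xact_abort on -- run-time errors now abort the statement instead of being skipped
declare @pId int
declare cur cursor for
select PersonId from [linked-server-name].DB.dbo.Person
open cur
fetch next from cur into @pId
while (@@FETCH_STATUS = 0)
begin
begin try
insert into #ReturnTable exec [linked-server-name].DB.dbo.PersonPerformance @pId
end try
begin catch
insert into #FailedIds (PersonId) values (@pId) -- record the failure for the email
end catch
fetch next from cur into @pId
end
close cur
deallocate cur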

SQL hangs when executed as SP but is fine as SQL

Greetings,
I have been analyzing a problem with a delete stored procedure. The procedure simply performs a cascading delete of a certain entity.
When I break the SP out into plain SQL in the query editor, it runs in approx. 7 seconds; however, when the SP is executed via EXEC, it takes over 1 minute to run.
I have tried the following with no luck:
Dropped the SP and then recreated it using WITH RECOMPILE
Added SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
The SQL runs in the editor with many concurrent connections without issue.
The EXEC Procedure hangs with or without concurrent connections
The procedure is similar to:
ALTER PROCEDURE [dbo].[DELETE_Something]
(
@SomethingID INT,
@Result INT OUT,
@ResultMessage NVARCHAR(1000) OUT
)--WITH RECOMPILE--!!!DEBUGGING
AS
--SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED--!!!DEBUGGING
SET @Result=1
BEGIN TRANSACTION
BEGIN TRY
DELETE FROM XXXXX --APPROX. 34 Records
DELETE FROM XXXX --APPROX. 227 Records
DELETE FROM XXX --APPROX. 58 Records
DELETE FROM XX --APPROX. 24 Records
DELETE FROM X --APPROX. 14 Records
DELETE FROM A -- 1 Record
DELETE FROM B -- 1 Record
DELETE FROM C -- 1 Record
DELETE FROM D --APROX. 3400 Records !!!HANGS FOR OVER ONE MINUTE TRACING THROUGH SP BUT NOT SQL
GOTO COMMIT_TRANS
END TRY
BEGIN CATCH
GOTO ROLLBACK_TRANS
END CATCH
COMMIT_TRANS:
SET @Result=1
COMMIT TRANSACTION
RETURN
ROLLBACK_TRANS:
SET @Result=0
SET @ResultMessage=CAST(ERROR_MESSAGE() AS NVARCHAR(1000))
ROLLBACK TRANSACTION
RETURN
Make sure your statistics are up to date. Assuming the DELETE statements have some reference to the parameters getting passed, you might try the OPTIMIZE FOR UNKNOWN option, if you are using SQL 2008.
As with any performance problem, you need to measure why it 'hangs'; guessing will get you nowhere fast. Use a methodical approach, like Waits and Queues. The simplest thing to do is look at wait_type, wait_time, and wait_resource in sys.dm_exec_requests for the request doing the EXEC, while it executes the SP; a sketch of such a check follows. Based on what is actually causing the blockage, you can take appropriate action.
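A sketch of that check, run from a second connection while the EXEC is in flight (53 is an illustrative session_id; use the one actually running the procedure):
SELECT session_id, status, command, wait_type, wait_time, wait_resource, blocking_session_id
FROM sys.dm_exec_requests
WHERE session_id = 53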
This was more of a parameter sniffing (or spoofing) issue; it is a rarely used SP. Using OPTION (OPTIMIZE FOR UNKNOWN) on the statement that uses the parameter against a rather large table apparently solved the problem. Thank you SqlACID for the tip.
DELETE FROM ProblemTableWithManyIndexes
WHERE TableID = @TableID
OPTION (OPTIMIZE FOR UNKNOWN)
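For what it's worth, on versions without OPTIMIZE FOR UNKNOWN a common equivalent (a sketch, not part of the original fix) is to copy the parameter into a local variable, so the optimizer compiles the statement without sniffing the caller's value:
DECLARE @LocalTableID INT
SET @LocalTableID = @TableID -- the optimizer cannot sniff the original parameter value
DELETE FROM ProblemTableWithManyIndexes
WHERE TableID = @LocalTableID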
