SQL Queue processing with UPDLOCK, READPAST and still getting deadlocks - sql-server

The following stored procedure is working like a champ. This stored procedure facilitates queue processing for thousands of records and frequently fires simultaneously (sometimes threaded up to 7 or 8 times at any given instant):
Alter Procedure sp.QueueProcessSingle
As
DECLARE @aid int

update TABLE1 set [active] = 1, @aid = aid where aid = (
    SELECT TOP 1 aid
    FROM TABLE1 t1 WITH (UPDLOCK, READPAST)
    WHERE (t1.status is NULL) AND (t1.groupcount = 1) AND (t1.active = 0)
    order by recorddate, recordtime)

update TABLE1 set status = 'Getting worked on' where @aid = aid
I'm happy with the above stored procedure, but it's its 1st cousin that I'm not happy with. A similar stored procedure needs to do the exact same thing as the sp above, only across a group of records. In other words, instead of marking just 1 record as "having work performed on it, so don't grab this record again when the stored procedure executes again," it needs to mark 2, 3, 4 or more records that have similar data (same name and phone).
Here's how I have the bastard cousin sp now and I'm getting deadlocks:
Alter Procedure sp.QueueProcessMultiple
As
DECLARE @aid int
DECLARE @firstname nvarchar(50)
DECLARE @lastname nvarchar(50)
DECLARE @phone nvarchar(50)

update TABLE1 set active = 1, @aid = aid where aid = (
    SELECT TOP 1 aid
    FROM TABLE1 t1 WITH (UPDLOCK, READPAST)
    WHERE (t1.status is NULL) AND (t1.groupcount > 1) AND (t1.active = 0)
    order by recorddate, recordtime)

UPDATE TABLE1 set status = 'Getting worked on' where @aid = aid

/****** Ok, now I have updated the "parent" record of the group which has the earliest date and time, but now I also need to update the other records that are in this group ******/
SELECT @firstname = firstname, @lastname = lastname, @phone = phone
FROM TABLE1 WITH (UPDLOCK, READPAST)
WHERE @aid = aid

UPDATE TABLE1 set status = 'Getting worked on', active = 1
where @firstname = firstname AND @lastname = lastname AND @phone = phone AND status is NULL AND active = 0
And as is often the case and part of the beauty of stackoverflow, just typing this up is shedding a bit of light on it for me. It seems like it would be better to update the whole group at once instead of just updating the "parent" record of the group and then updating all records in the table that have matching data of the parent record. Now how to do that? I'll keep looking and will post a solution if/when I get it, but any input would be much appreciated. Thanks!
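One way to claim the whole group in a single statement (a sketch only; the parent CTE and the @claimed table variable are names introduced here, and the column list simply mirrors the procedures above) is to find the earliest eligible parent and update every row sharing its name and phone under the same UPDLOCK/READPAST hints:

-- Minimal sketch: find the earliest eligible "parent", then mark it and its
-- name/phone siblings in one UPDATE so another caller can't grab them in between.
DECLARE @claimed TABLE (aid int, firstname nvarchar(50), lastname nvarchar(50), phone nvarchar(50));

;WITH parent AS (
    SELECT TOP 1 firstname, lastname, phone
    FROM TABLE1 WITH (UPDLOCK, READPAST)
    WHERE status IS NULL AND groupcount > 1 AND active = 0
    ORDER BY recorddate, recordtime
)
UPDATE t SET
    status = 'Getting worked on',
    active = 1
OUTPUT inserted.aid, inserted.firstname, inserted.lastname, inserted.phone INTO @claimed
FROM TABLE1 t WITH (UPDLOCK, READPAST, ROWLOCK)
JOIN parent p
    ON  t.firstname = p.firstname
    AND t.lastname  = p.lastname
    AND t.phone     = p.phone
WHERE t.status IS NULL AND t.active = 0;

-- @claimed now holds the rows this call marked.

Whether this actually removes the deadlocks would still need to be verified under the same concurrent load.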

Should be able to do this in one statement using DENSE_RANK to get the first aid.
This works if the collected rows all have the same (recorddate, recordtime). You also need a ROWLOCK hint.
update t1
set
    status = 'Getting worked on',
    active = 1,
    @aid = aid, @firstname = firstname, @lastname = lastname, @phone = phone
FROM
(
    SELECT
        DENSE_RANK() OVER (ORDER BY recorddate, recordtime) AS rn,
        aid, firstname, lastname, phone, status, active
    FROM TABLE1 t1x WITH (UPDLOCK, READPAST, ROWLOCK)
    WHERE
        (t1x.status is NULL) AND (t1x.groupcount > 1) AND (t1x.active = 0)
) t1
WHERE
    rn = 1;

Related

How do I loop through a table, search with that data, and then return search criteria and result to new table?

I have a set of records that need to be validated (searched) in a SQL table. I will call these ValData and SearchTable, respectively. A colleague created a SQL query in which a record from ValData can be copied and pasted into a string variable and then searched for in the SearchTable. The best result from the SearchTable is returned. This works very well.
I want to automate this process. I loaded the ValData to SQL in a table like so:
RowID INT, FirstName, LastName, DOB, Date1, Date2, TextDescription.
I want to loop through this set of data, by RowID, and then create a result table that is the ValData joined with the best match from the SearchTable. Again, I already have a query that does that portion. I just need the loop portion, and my SQL skills are virtually non-existent.
Pseudo code would be:
DECLARE @SearchID INT = 1
DECLARE @MaxSearchID INT = 15000
DECLARE @FName VARCHAR(50) = ''
DECLARE @LName VARCHAR(50) = ''
etc...
WHILE @SearchID <= @MaxSearchID
BEGIN
    SET @FNAME = (SELECT [Fname] FROM ValData WHERE [RowID] = @SearchID)
    SET @LNAME = (SELECT [Lname] FROM ValData WHERE [RowID] = @SearchID)
    etc...
    Do colleague's query, and then insert(?) search criteria joined with the result from the SearchTable into a temporary result table.
END
SELECT * FROM FinalResultTable;
My biggest gap is how to create a temporary result table that is ValData's fields + SearchTable's fields, and, during the loop iterations, how to add one row at a time to that temporary result table that includes the ValData joined with the result from the SearchTable.
If it helps, I'm using/wanting to join all fields from ValData and all fields from SearchTable.
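For the temp-table piece specifically, here is a minimal sketch of the pattern (placeholder names only; #FinalResultTable, s.ID and @BestMatchID are not from the colleague's query): SELECT ... INTO with a never-true filter creates an empty table with the combined schema, and INSERT INTO ... SELECT appends one joined row per iteration.

-- Sketch only: create an empty temp table whose schema is ValData's columns plus
-- SearchTable's columns. If the two tables share column names, list and alias the
-- columns explicitly instead of using *.
SELECT v.*, s.*
INTO #FinalResultTable
FROM ValData v
CROSS JOIN SearchTable s
WHERE 1 = 0;    -- copies the combined schema without copying any rows

-- Inside the loop, once the best match for @SearchID is known (@BestMatchID is a
-- placeholder for however the colleague's query identifies it):
INSERT INTO #FinalResultTable
SELECT v.*, s.*
FROM ValData v
JOIN SearchTable s ON s.ID = @BestMatchID
WHERE v.RowID = @SearchID;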
Wouldn't this be far easier with a query like this..?
SELECT FNAME,
       LNAME
FROM ValData
WHERE (FName = @Fname
       OR LName = @Lname)
  AND RowID <= @MaxSearchID
ORDER BY RowID ASC;
There is literally no reason to use a WHILE other than to destroy performance of the query.
With a bit more trial and error, I was able to answer what I was looking for (which, at its core, was creating a temp table and then inserting rows in to it).
CREATE TABLE #RESULTTABLE(
    [feedname] VARCHAR(100),
    ...
    [SCORE] INT,
    [Max Score] INT,
    [% Score] FLOAT(4),
    [RowID] SMALLINT
)

SET @SearchID = 1
SET @MaxSearchID = (SELECT MAX([RowID]) FROM ValidationData)

WHILE @SearchID <= @MaxSearchID
BEGIN
    SET @FNAME = (SELECT [Fname] FROM ValidationData WHERE [RowID] = @SearchID)
    ...
    --BEST MATCH QUERY HERE
    --Select the "top" best match (order not guaranteed) into the RESULTTABLE.
    INSERT INTO #RESULTTABLE
    SELECT TOP 1 *, @SearchID AS RowID
    --INTO #RESULTTABLE
    FROM #TABLE3
    WHERE [% Score] IN (SELECT MAX([% Score]) FROM #TABLE3)

    --Drop temp tables that were created/used during best match query.
    DROP TABLE #TABLE1
    DROP TABLE #TABLE2
    DROP TABLE #TABLE3

    SET @SearchID = @SearchID + 1
END;

--Join the data that was validated (searched) to the results that were found.
SELECT *
FROM ValidationData vd
LEFT JOIN #RESULTTABLE rt ON rt.[RowID] = vd.[RowID]
ORDER BY vd.[RowID]

DROP TABLE #RESULTTABLE
I know this could be improved by doing a join, probably with the "BEST MATCH QUERY" as an inner query. I am just not that skilled yet. This takes a manual process which took hours upon hours and shortens it to just an hour or so.
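As a rough idea of what that join version could look like (the colleague's best-match query isn't shown in the question, so the APPLY body below is only a stand-in for it, and SomeScore is an invented column):

-- Sketch of the set-based shape: one OUTER APPLY per ValidationData row replaces
-- the WHILE loop; the inner SELECT stands in for the real best-match query.
SELECT vd.*, bm.*
FROM ValidationData vd
OUTER APPLY (
    SELECT TOP 1 s.*                    -- "best" match for this one row
    FROM SearchTable s
    WHERE s.LastName = vd.LastName      -- placeholder predicate; use the real matching logic
    ORDER BY s.SomeScore DESC           -- placeholder ranking; use the real [% Score] ordering
) bm
ORDER BY vd.RowID;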

Modify SQL Server stored procedure to check for condition before executing Update statement

Azure SQL Server - I have an inherited stored procedure which runs asynchronously, triggered by an Azure Service Fabric service which runs infinitely:
PROCEDURE Sources.ForIndexing
    (@SourceId BIGINT)
AS
BEGIN
    SET NOCOUNT ON

    DECLARE @BatchId uniqueidentifier

    SELECT @BatchId = CaptureBatch
    FROM [Data].[Raw]
    WHERE CaptureId = (SELECT MAX(CaptureId)
                       FROM [Data].[Raw]
                       WHERE SourceId = @SourceId)

    UPDATE [Data].[Raw]
    SET [Status] = 501
    WHERE SourceId = @SourceId
      AND CaptureBatch = @BatchId
END
GO
In this Data.Raw table, CaptureId is the primary key and is auto-incrementing. The records in this table are grouped by SourceId and CaptureBatch. One SourceId can have several CaptureBatch values. The first part of this procedure finds the latest CaptureBatch group by looking at the MAX CaptureId of a given SourceId. The UPDATE statement then sets the Status column of those records to 501.
What I need to do is add a condition to the stored procedure so that, after the SELECT statement runs, any record this procedure touches whose Status column already has a value of 1 is skipped by the UPDATE statement.
I thought it might be as simple as modifying the SELECT part to say:
WHERE CaptureId = (SELECT MAX(CaptureId)
                   FROM [Data].[Raw]
                   WHERE SourceId = @SourceId
                     AND Status <> 1)
But I believe that's only going to select a Status that's not 1 for that one record which contains the MAX CaptureId, correct? I may be overthinking this, but it seems I need some kind of IF statement added to this.
SELECT TOP (1)
       @BatchId = r.CaptureBatch
FROM [Data].[Raw] r
WHERE r.SourceId = @SourceId
ORDER BY r.CaptureId DESC

UPDATE r SET
    [Status] = 501
FROM [Data].[Raw] r
WHERE r.SourceId = @SourceId
  AND r.CaptureBatch = @BatchId
  AND r.Status <> 1
IF (SELECT count(CaptureId)
    FROM [Data].[Raw]
    WHERE SourceId = @SourceId and Status = 1) > 0
BEGIN
    UPDATE [Data].[Raw]
    SET [Status] = 501
    WHERE SourceId = @SourceId
      AND CaptureBatch = @BatchId
END
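For completeness, the batch lookup and the per-row guard can also be folded into a single statement (a sketch only, following the same Status <> 1 convention as the first answer above):

-- Single-statement sketch: resolve the latest batch inline and skip any row
-- already flagged with Status = 1 (same per-row rule as the first answer).
UPDATE r SET
    [Status] = 501
FROM [Data].[Raw] r
WHERE r.SourceId = @SourceId
  AND r.Status <> 1
  AND r.CaptureBatch = (SELECT TOP (1) r2.CaptureBatch
                        FROM [Data].[Raw] r2
                        WHERE r2.SourceId = @SourceId
                        ORDER BY r2.CaptureId DESC);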

Multiple select queries execution one after other

I have six select queries with different where conditions. If the first select query returns nothing, it should fall through to the next select query, and so on. What is the best approach for writing this as a stored procedure in SQL Server?
You can use @@ROWCOUNT
DECLARE @OperatorID INT = 4, @CurrentCalendarView VARCHAR(50) = 'month';

declare @t table (operatorID int, CurrentCalendarView varchar(50));
insert into @t values (2, 'year');

select operatorID - 1, CurrentCalendarView from @t where 1 = 2

if (@@ROWCOUNT = 0)
begin
    select operatorID + 1, CurrentCalendarView from @t where 1 = 1
end
If I understand your question correctly, then you can achieve this with a sample like the one below.
if NOT EXISTS (SELECT TOP(1) 'x' FROM table WHERE id = @myId)
BEGIN
    IF NOT EXISTS (SELECT TOP(1) 'x' FROM table2 WHERE id = @myId2)
    BEGIN
        IF NOT EXISTS (SELECT TOP(1) 'x' FROM table3 WHERE id = @myID3)
        BEGIN
        END
    END
END
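Putting the @@ROWCOUNT idea into the shape of the requested stored procedure might look something like this (a sketch only; the table names, columns and conditions are placeholders, not from the question):

-- Sketch: run the queries in order and stop at the first one that returns rows.
-- Note that every SELECT that executes still sends a (possibly empty) result set.
CREATE PROCEDURE dbo.GetFirstMatch
    (@Id INT)
AS
BEGIN
    SET NOCOUNT ON;   -- suppresses row-count messages only; @@ROWCOUNT still works

    SELECT * FROM TableA WHERE ColA = @Id;      -- condition 1
    IF (@@ROWCOUNT > 0) RETURN;

    SELECT * FROM TableB WHERE ColB = @Id;      -- condition 2
    IF (@@ROWCOUNT > 0) RETURN;

    SELECT * FROM TableC WHERE ColC = @Id;      -- condition 3
    IF (@@ROWCOUNT > 0) RETURN;

    -- ...repeat the same pattern for the remaining three conditions
END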

Need to speed up SQL Server SP that uses system metadata

Let me apologize in advance for the length of this question. I don't see how to ask it without giving all the definitions.
I've inherited a SQL Server 2005 database that includes a homegrown implementation of change tracking. Through triggers, changes to virtually every field in the database are stored in a set of three tables. In the application for this database, the user can request the history of various items, and what's returned is not just changes to the item itself, but also changes in related tables. The problem is that in some cases, it's painfully slow, and in some cases, the request eventually crashes the application. The client has also reported other users having problems when someone requests history.
The tables that store the change data are as follows:
CREATE TABLE [dbo].[tblSYSChangeHistory](
[id] [bigint] IDENTITY(1,1) NOT NULL,
[date] [datetime] NULL,
[obj_id] [int] NULL,
[uid] [varchar](50) NULL
This table tracks the tables that have been changed. Obj_id is the value that Object_ID() returns.
CREATE TABLE [dbo].[tblSYSChangeHistory_Items](
[id] [bigint] IDENTITY(1,1) NOT NULL,
[h_id] [bigint] NOT NULL,
[item_id] [int] NULL,
[action] [tinyint] NULL
This table tracks the items that have been changed. h_id is a foreign key to tblSYSChangeHistory. item_id is the PK of the changed item in the specified table. action indicates insert, delete or change.
CREATE TABLE [dbo].[tblSYSChangeHistory_Details](
[id] [bigint] IDENTITY(1,1) NOT NULL,
[i_id] [bigint] NOT NULL,
[col_id] [int] NOT NULL,
[prev_val] [varchar](max) NULL,
[new_val] [varchar](max) NULL
This table tracks the individual changes. i_id is a foreign key to tblSYSChangeHistory_Items. col_id indicates which column was changed, and prev_val and new_val indicate the original and new values for that field.
There's actually a fourth table that supports this architecture. tblSYSChangeHistory_Objects maps plain English descriptions of operations to particular tables in the database.
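To make the layout concrete, here is a hypothetical illustration of how a single column change on one row would land across the three tables (the table name, ids and values below are invented, not taken from the database):

-- Hypothetical example only: a change to one column of item 42 in some tracked table.
-- 1. One header row records which table changed, when, and by whom.
INSERT INTO dbo.tblSYSChangeHistory ([date], obj_id, [uid])
VALUES (GETDATE(), OBJECT_ID('dbo.SomeTrackedTable'), 'jsmith');

-- 2. One row per changed item, pointing back at the header (h_id) and at the
--    item's primary key in the tracked table; action 2 might mean "update".
INSERT INTO dbo.tblSYSChangeHistory_Items (h_id, item_id, [action])
VALUES (SCOPE_IDENTITY(), 42, 2);

-- 3. One row per changed column, pointing back at the item row (i_id), with the
--    column id plus the before/after values.
INSERT INTO dbo.tblSYSChangeHistory_Details (i_id, col_id, prev_val, new_val)
VALUES (SCOPE_IDENTITY(), 3, 'old value', 'new value');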
The code to look up the history for an item is incredibly convoluted. It's one branch of a very long SP. Relevant parameters are as follows:
@action varchar(50),
@obj_id bigint = 0,
@uid varchar(50) = '',
@prev_val varchar(MAX) = '',
@new_val varchar(MAX) = '',
@start_date datetime = '',
@end_date datetime = ''
I'm storing them to local variables right away (because I was able to significantly speed up another SP by doing so):
declare @iObj_id bigint,
        @cUID varchar(50),
        @cPrev_val varchar(max),
        @cNew_val varchar(max),
        @tStart_date datetime,
        @tEnd_date datetime

set @iObj_id = @obj_id
set @cUID = @uid
set @cPrev_val = @prev_val
set @cNew_val = @new_val
set @tStart_date = @start_date
set @tEnd_date = @end_date
And here's the code from that branch of the SP:
create table #r (obj_id int, item_id int, l tinyint)
create clustered index #ri on #r (obj_id, item_id)

insert into #r
select object_id(obj_name), @iObj_id, 0
from dbo.tblSYSChangeHistory_Objects
where obj_type = 'U' and descr = cast(@cPrev_val AS varchar(150))

declare @i tinyint, @cnt int
set @i = 1

while @i <= 4
begin
    insert into #r
    select obj_id, item_id, @i
    from dbo.vSYSChangeHistoryFK a with (nolock)
    where exists (select null from #r where obj_id = a.rel_obj_id and item_id = a.rel_item_id and l = @i - 1)
        and not exists (select null from #r where obj_id = a.obj_id and item_id = a.item_id)

    set @cnt = @@rowcount

    insert into #r
    select rel_obj_id, rel_item_id, @i
    from dbo.vSYSChangeHistoryFK a with (nolock)
    where object_name(obj_id) not in (<this is a list of particular tables in the database>)
        and exists (select null from #r where obj_id = a.obj_id and item_id = a.item_id and l between @i - 1 and @i)
        and not exists (select null from #r where obj_id = a.rel_obj_id and item_id = a.rel_item_id)

    set @i = case @cnt + @@rowcount when 0 then 100 else @i + 1 end
end

select date, obj_name, item, [uid], [action],
    pkey, item_id, id, key_obj_id into #tCH_R
from dbo.vSYSChangeHistory a with (nolock)
where exists (select null from #r where obj_id = a.obj_id and item_id = a.item_id)
    and (@cUID = '' or uid = @cUID)
    and (@cNew_val = '' or [action] = @cNew_val)

declare ch_item_cursor cursor for
    select distinct pkey, key_obj_id, item_id
    from #tCH_R
    where item = '' and pkey <> ''

open ch_item_cursor

fetch next from ch_item_cursor
into @cPrev_val, @iObj_id, @iCol_id

while @@fetch_status = 0
begin
    set @SQLStr = 'select @val = ' + @cPrev_val +
        ' from ' + object_name(@iObj_id) + ' with (nolock)' +
        ' where id = @id'

    exec sp_executesql @SQLStr,
        N'@val varchar(max) output, @id int',
        @cNew_val output, @iCol_id

    update #tCH_R
    set item = @cNew_val
    where key_obj_id = @iObj_id
        and item_id = @iCol_id

    fetch next from ch_item_cursor
    into @cPrev_val, @iObj_id, @iCol_id
end

close ch_item_cursor
deallocate ch_item_cursor

select date, obj_name,
    cast(item AS varchar(254)) AS item,
    uid, [action],
    cast(id AS int) AS id
from #tCH_R
order by id

return
As you can see, the code uses a view. Here's that definition:
ALTER VIEW [dbo].[vSYSChangeHistoryFK]
AS
SELECT i.obj_id, i.item_id, c1.parent_object_id AS rel_obj_id, i2.item_id AS rel_item_id
FROM dbo.vSYSChangeHistoryItemsD AS i INNER JOIN
sys.foreign_key_columns AS c1 ON c1.referenced_object_id = i.obj_id AND c1.constraint_column_id = 1 INNER JOIN
dbo.vSYSChangeHistoryItemsD AS i2 ON c1.parent_object_id = i2.obj_id INNER JOIN
dbo.tblSYSChangeHistory_Details AS d1 ON d1.i_id = i.min_id AND d1.col_id = c1.referenced_column_id INNER JOIN
dbo.tblSYSChangeHistory_Details AS d1k ON d1k.i_id = i2.min_id AND d1k.col_id = c1.parent_column_id AND ISNULL(d1.new_val,
ISNULL(d1.prev_val, '')) = ISNULL(d1k.new_val, ISNULL(d1k.prev_val, '')) --LEFT OUTER JOIN
UNION ALL
SELECT i0.obj_id, i0.item_id, c01.parent_object_id AS rel_obj_id, i02.item_id AS rel_item_id
FROM dbo.vSYSChangeHistoryItemsD AS i0 INNER JOIN
sys.foreign_key_columns AS c01 ON c01.referenced_object_id = i0.obj_id AND c01.constraint_column_id = 1 AND col_name(c01.referenced_object_id,
c01.referenced_column_id) = 'ID' INNER JOIN
dbo.vSYSChangeHistoryItemsD AS i02 ON c01.parent_object_id = i02.obj_id INNER JOIN
dbo.tblSYSChangeHistory_Details AS d01k ON i02.min_id = d01k.i_id AND d01k.col_id = c01.parent_column_id AND ISNULL(d01k.new_val,
d01k.prev_val) = CAST(i0.item_id AS varchar(max))
And finally, that view uses one more view:
ALTER VIEW [dbo].[vSYSChangeHistoryItemsD]
AS
SELECT h.obj_id, m.item_id, MIN(m.id) AS min_id
FROM dbo.tblSYSChangeHistory AS h INNER JOIN
dbo.tblSYSChangeHistory_Items AS m ON h.id = m.h_id
GROUP BY h.obj_id, m.item_id
Working with the Profiler, it appears that view vSYSChangeHistoryFK is the big culprit, and my testing suggests that the particular problem is in the join between the two copies of vSYSChangeHistoryItemsD and the foreign_key_columns table.
I'm looking for any ideas on how to give acceptable performance here. The client reports sometimes waiting as much as 15 minutes without getting results. I've tested up to nearly 10 minutes with no result in at least one case.
If there were new language elements in 2008 or later that would solve this, I think the client would be willing to upgrade.
Thanks.
Wow that's a mess. Your big gain should be in removing the cursor. I see 'where exists' - that's nice and efficient b/c as soon as it finds one match it aborts. And I see 'where not exists' - by definition that has to scan everything. Is it finding the top 4? You can do better using ROW_NUMBER() OVER (PARTITION BY [whatever makes it unique] ORDER BY [whatever your id is]). It's hard to tell. select object_id(obj_name), @iObj_id, 0 makes it seem like only the @i=1 loop actually does anything (?)
If that is what it's doing, you could write it as
SELECT * from
(
    select ROW_NUMBER() OVER (PARTITION BY obj_id ORDER BY item_id desc) as Row,
        obj_id, item_id
    FROM dbo.vSYSChangeHistoryFK a with (nolock)
    where obj_type = 'U' and descr = cast(@cPrev_val AS varchar(150))
) paged
where Row between 1 and 4
ORDER BY Row
A DBA level change that could help would be to set up a partitioning scheme based on date. Roll over to a new partition every so often. Put the old partitions on different disks. Most queries may only need to hit the recent partition, which will be say 1/5th the size that it used to be, making it much faster without changing anything else.
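As a rough illustration of that idea (a sketch with invented boundary dates and filegroup names; note that table partitioning on SQL Server 2005 requires Enterprise Edition):

-- Sketch: partition tblSYSChangeHistory by its [date] column, one partition per year.
-- Boundary dates and filegroup names below are made up for illustration.
CREATE PARTITION FUNCTION pfChangeHistoryByDate (datetime)
AS RANGE RIGHT FOR VALUES ('2009-01-01', '2010-01-01', '2011-01-01');

CREATE PARTITION SCHEME psChangeHistoryByDate
AS PARTITION pfChangeHistoryByDate
TO ([FG_Hist_Old], [FG_Hist_2009], [FG_Hist_2010], [FG_Hist_Current]);

-- The table (or its clustered index) would then be rebuilt on the scheme, e.g.:
-- CREATE CLUSTERED INDEX IX_tblSYSChangeHistory_date
--     ON dbo.tblSYSChangeHistory ([date]) ON psChangeHistoryByDate ([date]);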
Not a full answer, sorry. That mess would take hours to parse

SP: handling nulls

I have this Table structure:
Id int not null --PK
Title varchar(50)
ParentId int null --FK to same Table.Id
I'm writing an SP that returns a row's "brothers"; here's the code:
select * from Table
where Table.ParentId = (select Table.ParentId from Table where Table.id = @Id)
    and Table.Id <> @Id
It works perfectly for rows that have a parent, but for those whose parent is null (root records), it returns no rows. This is expected, since null = null never evaluates to true.
I'm looking for help on how to better design my SP to handle this specific case. I'm not a DBA and my TSQL knowledge is basic.
EDIT: I've updated my SQL query like this:
DECLARE @Id INT
SET @Id = 1

DECLARE @ParentId INT
SET @ParentId = (SELECT Table.ParentId FROM Table WHERE Table.Id = @Id)

SELECT * FROM Table
WHERE (
        (@ParentId IS NULL AND (Table.ParentId IS NULL))
        OR (Table.ParentId = @ParentId)
    )
    AND Table.Id <> @Id
It does the job, but if the Id is not in the table, it still returns the rows that have no parent. Going to lunch; will continue looking at this later.
Thanks in advance,
Fabian
I'm not sure this is the best solution, but you could try the COALESCE function, using a "not valid" id in place of NULL:
select * from Table
where COALESCE(Table.ParentId, -1) = (select COALESCE(Table.ParentId, -1) from Table where Table.id = @Id)
    and Table.Id <> @Id
Assuming -1 is never used as an ID
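Another null-safe option, not part of the original answer, avoids reserving a sentinel value by letting INTERSECT do the NULL-aware comparison:

-- Sketch: INTERSECT treats two NULLs as equal, so this matches siblings whether
-- the parent id is NULL or not, without assuming -1 is never a real id.
select b.*
from Table b
where b.Id <> @Id
  and exists (select b.ParentId
              intersect
              select p.ParentId from Table p where p.Id = @Id);

As a side effect, when @Id does not exist in the table the INTERSECT is empty, so nothing is returned, which also covers the edge case mentioned in the edit above.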
It's possible I have not understood your problem description; however, in order to return Brothers only when they exist for a given Parent, the following query should suffice:
select Brother.*
from Table Parent
    inner join Table Brother on
        Parent.id = Brother.ParentID
where Parent.Id = @Id and Brother.Id <> @Id
