Would selecting on too many fields cause a table lock? - sql-server

I have a SELECT statement in a stored procedure that, under very heavy load, results in a timeout: "Lock request time out period exceeded." - at least that's what .NET throws. It is a pretty simple query on a table (CODES) that has a primary key and a clustered index (on TYPE_CODE1). The only thing that looks out of the ordinary is that there are many fields being filtered on (all of them except Dclass are bit fields). Would this cause the lock on the table? Any other ideas?
TIA
T
select
@TYPE_CODE1 = TYPE_CODE1,
@ALTERNATE_CODE = ALTERNATE_CODE,
@BANNER = BANNER,
@CODE_1 = CODE_1,
@CODE_2 = CODE_2,
@CODE_3 = CODE_3,
@CODE_4 = CODE_4
from CODES with (nolock)
where
Dclass = @Dclass
and Ret = @Ret
and Rem = @Rem
and Ope = @Ope
and Res = @Res
and Cer = @Cer
and Cdo = @Cdo
and Del = @Del
and Sig = @Sig
and Ads = @Ads
and Adr = @Adr
and Emi = @Emi
and In1 = @In1
and In2 = @In2
and Paa = @Paa
and Reg = @Reg
and Red = @Red
and Rer = @Rer
and Ree = @Ree
and Rei = @Rei
and Spe = @Spe
and Mer = @Mer
and Hol = @Hol
and Day = @Day
and Sca = @Sca
and Sis = @Sis
and Poa = @Poa
and Haz = @Haz
and Sun = @Sun
and Out = @Out
and IsActive = 1

Lock Request Timeout exceeded doesn't always directly correlate to a table lock. That error means the query was waiting to obtain a lock on an object in SQL Server but couldn't get it fast enough, so the query timed out.
Additionally, SQL Server uses a process called lock escalation: if a query requires more than roughly 5,000 page/row-level locks, it will try to take a full table lock instead. If you are reaching that threshold and trying to take a table lock out, the query could be getting stuck behind some other process that already has a lock on the table.
I'd try running your application and, at the same time in Management Studio, use a tool like sp_WhoIsActive to find out what's blocking your application and causing it to time out. Odds are some other process has a lock on the table you are trying to query.
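If sp_WhoIsActive isn't available, a minimal sketch of the same check using only the built-in DMVs (no assumptions beyond a session currently waiting on a lock) looks like this:

-- Sessions that are currently waiting on a lock, and who is blocking them
SELECT
    r.session_id,
    r.blocking_session_id,      -- the session holding the conflicting lock
    r.wait_type,                -- e.g. LCK_M_S, LCK_M_IX
    r.wait_time AS wait_time_ms,
    t.text AS waiting_sql
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.blocking_session_id <> 0;

Run it while the timeout is reproducing; blocking_session_id points at the process holding the lock your query is waiting on.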

SELECT statements that use the NOLOCK hint don't take shared locks on the data they read, so they don't normally block or wait on data locks. (They do still take a schema stability lock, so they can wait behind a schema change such as an ALTER TABLE or an index rebuild.) So it usually shouldn't be this statement that caused the error.

Related

SQL Threadsafe UPDATE TOP 1 for FIFO Queue

I have a table of invoices being prepared, and then ready for printing.
The [STATUS] column is Draft, Print, Printing, or Printed.
I need to get the ID of the first (FIFO) record to be printed, and change the record's status. The operation must be thread-safe so that another process does not select the same InvoiceID.
Can I do this (it looks atomic to me, but maybe not ...):
1:
WITH CTE AS
(
SELECT TOP(1) [InvoiceID], [Status]
FROM INVOICES
WHERE [Status] = 'Print'
ORDER BY [PrintRequestedDate], [InvoiceID]
)
UPDATE CTE
SET [Status] = 'Printing'
, @InvoiceID = [InvoiceID]
... perform operations using @InvoiceID ...
UPDATE INVOICES
SET [Status] = 'Printed'
WHERE [InvoiceID] = @InvoiceID
or must I use this (for the first statement)?
2:
UPDATE INVOICES
SET [Status] = 'Printing'
, @InvoiceID = [InvoiceID]
WHERE [InvoiceID] =
(
SELECT TOP 1 [InvoiceID]
FROM INVOICES WITH (UPDLOCK)
WHERE [Status] = 'Print'
ORDER BY [PrintRequestedDate], [InvoiceID]
)
... perform operations using @InvoiceID ... etc.
(I cannot hold a transaction open from changing status to "Printing" until the end of the process, i.e. when status is finally changed to "Printed").
EDIT:
In case it matters, the DB is READ_COMMITTED_SNAPSHOT.
I can hold a transaction around both the UPDATE of status to "Printing" AND getting the ID. But I cannot keep the transaction open all the way through to changing the status to "Printed". This is an SSRS report, and it makes several different queries to SQL Server to get various bits of the invoice, and it might crash/whatever, leaving the transaction open.
@Gordon Linoff "If you want a queue" - the FIFO sequence is not critical, I would just like invoices that are requested first to be printed first ... "more or less" (I don't want any unnecessary complexity ...)
@Martin Smith "looks like a usual table as queue requirement" - yes, exactly that, thanks for the very useful link.
SOLUTION:
The solution I am adopting is from the comments:
@lad2025 pointed me to SQL Server Process Queue Race Condition, which uses WITH (ROWLOCK, READPAST, UPDLOCK), and @MartinSmith explained what the isolation issue is and pointed me at Using tables as Queues, which talks about exactly what I am trying to do.
I have not grasped why UPDATE TOP 1 is safe while UPDATE MyTable SET xxx = yyy WHERE MyColumn = (SELECT TOP 1 SomeColumn FROM SomeTable ORDER BY AnotherColumn) (without isolation hints) is not, and I ought to educate myself, but I'm happy just to put the isolation hints in my code and get on with something else :)
Thanks for all the help.
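For reference, a minimal sketch of the dequeue pattern those hints produce (table and column names are taken from the question; this illustrates the technique, not the exact code adopted):

DECLARE @InvoiceID int;

WITH NextInvoice AS
(
    SELECT TOP (1) [InvoiceID], [Status]
    FROM INVOICES WITH (ROWLOCK, READPAST, UPDLOCK)
    WHERE [Status] = 'Print'
    ORDER BY [PrintRequestedDate], [InvoiceID]
)
UPDATE NextInvoice
SET [Status] = 'Printing'
  , @InvoiceID = [InvoiceID];

-- @InvoiceID is NULL if no unlocked 'Print' row was available; otherwise this
-- session now owns that row, and concurrent sessions skip past it (READPAST).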
My concern would be duplicate [InvoiceID] values - multiple print requests for the same [InvoiceID].
On the first update, ONE row gets set to [Status] = 'Printing'.
On the second update, all rows for that [InvoiceID] get set to [Status] = 'Printed' - this would even set rows still at status 'Draft'. Maybe that is what you want.
Another process could pick up the same [InvoiceID] before the set to [Status] = 'Print', so some duplicates will print and some will not. I go with the comments on using the update lock.
This is non-deterministic, but you could just take top (1) and skip the order by. You will tend to get the most recent row, but it is not guaranteed. If you clear the queue then you get them all.
This demonstrates you can lose 'draft' = 1:
declare @invID int;
declare @T table (iden int identity primary key, invID int, status tinyint);
insert into @T values (1, 2), (5, 1), (3, 1), (4, 1), (4, 2), (2, 1), (1, 1), (5, 2), (5, 2);
declare @iden int;
select * from @T order by iden;
declare @rowcount int = 1;
while (@rowcount > 0)
begin
update top (1) t
set t.status = 3, @invID = t.invID, @iden = t.iden
from @T t
where t.status = 2;
set @rowcount = @@ROWCOUNT;
if (@rowcount > 0)
begin
select @invID, @iden;
-- do stuff
update t
set t.status = 4
from @T t
where t.invID = @invID; -- t.iden = @iden;
select * from @T order by iden;
end
end
Atomicity of a Single Statement
I think your code's fine as it is. Because you have a single statement that updates the status to 'Printing', the status is updated as soon as the statement runs; any process running before it that searched for 'Print' would have updated the same record to 'Printing' before your process saw it, so your process would pick a subsequent record, and any process hitting the table after your statement had run would see the record as 'Printing' and would not pick it up. There isn't really a scenario where another process could pick it up whilst the statement's running since, as discussed, a single SQL statement should be atomic.
Disclaimer
That said, I'm not enough of an expert to say whether explicit lock hints would help; in my opinion they're not needed as the above is atomic, but others in the comments are likely better informed than me. However, running a test (albeit where database and both threads are run on the same machine) I couldn't create a race condition... maybe if the clients were on different machines / if there was a lot more concurrency you'd be more likely to see issues.
My hope is that others have interpreted your question differently, hence the disagreement.
Attempt to Disprove Myself
Here's the code I used to try to cause a race condition; you can drop it into LINQPad 5, select language "C# Program", tweak the connection string (and optionally any statements) as required, then run:
const long NoOfRecordsToTest = 1000000;
const string ConnectionString = "Server=.;Database=Play;Trusted_Connection=True;"; //assumes a database called "play"
const string DropFifoQueueTable = @"
if object_id('FIFOQueue') is not null
drop table FIFOQueue";
const string CreateFifoQueueTable = @"
create table FIFOQueue
(
Id bigint not null identity (1,1) primary key clustered
, Processed bit default (0) --0=queued, null=processing, 1=processed
)";
const string GenerateDummyData = @"
with cte as
(
select 1 x
union all
select x + 1
from cte
where x < @NoRowsToGenerate
)
insert FIFOQueue(processed)
select 0
from cte
option (maxrecursion 0)
";
const string GetNextFromQueue = @"
with singleRecord as
(
select top (1) Id, Processed
from FIFOQueue --with(updlock, rowlock, readpast) --optionally include this per comment discussions
where processed = 0
order by Id
)
update singleRecord
set processed = null
output inserted.Id";
//we don't really need this last bit for our demo; I've included in case the discussion turns to this..
const string MarkRecordProcessed = @"
update FIFOQueue
set Processed = 1
where Id = @Id";
void Main()
{
SetupTestDatabase();
var task1 = Task<IList<long>>.Factory.StartNew(() => ExampleTaskForced(1));
var task2 = Task<IList<long>>.Factory.StartNew(() => ExampleTaskForced(2));
Task.WaitAll(task1, task2);
foreach (var processedByBothThreads in task1.Result.Intersect(task2.Result))
{
Console.WriteLine("Both threads processed id: {0}", processedByBothThreads);
}
Console.WriteLine("done");
}
static void SetupTestDatabase()
{
RunSql<int>(new SqlCommand(DropFifoQueueTable), cmd => cmd.ExecuteNonQuery());
RunSql<int>(new SqlCommand(CreateFifoQueueTable), cmd => cmd.ExecuteNonQuery());
var generateData = new SqlCommand(GenerateDummyData);
var param = generateData.Parameters.Add("@NoRowsToGenerate", SqlDbType.BigInt);
param.Value = NoOfRecordsToTest;
RunSql<int>(generateData, cmd => cmd.ExecuteNonQuery());
}
static IList<long> ExampleTaskForced(int threadId) => new List<long>(ExampleTask(threadId)); //needed to prevent lazy loading from causing issues with our tests
static IEnumerable<long> ExampleTask(int threadId)
{
long? x;
while ((x = ProcessNextInQueue(threadId)).HasValue)
{
yield return x.Value;
}
//yield return 55; //optionally return a fake result just to prove that were there a duplicate we'd catch it
}
static long? ProcessNextInQueue(int threadId)
{
var id = RunSql<long?>(new SqlCommand(GetNextFromQueue), cmd => (long?)cmd.ExecuteScalar());
//Debug.WriteLine("Thread {0} is processing id {1}", threadId, id?.ToString() ?? "[null]"); //if you want to see how we're doing, uncomment this line (commented out to improve performance / increase the likelihood of a collision)
/* then if we wanted to do the second bit we could include this
if(id.HasValue) {
var markProcessed = new SqlCommand(MarkRecordProcessed);
var param = markProcessed.Parameters.Add("@Id", SqlDbType.BigInt);
param.Value = id.Value;
RunSql<int>(markProcessed, cmd => cmd.ExecuteNonQuery());
}
*/
return id;
}
static T RunSql<T>(SqlCommand command, Func<SqlCommand,T> callback)
{
try
{
using (var connection = new SqlConnection(ConnectionString))
{
command.Connection = connection;
command.Connection.Open();
return (T)callback(command);
}
}
catch (Exception e)
{
Debug.WriteLine(e.ToString());
throw;
}
}
Other comments
The above discussion's really only talking about multiple threads taking the next record from the queue whilst avoiding any single record being picked up by multiple threads. There are a few other points...
Race Condition outside of SQL
Per our discussion, if FIFO is mandatory there are other things to worry about: whilst your threads will pick up each record in order, after that it's up to them. e.g. Thread 1 gets record 10, then Thread 2 gets record 11; now Thread 2 sends record 11 to the printer before Thread 1 sends record 10. If they're going to the same printer, your prints would be out of order. If they're different printers, it's not a problem; all prints on any one printer are sequential. I'm going to assume the latter.
Exception Handling
If any exception occurs in a thread that's processing something (i.e. so the thread's record is printing) then consideration should be made on how to handle this. One option is to keep that thread retrying; though that may be indefinite if it's some fundamental fault. Another is to put the record to some error status to be handled by another process / where it's accepted that this record won't appear in order. Finally, if the order of invoices in the queue is an ideal rather than a hard requirement you could have the owning thread put the status back to print so that it or another thread can pick up that record to retry (though again, if there's something fundamentally wrong with the record, this may block up the queue).
My recommendation here is the error status; as then you have more visibility on the issue / can dedicate another process to dealing with issues.
Crash Handling
Another issue is that because your update to printing is not held within a transaction, if the server crashes you leave the record with this status in the database, and when your system comes back online it's ignored. Ways to avoid that are to include a column saying which thread is processing it; so that when the system comes back up that thread can resume where it left off, or to include a date stamp so that after some period of time any records with status printing which "timeout" can be swept up / reset to Error or Print statuses as required.
WITH CTE AS
(
SELECT TOP(1) [InvoiceID], [Status], [ThreadId]
FROM INVOICES
WHERE [Status] = 'Print'
OR ([Status] = 'Printing' and [ThreadId] = @ThreadId) --handle previous crash
ORDER BY [PrintRequestedDate], [InvoiceID]
)
UPDATE CTE
SET [Status] = 'Printing'
, [ThreadId] = @ThreadId
OUTPUT Inserted.[InvoiceID]
Awareness of Other Processes
We've mostly been focussed on the printing element; but other processes may also be interacting with your Invoices table. We can probably assume that aside from creating the initial Draft record and updating it to Print once ready for printing, those processes won't touch the Status field. However, the same records could be locked by completely unrelated processes. If we want to ensure FIFO we must not use the ReadPast hint, as some records may have status Print but be locked, so we'd skip over them despite them having the earlier PrintRequestedDate. However, if we want things to print as quickly as possible, with them being in order when otherwise not inconvenient, including ReadPast will allow our print process to skip the locked records and carry on, coming back to handle them once they're released.
Similarly another process may lock our record when it's at Printing status, so we can't update it to mark it as complete. Again, if we want to avoid this from causing a holdup we can make use of the ThreadId column to allow our thread to leave the record at status Printing and come back to clean it up later when it's not locked. Obviously this assumes that the ThreadId column is only used by our print process.
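A hedged sketch of that later clean-up pass (the ThreadId column and @ThreadId variable come from the crash-handling snippet above; this is an illustration, not production code):

-- Retry pass: finish off rows this thread printed but could not mark earlier.
-- READPAST skips rows still locked by other processes; we catch them next time round.
UPDATE INVOICES WITH (READPAST)
SET [Status] = 'Printed'
WHERE [Status] = 'Printing'
  AND [ThreadId] = @ThreadId;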
Have a dedicated print queue table
To avoid some of the issues with unrelated processes locking your invoices, move the Status field into its own table, so you only need to read from the invoices table, not update it.
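A minimal sketch of what that separate queue table might look like (all names here are illustrative assumptions, and it presumes Invoices(InvoiceId) is the primary key):

create table PrintQueue
(
    PrintQueueId bigint identity(1,1) not null primary key clustered
  , InvoiceId int not null foreign key references Invoices(InvoiceId)
  , [Status] varchar(10) not null default ('Print') -- Print / Printing / Printed
  , PrintRequestedDate datetime2 not null default (sysutcdatetime())
  , ThreadId int null
)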
This will also have the advantage that (if you don't care about print history) you can delete the record once done, so you'll get better performance (as you won't have to search through the entire invoices table to find those which are ready to print). That said, there's another option for this one (if you're on SQL2008 or above).
Use Filtered Index
Since the Status column will be updated several times, it's not a great candidate for an index; the record's position in the index jumps from branch to branch as the status progresses.
However, since we're filtering on it, it's also something that would really benefit from having an index.
To get around this contradiction, one option's to use a filtered index; i.e. only index those records we're interested in for our print process; so we maintain a small index for a big benefit.
create nonclustered index ixf_Invoices_PrintStatusAndDate
on dbo.Invoices ([Status], [PrintRequestedDate])
include ([InvoiceId]) --just so we don't have to go off to the main table for this, but can get all we need from the index
where [Status] in ('Print','Printing')
Use "Enums" / Reference Tables
I suspect your example uses strings to keep the demo code simple, however including this for completeness.
Using strings in databases can make things hard to support. Rather than having the status be a string value, use an ID from a related Statuses table.
create table Statuses
(
ID smallint not null primary key clustered --not identity since we'll likely mirror this enum in code / may want to have defined ids
,Name varchar(16) not null --data type assumed; pick a length to suit
)
go
insert Statuses
values
(0, 'Draft')
, (1,'Print')
, (2,'Printing')
, (3,'Printed')
create table Invoices
(
--...
, StatusId smallint foreign key references Statuses(Id)
--...
)

Microsoft SQL Server: wrong query execution plan taking too long

This is on a Windows SQL Server cluster.
The query comes from a 3rd-party application, so I cannot modify it permanently.
The query is:
DECLARE @FromBrCode INT = 1001
DECLARE @ToBrCode INT = 1637
DECLARE @Cdate DATE = '31-mar-2017'
SELECT
a.PrdCd, a.Name, SUM(b.Balance4) as Balance
FROM
D009021 a, D010014 b
WHERE
a.PrdCd = LTRIM(RTRIM(SUBSTRING(b.PrdAcctId, 1, 8)))
AND substring(b.PrdAcctId, 9, 24) = '000000000000000000000000'
AND a.LBrCode = b.LBrCode
AND a.LBrCode BETWEEN @FromBrCode AND @ToBrCode
AND b.CblDate = (SELECT MAX(c.CblDate)
FROM D010014 c
WHERE c.PrdAcctId = b.PrdAcctId
AND c.LBrCode = b.LBrCode
AND c.CblDate <= @Cdate)
GROUP BY
a.PrdCd, a.Name
HAVING
SUM(b.Balance4) <> 0
ORDER BY
a.PrdCd
This particular query takes too much time to complete. The same problem happens on a different SQL Server.
No table lock was found, and processor and memory usage are normal while the query is running.
A plain "select top 1000" works and shows output instantly on both tables (D009021, D010014).
Reindex and rebuild / update stats were done on both tables but the problem was not resolved (D009021, D010014).
The same query works if we reduce the number of branches, but slowly:
(
DECLARE @FromBrCode INT = 1001
DECLARE @ToBrCode INT = 1001
)
The same query works faster, giving output within 2 mins, if we replace any one variable and use the value directly:
AND a.LBrCode BETWEEN @FromBrCode AND @ToBrCode
changed to
AND a.LBrCode BETWEEN 1001 AND @ToBrCode
The same query works faster, giving output within 2 mins, if we add "OPTION (RECOMPILE)" at the end.
I tried clearing the cached query execution plan so a new one would be optimized, but the problem still exists.
Found that the query's estimated plan and actual execution plan are different (see screenshots).
Table D010014 is aliased twice, once as b and once as c, and then they are joined to the same table.
Try to remove the sub query below and create a temp table to store the values you need (a sketch of that approach follows the snippets below). I added * to the fields you self join.
SELECT MAX(c.CblDate)
FROM D010014 c
WHERE c.PrdAcctId = b.PrdAcctId
AND c.LBrCode = b.LBrCode
AND c.CblDate <= @Cdate
If you can't do that then try:
SELECT TOP 1 c.CblDate
FROM D010014 c
WHERE c.PrdAcctId = b.PrdAcctId
AND c.LBrCode = b.LBrCode
AND c.CblDate <= @Cdate
ORDER BY c.CblDate DESC
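A hedged sketch of the temp-table rewrite suggested above (it assumes the goal is to pre-compute the latest CblDate per LBrCode/PrdAcctId pair once, instead of once per row):

-- Pre-compute the latest balance date per account
SELECT c.LBrCode, c.PrdAcctId, MAX(c.CblDate) AS MaxCblDate
INTO #LatestBal
FROM D010014 c
WHERE c.CblDate <= @Cdate
GROUP BY c.LBrCode, c.PrdAcctId;

-- Then join to it in the main query instead of the correlated subquery:
-- ... JOIN #LatestBal lb ON lb.LBrCode = b.LBrCode AND lb.PrdAcctId = b.PrdAcctId
-- ... AND b.CblDate = lb.MaxCblDate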

Never ending query in SQL Server 2012

SQL Server 2012, Amazon RDS
This is my simple query
update [dbo].[DeliveryPlan]
set [Amount] = dp.Amount +
case when @useAmountColumn = 1 and dbo.ConvertToInt(bs.Amount) > 0
then dbo.ConvertToInt(bs.Amount)
else @amount
end
from
BaseSpecification bs
join
BaseSpecificationStatusType t on (StatusTypeID = t.StatusTypeID)
join
[DeliveryPlan] dp on (dp.BaseSpecificationID = bs.BaseSpecificationID and dp.ItemID = @itemID)
where
bs.BaseID = 130 and t.IsActive = 1
It never finishes. If the where condition bs.BaseID = 130 (which updates 7000 rows) is changed to bs.BaseID = 3 (which updates 1,000,000 rows), it takes 13 sec.
Statistics are up to date, I think.
In Performance Monitor I see 5% processor usage.
When I use an sp to watch active connections, for this query tempdb_allocations is 32, tempdb_current is 32, reads are 32,000,000, and cpu is 860,000 (the query has been running for 20 minutes).
What is the problem?
UPDATE: I added a non-clustered index on [DeliveryPlan] (BaseSpecificationID + ItemID) and the problem is gone. Unfortunately I see this problem every day with different queries, and it disappears unpredictably.
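For reference, the index described in that update would look something like this (a sketch based on the column names in the question):

CREATE NONCLUSTERED INDEX IX_DeliveryPlan_BaseSpecificationID_ItemID
    ON dbo.DeliveryPlan (BaseSpecificationID, ItemID);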
This will perform better and behaves differently, as the join conditions narrow down the number of rows up front rather than waiting for the where clause to be applied. The execution plan will differ between the two forms (filters in the where clause vs. in the join conditions).
UPDATE dp
SET Amount = dp.Amount + CASE
WHEN @useAmountColumn = 1
AND dbo.ConvertToInt( bs.Amount ) > 0 THEN dbo.ConvertToInt( bs.Amount )
ELSE @amount
END
FROM BaseSpecification bs
JOIN BaseSpecificationStatusType t ON
( bs.StatusTypeID = t.StatusTypeID
AND bs.BaseID = 130
AND t.IsActive = 1
)
JOIN DeliveryPlan dp ON
( dp.BaseSpecificationID = bs.BaseSpecificationID
AND dp.ItemID = @itemID
);
You may be suffering from a locking condition on your base tables.
Optimize your query to update dp directly, to avoid updating all rows of DeliveryPlan:
update dp set [Amount] = dp.Amount +
case
when @useAmountColumn = 1 and dbo.ConvertToInt(bs.Amount) > 0 then
dbo.ConvertToInt(bs.Amount)
else @amount
end
from
BaseSpecification bs
join BaseSpecificationStatusType t on (bs.StatusTypeID = t.StatusTypeID)
join [DeliveryPlan] dp on (dp.BaseSpecificationID = bs.BaseSpecificationID)
where
bs.BaseID = 130
and t.IsActive = 1
and dp.ItemID = @itemID
If the problem mentioned in the update is that it comes and goes randomly, it sounds like bad parameter sniffing. When the problem happens, you could look into the plan cache to check whether the query plan looks OK and, if it doesn't, what parameter values the plan was compiled with (you can find them in the leftmost operator of the plan). Then, for example, use sp_recompile and see what kind of plan you get the next time.
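A hedged sketch of that investigation (the DMVs are standard; the table name is taken from the question):

-- Find the cached plan(s) that reference DeliveryPlan; open query_plan in SSMS
-- and inspect the compiled parameter values in the leftmost operator
SELECT cp.usecounts, cp.cacheobjtype, qp.query_plan, st.text
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_query_plan(cp.plan_handle) AS qp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
WHERE st.text LIKE '%DeliveryPlan%';

-- Force a fresh compilation the next time anything referencing the table runs
EXEC sp_recompile N'dbo.DeliveryPlan';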

Cannot understand how Entity Framework will generate a SQL statement for an Update operation using timestamp?

I have the following method inside my ASP.NET MVC web application:
var rack = IT.ITRacks.Where(a => !a.Technology.IsDeleted && a.Technology.IsCompleted);
foreach (var r in rack)
{
long? it360id = technology[r.ITRackID];
if (it360resource.ContainsKey(it360id.Value))
{
long? CurrentIT360siteid = it360resource[it360id.Value];
if (CurrentIT360siteid != r.IT360SiteID)
{
r.IT360SiteID = CurrentIT360siteid.Value;
IT.Entry(r).State = EntityState.Modified;
count = count + 1;
}
}
IT.SaveChanges();
}
When I checked SQL Server Profiler I noted that EF generated the following SQL statement:
exec sp_executesql N'update [dbo].[ITSwitches]
set [ModelID] = @0, [Spec] = null, [RackID] = @1, [ConsoleServerID] = null, [Description] = null, [IT360SiteID] = @2, [ConsoleServerPort] = null
where (([SwitchID] = @3) and ([timestamp] = @4))
select [timestamp]
from [dbo].[ITSwitches]
where @@ROWCOUNT > 0 and [SwitchID] = @3',N'@0 int,@1 int,@2 bigint,@3 int,@4 binary(8)',@0=1,@1=539,@2=1502,@3=1484,@4=0x00000000000EDCB2
I cannot understand the purpose of the following section:
select [timestamp]
from [dbo].[ITSwitches]
where @@ROWCOUNT > 0 and [SwitchID] = @3',N'@0 int,@1 int,@2 bigint,@3 int,@4 binary(8)',@0=1,@1=539,@2=1502,@3=1484,@4=0x00000000000EDCB2
Can anyone advise?
Entity Framework uses timestamps to check whether a row has changed. If the row has changed since the last time EF retrieved it, then it knows it has a concurrency problem.
Here's an explanation:
http://www.remondo.net/entity-framework-concurrency-checking-with-timestamp/
This is because EF (and you) want the updated client-side object to carry the newly generated rowversion value.
First the update is executed. If it succeeds (because the rowversion is still the one you had on the client), a new rowversion is generated by the database and EF retrieves that value. Suppose you immediately wanted to make a second update: that would be impossible if you didn't have the new rowversion.
This happens with all properties that are marked as identity or computed (by DatabaseGeneratedOption).
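A minimal T-SQL sketch of the same optimistic-concurrency pattern EF is generating here (table and column names are illustrative, not from the question):

declare @WidgetId int = 1;
declare @NewName nvarchar(50) = N'New name';
declare @OriginalRowVer binary(8);

create table Widgets
(
    WidgetId int identity primary key
  , Name nvarchar(50) not null
  , RowVer rowversion -- regenerated by SQL Server on every insert/update of the row
);
insert Widgets (Name) values (N'Old name');

-- the client reads the row and remembers its rowversion
select @OriginalRowVer = RowVer from Widgets where WidgetId = @WidgetId;

-- update only succeeds if the row hasn't changed since it was read
update Widgets
set Name = @NewName
where WidgetId = @WidgetId and RowVer = @OriginalRowVer;

-- return the new rowversion so the client can update again later, as EF does
select RowVer
from Widgets
where @@ROWCOUNT > 0 and WidgetId = @WidgetId;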

How can I find what locks are involved in a process

I am running a SQL transaction with a bunch of statements in it.
The transaction was causing other processes to deadlock very occasionally, so I removed some of the things from the transaction that weren't really important. These are now done separately before the transaction.
I want to be able to compare the locking that occurs between the SQL before and after my change so that I can be confident the change will make a difference.
I expect more locking occurred before because more things were in the transaction.
Are there any tools that I can use? I can pretty easily get a SQL profile of both cases.
I am aware of things like sp_who and sp_who2, but the thing I struggle with there is that they give a snapshot at a particular moment in time. I would like the full picture from start to finish.
You can use SQL Server Profiler. Set up a profiler trace that includes the Lock:Acquired and Lock:Released events. Run your "before" query. Run your "after" query. Compare and contrast the locks taken (and the types of locks). For context, you probably also want to include some of the statement or batch events, to see which statements are causing each lock to be taken.
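If you'd rather not use Profiler, an Extended Events session captures the same information with less overhead; a minimal sketch (the session name and file path are assumptions, and you'd filter on your own session_id instead of 53):

CREATE EVENT SESSION LockCompare ON SERVER
ADD EVENT sqlserver.lock_acquired
    (ACTION (sqlserver.sql_text) WHERE (sqlserver.session_id = 53)),
ADD EVENT sqlserver.lock_released
    (WHERE (sqlserver.session_id = 53))
ADD TARGET package0.event_file (SET filename = N'C:\Temp\LockCompare.xel');
GO
ALTER EVENT SESSION LockCompare ON SERVER STATE = START;
-- ... run the "before" batch, stop and review, then repeat for the "after" batch ...
ALTER EVENT SESSION LockCompare ON SERVER STATE = STOP;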
You can use the built-in procedure sp_who2.
sp_who2 also takes an optional SPID parameter. If a SPID is passed, the results of sp_who2 only show the row or rows for that SPID.
For more detailed info you can check the master.dbo.sysprocesses table:
SELECT * FROM master.dbo.sysprocesses where spid = @1
The code below shows reads and writes for the current command, along with the number of reads and writes for the entire SPID. It also shows the protocol being used (TCP, Named Pipes, or Shared Memory).
CREATE PROCEDURE sp_who3
(
@SessionID int = NULL
)
AS
BEGIN
SELECT
SPID = er.session_id
,Status = ses.status
,[Login] = ses.login_name
,Host = ses.host_name
,BlkBy = er.blocking_session_id
,DBName = DB_Name(er.database_id)
,CommandType = er.command
,SQLStatement =
SUBSTRING
(
qt.text,
er.statement_start_offset/2,
(CASE WHEN er.statement_end_offset = -1
THEN LEN(CONVERT(nvarchar(MAX), qt.text)) * 2
ELSE er.statement_end_offset
END - er.statement_start_offset)/2
)
,ObjectName = OBJECT_SCHEMA_NAME(qt.objectid, qt.dbid) + '.' + OBJECT_NAME(qt.objectid, qt.dbid)
,ElapsedMS = er.total_elapsed_time
,CPUTime = er.cpu_time
,IOReads = er.logical_reads + er.reads
,IOWrites = er.writes
,LastWaitType = er.last_wait_type
,StartTime = er.start_time
,Protocol = con.net_transport
,transaction_isolation =
CASE ses.transaction_isolation_level
WHEN 0 THEN 'Unspecified'
WHEN 1 THEN 'Read Uncommitted'
WHEN 2 THEN 'Read Committed'
WHEN 3 THEN 'Repeatable'
WHEN 4 THEN 'Serializable'
WHEN 5 THEN 'Snapshot'
END
,ConnectionWrites = con.num_writes
,ConnectionReads = con.num_reads
,ClientAddress = con.client_net_address
,Authentication = con.auth_scheme
FROM sys.dm_exec_requests er
LEFT JOIN sys.dm_exec_sessions ses
ON ses.session_id = er.session_id
LEFT JOIN sys.dm_exec_connections con
ON con.session_id = ses.session_id
OUTER APPLY sys.dm_exec_sql_text(er.sql_handle) as qt
WHERE (@SessionID IS NULL OR er.session_id = @SessionID)
AND er.session_id > 50
ORDER BY
er.blocking_session_id DESC
,er.session_id
END
