I have a table ImportSourceMetadata which I use to control an import batch process. It contains a PK column SourceId and a data column LastCheckpoint. The import batch process reads the LastCheckpoint for a given SourceId, performs some logic (on other tables), then updates the LastCheckpoint for that SourceId or inserts it if it doesn't exist yet.
Multiple instances of the process run at the same time, usually with disjoint SourceIds, and I need high parallelism for those cases. However, it can happen that two processes are started for the same SourceId; in that case, I need the instances to block each other.
Therefore, my code looks as follows:
BEGIN TRAN
SET TRANSACTION ISOLATION LEVEL READ COMMITTED
SELECT LastCheckpoint FROM ImportSourceMetadata WITH (UPDLOCK) WHERE SourceId = 'Source'
-- Perform some processing
-- UPSERT: if the SELECT above yielded no value, then
INSERT INTO ImportSourceMetadata(SourceId, LastCheckpoint) VALUES ('Source', '2013-12-21')
-- otherwise, we'd do this: UPDATE ImportSourceMetadata SET LastCheckpoint = '2013-12-21' WHERE SourceId = 'Source'
COMMIT TRAN
I'm using the transaction to achieve atomicity, but I can only use the READ COMMITTED isolation level (because of the parallelism requirements in the "Perform some processing" block). Therefore (and to avoid deadlocks), I'm including an UPDLOCK hint with the SELECT statement to achieve a "critical section" parameterized on the SourceId value.
Now, this works quite well most of the time, but I've managed to trigger primary key violation errors with the INSERT statement when starting a lot of parallel processes for the same SourceId with an empty database. I cannot reliably reproduce this, however, and I don't understand why it doesn't work.
I've found hints on the internet (e.g., here and here, in a comment) that I need to specify WITH (UPDLOCK,HOLDLOCK) (resp. WITH (UPDLOCK,SERIALIZABLE)) rather than just taking an UPDLOCK on the SELECT, but I don't really understand why that is. MSDN docs say,
UPDLOCK
Specifies that update locks are to be taken and held until the transaction completes.
An update lock that is taken and held until the transaction completes should be enough to block a subsequent INSERT, and in fact, when I try it out in SQL Server Management Studio, it does indeed block my insert. However, in some rare cases, it seems to suddenly not work any more.
So, why exactly is it that UPDLOCK is not enough, and why is it enough in 99% of my test runs (and when simulating it in SQL Server Management Studio)?
Update: I've now found I can reproduce the non-blocking behavior reliably by executing the code above in two different windows of SQL Server Management Studio simultaneously up to just before the INSERT, but only the first time after creating the database. After that (even though I deleted the contents of the ImportSourceMetadata table), the SELECT WITH (UPDLOCK) does block and the code no longer fails. Indeed, on those subsequent test runs, sys.dm_tran_locks shows a U lock being taken even though the row does not exist, but not on the first run after creating the table.
This is a complete sample to show the difference in locks between a "newly created table" and an "old table":
DROP TABLE ImportSourceMetadata
CREATE TABLE ImportSourceMetadata(SourceId nvarchar(50) PRIMARY KEY, LastCheckpoint datetime)
BEGIN TRAN
SET TRANSACTION ISOLATION LEVEL READ COMMITTED
SELECT LastCheckpoint FROM ImportSourceMetadata WITH (UPDLOCK) WHERE SourceId='Source'
SELECT *
FROM sys.dm_tran_locks l
JOIN sys.partitions p
ON l.resource_associated_entity_id = p.hobt_id
JOIN sys.objects o
ON p.object_id = o.object_id
INSERT INTO ImportSourceMetadata VALUES('Source', '2013-12-21')
ROLLBACK TRAN
BEGIN TRAN
SET TRANSACTION ISOLATION LEVEL READ COMMITTED
SELECT LastCheckpoint FROM ImportSourceMetadata WITH (UPDLOCK) WHERE SourceId='Source'
SELECT *
FROM sys.dm_tran_locks l
JOIN sys.partitions p
ON l.resource_associated_entity_id = p.hobt_id
JOIN sys.objects o
ON p.object_id = o.object_id
ROLLBACK TRAN
On my system (with SQL Server 2012), the first query shows no locks on ImportSourceMetadata, but the second query shows a KEY lock on ImportSourceMetadata.
In other words, HOLDLOCK is indeed required, but only if the table was freshly created. Why's that?
You also need HOLDLOCK.
If the row does exist then your SELECT statement will take out a U lock on at least that row and retain it until the end of the transaction.
If the row doesn't exist, there is no row to take and hold a U lock on, so you aren't locking anything. HOLDLOCK will lock at least the range the row would fit into.
Without HOLDLOCK, two concurrent transactions can both run the SELECT for a non-existent row, retain no conflicting locks, and both move on to the INSERT.
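For illustration, here is a minimal sketch of the hinted read applied to the ImportSourceMetadata table from the question. With both hints in place, a second session running the same block for the same SourceId blocks at the SELECT even when the row does not exist yet; the UPDATE-then-INSERT upsert shape is just one way to finish the job:
BEGIN TRAN
SET TRANSACTION ISOLATION LEVEL READ COMMITTED
-- UPDLOCK: take a U lock instead of an S lock, so two readers cannot both pass
-- HOLDLOCK: also take a key-range lock covering the spot where 'Source' would go,
--           even when no such row exists yet
SELECT LastCheckpoint
FROM ImportSourceMetadata WITH (UPDLOCK, HOLDLOCK)
WHERE SourceId = 'Source'
-- A second session for the same SourceId is now blocked at its SELECT above
UPDATE ImportSourceMetadata SET LastCheckpoint = '2013-12-21' WHERE SourceId = 'Source'
IF @@ROWCOUNT = 0
INSERT INTO ImportSourceMetadata (SourceId, LastCheckpoint) VALUES ('Source', '2013-12-21')
COMMIT TRAN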
Regarding the repro in your question, it seems the "row doesn't exist" issue is a bit more complex than I first thought.
If the row previously existed but has since been logically deleted, yet still physically exists on the page as a "ghost" record, then the U lock can still be taken out on the ghost, which explains the blocking you are seeing.
You can use DBCC PAGE to see ghost records, as in this slight amendment to your code.
SET NOCOUNT ON;
DROP TABLE ImportSourceMetadata
CREATE TABLE ImportSourceMetadata
(
SourceId NVARCHAR(50),
LastCheckpoint DATETIME,
PRIMARY KEY(SourceId)
)
BEGIN TRAN
SET TRANSACTION ISOLATION LEVEL READ COMMITTED
SELECT LastCheckpoint
FROM ImportSourceMetadata WITH (UPDLOCK)
WHERE SourceId = 'Source'
INSERT INTO ImportSourceMetadata
VALUES ('Source', '2013-12-21')
DECLARE @DBCCPAGE NVARCHAR(100)
SELECT TOP 1 @DBCCPAGE = 'DBCC PAGE(0,' + CAST(file_id AS VARCHAR) + ',' + CAST(page_id AS VARCHAR) + ',3) WITH NO_INFOMSGS'
FROM ImportSourceMetadata
CROSS APPLY sys.fn_physloccracker(%%physloc%%)
ROLLBACK TRAN
DBCC TRACEON(3604)
EXEC (@DBCCPAGE)
DBCC TRACEOFF(3604)
The SSMS messages tab shows
Slot 0 Offset 0x60 Length 31
Record Type = GHOST_DATA_RECORD Record Attributes = NULL_BITMAP VARIABLE_COLUMNS
Record Size = 31
Memory Dump #0x000000001215A060
0000000000000000: 3c000c00 00000000 9ba20000 02000001 †<.......¢......
0000000000000010: 001f0053 006f0075 00720063 006500††††...S.o.u.r.c.e.
Slot 0 Column 1 Offset 0x13 Length 12 Length (physical) 12
I am working in SharePoint and am trying to understand why there is a limit of 5,000 records for views on lists that can hold a far greater number.
My question is not about SharePoint, but SQL. Does SQL have the limit described below, or was this something imposed on SQL by the SharePoint team?
For performance reasons, whenever SQL Server executes a single query
that returns 5000 + items, locks escalation happens within the SQL
table. As a result, the entire table will be locked. Since the entire
Share Point data is stored as a single table, a single list view query
that exceeds 5000+ items will lock the entire Share Point data table
within that content. The database and all the users will face huge
performance degradation. The entire set of users using Share Point at
the time of lock escalation will have to wait for a longer time to
retrieve the data. Hence, you can see that the list threshold is a
limitation that is imposed upon Share Point by its backend SQL Server.
This issue is generated from SQL and the reason is row lock
escalation. In order to avoid this performance degradation, Share
Point has imposed the limitation of 5000 items to be queried at any
point of time. Any queries for 5000+ items will be dealt the threshold
error message. Ref link
Thanks
EDIT:
An article on this issue:
https://www.c-sharpcorner.com/article/sharepoint-list-threshold-issue-the-traditional-problem/
Does SQL Server lock tables when queries return results greater than 5000 records?
Not generally, no.
It is documented that 5,000 is a magic number at which the database engine first attempts lock escalation (followed by further attempts at increments of 1,250 additional locks), but unless you are running at the REPEATABLE READ or SERIALIZABLE isolation level this will not generally be hit just by returning 5,000 items in a SELECT. The default READ COMMITTED level releases locks as soon as the data is read, so it never reaches the threshold.
You can see the effect of isolation level on this with the following example.
CREATE TABLE T(C INT PRIMARY KEY);
INSERT INTO T
SELECT TOP 10000 ROW_NUMBER() OVER (ORDER BY @@SPID)
FROM sys.all_objects o1, sys.all_objects o2
And (this uses undocumented trace flags, so it should only be used in a dev environment):
DBCC TRACEON(3604,611);
/*5,000 key locks are held*/
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;
BEGIN TRAN
SELECT COUNT(*) FROM (SELECT TOP 5000 C FROM T) T
SELECT resource_type, request_mode, count(*) FROM sys.dm_tran_locks where request_session_id = @@SPID GROUP BY resource_type, request_mode;
COMMIT
/*No key locks are held. They have been escalated to an object level lock. The messages tab shows the lock escalation (in my case after 6248 locks not 5,000)*/
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;
BEGIN TRAN
SELECT COUNT(*) FROM (SELECT TOP 10000 C FROM T) T
SELECT resource_type, request_mode, count(*) FROM sys.dm_tran_locks where request_session_id = @@SPID GROUP BY resource_type, request_mode;
COMMIT
/*No key locks are held. They are released straight away at this isolation level. The messages tab shows no lock escalation messages*/
SET TRANSACTION ISOLATION LEVEL READ COMMITTED;
BEGIN TRAN
SELECT COUNT(*) FROM (SELECT TOP 10000 C FROM T) T
SELECT * FROM sys.dm_tran_locks where request_session_id = @@SPID
COMMIT
DBCC TRACEOFF(3604,611);
I have a part of an application that constantly updates values in table rows (1-100 rows).
Since data integrity is important here, I am using the SERIALIZABLE isolation level for the transactions in the functions that read and update those rows.
Now, my question: if I execute a simple read-only SELECT (without a lock hint) on rows currently used by such a transaction, could I get a DEADLOCK exception?
So would that mean I still need a retry mechanism for DEADLOCK errors, even for a simple SELECT?
Now that I know what your specific business scenario is (from the comments), here's how you'd do something like what you're proposing without having to implement serializable isolation:
CREATE PROCEDURE [dbo].[updateUserState] (@UserID int)
AS
BEGIN
SET TRANSACTION ISOLATION LEVEL READ COMMITTED;
BEGIN TRANSACTION
DECLARE @State nvarchar(50);
SELECT @State = [State]
FROM [dbo].[UserState] WITH (UPDLOCK)
WHERE [UserID] = @UserID;
IF (@State = 'logged out')
BEGIN
UPDATE [us]
SET [State] = 'logged in'
FROM [dbo].[UserState] AS [us]
WHERE [UserID] = @UserID;
END
COMMIT TRANSACTION
END
Note that this is simplified, but it presents the main idea. The UPDLOCK hint on the SELECT statement is the key. It says "try to select data as though I were going to do an update (which you are! just later) and keep the lock until the end of the transaction". In your example, if T2 comes in while T1 is still running, T2 will be unable to obtain the update lock and will thus wait until T1 completes (successfully or not). Also note that setting the transaction isolation level explicitly is just for completeness; READ COMMITTED is the default in SQL Server.
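As a hypothetical way to exercise it (the table definition below is assumed, not part of the original answer):
-- Assumed schema; the real UserState table may differ
CREATE TABLE dbo.UserState
(
UserID int PRIMARY KEY,
[State] nvarchar(50) NOT NULL
);
INSERT INTO dbo.UserState (UserID, [State]) VALUES (1, 'logged out');
-- Run this from two sessions at roughly the same time:
-- the second call blocks on the UPDLOCK until the first commits,
-- so the 'logged out' -> 'logged in' transition happens exactly once.
EXEC dbo.updateUserState @UserID = 1;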
Is a statement in SQL Server ACID?
What I mean by that
Given a single T-SQL statement, not wrapped in a BEGIN TRANSACTION / COMMIT TRANSACTION, are the actions of that statement:
Atomic: either all of its data modifications are performed, or none of them is performed.
Consistent: When completed, a transaction must leave all data in a consistent state.
Isolated: Modifications made by concurrent transactions must be isolated from the modifications made by any other concurrent transactions.
Durable: After a transaction has completed, its effects are permanently in place in the system.
The reason I ask
I have a single statement in a live system that appears to be violating these rules.
In effect my T-SQL statement is:
--If there are any slots available,
--then find the earliest unbooked transaction and mark it booked
UPDATE Transactions
SET Booked = 1
WHERE TransactionID = (
SELECT TOP 1 TransactionID
FROM Slots
INNER JOIN Transactions t2
ON Slots.SlotDate = t2.TransactionDate
WHERE t2.Booked = 0 --only book it if it's currently unbooked
AND Slots.Available > 0 --only book it if there's empty slots
ORDER BY t2.CreatedDate)
Note: a simpler conceptual variant might be:
--Give away one gift, as long as we haven't given away five
UPDATE Gifts
SET GivenAway = 1
WHERE GiftID = (
SELECT TOP 1 GiftID
FROM Gifts g2
WHERE g2.GivenAway = 0
AND (SELECT COUNT(*) FROM Gifts g3 WHERE g3.GivenAway = 1) < 5
ORDER BY g2.GiftValue DESC
)
In both of these statements, notice that they are single statements (UPDATE...SET...WHERE).
There are cases where the wrong transaction is being "booked"; it's actually picking a later transaction. After staring at this for 16 hours, I'm stumped. It's as though SQL Server is simply violating the rules.
I wondered: what if the results of the Slots view are changing before the update happens? What if SQL Server is not holding shared locks on the transactions for that date? Is it possible that a single statement can be inconsistent?
So I decided to test it
I decided to check if the results of sub-queries, or inner operations, are inconsistent. I created a simple table with a single int column:
CREATE TABLE CountingNumbers (
Value int PRIMARY KEY NOT NULL
)
From multiple connections, in a tight loop, I call the single T-SQL statement:
INSERT INTO CountingNumbers (Value)
SELECT ISNULL(MAX(Value), 0)+1 FROM CountingNumbers
In other words the pseudo-code is:
while (true)
{
ADOConnection.Execute(sql);
}
And within a few seconds I get:
Violation of PRIMARY KEY constraint 'PK__Counting__07D9BBC343D61337'.
Cannot insert duplicate key in object 'dbo.CountingNumbers'.
The duplicate value is (1332)
Are statements atomic?
The fact that this single statement wasn't atomic makes me wonder whether single statements are atomic at all.
Or is there a more subtle definition of "statement" that differs from (for example) what SQL Server considers a statement?
Does this fundamentally mean that, within the confines of a single T-SQL statement, SQL Server statements are not atomic?
And if a single statement is atomic, what accounts for the key violation?
From within a stored procedure
Rather than a remote client opening n connections, I tried it with a stored procedure:
CREATE procedure [dbo].[DoCountNumbers] AS
SET NOCOUNT ON;
DECLARE @bumpedCount int
SET @bumpedCount = 0
WHILE (@bumpedCount < 500) --safety valve
BEGIN
SET @bumpedCount = @bumpedCount+1;
PRINT 'Running bump '+CAST(@bumpedCount AS varchar(50))
INSERT INTO CountingNumbers (Value)
SELECT ISNULL(MAX(Value), 0)+1 FROM CountingNumbers
IF (@bumpedCount >= 500)
BEGIN
PRINT 'WARNING: Bumping safety limit of 500 bumps reached'
END
END
PRINT 'Done bumping process'
and opened 5 tabs in SSMS, pressed F5 in each, and watched as they too violated ACID:
Running bump 414
Msg 2627, Level 14, State 1, Procedure DoCountNumbers, Line 14
Violation of PRIMARY KEY constraint 'PK_CountingNumbers'.
Cannot insert duplicate key in object 'dbo.CountingNumbers'.
The duplicate key value is (4414).
The statement has been terminated.
So the failure is independent of ADO, ADO.NET, and anything else client-side.
For 15 years I've been operating under the assumption that a single statement in SQL Server is consistent, and that the only reason to use an explicit transaction is when I need multiple statements to be consistent with each other.
What about TRANSACTION ISOLATION LEVEL xxx?
For different variants of the SQL batch to execute:
default (read committed): key violation
INSERT INTO CountingNumbers (Value)
SELECT ISNULL(MAX(Value), 0)+1 FROM CountingNumbers
default (read committed), explicit transaction: key violation
BEGIN TRANSACTION
INSERT INTO CountingNumbers (Value)
SELECT ISNULL(MAX(Value), 0)+1 FROM CountingNumbers
COMMIT TRANSACTION
serializable: deadlock
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE
BEGIN TRANSACTION
INSERT INTO CountingNumbers (Value)
SELECT ISNULL(MAX(Value), 0)+1 FROM CountingNumbers
COMMIT TRANSACTION
SET TRANSACTION ISOLATION LEVEL READ COMMITTED
snapshot (after altering database to enable snapshot isolation): key violation
SET TRANSACTION ISOLATION LEVEL SNAPSHOT
BEGIN TRANSACTION
INSERT INTO CountingNumbers (Value)
SELECT ISNULL(MAX(Value), 0)+1 FROM CountingNumbers
COMMIT TRANSACTION
SET TRANSACTION ISOLATION LEVEL READ COMMITTED
Bonus
Microsoft SQL Server 2008 R2 (SP2) - 10.50.4000.0 (X64)
Default transaction isolation level (READ COMMITTED)
Turns out every query I've ever written is broken
This certainly changes things. Every update statement I've ever written is fundamentally broken. E.g.:
--Update the user with their last invoice date
UPDATE Users
SET LastInvoiceDate = (SELECT MAX(InvoiceDate) FROM Invoices WHERE Invoices.uid = Users.uid)
That produces the wrong value, because another invoice could be inserted after the MAX is read and before the UPDATE happens. Or take an example from BOL:
UPDATE Sales.SalesPerson
SET SalesYTD = SalesYTD +
(SELECT SUM(so.SubTotal)
FROM Sales.SalesOrderHeader AS so
WHERE so.OrderDate = (SELECT MAX(OrderDate)
FROM Sales.SalesOrderHeader AS so2
WHERE so2.SalesPersonID = so.SalesPersonID)
AND Sales.SalesPerson.BusinessEntityID = so.SalesPersonID
GROUP BY so.SalesPersonID);
Without exclusive hold locks, SalesYTD is wrong.
How have I been able to do anything all these years?
I've been operating under the assumption that a single statement in SQL Server is consistent
That assumption is wrong. The following two transactions have identical locking semantics:
STATEMENT
BEGIN TRAN; STATEMENT; COMMIT
No difference at all. Single statements and auto-commits do not change anything.
So merging all logic into one statement does not help (if it does, it was by accident because the plan changed).
Let's fix the problem at hand. SERIALIZABLE will fix the inconsistency you are seeing because it guarantees that your transactions behave as if they executed serially. Equivalently, they behave as if they executed instantly.
You will be getting deadlocks. If you are ok with a retry loop, you're done at this point.
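If you go that route, a retry loop might look roughly like this (a sketch; the retry count of three is arbitrary, and THROW needs SQL Server 2012 or later, so use RAISERROR on older versions):
DECLARE @retries int = 3;
WHILE @retries > 0
BEGIN
BEGIN TRY
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
BEGIN TRAN;
UPDATE Gifts
SET GivenAway = 1
WHERE GiftID = (
SELECT TOP 1 GiftID
FROM Gifts g2
WHERE g2.GivenAway = 0
AND (SELECT COUNT(*) FROM Gifts g3 WHERE g3.GivenAway = 1) < 5
ORDER BY g2.GiftValue DESC);
COMMIT TRAN;
BREAK; -- success, leave the loop
END TRY
BEGIN CATCH
IF XACT_STATE() <> 0 ROLLBACK TRAN;
IF ERROR_NUMBER() = 1205 AND @retries > 1 -- 1205 = chosen as deadlock victim
SET @retries = @retries - 1;              -- try again
ELSE
THROW;                                     -- out of retries, or a different error
END CATCH
END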
If you want to invest more time, apply locking hints to force exclusive access to the relevant data:
UPDATE Gifts -- U-locked anyway
SET GivenAway = 1
WHERE GiftID = (
SELECT TOP 1 GiftID
FROM Gifts g2 WITH (UPDLOCK, HOLDLOCK) --this normally just S-locks.
WHERE g2.GivenAway = 0
AND (SELECT COUNT(*) FROM Gifts g3 WITH (UPDLOCK, HOLDLOCK) WHERE g3.GivenAway = 1) < 5
ORDER BY g2.GiftValue DESC
)
You will now see reduced concurrency. That might be totally fine depending on your load.
The very nature of your problem makes achieving concurrency hard. If you require a solution for that we'd need to apply more invasive techniques.
You can simplify the UPDATE a bit:
WITH g AS (
SELECT TOP 1 Gifts.*
FROM Gifts
WHERE Gifts.GivenAway = 0
AND (SELECT COUNT(*) FROM Gifts g2 WITH (UPDLOCK, HOLDLOCK) WHERE g2.GivenAway = 1) < 5
ORDER BY Gifts.GiftValue DESC
)
UPDATE g -- U-locked anyway
SET GivenAway = 1
This gets rid of one unnecessary join.
Below is an example of an UPDATE statement that does increment a counter value atomically
-- Do this once for test setup
CREATE TABLE CountingNumbers (Value int PRIMARY KEY NOT NULL)
INSERT INTO CountingNumbers VALUES(1)
-- Run this in parallel: start it in two tabs on SQL Server Management Studio
-- You will see each connection generating new numbers without duplicates and without timeouts
while (1=1)
BEGIN
declare @nextNumber int
-- Taking the Update lock is only relevant in case this statement is part of a larger transaction
-- to prevent deadlock
-- When executing without a transaction, the statement will itself be atomic
UPDATE CountingNumbers WITH (UPDLOCK, ROWLOCK) SET @nextNumber=Value=Value+1
print @nextNumber
END
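A variant of the same idea, returning the new value through an OUTPUT clause instead of a variable assignment (a sketch, not part of the original answer):
-- Atomically increment the counter and return the new value in one statement
UPDATE CountingNumbers WITH (UPDLOCK, ROWLOCK)
SET Value = Value + 1
OUTPUT inserted.Value AS NextNumber;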
A SELECT does not lock exclusively; even under SERIALIZABLE, a standalone SELECT holds its locks only for as long as the statement runs. Once the select is over, its locks are gone. Only then does the update take its locks, now that the select has returned results; in the meantime, anyone else can run the same select again!
The only sure way to safely read and lock a row is:
begin transaction
--lock what I need to read
update mytable set col1=col1 where mykey=@key
--now read what I need
select @d1=col1,@d2=col2 from mytable where mykey=@key
--now do calculations, checks, whatever I need from the row I read, to decide my update
if @d1<@d2 set @d1=@d2 else set @d1=@d2 * 2 --just an example calc
--now do the actual update based on what I read and the logic above
update mytable set col1=@d1,col2=@d2 where mykey=@key
commit transaction
This way, any other connection running the same statements for the same data will wait at the first (fake) update statement until the previous transaction is done. That ensures that, when the lock is released, only one connection is granted it; that connection then reads committed, finalized data for its calculations and decides whether and what to actually update in the second, "real" update.
In other words, when you need to select information to decide if and how to update, you need a begin/commit transaction block, and you need to start with a fake update of what you are about to select, before you select it (an UPDATE with an OUTPUT clause would also do).
I have upgraded from SQL Server 2005 to 2008. I remember that in 2005, ROWLOCK simply did not work and I had to use PAGELOCK or XLOCK to achieve any type of actual locking. I know a reader of this will ask "what did you do wrong?" Nothing; I conclusively proved that I could edit a "ROWLOCKED" row, but couldn't if I escalated the lock level. I haven't had a chance to see whether this works in SQL Server 2008. My first question is: has anyone come across this issue in 2008?
My second question is as follows. I want to test whether a value exists and, if so, update the relevant columns rather than insert a whole new row. This means that if the row is found, it needs to be locked, because a maintenance procedure could otherwise delete it mid-process and cause an error.
To illustrate the principle, will the following code work?
BEGIN TRAN
SELECT ProfileID
FROM dbo.UserSessions
WITH (ROWLOCK)
WHERE (ProfileID = @ProfileID)
OPTION (OPTIMIZE FOR (@ProfileID UNKNOWN))
if @@ROWCOUNT = 0 begin
INSERT INTO dbo.UserSessions (ProfileID, SessionID)
VALUES (@ProfileID, @SessionID)
end else begin
UPDATE dbo.UserSessions
SET SessionID = @SessionID, Created = GETDATE()
WHERE (ProfileID = @ProfileID)
end
COMMIT TRAN
An explanation...
ROWLOCK/PAGELOCK is granularity
XLOCK is mode
Granularity and isolation level and mode are orthogonal.
Granularity = what is locked = row, page, table (PAGLOCK, ROWLOCK, TABLOCK)
Isolation Level = lock duration, concurrency (HOLDLOCK, READCOMMITTED, REPEATABLEREAD, SERIALIZABLE)
Mode = sharing/exclusivity (UPDLOCK, XLOCK)
Some hints are "combined", e.g. NOLOCK, TABLOCKX
XLOCK would have locked the row exclusively as you want. ROWLOCK/PAGELOCK wouldn't have.
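For instance, the upsert from the question could be hinted along these lines (a sketch: UPDLOCK sets the mode, HOLDLOCK keeps a key-range lock so the pattern also works when the row does not exist yet, and XLOCK could be used instead of UPDLOCK if you want readers excluded as well):
BEGIN TRAN
SELECT ProfileID
FROM dbo.UserSessions WITH (UPDLOCK, HOLDLOCK) -- mode + duration, not just granularity
WHERE (ProfileID = @ProfileID)
OPTION (OPTIMIZE FOR (@ProfileID UNKNOWN))
if @@ROWCOUNT = 0 begin
INSERT INTO dbo.UserSessions (ProfileID, SessionID)
VALUES (@ProfileID, @SessionID)
end else begin
UPDATE dbo.UserSessions
SET SessionID = @SessionID, Created = GETDATE()
WHERE (ProfileID = @ProfileID)
end
COMMIT TRAN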
When I perform a select/Insert query, does SQL Server automatically create an implicit transaction and thus treat it as one atomic operation?
Take the following query that inserts a value into a table if it isn't already there:
INSERT INTO Table1 (FieldA)
SELECT 'newvalue'
WHERE NOT EXISTS (Select * FROM Table1 where FieldA='newvalue')
Is there any possibility of 'newvalue' being inserted into the table by another user between the evaluation of the WHERE clause and the execution of the INSERT clause if it isn't explicitly wrapped in a transaction?
You are confusing transactions with locking. A transaction reverts your data to its original state if there is any error; otherwise, it moves the data to the new state. You will never have your data in an in-between state when the operations are transacted. Locking, on the other hand, is what allows or prevents multiple users from accessing the data simultaneously. To answer your question: the SELECT...INSERT is atomic, and as long as no granular locks are explicitly requested, no other user will be able to insert while the SELECT...INSERT is in progress.
John, the answer to this depends on your current isolation level. If you're set to READ UNCOMMITTED you could be looking for trouble, but with a higher isolation level, you should not get additional records in the table between the select and insert. With a READ COMMITTED (the default), REPEATABLE READ, or SERIALIZABLE isolation level, you should be covered.
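As a side note, a commonly recommended way to remove the race entirely, regardless of the session's isolation level, is to hint the existence check so the probed key range stays locked until the statement completes; a minimal sketch:
INSERT INTO Table1 (FieldA)
SELECT 'newvalue'
WHERE NOT EXISTS (SELECT *
                  FROM Table1 WITH (UPDLOCK, HOLDLOCK) -- U lock plus key-range lock on the probed value
                  WHERE FieldA = 'newvalue')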
Using SSMS 2016, it can be verified that the Select/Insert statement requests a lock (and so most likely operates atomically):
1. Open a new query/connection for the following transaction and set a break-point on ROLLBACK TRANSACTION before starting the debugger:
BEGIN TRANSACTION
INSERT INTO Table1 (FieldA) VALUES ('newvalue');
ROLLBACK TRANSACTION --[break-point]
2. While at the above break-point, execute the following from a separate query window to show any locks (it may take a few seconds to register any output):
SELECT * FROM sys.dm_tran_locks
WHERE resource_database_id = DB_ID()
AND resource_associated_entity_id = OBJECT_ID(N'dbo.Table1');
There should be a single lock associated with the BEGIN TRANSACTION/INSERT above (since by default it runs at the READ COMMITTED isolation level):
OBJECT ** ********** * IX LOCK GRANT 1
3. From another instance of SSMS, open up a new query and run the following (while still stopped at the above break-point):
INSERT INTO Table1 (FieldA)
SELECT 'newvalue'
WHERE NOT EXISTS (Select * FROM Table1 where FieldA='newvalue')
This should hang with the string "(Executing)..." displayed in the tab title of the query window (since @@LOCK_TIMEOUT is -1 by default).
4. Re-run the query from Step 2.
Another lock corresponding to the Select/Insert should now show:
OBJECT ** ********** 0 IX LOCK GRANT 1
OBJECT ** ********** 0 IX LOCK GRANT 1
ref: How to check which locks are held on a table