Error Message from our log (line breaks added here):
dbo.usp_Replenish_Maintenance_Cleanup : END STEP Table replenish.ProjectionXML;
ERROR : Transaction (Process ID 59) was deadlocked on lock | communication buffer resources with another process and has been chosen as the deadlock victim. Rerun the transaction.,
ERROR NUMBER:1205 during Copy operation. Execution End Time: 2017-03-09 01:17:36.6866667. (Execution Duration: 6 seconds.)
This is the 2nd night in a row where this has occurred, and it has only run twice on this platform! SQL 2014 SP2 CU3 Developer has the exact same issue albeit on 500K rows for the same table.
Platform is a 2x12-core with 256GB, using a BPE of 512GB, running 2016 SP1. MAXDOP is 12 per Bob Dorr's "It runs faster" post on Soft-NUMA (pssql post). It was MAXDOP=8 for the first run.
An explicit transaction is started part-way into the proc. The stored proc won't run if it detects @@TRANCOUNT > 0 when started.
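For reference, a minimal sketch of that kind of guard (the error text is illustrative, not the actual proc's message):
IF @@TRANCOUNT > 0
BEGIN
    RAISERROR('dbo.usp_Replenish_Maintenance_Cleanup must not be called inside an open transaction.', 16, 1);
    RETURN;
END;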
The statement is very simple - dynamic SQL, with name substitutions:
SELECT r.*
INTO '+@nFullCleanupTableName+'
FROM '+@nFullTableName+' r WITH (TABLOCKX)
INNER JOIN replenish.ActiveEngineRun aer
ON r.RunID = aer.RunID
AND r.InventoryID = aer.InventoryID
AND r.SupplyChainID = aer.SupplyChainID
INNER JOIN tbl_Inventory i
ON i.InventoryID = r.InventoryID
WHERE i.ActiveFlag = 1';
There are compression, default constraint, non-clustered index creation, and rename statements in the transaction (not in that order).
The table has 48M rows consuming 193GB.
The data file has 216GB free.
I have the deadlock XDL file.
Solarwinds DPA shows only one SPID involved: 1 Victim, 16 Survivors. Its knowledge base says:
Exchange Event Lock
An "exchange event" resource indicates the presence of parallelism operators in a query plan.
When large query operations are "parallelized", multiple intra-query child threads are created that need to coordinate and communicate their activities. To accommodate intra-query parallelism, SQL Server employs this type of lock.
Granularity: N/A
Deadlock Solutions
Most intra-query parallelism deadlocks are considered bugs, since technically a session should not block itself, and are often addressed by:
Ensuring that SQL Server is on the latest service pack.
It is uncommon to see exchange event locks in deadlocks. However, if you see it frequently then look for ways to minimize the parallelism of the query. For example:
Add an index or modify the query to eliminate the need for parallelism. Large scans, sorts, or joins that do not use indexes are typical culprits.
Use the MAXDOP option to force the execution of the query to run single-threaded. You can do this by adding "OPTION (MAXDOP 1)" at the end of the query. Consider applying a hint to the plan guide if you cannot modify the query.
-- End of quote
This problem makes little sense to me. Is my only recourse to set MAXDOP=1 for just this table's insert? There are 5 other tables, all compressed, one of which has 3x the rowcount, that each succeed. Perhaps it's the nvarchar(max) column (off page) that's giving me the problem (contains XML)...
(Edit: Modified for just the one table, using OPTION ( MAXDOP 1 ) it runs to COMMIT!)
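A minimal sketch of that workaround, assuming the dynamic statement is assembled roughly as shown above (@sql and the sp_executesql call are illustrative):
SET @sql = N'SELECT r.*
    INTO ' + @nFullCleanupTableName + N'
    FROM ' + @nFullTableName + N' r WITH (TABLOCKX)
    INNER JOIN replenish.ActiveEngineRun aer
        ON  r.RunID         = aer.RunID
        AND r.InventoryID   = aer.InventoryID
        AND r.SupplyChainID = aer.SupplyChainID
    INNER JOIN tbl_Inventory i
        ON  i.InventoryID = r.InventoryID
    WHERE i.ActiveFlag = 1
    OPTION (MAXDOP 1);';    -- serializes just this statement, avoiding the parallel exchange deadlock
EXEC sys.sp_executesql @sql;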
Wisdom is sought, please.
UPDATE: Deadlock graph image and DDL
CREATE TABLE [replenish].[ProjectionXML](
[InventoryID] [int] NOT NULL,
[SupplyChainID] [int] NOT NULL,
[RunID] [uniqueidentifier] NOT NULL,
[AdhocRunDate] [datetime] NULL,
[ProjectionData] [nvarchar](max) NULL,
[RowStatusID] [smallint] NOT NULL
) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY]
GO
CREATE NONCLUSTERED INDEX [IX_replenish_ProjectionXML_KeyColumns] ON [replenish].[ProjectionXML]
(
[InventoryID] ASC,
[SupplyChainID] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON, FILLFACTOR = 80) ON [PRIMARY]
GO
ALTER TABLE [replenish].[ProjectionXML] ADD CONSTRAINT [DF_replenish_ProjectionXML_RowStatusID] DEFAULT ((1)) FOR [RowStatusID]
GO
My SQL Server 2014 instance restarted unexpectedly, and that broke the straight auto-increment identity sequences on entities. All new entities inserted into tables have their identities incremented by 10,000.
Let's say, if there were entities with IDs "1, 2, 3", then all newly inserted entities now get IDs like "10004, 10005".
Here is real data:
..., 12379, 12380, 12381, (after the restart) 22350, 22351, 22352, 22353, 22354, 22355
(An extra question here: why was the very first entity after the restart inserted with 22350? I thought it should have been 22382, since the latest ID at that moment was 12381 and 12381 + 10001 = 22382.)
I searched and found out the reasons for what happened. Now I want to prevent such situations in the future and fix the current jump. It's a production server and users continuously add new stuff to the DB.
QUESTION 1
What options do I have here?
My thoughts on how to prevent it are:
Use sequences instead of identity columns
Use trace flag 272 to disable the identity cache, and reseed the identity so it starts again from the latest correct value (I guess there is such an option)
What are the drawbacks of the two above? Please advise some new ways if there are.
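For the reseed part of the second option, a minimal sketch using the values from the example above (trace flag 272 itself is set as a startup parameter, -T272, not from T-SQL):
-- Only safe if no rows with higher identity values already exist in the table
DBCC CHECKIDENT ('dbo.Tasks', RESEED, 12381);   -- the next TaskNo issued will be 12382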
QUESTION 2
I'm not an expert in SQL Server, and now I need to normalize and adjust the numbering of entities, since it's a business requirement. I think I need to write a script that updates the wrong ID values and sets them to the right ones. Is it dangerous to update identity values? Some tables have dependent records. What might this script look like?
OTHER INFO
Here is how my identity columns are declared (generated using the "Generate scripts" option in SSMS):
CREATE TABLE [dbo].[Tasks]
(
[Id] [uniqueidentifier] NOT NULL,
[Created] [datetime] NOT NULL,
...
[TaskNo] [bigint] IDENTITY(1,1) NOT NULL,
CONSTRAINT [PK_dbo.Tasks]
PRIMARY KEY CLUSTERED ([Id] ASC)
WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF,
IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON,
ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY]
I also use Entity Framework 6 for database access.
I will be happy to provide any other information by request if needed.
I did this once in a weekend of downtime and ended up having to renumber the whole table, by turning off the identity insert and then updating each row with a row number. This was based on the table's correct sort order, to make sure the sequence was right.
As it updated the whole table (500 million rows), it generated a huge amount of transaction log data. Make sure you have enough space for this and pre-size the log if required.
As said above though, if you must rely on the identity column then amend it to a sequence. Also, make sure your rollback mechanism is sound if there is an error during insert and the sequence has already been incremented.
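As a rough illustration of "amend it to a sequence": the IDENTITY property cannot simply be removed from an existing column, so in practice this means rebuilding the table (or adding a new column) so the number comes from a sequence default instead. A minimal sketch under those assumptions, using the Tasks table from the question (the new object names are made up):
CREATE SEQUENCE dbo.TaskNoSequence AS bigint
    START WITH 22356      -- one past the highest existing TaskNo
    INCREMENT BY 1
    NO CACHE;             -- avoids restart-related jumps, at a small per-insert cost

-- Rebuilt table: TaskNo is a plain bigint with a sequence-backed default
CREATE TABLE dbo.Tasks_New
(
    Id      uniqueidentifier NOT NULL,
    Created datetime         NOT NULL,
    TaskNo  bigint           NOT NULL
        CONSTRAINT DF_Tasks_TaskNo DEFAULT (NEXT VALUE FOR dbo.TaskNoSequence),
    CONSTRAINT PK_Tasks_New PRIMARY KEY CLUSTERED (Id)
);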
Hi, I would like to ask how to partition the following table (see below). The problem I'm having is not in the retrieval of history records, which was resolved by the clustered index. But as you can see, the index is based on HistoryParameterID and then SourceTimeStamp, which is needed because rows are retrieved based on those columns.
The problem here is that once the table reaches ~1 billion records, inserts slow down. The scenario is roughly 15k rows/second to be inserted (note this can be 30k - 100k), and each row corresponds to a HistoryParameterID.
Basically, HistoryParameterID is not unique; it has a one-to-many relationship with the other columns of the table below.
My hunch is that the index slows down the inserts, because inserts are not always at the end of the table, since rows are arranged by HistoryParameterID.
I did some testing using the timestamp as the index key, but to no avail, since query performance was unacceptable.
Is there any way to partition this by HistoryParameterID? I tried it by creating 15k tables for a partitioned view, but when I created the view it didn't finish executing. Any tips? Or is there any other way to partition? Please note that I'm using Standard Edition and using Enterprise Edition is not an option.
CREATE TABLE [dbo].[HistorySampleValues]
(
[HistoryParameterID] [int] NOT NULL,
[SourceTimeStamp] [datetime2](7) NOT NULL,
[ArchiveTimestamp] [datetime2](7) NOT NULL CONSTRAINT [DF__HistorySa__Archi__2A164134] DEFAULT (getutcdate()),
[ValueStatus] [int] NOT NULL,
[ArchiveStatus] [int] NOT NULL,
[IntegerValue] [bigint] SPARSE NULL,
[DoubleValue] [float] SPARSE NULL,
[StringValue] [varchar](100) SPARSE NULL,
[EnumNamedSetName] [varchar](100) SPARSE NULL,
[EnumNumericValue] [int] SPARSE NULL,
[EnumTextualValue] [varchar](256) SPARSE NULL
) ON [PRIMARY]
CREATE CLUSTERED INDEX [Source_HistParameterID_Index] ON [dbo].[HistorySampleValues]
(
[HistoryParameterID] ASC,
[SourceTimeStamp] ASC
) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON)
GO
I tried it by creating 15k tables for a partitioned view, but when I created the view it didn't finish executing. Any tips? Or is there any other way to partition? Please note that I'm using Standard Edition and using Enterprise Edition is not an option.
If you go down the partitioned view path (http://technet.microsoft.com/en-us/library/ms190019.aspx), I suggest fewer tables (under one hundred). Without partitioned tables, the optimizer must go through a lot of work since each table of the view could be indexed differently.
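For illustration, a minimal sketch of a local partitioned view over ranges of HistoryParameterID (member table names and boundary values are made up; only a few columns are shown, and an updatable partitioned view has further requirements, such as the partitioning column being part of the primary key):
CREATE TABLE dbo.HistorySampleValues_0001_0500
(
    HistoryParameterID int NOT NULL CHECK (HistoryParameterID BETWEEN 1 AND 500),
    SourceTimeStamp    datetime2(7) NOT NULL,
    ValueStatus        int NOT NULL
    -- ... remaining columns as in the original table ...
);

CREATE TABLE dbo.HistorySampleValues_0501_1000
(
    HistoryParameterID int NOT NULL CHECK (HistoryParameterID BETWEEN 501 AND 1000),
    SourceTimeStamp    datetime2(7) NOT NULL,
    ValueStatus        int NOT NULL
);
GO

-- The CHECK constraints let the optimizer eliminate member tables that cannot match
CREATE VIEW dbo.HistorySampleValuesAll
AS
SELECT HistoryParameterID, SourceTimeStamp, ValueStatus FROM dbo.HistorySampleValues_0001_0500
UNION ALL
SELECT HistoryParameterID, SourceTimeStamp, ValueStatus FROM dbo.HistorySampleValues_0501_1000;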
I would not expect inserts to slow down with table size if HistoryParameterID is incremental. However, in the case of a random value, inserts will become progressively slower as the table size grows due to lower buffer cache efficiency. That problem will exist with a single table, partitioned table, or partitioned view. See http://www.dbdelta.com/improving-uniqueidentifier-performance/ for an example using a guid but the issue applies to any random key value.
You might try a single table with SourceTimeStamp alone as the clustered index key and a non-clustered index on HistoryParameterID and SourceTimeStamp. That would provide the best insert performance, and the non-clustered index (maybe with included columns) might be good enough for your select queries.
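A minimal sketch of that layout (the index names and included columns are illustrative; the right INCLUDE list depends on the actual select queries):
-- Inserts arrive roughly in time order, so this keeps new rows at the end of the table
CREATE CLUSTERED INDEX CIX_HistorySampleValues_SourceTimeStamp
    ON dbo.HistorySampleValues (SourceTimeStamp);

-- Seek on parameter + time range for the selects, covering a couple of value columns
CREATE NONCLUSTERED INDEX IX_HistorySampleValues_ParameterID_SourceTimeStamp
    ON dbo.HistorySampleValues (HistoryParameterID, SourceTimeStamp)
    INCLUDE (ValueStatus, DoubleValue);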
Everything you need is here; I hope you can figure it out:
http://msdn.microsoft.com/en-us/library/ms188730.aspx
For Standard Edition, alternative solutions exist, like this answer.
This is an interesting article too.
We also implemented this in our enterprise automation application, with custom indexing around the users table, and it worked well.
Here are the pros and cons of a custom implementation:
Pros:
Higher performance than a partitioned table, because the application is aware of its own logic.
Cons:
You must implement the routing method and maintain the indexes yourself.
Decentralized data.
A colleague wrote a query which uses the hints "with (NOLOCK,NOWAIT)".
e.g.
select first_name, last_name, age
from people with (nolock,nowait)
Assumptions:
NOLOCK says "don't worry about any locks at any level, just read the data now"
NOWAIT says "don't wait, just error if the table is locked"
Question:
Why use both at the same time? Surely NOWAIT will never be realised, as NOLOCK means it wouldn't wait for locks anyway ... ?
It's redundant (or at least, ineffective). In one query window, execute:
create table T (ID int not null)
begin transaction
alter table T add ID2 int not null
Leave this window open, open another query window, and execute:
select * from T WITH (NOLOCK,NOWAIT)
Despite the NOWAIT hint, and despite it being documented as returning a message as soon as any lock is encountered, this second query will hang, waiting on the schema modification (Sch-M) lock.
Read the documentation on Table Hints:
NOWAIT:
Instructs the Database Engine to return a message as soon as a lock is encountered on the table
Note that this is talking about a lock, any lock.
NOLOCK (well, actually READUNCOMMITTED):
READUNCOMMITTED and NOLOCK hints apply only to data locks. All queries, including those with READUNCOMMITTED and NOLOCK hints, acquire Sch-S (schema stability) locks during compilation and execution. Because of this, queries are blocked when a concurrent transaction holds a Sch-M (schema modification) lock on the table.
So, NOLOCK does need to wait for some locks.
NOLOCK is the same as READUNCOMMITTED, for which MSDN states:
... exclusive locks set by other transactions do not block the current
transaction from reading the locked data.
Based on that sentence, I would say you are correct and that issuing NOLOCK effectively means any data locks are irrelevant, so NOWAIT is redundant as the query can't be blocked.
However, the article goes on to say:
READUNCOMMITTED and NOLOCK hints apply only to data locks
You can also get schema modification locks, and NOLOCK cannot ignore these. If you issued a query with NOLOCK whilst a schema object was being updated, it is possible your query would be blocked by a lock of type Sch-M.
It would be interesting to see if in that unlikely case the NOWAIT is actually respected. However for your purposes, I would guess it's probably redundant.
It does not make any sense to use them together; NOLOCK overrides the behavior of NOWAIT. Here's a demonstration of the NOWAIT functionality. Comment in the NOLOCK and watch the records return despite the exclusive lock.
Create the table. Execute the first SSMS window without committing the transaction. Execute the second window and get an error because of NOWAIT. Comment out the first query, execute the second query with NOLOCK and NOWAIT, and get results. Roll back your transaction when you are done.
DDL
USE [tempbackup]
GO
/****** Object: Table [TEST_TABLE] Script Date: 02/19/2014 09:14:00 ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
SET ANSI_PADDING ON
GO
CREATE TABLE [TEST_TABLE](
[ID] [int] IDENTITY(1,1) NOT NULL,
[Name] [varchar](50) NULL,
CONSTRAINT [PK_TEST_TABLE] PRIMARY KEY CLUSTERED
(
[ID] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
GO
SET ANSI_PADDING OFF
GO
INSERT INTO tempbackup.dbo.TEST_TABLE(Name) VALUES ('MATT')
GO
SSMS WINDOW 1
BEGIN TRANSACTION
UPDATE tempbackup.dbo.TEST_TABLE WITH(XLOCK) SET Name = 'RICHARD' WHERE ID = 1
--ROLLBACK TRANSACTION
SSMS WINDOW 2
SELECT * FROM tempbackup.dbo.TEST_TABLE WITH(NOWAIT)
--SELECT * FROM tempbackup.dbo.TEST_TABLE WITH(NOLOCK,NOWAIT)
I'm inserting into a SQL database from multiple processes. It's likely that the processes will sometimes try to insert duplicate data into the table. I've tried to write the query in a way that will handle the duplicates but I still get:
System.Data.SqlClient.SqlException: Violation of UNIQUE KEY constraint 'UK1_MyTable'. Cannot insert duplicate key in object 'dbo.MyTable'.
The statement has been terminated.
My query looks something like:
INSERT INTO MyTable (FieldA, FieldB, FieldC)
SELECT FieldA='AValue', FieldB='BValue', FieldC='CValue'
WHERE (SELECT COUNT(*) FROM MyTable WHERE FieldA='AValue' AND FieldB='BValue' AND FieldC='CValue' ) = 0
The constraint 'UK1_MyTable' says that in MyTable, the combination of the 3 fields should be unique.
My questions:
Why doesn't this work?
What modification do I need to make so there is no chance of an exception due to the constraint violation?
Note that I'm aware that there are other approaches to solving the original problem of "INSERT if not exists" such as (in summary):
Using TRY CATCH
IF NOT EXIST INSERT (inside a transaction with serializable isolation)
Should I be using one of the approaches?
Edit 1: SQL for creating the table:
CREATE TABLE [dbo].[MyTable](
[Id] [bigint] IDENTITY(1,1) NOT NULL,
[FieldA] [bigint] NOT NULL,
[FieldB] [int] NOT NULL,
[FieldC] [char](3) NULL,
[FieldD] [float] NULL,
CONSTRAINT [PK_MyTable] PRIMARY KEY NONCLUSTERED
(
[Id] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON),
CONSTRAINT [UK1_MyTable] UNIQUE NONCLUSTERED
(
[FieldA] ASC,
[FieldB] ASC,
[FieldC] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON)
)
Edit 2: Decision:
Just to update this - I've decided to use the "JFDI" implementation suggested in the linked question (link). Although I'm still curious as to why the original implementation doesn't work.
Why doesn't this work?
Under the default READ COMMITTED isolation level, SQL Server releases shared locks as soon as they are no longer needed. Your sub-query will result in a short-lived shared (S) lock on the table, which will be released as soon as the sub-query completes.
At this point there is nothing to prevent a concurrent transaction from inserting the very row you just verified was not present.
What modification do I need to make so there is no chance of an exception due to the constraint violation?
Adding the HOLDLOCK hint to your sub-query will instruct SQL Server to hold on to the lock until the transaction is completed. (In your case, this is an implicit transaction.) The HOLDLOCK hint is equivalent to the SERIALIZABLE hint, which itself is equivalent to the serializable transaction isolation level to which you refer in your list of "other approaches".
The HOLDLOCK hint alone would be sufficient to retain the S lock and prevent a concurrent transaction from inserting the row you are guarding against. However, you will likely find your unique key violation error replaced by deadlocks, occurring at the same frequency.
If you're retaining only an S lock on the table, consider a race between two concurrent attempts to insert the same row, proceeding in lockstep -- both succeed in acquiring an S lock on the table, but neither can succeed in acquiring the Exclusive (X) lock required to execute the insert.
Luckily there is another lock type for this exact scenario, called the Update (U) lock. The U lock is identical to an S lock with the following difference: whilst multiple S locks can be held simultaneously on the same resource, only one U lock may be held at a time. (Said another way, whilst S locks are compatible with each other (i.e. can coexist without conflict), U locks are not compatible with each other, but can coexist alongside S locks; and further along the spectrum, Exclusive (X) locks are not compatible with either S or U locks)
You can upgrade the implicit S lock on your sub-query to a U lock using the UPDLOCK hint.
Two concurrent attempts to insert the same row in the table will now be serialized at the initial select statement, since this acquires (and holds) a U lock, which is not compatible with another U lock from the concurrent insertion attempt.
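Putting the two hints together, a minimal sketch of the guarded insert (this mirrors the query from the question; it is a sketch, not a drop-in replacement):
INSERT INTO MyTable (FieldA, FieldB, FieldC)
SELECT FieldA = 'AValue', FieldB = 'BValue', FieldC = 'CValue'
WHERE (SELECT COUNT(*)
       FROM MyTable WITH (UPDLOCK, HOLDLOCK)   -- U lock, held until the end of the transaction
       WHERE FieldA = 'AValue' AND FieldB = 'BValue' AND FieldC = 'CValue') = 0;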
NULL values
A separate problem may arise from the fact that FieldC allows NULL values.
If ANSI_NULLS is ON (the default), the equality check FieldC = NULL does not evaluate to true, even in the case where FieldC is NULL (you must use the IS NULL operator to check for NULL when ANSI_NULLS is ON). Since FieldC is nullable, your duplicate check will not work when inserting a NULL value.
To correctly deal with NULLs you will need to modify your existence-check sub-query to use the IS NULL operator rather than = when a NULL value is being inserted. (Or you can change the table to disallow NULLs in all the columns concerned.)
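For example, a NULL-safe version of the whole statement might look like this (a sketch; the variables and values are illustrative):
DECLARE @FieldA bigint = 1, @FieldB int = 2, @FieldC char(3) = NULL;

INSERT INTO MyTable (FieldA, FieldB, FieldC)
SELECT @FieldA, @FieldB, @FieldC
WHERE NOT EXISTS (SELECT *
                  FROM MyTable WITH (UPDLOCK, HOLDLOCK)
                  WHERE FieldA = @FieldA
                    AND FieldB = @FieldB
                    AND (FieldC = @FieldC OR (FieldC IS NULL AND @FieldC IS NULL)));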
SQL Server Books Online References
Locking Hints
Lock Compatibility Matrix
ANSI_NULLS
RE: "I'm still curious as to why the original implementation doesn't work."
Why would it work?
What is there to prevent two concurrent transactions being interleaved as follows?
Tran A                                  Tran B
----------------------------------------------------------------
SELECT COUNT(*)...
                                        SELECT COUNT(*)...
INSERT ....
                                        INSERT... (duplicate key violation).
The only time conflicting locks will be taken is at the Insert stage.
To see this in SQL Profiler
Create Table Script
create table MyTable
(
FieldA int NOT NULL,
FieldB int NOT NULL,
FieldC int NOT NULL
)
create unique nonclustered index ix on MyTable(FieldA, FieldB, FieldC)
Then paste the below into two different SSMS windows. Take a note of the spids of the connections (x and y) and set up a SQL Profiler Trace capturing locking events and user error messages. Apply filters of spid=x or y and severity = 0 and then execute both scripts.
Insert Script
DECLARE @FieldA INT, @FieldB INT, @FieldC INT
SET NOCOUNT ON
SET CONTEXT_INFO 0x696E736572742074657374
BEGIN TRY
WHILE 1=1
BEGIN
SET @FieldA=( (CAST(GETDATE() AS FLOAT) - FLOOR(CAST(GETDATE() AS FLOAT))) * 24 * 60 * 60 * 300)
SET @FieldB = @FieldA
SET @FieldC = @FieldA
RAISERROR('beginning insert',0,1) WITH NOWAIT
INSERT INTO MyTable (FieldA, FieldB, FieldC)
SELECT FieldA=@FieldA, FieldB=@FieldB, FieldC=@FieldC
WHERE (SELECT COUNT(*) FROM MyTable WHERE FieldA=@FieldA AND FieldB=@FieldB AND FieldC=@FieldC ) = 0
END
END TRY
BEGIN CATCH
DECLARE @message VARCHAR(500)
SELECT @message = 'in catch block ' + ERROR_MESSAGE()
RAISERROR(@message,0,1) WITH NOWAIT
DECLARE @killspid VARCHAR(10)
SELECT @killspid = 'kill ' +CAST(SPID AS VARCHAR(4)) FROM sys.sysprocesses WHERE SPID!=@@SPID AND CONTEXT_INFO = (SELECT CONTEXT_INFO FROM sys.sysprocesses WHERE SPID=@@SPID)
EXEC ( @killspid )
END CATCH
Off the top of my head, I have a feeling one or more of those columns accepts nulls. I would like to see the create statement for the table including the constraint.
This is a continuation from When I update/insert a single row should it lock the entire table?
Here is my problem.
I have a table that holds locks so that other processes in the system don't have to take locks out on common resources, but can still queue tasks so that they get executed one at a time.
When I access a record in this locks table I want to be able to lock it and update it (just the one record) without any other process being able to do the same. I am able to do this with a lock hint such as updlock.
What happens, though, is that even though I'm using a rowlock to lock the record, it blocks a request from another process to alter a completely unrelated row in the same table that would also have specified the updlock hint along with rowlock.
You can recreate this by making a table:
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
SET ANSI_PADDING ON
GO
CREATE TABLE [dbo].[Locks](
[ID] [int] IDENTITY(1,1) NOT NULL,
[LockName] [varchar](50) NOT NULL,
[Locked] [bit] NOT NULL,
CONSTRAINT [PK_Locks] PRIMARY KEY CLUSTERED
(
[ID] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON, FILLFACTOR = 100) ON [PRIMARY]
) ON [PRIMARY]
GO
SET ANSI_PADDING OFF
GO
ALTER TABLE [dbo].[Locks] ADD CONSTRAINT [DF_Locks_LockName] DEFAULT ('') FOR [LockName]
GO
ALTER TABLE [dbo].[Locks] ADD CONSTRAINT [DF_Locks_Locked] DEFAULT ((0)) FOR [Locked]
GO
Add two rows, one for a lock with LockName='A' and one for LockName='B'.
Then create two queries to run in a transaction at the same time against it:
Query 1:
Commit
Begin transaction
select * From Locks with (updlock rowlock) where LockName='A'
Query 2:
select * From Locks with (updlock rowlock) where LockName='B'
Please note that I am leaving the transaction open so that you can see this issue since it wouldn’t be visible without this open transaction.
When you run Query 1, locks are issued for the row, and any subsequent queries for LockName='A' will have to wait. This behaviour is correct.
Where this gets a bit frustrating is that when you run Query 2, you are blocked until Query 1 finishes, even though these are unrelated records. If you then run Query 1 again just as I have it above, it will commit the previous transaction, Query 2 will run, and then Query 1 will once again lock the record.
Please offer some suggestions as to how I might be able to have it properly lock ONLY the one row and not prevent other items from being updated as well.
PS. Holdlock also fails to produce the correct behaviour after one of the rows is updated.
In SQL Server, the lock hints are applied to the objects scanned, not matched.
Normally, the engine places a shared lock on the objects (pages etc) while reading them and lifts them (or does not lift in SERIALIZABLE transactions) after the scanning is done.
However, you instruct the engine to place (and lift) the update locks which are not compatible with each other.
Transaction B blocks while trying to place an UPDLOCK on the row already locked with an UPDLOCK by transaction A.
If you create an index and force its usage (so that no conflicting reads ever occur), your queries will not block each other:
CREATE INDEX ix_locks_lockname ON locks (lockname)
Begin transaction
select * From Locks with (updlock rowlock INDEX (ix_locks_lockname)) where LockName='A'
Begin transaction
select * From Locks with (updlock rowlock INDEX (ix_locks_lockname)) where LockName='B'
For query 2, try using the READPAST hint - this (quote):
Specifies that the Database Engine not read rows that are locked by other transactions. Under most circumstances, the same is true for pages. When READPAST is specified, both row-level and page-level locks are skipped. That is, the Database Engine skips past the rows or pages instead of blocking the current transaction until the locks are released.
This is typically used in queue-processing type environments - so multiple processes can pull off the next item from a queue table without being blocked out by other processes (of course, using UPDLOCK to prevent multiple processes picking up the same row).
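A minimal sketch of that queue-style pattern against the Locks table from the question (illustrative only; how a row is chosen and what "claiming" it means would depend on the real workload):
BEGIN TRANSACTION;

-- Claim one unlocked row; rows locked by other sessions are skipped rather than waited on
UPDATE TOP (1) L
SET    Locked = 1
OUTPUT inserted.ID, inserted.LockName
FROM   dbo.Locks AS L WITH (UPDLOCK, ROWLOCK, READPAST)
WHERE  L.Locked = 0;

-- ... do the work for the claimed row, then ...
COMMIT TRANSACTION;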
Edit 1:
It could be caused by not having an index on the LockName field. With the index, query 2 could do an index seek to the exact row; without it, it would do a scan (checking every row), meaning it gets held up by the first transaction. So if it's not indexed, try indexing it.
I am not sure what you are trying to accomplish, but typically those who are dealing with similar problems want to use sp_getapplock. This is covered by Tony Rogerson: Assisting Concurrency by creating your own Locks (Mutexs in SQL).
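For reference, a minimal sp_getapplock sketch (the resource name is made up; with @LockOwner = 'Transaction' the lock is released automatically at commit or rollback):
BEGIN TRANSACTION;

DECLARE @result int;
EXEC @result = sp_getapplock
        @Resource    = 'LockName-A',
        @LockMode    = 'Exclusive',
        @LockOwner   = 'Transaction',
        @LockTimeout = 5000;          -- ms; a negative return value means the lock was not granted

IF @result >= 0
BEGIN
    -- ... do the serialized work for resource 'A' here ...
    COMMIT TRANSACTION;
END
ELSE
    ROLLBACK TRANSACTION;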
If you want queueing in SQL Server, use UPDLOCK, ROWLOCK, READPAST hints. It works.
I'd consider changing your approach rather than trying to change SQL Server behaviour...