How does SQL Server lock rows? - sql-server

I have a question about how to optimize a database. We have a SQL Server database whose main data is stored vertically (one attribute per row), so a record has these columns:
ID version fieldindex fieldvalue
and (ID, version, fieldindex) is the primary key.
So if you want to load a logical recordset, you have to load all lines for an ID + version. One "record" can consist of around 60 lines. I'm afraid this causes performance problems in the database.
There are around 10 users working in parallel in the application, and we get deadlocks very often. We even get deadlocks when inserting new lines, even though a record that isn't persisted yet shouldn't be lockable.
So my question is: how does SQL Server lock records? Is it possible that a record is locked even if I am not selecting that particular record?
I would be glad if someone could explain how the database works.
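One way to see this for yourself is to open a transaction, touch a single "line", and look at the locks the session holds. A minimal sketch against the schema described above (the table name and key values are made up):
BEGIN TRANSACTION;

    -- Touch one line of a logical record
    UPDATE dbo.RecordData
    SET    fieldvalue = N'x'
    WHERE  ID = 1 AND version = 1 AND fieldindex = 5;

    -- Expect a KEY lock on the modified row plus intent (IX) locks
    -- on its page and on the table itself
    SELECT resource_type, request_mode, request_status, resource_description
    FROM   sys.dm_tran_locks
    WHERE  request_session_id = @@SPID;

ROLLBACK TRANSACTION;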

You've got EAV, which is generally considered bad.
To make EAV work acceptably, you'll need the right indexing structure and possibly some care with lock hints and transactions.
Generally you'll want your clustered index to be (EntityID, AttributeId) so that all the attributes for an entity are co-located. But to avoid deadlocks you may need to X lock the main Entity row when modifying its attributes: SQL Server will otherwise use row locking on the AttributeValue table, which can lead to deadlocks and logical inconsistencies. You can X lock it by modifying the row as the first operation in your transaction, or by reading it with an XLOCK hint.
Depending on the role of "version" in your system, it will be somewhere in the clustered index too. If attributes are individually versioned, then at the end. If individual Entities are versioned, then in the middle. And if a Version contains multiple Entities, then as the leading column.
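A minimal sketch of that layout and locking pattern, with made-up table and column names since the question doesn't show the real schema (here the entity itself is versioned, so Version sits in the middle of the key):
CREATE TABLE dbo.Entity
(
    EntityID int NOT NULL,
    Version  int NOT NULL,
    CONSTRAINT PK_Entity PRIMARY KEY CLUSTERED (EntityID, Version)
);

CREATE TABLE dbo.EntityAttributeValue
(
    EntityID    int           NOT NULL,
    Version     int           NOT NULL,
    AttributeId int           NOT NULL,
    FieldValue  nvarchar(400) NULL,
    CONSTRAINT PK_EntityAttributeValue
        PRIMARY KEY CLUSTERED (EntityID, Version, AttributeId)
);

-- Serialize writers per entity: take the exclusive lock on the parent row first,
-- then modify the attribute rows inside the same transaction.
BEGIN TRANSACTION;

    SELECT EntityID
    FROM   dbo.Entity WITH (XLOCK, ROWLOCK)
    WHERE  EntityID = 1 AND Version = 1;

    UPDATE dbo.EntityAttributeValue
    SET    FieldValue = N'new value'
    WHERE  EntityID = 1 AND Version = 1 AND AttributeId = 5;

COMMIT TRANSACTION;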

Related

Set Null works in Sqlite but does not work in Sql Server when a table references itself

The problem is that "DeleteBehavior.SetNull" works only in Sqlite and doesn't work at all in Sql Server. Is this some limitation of Sql Server with SET NULL?
I have the "User" model:
User.Id
User.Name
And I also have the "Partner" model:
Partner.Id
Partner.Title
Partner.ParentId
Partner.Parent (virtual)
Scenario:
I create Partner 1
I create Partner 2 and define that the ParentId is Partner 1 (1 is the father of 2)
I try to delete Partner 1 (I try to delete the parent)
At that moment, Sqlite sets the ParentId of Partner 2 to NULL, which is correct and is the behavior I want. But in SQL Server I can't do that at all; I tried countless approaches and ran into the errors below:
Errors:
Delete Error:
Microsoft.Data.SqlClient.SqlException (0x80131904): The DELETE statement conflicted with the SAME TABLE REFERENCE constraint "FK_Partners_Partners_ParentId". The conflict occurred in database "master", table "dbo.Partners", column 'ParentId'.
Migrations Error:
Introducing FOREIGN KEY constraint 'FK_Partners_Partners_ParentId' on table 'Partners' may cause cycles or multiple cascade paths. Specify ON DELETE NO ACTION or ON UPDATE NO ACTION, or modify other FOREIGN KEY constraints.
Could not create constraint or index. See previous errors.
I even found some old texts saying that this is a Sql Server limitation, but it's already 2023 and this limitation still exists? Is there an easy way to work around it that applies to every table in the database?
I already tried all the DeleteBehavior values and none of them works like Sqlite. I was programming 100% in Sqlite and managed to develop a system where everything works, but when generating the migration and trying to use Sql Server I ran into this problem.
The same thing is asked at dba.stackexchange.com. The answers explain in detail why this isn't so easy to implement. Relational databases operate on sets of rows at a time, not individual rows. Deleting or updating rows one by one is the slowest way possible.
While SQLite is built to handle a few thousand rows for a single application running inside a watch, SQL Server has to handle thousands of concurrent operations on the same table, which may contain several million rows spread across multiple partitions. The self-referencing ON DELETE SET NULL has to work reliably and predictably when deleting 1 row in a 1,000-row table and when deleting 10K rows in a 50M-row table.
As Mikael Eriksson explains in the first answer, ON DELETE SET NULL converts a DELETE operation on a table to an UPDATE operation on the same table.
This DBA question on cascading DELETEs shows what's involved in the easy case:
In this picture the server:
Finds the rows that need to be deleted in the first table,
Removes them from the parent table, which means marking rows and pages for deletion and writing records to the transaction log,
Spools the deleted keys so they can be used on the related table,
Repeats steps 1-2 on the child table,
When all that finishes, commits the transaction by committing all changes in the data pages and the transaction log.
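For reference, the "easy case" is an ordinary two-table cascade, something like this sketch (the table names are illustrative, not from the linked question):
CREATE TABLE dbo.Parent (Id int NOT NULL PRIMARY KEY);

CREATE TABLE dbo.Child
(
    Id       int NOT NULL PRIMARY KEY,
    ParentId int NOT NULL
        REFERENCES dbo.Parent (Id) ON DELETE CASCADE
);

-- One statement: the matching Child rows are removed along with the Parent row.
DELETE FROM dbo.Parent WHERE Id = 1;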
And that's a single operation. ON DELETE SET NULL, on the other hand, converts the DELETE operation into a DELETE and an UPDATE on the same table. The database would have to both DELETE and UPDATE index rows on the ParentID index to make this happen. Different kinds of locks would have to be taken, and some of them could conflict with each other.
There's a similar statement that does multiple operations at once, MERGE. Aaron Bertrand's Use Caution with SQL Server's MERGE Statement shows a list of 30 bugs for that statement alone. MERGE isn't even atomic and the UPDATE/DELETE/INSERT operations are executed separately, which is the cause of some of the bugs.
I'd rather not have ON DELETE SET NULL at all than have a slow or unreliable one.
While trying to reproduce this I found an SQLite limitation - foreign keys aren't enforced by default for compatibility with the way it worked over a decade ago. The docs warn this can change in the future:
Foreign key constraints are disabled by default (for backwards compatibility), so must be enabled separately for each database connection. (Note, however, that future releases of SQLite might change so that foreign key constraints enabled by default. Careful developers will not make any assumptions about whether or not foreign keys are enabled by default but will instead enable or disable them as necessary.) The application can also use a PRAGMA foreign_keys statement to determine if foreign keys are currently enabled.
This can seem like an illogical restriction in 2023 until one remembers that SQLite was built to run on the weakest possible devices (microcontrollers, not even processors) where the very fact of checking constraints can cause significant problems. Those devices can easily be inside a car or other hardware device with a lifetime of decades.
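For what it's worth, a common workaround (a minimal sketch, not something the answer above prescribes; the schema mirrors the Partners table from the question) is to keep the self-referencing foreign key as NO ACTION and detach the children yourself before deleting the parent:
CREATE TABLE dbo.Partners
(
    Id       int IDENTITY(1,1) NOT NULL PRIMARY KEY,
    Title    nvarchar(100) NOT NULL,
    ParentId int NULL,
    CONSTRAINT FK_Partners_Partners_ParentId
        FOREIGN KEY (ParentId) REFERENCES dbo.Partners (Id)
        -- no ON DELETE clause: a self-referencing SET NULL/CASCADE is rejected,
        -- so the constraint stays NO ACTION
);

DECLARE @ParentToDelete int = 1;

BEGIN TRANSACTION;

    UPDATE dbo.Partners SET ParentId = NULL WHERE ParentId = @ParentToDelete;
    DELETE FROM dbo.Partners WHERE Id = @ParentToDelete;

COMMIT TRANSACTION;
In EF Core this is roughly what DeleteBehavior.ClientSetNull does, except it only nulls out dependents that are currently loaded into the change tracker.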

Avoiding Locking Contention on DB2 zOS

I want to place DB2 Triggers for Insert, Update and Delete on DB2 Tables heavily used in parallel online Transactions. The tables are shared by several members on a Sysplex, DB2 Version 10.
In each of the DB2 Triggers I want to insert a row into a central table and have one background process calling a Stored Procedure to read this table every second to process the newly inserted rows, ordered by sequence of the insert (sequence number or timestamp).
I'm very concerned about DB2 Index locking contention and want to make sure that I do not introduce Deadlocks/Timeouts to the applications with these Triggers.
Obviously I would take advantage of DB2 features to reduce locking, like row-level locking, but I still see no really good approach to avoiding index contention.
I see three different options to select the newly inserted rows.
Put a sequence number in the table and store the last processed sequence number in the background process. I would run the following SELECT statement:
SELECT COLUMN_1, .... Column_n
FROM CENTRAL_TABLE
WHERE SEQ_NO > 'last-seq-number'
ORDER BY SEQ_NO;
Locking level must be CS to avoid selecting uncommitted rows, which might later be rolled back.
I think I need one Index on the table with SEQ_NO ASC
Pro: Background process only reads rows and makes no updates/deletes (only shared locks)
Con: Index contention because of the ascending key used.
I can clean-up processed records later (e.g. by rolling partions).
Put a Status field in the table (processed and unprocessed) and change the Select as follows:
SELECT COLUMN_1, .... Column_n
FROM CENTRAL_TABLE
WHERE STATUS = 'unprocessed'
ORDER BY TIMESTAMP;
Later I would update the STATUS on the selected rows to "processed"
I think I need an Index on STATUS
Pro: No ascending sequence number in the index and no direct deletes
Cons: Concurrent updates by online transactions and the background process
Clean-up would happen in off-hours
DELETE the processed records instead of updating the status field.
SELECT COLUMN_1, .... Column_n
FROM CENTRAL_TABLE
ORDER BY TIMESTAMP;
Since the table contains very few records, no index is required which could create a hot spot.
Also I think I could SELECT with Isolation Level UR, because I would detect potential uncommitted data on the later delete of this row.
For a primary key index I could use GENERATE_UNIQUE, which is random and not ascending.
Pro: No Index hot spot and the Inserts can be spread across the tablespace by random UNIQUE_ID
Con: Tablespace scan and sort on every call of the Stored Procedure and deleting records in parallel to the online inserts.
Looking forward to what the community thinks about this problem. This must be a pretty common problem; e.g. SAP should have a similar issue with their Batch Input tables.
I tend to favour Option 3, because it avoids index contention.
Maybe someone out there still has another solution in mind.
I think you are going to have numerous performance problems with your various solutions.
(I know premature optimization is a sin, but experience tells us that some things are just not going to work in a busy system.)
You should be able to use DB2's autoincrement feature to get your sequence number, with little or no performance implications.
For the rest perhaps you should look at a Queue based solution.
Have your trigger drop the operation (INSERT/UPDATE/DELETE) and the keys of the row onto an MQ queue.
Then have a long-running background task (in CICS?) do your post-processing. As it processes one update at a time, you should not trip over yourself. Having a single loaded and active task with the ability to batch up units of work should give you throughput in the order of three to five hundred updates a second.
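If it helps, here is a rough DB2 sketch of a central table whose SEQ_NO is generated by DB2 itself, fed by an insert trigger; the source table and column names (ORDERS, ORDER_ID) are assumptions, not from the question:
CREATE TABLE CENTRAL_TABLE
(
    SEQ_NO     BIGINT    NOT NULL GENERATED ALWAYS AS IDENTITY,
    SRC_KEY    CHAR(20)  NOT NULL,
    OPERATION  CHAR(1)   NOT NULL,
    CREATED_TS TIMESTAMP NOT NULL WITH DEFAULT
);

-- One trigger per operation; this one records inserts on a hypothetical ORDERS table.
CREATE TRIGGER ORDERS_AI
    AFTER INSERT ON ORDERS
    REFERENCING NEW AS N
    FOR EACH ROW MODE DB2SQL
BEGIN ATOMIC
    INSERT INTO CENTRAL_TABLE (SRC_KEY, OPERATION)
        VALUES (N.ORDER_ID, 'I');
END
The background process then polls with the Option 1 style SELECT and remembers the highest SEQ_NO it has processed.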

SQL Server Optimize Large Changing Table

I have reports that perform some time consuming data calculations for each user in my database, and the result is 10 to 20 calculated new records for each user. To improve report responsiveness, a nightly job was created to run the calculations and dump the results to a snapshot table in the database. It only runs for active users.
So with 50k users, 30k of which are active, the job "updates" 300k to 600k records in the large snapshot table. The method it currently uses is to delete all previous records for a given user and then insert the new set. There is no PK on the table; only a business key is used to group the sets of data.
So my question is, when removing and adding up to 600k records every night, are there techniques to optimize the table to handle this? For instance, since the data can be recreated on demand, is there a way to disable logging for the table as these changes are made?
UPDATE:
One issue is that I cannot do this in a batch because of the way the script works: it examines one user at a time, so it looks at a user, deletes the previous 10-20 records, and inserts a new set of 10-20 records, over and over. I am worried that the transaction log will run out of space or that other performance issues could occur. I would like to configure the table to not worry about data preservation or other things that could slow it down. I cannot drop the indexes and so on, because people are accessing the table concurrently while it is being updated.
It's also worth noting that indexing could potentially speed up this bulk update rather than slow it down, because UPDATE and DELETE statements still need to be able to locate the affected rows in the first place, and without appropriate indexes it will resort to table scans.
I would, at the very least, consider a non-clustered index on the column(s) that identify the user, and (assuming you are using 2008) consider the MERGE statement, which can definitely avoid the shortcomings of the mass DELETE/INSERT method currently employed.
According to The Data Loading Performance Guide (MSDN), MERGE is minimally logged for inserts with the use of a trace flag.
I won't say too much more until I know which version of SQL Server you are using.
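Since the real schema isn't shown, here is only a rough sketch of what a per-user MERGE could look like; the table and column names (UserSnapshot, CalculatedResults, MetricId, MetricValue) are invented:
DECLARE @UserId int = 42;   -- the user currently being refreshed

MERGE dbo.UserSnapshot AS target
USING (SELECT UserId, MetricId, MetricValue
       FROM   dbo.CalculatedResults
       WHERE  UserId = @UserId) AS source
    ON  target.UserId   = source.UserId
    AND target.MetricId = source.MetricId
WHEN MATCHED THEN
    UPDATE SET target.MetricValue = source.MetricValue
WHEN NOT MATCHED BY TARGET THEN
    INSERT (UserId, MetricId, MetricValue)
    VALUES (source.UserId, source.MetricId, source.MetricValue)
WHEN NOT MATCHED BY SOURCE AND target.UserId = @UserId THEN
    DELETE;   -- removes only this user's stale snapshot rows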
This is called a bulk insert: you have to drop all indexes on the destination table and send the insert commands in large batches (hundreds of insert statements separated by semicolons).
Another way is to use the BULK INSERT statement http://msdn.microsoft.com/en-us/library/ms188365.aspx
but it involves dumping the data to a file.
See also: Bulk Insert Sql Server millions of record
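For illustration, a minimal BULK INSERT sketch; the table name and file path are placeholders, and the file has to exist on the server:
BULK INSERT dbo.UserSnapshot
FROM 'C:\exports\user_snapshot.csv'
WITH
(
    FIELDTERMINATOR = ',',
    ROWTERMINATOR   = '\n',
    TABLOCK          -- helps the load qualify for minimal logging
);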
It really depends upon many things
speed of your machine
size of the records being processed
network speed
etc.
Generally it is quicker to add records to a "heap" or an un-indexed table. So dropping all of your indexes and re-creating them after the load may improve your performance.
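A hedged sketch of that idea, assuming a hypothetical non-clustered index name (disabling the clustered index would make the table unreadable, so only non-clustered indexes are touched):
ALTER INDEX IX_UserSnapshot_UserId ON dbo.UserSnapshot DISABLE;

-- ... run the nightly delete/insert or bulk load here ...

ALTER INDEX IX_UserSnapshot_UserId ON dbo.UserSnapshot REBUILD;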
You may see performance benefits from partitioning the table by active and inactive users (although the data set may be a little small for this).
Ensure you test how much time each tweak adds to or removes from your load, and work from there.

Bulkcopy inserts with DBCC CheckIdent

Our team needs to insert a huge amount of data into our SQL Server 2008 database, and we're looking for a good solution. We came up with one, but I have doubts about it, simply because it doesn't feel right. So I'm asking here whether this seems like a good solution. An extra challenge is that it's a peer-to-peer replicated database across 4 servers! :)
Imagine we have 1 million rows to insert
Start transaction
Increase the current identity value on the table by 1 million
Have a DataSet/DataTable ready with 1 million rows and the correct ids
BulkCopy the data into the database
Commit transaction
Is this a good solution? Might we run into concurrency issues, overly large transactions, etc.?
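For reference, a minimal T-SQL sketch of what step 2 could look like; the table name is a placeholder, and whether the reseed sits in the same transaction as the bulk copy is exactly the kind of concern raised below:
DECLARE @NewSeed bigint;

-- Reserve a block of 1 million identity values up front
SELECT @NewSeed = CAST(IDENT_CURRENT('dbo.TargetTable') AS bigint) + 1000000;

DBCC CHECKIDENT ('dbo.TargetTable', RESEED, @NewSeed);

-- The DataTable is then filled with ids @NewSeed - 999999 through @NewSeed
-- and written with SqlBulkCopy (using the KeepIdentity option) from the application.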
You'll only get problems (as far as I can see, so there might be things I overlook!) if the database is online and users can insert rows into that table. Increasing the identity value for new rows at the meta level simply means that the next row inserted by the system will use that number, so if you bump it by 1 million, it means you have reserved those numbers up front.
Identity columns are 'nice' but have the side effect that they're not transferable. So if you have to migrate the data to another DB, realize that you likely have to adjust the data inserted to match the database you insert it in (as that's the scope of the data which means identity fields could collide with rows already in the table).
If this is a one-time affair, it might work out. If you're planning to do this regularly, I'd look into a higher-level migration system where you migrate the data to new identity values, or use GUIDs with NEWSEQUENTIALID() so you get properly ordered indexes and also unique, transferable ids.
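A small sketch of that last suggestion, with an illustrative table name:
CREATE TABLE dbo.ImportTarget
(
    Id      uniqueidentifier NOT NULL
            CONSTRAINT DF_ImportTarget_Id DEFAULT NEWSEQUENTIALID(),
    Payload nvarchar(200)    NOT NULL,
    CONSTRAINT PK_ImportTarget PRIMARY KEY CLUSTERED (Id)
);

-- Rows inserted without an explicit Id get sequential GUIDs, so the clustered
-- index grows at the end instead of fragmenting, and the values stay unique
-- and transferable across the replicated servers.
INSERT INTO dbo.ImportTarget (Payload) VALUES (N'example row');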

How to profile and address insert/update performance issues?

I am trying to insert thousands of rows into a table and performance is not acceptable. Rows on a particular table take 300ms per row to insert.
I know that tools exist to profile queries run against SQL Server (SQL Server Profiler, Database Tuning Advisor), but how would I profile insert and update statements to determine which inserts run slowly? Am I forced to use perfmon while the queries run and deduce the issue from counters?
I would first check the query plan of a single insert to understand the costs associated with that operation - it is not clear from the question whether the insert is selecting its data from elsewhere.
I would then check the table indexing for the following:
how many indexes are in place (apart from filtered indexes, each index will be inserted into as well)
whether a clustered index is present or whether we are inserting into a heap.
whether the clustered index key means we will get a hotspot benefit at the end of the table or cause a large number of page splits.
These are all SQL schema-based issues. Assuming there are no problems within SQL, you can start checking disk I/O counters for disk queue lengths and response times, not forgetting the log drive response time, since each insert will be logged.
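A minimal sketch of how to start on those checks in SSMS; the table name, columns and test values are placeholders:
SET STATISTICS IO ON;
SET STATISTICS TIME ON;

-- Run one representative insert; SET STATISTICS returns the IO and CPU figures
-- (enable "Include Actual Execution Plan" in SSMS to see the plan itself)
INSERT INTO dbo.SlowTable (Col1, Col2)
VALUES (N'example', 42);

SET STATISTICS IO OFF;
SET STATISTICS TIME OFF;

-- How many indexes does each insert have to maintain?
SELECT i.name, i.type_desc, i.is_disabled
FROM   sys.indexes AS i
WHERE  i.object_id = OBJECT_ID(N'dbo.SlowTable');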
These kinds of problems are very difficult to nail down; there isn't any one prescriptive thing / silver bullet to give advice about, just a range of things you should be checking.
I'm betting that the problem is with the selects and not necessarily the updates. Have you tried profiling the select part of the update statement to make sure there isn't a problem there first?
