How to avoid unwanted transactions and locks using FireDAC and SQL Server - sql-server

From analyzing table locks in SQL Server, I can see that my Win32 application built in RAD Studio XE7 starts numerous transactions while each FDQuery is active. Sometimes this causes application problems and locks with dozens of users, especially with triggered tables.
For my test I used a simple FDConnection and an FDQuery running Select * from Customer with default settings, and concluded that FDQuery1.Active := True starts a transaction on the Customer table. The transaction disappears when FDQuery1.Active := False.
I would like to inhibit the starting of transactions in FDQuery for read-only use, such as lists of data for grids or reports.
But I can't find the appropriate tuning of FDQuery to do that.

By default, SQL Server does not implement row versioning. So, to return a consistent set of rows, it guarantees that no other session changes the data during execution of a query by using shared locks.
Using WITH (NOLOCK) disables shared locks, but can result in an inconsistent result set.
The only real solution is the READ_COMMITTED_SNAPSHOT option, which stores changed data in tempdb and uses it to return consistent result sets without blocking updates.
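As a minimal sketch, assuming a hypothetical database name, the option is enabled once per database; afterwards plain SELECTs (such as those FireDAC issues for a read-only FDQuery) read row versions instead of taking shared locks:

    -- Hypothetical database name. Enabling the option needs near-exclusive access;
    -- WITH ROLLBACK IMMEDIATE rolls back other sessions' open transactions first.
    -- If in doubt, run it while no other sessions are connected.
    ALTER DATABASE MyAppDb SET READ_COMMITTED_SNAPSHOT ON WITH ROLLBACK IMMEDIATE;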

Look at the NOLOCK keyword for tables. https://www.mssqltips.com/sqlservertip/2470/understanding-the-sql-server-nolock-hint/
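For the query from the question it would look like the sketch below; it carries the dirty-read caveat described above.

    -- Per-table hint; the result set may include uncommitted ("dirty") rows.
    SELECT * FROM Customer WITH (NOLOCK);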

Related

Damage controlling deadlocks in SQL Server by switching the isolation level?

I joined a project a while ago, which consists of a few web servers and a few backend servers.
They all do CRUD things on one database.
Unfortunately, a few tables have been running into deadlock situations for a while now. We can see the victim statements via SQL Server Management Studio and its Extended Events feature.
Primary keys and all the necessary indexes are already in place. We even rebuilt them; a lot of them had fragmentation over 50%.
The thing is, there is this one table we would like to switch to the isolation level called SNAPSHOT. I know this won't solve the deadlock situation entirely, since I have read that write statements might still block each other.
One table contains logs (logins of users, tasks started and ended on the backends, yadda yadda...), the other one contains all the processes, so the backends are selecting, inserting and updating (like setting the "running" field from 0 to 1 and vice versa). While the first one, being for logging, might be a good fit for the snapshot level, I doubt it is recommended for the process table, as far as I understand how snapshot isolation works. And I am also aware that rollbacks of transactions will block the tables during the rollback process anyway.
Even the sysobjects table gets blocked sometimes when a table has to be dropped. And I must mention that the database is ridiculously large, with many, many tables.
What I would like to know is whether you have ever switched from whatever isolation level to SNAPSHOT, what challenges you had to face, or whether you changed your mind when it came to deadlock prevention and tried a different approach, like a hardware upgrade, etc.

Disable transactions on SQL Server

I need some light here. I am working with SQL Server 2008.
I have a database for my application. Each table has a trigger that stores all changes in another database (on the same server), in one single table, 'tbSysMasterLog'. Yes, the application's log is stored in another database.
The problem is that before any insert/update/delete command on the application database a transaction is started, and therefore the table in the log database stays locked until the transaction is committed or rolled back. So anyone else who tries to write to any other table of the application is blocked.
So... is there any way to disable transactions on a particular database or on a particular table?
You cannot turn off the transaction log; everything gets logged. You can set the recovery model to "Simple", which limits how much log data is kept once the records are committed.
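For reference, a minimal sketch of switching the recovery model (the database name is hypothetical); note this affects how much log is retained, not whether changes are logged:

    ALTER DATABASE MyLogDb SET RECOVERY SIMPLE;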
" the table of the log database is locked": why that?
Normally you log changes by inserting records. The insert of records should not lock the complete table, normally there should not be any contention in insertion.
If you do more than inserts, perhaps you should consider changing that. Perhaps you should look at the indices defined on log, perhaps you can avoid some of them.
It sounds from the question as if you have a BEGIN TRANSACTION at the start of your triggers, and that you are logging to the other database before the COMMIT TRANSACTION.
Normally you do not need explicit transactions in SQL Server.
If you do need explicit transactions, you could put the data to be logged into variables, commit the transaction, and then insert it into your log table.
Normally inserts are fast and can happen in parallel without locking. There are certain things, like identity columns, that require ordering, but these are very lightweight structures; they can be avoided by generating GUIDs so inserts are non-blocking. For something like your log table, though, a primary key identity column would give you a clear sequence, which is probably helpful in working out the order of events.
Obviously, if you log after the transaction, the log rows may not be in the same order as the transactions occurred, due to the different times that transactions take to commit.
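A minimal sketch of that capture-first, log-after-commit pattern, assuming a hypothetical application table (dbo.Customer), log database name (LogDb) and log columns; only the tbSysMasterLog name comes from the question:

    -- Capture the values inside the transaction, commit, then write the log row,
    -- so the log table is never touched while the main transaction is still open.
    CREATE PROCEDURE dbo.UpdateCustomerName
        @CustomerId int,
        @NewName    nvarchar(100)
    AS
    BEGIN
        DECLARE @OldName nvarchar(100);

        BEGIN TRANSACTION;
            SELECT @OldName = Name
            FROM dbo.Customer WITH (UPDLOCK)
            WHERE CustomerId = @CustomerId;

            UPDATE dbo.Customer
            SET Name = @NewName
            WHERE CustomerId = @CustomerId;
        COMMIT TRANSACTION;

        -- Logged after the commit; as noted above, log order may not match commit order.
        INSERT INTO LogDb.dbo.tbSysMasterLog (TableName, RecordId, OldValue, NewValue, ChangedAt)
        VALUES ('Customer', @CustomerId, @OldName, @NewName, GETDATE());
    END;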
We normally log into individual tables with a similar name to the master table e.g. FooHistory or AuditFoo
There are other options. A very lightweight method is to use a trace; this is what is used for performance tuning, and it will give you a copy of every statement run on the database (including those fired by triggers), which you can log to a different database server. It is a good idea to log to a different server if you are tracing a heavily used server, since the volume of data is massive if you are tracing, say, 1,000 simultaneous sessions.
https://learn.microsoft.com/en-us/sql/tools/sql-server-profiler/save-trace-results-to-a-table-sql-server-profiler?view=sql-server-ver15
You can also trace to a file and then load it into a table (better performance), and script up starting, stopping and loading traces.
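Loading a trace file into a table is a one-liner with fn_trace_gettable; the file path and target table name below are assumptions:

    -- Reads the trace file (and, with DEFAULT, any rollover files) into a new table.
    SELECT *
    INTO dbo.TraceResults
    FROM sys.fn_trace_gettable('D:\Traces\AppTrace.trc', DEFAULT);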
The load on the server that receives the trace log is minimal, and I have never had a locking problem on the server receiving the trace, so I am pretty sure that something you are doing is causing the locks.

What is the proper way to run a long query against an active database?

We are using SQL Server 2012 EE but currently do not have the option to run queries on a read-only mirror, though that is my long-term goal. I am also concerned that I may run into the issue below in that scenario as well, since the mirror would also be updating the data I am querying.
I have a view that joins across several tables from two databases and is used for invoicing from existing data. Three of these tables are also actively updated by ongoing transactions. Running a report that used this view did not use to be a problem, but now our database is getting much larger and I have run into some timeout problems. First the query was timing out, so I set the command timeout to 0 and reran the query, which pegged all 4 CPUs at 100% for 90 minutes before I killed it. There were no problems with active transactions during that time. I reviewed the query and found a field I was joining on that was not indexed, so I created an index on that field and reran the report, which then finished in three minutes with all the CPUs busy but by no means pegged. The same amount of data was queried both times. I figured the problem was solved. Of course, later my boss ran a similar query, perhaps with some more data but probably not a lot more, and our live transactions started timing out 100% of the time while his query was running. I did not get a chance to see the CPU usage during that time.
So my questions are two:
Given that I have to use the live and active database, what is the proper way to run a long read-only query so that active transactions can still continue? I am considering NOLOCK but am hoping there is a better standard practice.
And what might cause SQL Server to peg 4 CPUs at 100% without causing live transaction timeouts, yet when my boss ran his query (after I had added the index and my query ran much better), the live update transactions started timing out 100% of the time?
I know this is not a lot of info to go on. I'm not very familiar with SQL profiling and performance monitoring, yet this behavior seems rather odd, and I am hoping a best practice would be the correct workaround.
The default behavior of SELECT queries in the READ_COMMITTED transaction isolation level is to acquire shared locks during query execution to provide the requested data consistency (read committed data only). These locks are typically row-level and are released quickly during query execution, immediately after each row is read. There are also less granular intent locks at the page and table level that prevent concurrent updates to data as it is being read. Depending on the particulars of the execution plan, there may even be shared locks held at the table level for the duration of the query, which will prevent updates to the table during query execution and result in readers blocking writers.
Setting the READ_COMMITTED_SNAPSHOT database option causes SQL Server to use row versioning instead of locking to provide the same read consistency. A row version store is maintained in tempdb so that when a row requested by the query has changed since the query began, the most recent committed row version is returned instead. This row-versioning behavior avoids locking and effectively provides a statement-level snapshot of the database at the time the query began. Readers do not block writers and writers do not block readers. Do not confuse the READ_COMMITTED_SNAPSHOT database option with the SNAPSHOT isolation level (a common mistake).
The downside of setting READ_COMMITTED_SNAPSHOT is additional resource usage. An additional 14 bytes of storage overhead per row is incurred once the database option is enabled. Updates and deletes will generate row versions in tempdb. These versions require tempdb space for the duration of the longest-running query, and there is overhead in maintaining the version store. Also consider whether you have existing applications that depend on readers-block-writers locking behavior. Despite this overhead, the concurrency benefits may yield better overall performance depending on your workload, while providing read integrity. See http://technet.microsoft.com/en-us/library/ms188277.aspx for more information.
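To keep the two apart, a hedged sketch (the database and table names are hypothetical): READ_COMMITTED_SNAPSHOT is a database option that changes how ordinary READ COMMITTED reads behave, while SNAPSHOT is an isolation level each session must explicitly request.

    -- Statement-level versioning for ordinary SELECTs running under READ COMMITTED.
    ALTER DATABASE InvoicingDb SET READ_COMMITTED_SNAPSHOT ON;

    -- Transaction-level versioning: allow it on the database, then opt in per session.
    ALTER DATABASE InvoicingDb SET ALLOW_SNAPSHOT_ISOLATION ON;

    SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
    BEGIN TRANSACTION;
        -- The whole transaction sees the database as of the point it first read data.
        SELECT InvoiceId, CustomerId, Amount
        FROM dbo.Invoice;
    COMMIT TRANSACTION;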
Actually, I decided to create a snapshot at the beginning of each month for reporting to run against, then delete it when it is no longer needed for reporting. This seems to work fine. I could do something similar with a database restore, but that is slightly more work. This approach avoids the need for a second SQL EE license and lets me run reports without locking tables used by live transactions.
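This reads like a database snapshot; assuming so, a sketch of creating and dropping one for reporting (the database name, logical data file name and file path are assumptions):

    -- NAME must match the logical name of the source database's data file
    -- (see sys.database_files in the source database).
    CREATE DATABASE InvoicingDb_ReportSnap
    ON ( NAME = InvoicingDb_Data, FILENAME = 'D:\Snapshots\InvoicingDb_ReportSnap.ss' )
    AS SNAPSHOT OF InvoicingDb;

    -- Drop it when the month's reporting is finished.
    DROP DATABASE InvoicingDb_ReportSnap;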

Does SSRS lock the table when querying?

My senior told me that SQL query execution by default doesn't lock the table.
But I was having some issues with my SSRS report, which seems to be running into locking problems and throwing some errors.
I did some googling but fell short of finding anything.
I'm just looking for confirmation: will an SSRS report actually lock the tables being queried?
And is there any MSDN documentation that documents this behavior specifically?
SSRS doesn't lock anything on its own. Any locking is driven by the queries embedded in your reports. Nobody can answer this question but you: look at the reports and the queries they use, and then see whether or not they lock tables.
Normally queries lock data in the tables, not entire tables. Consistent, correct reports absolutely do require locking. Do not succumb to the fallacy of adding the NOLOCK hint and calling it a day; you will get incorrect results.
If you see contention in production caused by reporting, there are many solutions: offload reporting to a read-only server using Availability Groups, a database snapshot, or a standby log shipping server. Another approach is to enable row-versioned isolation levels like SNAPSHOT.
A straight SELECT statement from SSRS will definitely take locks on the table.
You can use NOLOCK and write a specific query with only the columns required for the report.
Your SELECT query should not use "*"; list the column names instead.
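A sketch of what is meant, with hypothetical column names; the NOLOCK caveats discussed above still apply:

    -- Explicit column list plus the NOLOCK hint; dirty reads are possible.
    SELECT CustomerId, CustomerName, City
    FROM dbo.Customer WITH (NOLOCK)
    WHERE IsActive = 1;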

SQL Server 2000 Deadlock

We are experiencing some very annoying deadlock situations in a production SQL Server 2000 database.
The main setup is the following:
SQL Server 2000 Enterprise Edition.
Server is coded in C++ using ATL OLE Database.
All database objects are being accessed through stored procedures.
All UPDATE/INSERT stored procedures wrap their internal operations in a BEGIN TRANS ... COMMIT TRANS block.
I collected some initial traces with SQL Profiler, following several articles on the Internet like this one (ignore that it refers to the SQL Server 2005 tools; the same principles apply). From the traces it appears to be a deadlock between two UPDATE queries.
We have taken some measures that may have reduced the likelihood of the problem happening, such as:
SELECT WITH (NOLOCK). We have changed all the SELECT queries in the stored procedures to use WITH (NOLOCK). We understand the implications of dirty reads, but the data being queried is not that important, since we do a lot of automatic refreshes and under normal conditions the UI will have the right values.
READ UNCOMMITTED. We have changed the transaction isolation level in the server code to READ UNCOMMITTED.
Reduced transaction scope. We have reduced the time a transaction is held, in order to minimize the probability of a database deadlock taking place.
We are also questioning the fact that we have a transaction inside the majority of the stored procedures (BEGIN TRANS ... COMMIT TRANS block). In this situation my guess is that the transaction isolation level is SERIALIZABLE, right? And if we also have a transaction isolation level specified in the source code that calls the stored procedure, which one applies?
This is a processing intensive application and we are hitting the database a lot for reads (bigger percentage) and some writes.
If this were a SQL Server 2005 database, I could go with Geoff Dalgas's answer on a deadlock issue concerning Stack Overflow, if that is even applicable to the issue I am running into. But upgrading to SQL Server 2005 is not, at the present time, a viable option.
As these initial attempts failed, my question is: how would you go on from here? What steps would you take to reduce or even avoid the deadlocks, or what commands/tools should I use to better expose the problem?
A few comments:
The isolation level explicitly specified in your stored procedure overrides the isolation level of the caller.
If sp_getapplock is available on 2000, I'd use it:
http://sqlblogcasts.com/blogs/tonyrogerson/archive/2006/06/30/855.aspx
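A minimal sketch of serializing a critical section with sp_getapplock; the resource name, table and update inside are placeholders:

    BEGIN TRANSACTION;

    DECLARE @result int;
    EXEC @result = sp_getapplock
        @Resource    = 'UpdateCustomerBalance',   -- arbitrary application-defined name
        @LockMode    = 'Exclusive',
        @LockOwner   = 'Transaction',
        @LockTimeout = 10000;                     -- milliseconds

    IF @result >= 0
    BEGIN
        -- Only one session at a time gets here; do the conflicting work.
        UPDATE dbo.Customer SET Balance = Balance + 10 WHERE CustomerId = 42;
        COMMIT TRANSACTION;                       -- releases the app lock
    END
    ELSE
        ROLLBACK TRANSACTION;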
In many cases serializable isolation level increases the chance you get a deadlock.
A good resource for 2000:
http://www.code-magazine.com/article.aspx?quickid=0309101&page=1
Also some of Bart Duncan's advice might be applicable:
http://blogs.msdn.com/bartd/archive/2006/09/09/747119.aspx
In addition to Alex's answer:
Eyeball the code to see if tables are being accessed in the same order. We did this recently and reordered the code to always access parent then child. The system had grown, the code and features were more complex, and there were more users: we simply started getting deadlocks.
- See if transactions can be shortened (e.g. start later, finish earlier, less processing)
Identify which code you'd like not to fail and use SET DEADLOCK_PRIORITY LOW in the other code.
We've used this (SQL 2005 has more options here) to make sure that some code will never be chosen as the deadlock victim, sacrificing other code instead.
If you have a SELECT at the start of the transaction to prepare some data, consider HOLDLOCK (maybe UPDLOCK) to keep it locked for the duration. We use this occasionally to stop writes on the table by other processes; see the sketch below.
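A sketch of the two hints mentioned above, with a hypothetical table; DEADLOCK_PRIORITY LOW goes in the code you are willing to sacrifice, UPDLOCK/HOLDLOCK in the transaction you want to protect:

    -- In the code you are willing to sacrifice:
    SET DEADLOCK_PRIORITY LOW;   -- this session becomes the preferred deadlock victim

    -- In the code you want to protect, lock the prepared rows for the whole transaction:
    BEGIN TRANSACTION;
        SELECT Status
        FROM dbo.Orders WITH (UPDLOCK, HOLDLOCK)
        WHERE OrderId = 42;

        UPDATE dbo.Orders SET Status = 'Processed' WHERE OrderId = 42;
    COMMIT TRANSACTION;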
The reason for the deadlocks in my setup turned out to be the indexes after all. We were using (generated by default) non-clustered indexes for the primary keys of the tables. Changing to clustered indexes fixed the problem.
My guess would be that you are experiencing deadlocks either:
because your DML statements (probably the UPDATEs) are being escalated to table locks, or
because different stored procedures are accessing the same tables in transactions but in a different order.
To address this, I would first examine the stored procedures and make sure the modification statements have the indexes that they need.
Note: this applies to both the target tables and the source tables (despite NOLOCK, an UPDATE's source tables will acquire locks as well). Check the query plans of the stored procedures for scans. Unlike batch or bulk operations, most user queries and DML statements work on small subsets of the table's rows and so should not be locking the entire table.
Secondly, I would check the stored procedures to ensure that all data access within a stored procedure is done in a consistent order (Parent -> Child is usually preferred), as the sketch below shows.
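A minimal sketch of the consistent-order rule, with hypothetical parent/child tables: every procedure that touches both always writes to the parent before the child.

    CREATE PROCEDURE dbo.CloseOrder
        @OrderId int
    AS
    BEGIN
        BEGIN TRANSACTION;
            -- Parent first...
            UPDATE dbo.OrderHeader SET Status = 'Closed' WHERE OrderId = @OrderId;
            -- ...then child, in the same order in every procedure, so two sessions
            -- never wait on each other's locks in opposite order.
            UPDATE dbo.OrderLine SET IsClosed = 1 WHERE OrderId = @OrderId;
        COMMIT TRANSACTION;
    END;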
