My senior told me that SQL query execution doesn't lock the table by default.
But I was having some issues with my SSRS report, which seems to be running into locking problems and returning errors.
I did some googling but fell short of finding anything.
Just looking for confirmation: will an SSRS report actually lock any tables that are being queried?
And is there any MSDN documentation that documents this behavior specifically?
SSRS doesn't lock anything on its own. The locking is driven by the queries embedded in the reports. Nobody can answer this question but you: look at the reports and the queries they use, and then see whether they lock tables or not.
Normally queries lock data in the tables, not the tables themselves. Consistent, correct reports absolutely do require locking. Do not succumb to the fallacy of adding the NOLOCK hint and calling it a day; you will get incorrect results.
If you see contention in production caused by reporting, then there are many solutions. Offload reporting to a read-only server using Availability Groups, a database snapshot, or a standby log shipping server. Another approach is to enable row-versioning isolation levels such as SNAPSHOT, as sketched below.
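For the row-versioning route, a minimal sketch (YourDb and the report query are placeholders, not from the original answer):

-- Allow SNAPSHOT isolation in the database:
ALTER DATABASE YourDb SET ALLOW_SNAPSHOT_ISOLATION ON;
-- Then run the report's queries under it:
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
SELECT SUM(Total) FROM dbo.Invoice;  -- hypothetical report query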
A straight SELECT statement from SSRS will definitely lock the table.
You can use NOLOCK and write a specific query with only the columns required for the report.
Your SELECT query must not use "*"; list the column names instead, as in the sketch below.
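A minimal sketch of that shape of query, with hypothetical table and column names (note the dirty-read caveats in the answer above):

SELECT InvoiceId, InvoiceDate, Total
FROM dbo.Invoice WITH (NOLOCK)
WHERE InvoiceDate >= '2014-01-01';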
From analyzing table locks in SQL Server, I see that my Win32 application built in RAD Studio XE7 starts numerous transactions while each FDQuery is active. Sometimes this causes application problems and locks with dozens of users, especially with triggered tables.
For my test, I used a simple FDConnection and FDQuery with Select * from Customer and default settings, and concluded that FDQuery1.Active:=True starts a transaction on the Customer table. The transaction disappears when FDQuery1.Active:=False.
I would like to inhibit the starting of transactions in FDQuery for read-only use, such as lists of data for grids or reports.
But I can't find the appropriate tuning of FDQuery.
By default, SQL Server does not implement versioning of data rows. So, to return a consistent set of rows, it guarantees that no other session makes changes to the data during execution of a query, using shared locks.
Using "WITH(NOLOCK)" disables shared locks, but can result in an inconsistent result set.
The only solution is to use the READ_COMMITTED_SNAPSHOT option, which stores changed data in tempdb and uses it to return consistent result sets without blocking updates.
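A minimal sketch of enabling it; YourDb is a placeholder, and the ALTER needs exclusive access to the database, hence the rollback clause:

ALTER DATABASE YourDb
SET READ_COMMITTED_SNAPSHOT ON
WITH ROLLBACK IMMEDIATE;
-- Plain READ COMMITTED selects, including those FireDAC issues,
-- now read row versions instead of taking shared locks.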
Look at the NOLOCK table hint. https://www.mssqltips.com/sqlservertip/2470/understanding-the-sql-server-nolock-hint/
There are a few T-SQL jobs which move data from one table to another. The jobs sometimes fail because of timeouts. I identified that there is not much data to process and suspect that the underlying tables might be blocked by someone else. I want a query to find the historical records of resources (tables etc.) being locked by any other process/stored proc.
I have searched online portals, but most of them give me queries to find the resources locked currently, not the historical ones. Since the jobs run and fail during the night, I want to see in the morning what happened.
For recent deadlocks, there's sp_BlitzLock (part of Brent Ozar's First Responder Kit).
Just execute it without parameters and it shows a table with recent deadlocks, including the exact queries that caused the deadlocks.
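A minimal usage sketch. sp_BlitzLock reads the system_health Extended Events session by default; the XQuery below is a common pattern for reading that session directly and is not part of the kit:

EXEC dbo.sp_BlitzLock;

-- Or pull deadlock reports straight out of system_health's ring buffer:
;WITH ring AS (
    SELECT CAST(t.target_data AS xml) AS x
    FROM sys.dm_xe_session_targets AS t
    JOIN sys.dm_xe_sessions AS s
        ON s.address = t.event_session_address
    WHERE s.name = 'system_health'
      AND t.target_name = 'ring_buffer'
)
SELECT d.value('@timestamp', 'datetime') AS event_time,
       d.query('.') AS deadlock_graph
FROM ring
CROSS APPLY x.nodes('//event[@name="xml_deadlock_report"]') AS n(d);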
We are using SQL Server 2012 EE but currently do not have the option to run queries on a read-only mirror, though that is my long-term goal. I am concerned I may run into the issue below in that scenario as well, since the mirror would also be updating the data I am querying.
I have a view that joins several tables across two databases and is used for invoicing from existing data. Three of these tables are also actively updated by ongoing transactions. Running a report that used this view did not use to be a problem, but now our database is getting much larger and I have run into timeout problems. First the query was timing out, so I set the command timeout to 0 and reran the query, which pegged all 4 CPUs at 100% for 90 minutes before I killed it. There were no problems with active transactions during that time.
I reviewed the query and found a field I was joining on that was not indexed, so I created an index on that field and reran the report, which then finished in three minutes with all CPUs busy but not pegged, on the same amount of data as before. I figured the problem was solved. Of course, later my boss ran a similar query, perhaps with some more data but probably not a lot more, and our live transactions started timing out 100% of the time while his query was running. I did not get a chance to see the CPU usage during that time.
So I have two questions:
Given that I have to use the live and active database, what is the proper way to run a long read-only query so that active transactions can still continue? I am considering NOLOCK but am hoping there is a better standard practice.
And what might cause SQL Server to peg all 4 CPUs at 100% without causing live transaction timeouts, yet when my boss ran his query, after I added the index and my query ran much better, the live update transactions started timing out 100% of the time?
I know this is not a lot of info to go on. I'm not very familiar with SQL profiling and performance monitoring, yet this behavior seems rather odd, and I am hoping a best practice would be the correct workaround.
The default behavior of SELECT queries at the READ COMMITTED transaction isolation level is to acquire shared locks during query execution to provide the requested data consistency (read committed data only). These locks are typically row-level and are released quickly during query execution, immediately after each row is read. There are also less granular intent locks at the page and table level that prevent concurrent updates to data as it is being read. Depending on the particulars of the execution plan, there may even be shared locks held at the table level for the duration of the query, which will prevent updates to the table during query execution and result in readers blocking writers.
Setting the READ_COMMITTED_SNAPSHOT database option causes SQL Server to use row versioning instead of locking to provide the same read consistency. A row version store is maintained in tempdb so that when a row requested by the query has changed since the query began, the most recent committed row version is returned instead. This row-versioning behavior avoids locking and effectively provides a statement-level snapshot of the database at the time the query began. Readers do not block writers and writers do not block readers. Do not confuse the READ_COMMITTED_SNAPSHOT database option with the SNAPSHOT isolation level (a common mistake).
The downside of setting READ_COMMITTED_SNAPSHOT is additional resource usage. An additional 14 bytes of storage overhead per row is incurred once the database option is enabled. Updates and deletes generate row versions in tempdb. These versions require tempdb space for the duration of the longest-running query, and there is overhead in maintaining the version store. Also consider whether you have existing applications that depend on readers-block-writers locking behavior. Despite this overhead, the concurrency benefits may yield better overall performance depending on your workload, while providing read integrity. See http://technet.microsoft.com/en-us/library/ms188277.aspx for more information.
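A quick sanity check before and after flipping the option, assuming a placeholder database name YourDb:

-- is_read_committed_snapshot_on returns 1 once the option is enabled:
SELECT name, is_read_committed_snapshot_on, snapshot_isolation_state_desc
FROM sys.databases
WHERE name = 'YourDb';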
Actually, I decided to create a snapshot at the beginning of each month for reporting to run against, then delete it when it is no longer needed for reporting. This seems to work fine. I could do something similar with a database restore, but that is slightly more work. This avoids needing a second SQL EE license and lets me run reports without locking tables used by live transactions.
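A minimal sketch of that monthly cycle; the database, logical file, and path names are hypothetical (the logical name must match the source database's data file, which you can look up in sys.database_files):

-- Create the reporting snapshot at the start of the month:
CREATE DATABASE InvoiceDb_Reporting
ON (NAME = InvoiceDb_Data,
    FILENAME = 'D:\Snapshots\InvoiceDb_Reporting.ss')
AS SNAPSHOT OF InvoiceDb;
-- Point the reports at InvoiceDb_Reporting; when the month is done:
DROP DATABASE InvoiceDb_Reporting;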
At work, users are very happy to generate their own reports using Reporting Services' Report Builder.
But, alas, the queries it generates are very inefficient, and they don't use "WITH (NOLOCK)", slowing things down for everyone.
These are reports that really do need to run against the latest data, so they can't be offloaded to the reporting server. And since they query very specific, detailed data, hypercubes are of no use here.
So the question is:
Is there a way to configure Report Builder's Data Models so the queries it generates always use "WITH (NOLOCK)" when querying a table?
NOLOCK is no solution. Dirty reads are inconsistent reads: your totals will be off, your reports will not balance, and you will, in general, produce garbage aggregate data. Use snapshot isolation to prevent reports from blocking updates and to prevent updates from blocking reports:
ALTER DATABASE ... SET READ_COMMITTED_SNAPSHOT ON;
See Row Versioning-based Isolation Levels in the Database Engine for more details.
Create views as the data source for the report, and add WITH (NOLOCK) to every table in each view's SELECT statement, as in the sketch below.
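A minimal sketch with hypothetical table and column names (the dirty-read caveats from the answer above still apply):

CREATE VIEW dbo.vw_InvoiceReport
AS
SELECT i.InvoiceId, i.InvoiceDate, i.Total, c.CustomerName
FROM dbo.Invoice AS i WITH (NOLOCK)
JOIN dbo.Customer AS c WITH (NOLOCK)
    ON c.CustomerId = i.CustomerId;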
I have a scheduled job with a stored procedure (SP) that runs on a daily basis (SQL Server 2005). Recently I have frequently encountered a deadlock problem with this SP. Here is the error message:
Message
Executed as user: dbo. Transaction (Process ID 56) was deadlocked on thread |
communication buffer resources with another process and has been chosen as the deadlock
victim. Rerun the transaction. [SQLSTATE 40001] (Error 1205). The step failed.
The SP uses some inner-joined views over several tables, one of which is a large table with several million rows of data (and growing). I am not sure whether any job or query against that table could make it inaccessible to the SP. I am going to investigate who is online by using a query; that may expose some query or person on the SQL Server during that time.
I am not sure if anyone else has had a similar issue, or whether this is a known SQL 2005 issue. Is there anything additional I should do in my SP or on the SQL Server to avoid the deadlock?
Use SQL Server Profiler to track all the queries that are running; I put the trace output into a SQL Server table. This will help you figure out which queries are accessing your particular table or tables. Post your findings, and we can help you with that.
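One way to do the "output into SQL Server" step, with placeholder path and table names, is to load a saved trace file with the built-in fn_trace_gettable function:

-- Load the saved trace file into a queryable table:
SELECT * INTO dbo.TraceResults
FROM sys.fn_trace_gettable('C:\Traces\deadlocks.trc', DEFAULT);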
Deadlocks occur when two transactions are each holding some resources and each wants a resource that the other one holds: neither can proceed, as they are both waiting for each other. They cannot be completely eliminated, but a lot can be done to mitigate them. Remus and Raj suggest capturing more information about them in Profiler, which I also recommend; generally, optimizing your queries (if you know which ones are involved) can also help. Here is an MSDN article that can help get you going: "Minimizing Deadlocks".
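On SQL Server 2005 specifically, the deadlock trace flags are a lightweight alternative to a full Profiler trace (standard DBCC usage, not something from the answers above):

DBCC TRACEON (1222, -1);  -- write detailed deadlock graphs to the error log; -1 = globally
-- (trace flag 1204 produces the older, terser format)
EXEC sp_readerrorlog;     -- read the error log after the next deadlock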