SQL query in Excel locking table? - sql-server

I ran into an issue where no queries could be executed at all against a single table in our database; the table was completely blocked. I checked sp_whoisactive for answers and found the following.
A simple SELECT on this table with session id 172, was waiting for a DELETE query on this table with session id 478, which was waiting for an INDEX REORGANIZE on this table with session id 207, which was waiting for a simple SELECT on this table with session id 598.
After killing session 598, everything completed immediately. Afterwards, running that same query (the one that had been blocking the table) in a separate window in SSMS took only 2 seconds. I asked around and this query is executed by an Excel file. Apparently we have lots of Excel files floating around which run queries against our SQL database. Obviously this is extremely bad practice, as the connection string can be found inside, but as with everything legacy that's just the way it is now until it is fixed.
When googling I find a lot of resources about Excel actually locking tables. Now, as far as my limited understanding of locks goes, if the query executed by session id 598 actually locked the table, it should only do so for the duration of the query. And since running the query separately only took a few seconds, I don't understand how it was running for over 12 hours. If I can trust the results of sp_whoisactive, it wasn't waiting on anything else. So why didn't it complete?
Before I suggest something like adding WITH (NOLOCK) to every query in the Excel files, which is just a patch and not a solution, I'd like to find out why this happened so that we can avoid it in the future. What causes a blocking chain like this and how can it be avoided?
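For future reference, here is a minimal sketch of how a blocking chain like 172 -> 478 -> 207 -> 598 can be walked with nothing but the standard DMVs (no sp_whoisactive); the head blocker is the blocking_session_id that never shows up as a blocked session_id:
-- Show every blocked request, who is blocking it, and what it is running.
SELECT r.session_id,
       r.blocking_session_id,
       r.wait_type,
       r.wait_time,
       t.text AS current_statement
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.blocking_session_id <> 0;
Note that an idle session holding locks (for example an Excel connection sitting on an open transaction) will not appear here at all, because it has no active request; it only shows up as someone else's blocking_session_id.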

Excel will take a page lock for the duration of the query and hold it until the connection is released, which is basically when the spreadsheet is closed. So, if the spreadsheet was left open by a user overnight or for an entire day, then the table would stay locked. We have the same issue and we are trying to get the offending spreadsheets replaced with SSRS reports, but we are not having much luck with uptake from staff because the Excel files do a lot of other things that SSRS doesn't.
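As a hedged sketch, this is one way to see which sessions are still holding locks on the table (database and table names are placeholders; run it in the database that owns the table):
-- Sessions holding or waiting for locks on a given table, with the client
-- program name. An idle Excel session with an open transaction shows up here
-- even though it has no active request.
SELECT l.request_session_id,
       s.program_name,
       s.host_name,
       l.request_mode,    -- e.g. S, IS, IX, X
       l.request_status   -- GRANT or WAIT
FROM sys.dm_tran_locks AS l
JOIN sys.dm_exec_sessions AS s
  ON s.session_id = l.request_session_id
WHERE l.resource_type = 'OBJECT'
  AND l.resource_database_id = DB_ID('YourDatabase')               -- placeholder
  AND l.resource_associated_entity_id = OBJECT_ID('dbo.YourTable'); -- placeholder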

Related

SQL Server Pulling Records Out of a Table Being Continually Updated

I have a table in SQL Server 2012 that's being used by a service that's continually updating records within the table. It's sort of a queue where the service processes the records, and then periodically I run another stored procedure to pull out the ones that are processed into another table. Records in this table start out in one status and as they're processed they get put into another status.
When I try to run the stored procedure to pull the completed records out, I'm running into a deadlocking issue if it happens to run while the service is updating the table, which it does about every 2 minutes. I thought about just using a NOLOCK hint to eliminate that, but after reading a bit on this SO thread, I'm thinking I should avoid NOLOCK whenever possible.
GOAL:
Allow the service to continue running as usual, but also allow another stored procedure to periodically go in and remove records that are completed. In the event that there's a lock on a given row, I'd like to just leave that row alone and pick it up on the next time I run the stored procedure. During the processing, there's no requirement that I get all the rows with the stored procedure. That only matters once all the records have been processed, at which point I need to ensure that I get all the records, all while having the service still running on other unrelated records, and not causing any deadlocking issues. Hopefully this makes sense.
This article seems to suggest REPEATABLE READ.
Am I on the right track or is there a better method?
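One pattern that matches the "leave locked rows alone and pick them up next time" goal is a READPAST hint on the sweep. This is only a hedged sketch (table, column, and status values are placeholders), and it is not what the REPEATABLE READ suggestion does, so test it against your workload:
-- Move completed rows to an archive table, skipping any row another session
-- currently has locked; skipped rows are simply picked up on the next run.
DELETE q
OUTPUT deleted.*
INTO dbo.ProcessedRecords                            -- placeholder archive table
FROM dbo.QueueTable AS q WITH (READPAST, ROWLOCK)    -- placeholder queue table
WHERE q.Status = 'Processed';                        -- placeholder status value
For the final sweep, once all records have been processed and nothing may be missed, run the same statement without READPAST so that it waits for (rather than skips) any locked rows.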

How to keep indexes performant on Sql Server table frequently updated (delete/insert) by batch process

We have a table in our SQL Server database which holds 'today's' data and is updated by many scheduled jobs around the clock. Each job deletes the rows it previously inserted and inserts new rows. The table data is also made available via a web site which runs many queries against it. The problem is that the indexes are constantly fragmented, and although the table only has 1.5m rows, queries are generally very slow and the website times out frequently.
So I would like to know if anyone else has experienced a similar scenario and if so how did you deal with it.
You need to reorganize the indexes on a daily basis, for example with a scheduled index defrag/maintenance job set up in SSMS.
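A hedged sketch of what such a daily job could run (dbo.TodaysData is a placeholder table name; check fragmentation first, then reorganize):
-- How fragmented are the table's indexes?
SELECT i.name, ps.avg_fragmentation_in_percent, ps.page_count
FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('dbo.TodaysData'),
                                    NULL, NULL, 'LIMITED') AS ps
JOIN sys.indexes AS i
  ON i.object_id = ps.object_id AND i.index_id = ps.index_id;
-- Reorganize them all (REORGANIZE is an online operation), then refresh
-- statistics, since unlike REBUILD it does not update them.
ALTER INDEX ALL ON dbo.TodaysData REORGANIZE;
UPDATE STATISTICS dbo.TodaysData;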

MS SQL Specific Tables Hanging at Queries

I have SQL Server 2008. I run a query against a table in a database and the weirdest thing keeps happening. I run a simple select statement on the table; I know there are 62 rows in it, but it gets stuck at row 48 and just sits there "querying...". I've already waited for hours and it didn't move on from there. I only know of two programs, one reporting service, and one other user connecting to that particular table. Does anyone have any idea what could be causing this and how I could trace the source of the lock on that table?
As a side note, I noticed that the logs only had a notice that autogrow had failed the day before I checked. Could this have something to do with it?
What if you do a
SELECT * FROM YourTable WITH(NOLOCK)
Does it still hang?
Additionally, when it does appear to be blocked, you can try running
exec sp_who2
and look in the BlkBy column to see which process is blocking you.
If that doesn't shed any light, this article describes some DMVs that can give insight into the reasons for waits.
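As a hedged example of that kind of DMV query, this shows what every waiting task is stuck on and who holds the resource:
-- Wait type, wait time, and blocking session for every user task that is waiting.
SELECT wt.session_id,
       wt.wait_type,
       wt.wait_duration_ms,
       wt.blocking_session_id,
       wt.resource_description
FROM sys.dm_os_waiting_tasks AS wt
WHERE wt.session_id > 50;   -- ignore system sessions
If the hung SELECT shows a lock wait (LCK_M_S or similar) with a blocking_session_id, that session is where to look; if instead it shows waits related to I/O or file growth, the failed autogrow noted in the question may be worth investigating.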

Find most recent SQL Server database activity

Data from another system is replicated into a SQL Server 2005 database in real-time (during the day, it's hundreds of transactions/second) using Goldengate. I'd like to be able to tell if there's been a transaction recently, which will tell me if replication is currently happening. Even in the off-hours, I can expect a transaction every few minutes, though I won't know which of the 400 tables it will go into.
Here's my current process:
An insert/update/delete (IUD) trigger on the most popular replicated table
Updates date in "Sync Notification" table every time there's any activity on that table
SQL Agent job runs every few minutes and compares this date with GETDATE(). If it's been too long, it emails me.
This works for the most part, but I get false positives when there's activity in other tables but not in the monitored one, which can happen overnight.
Any other suggestions short of adding this same trigger to every table in the database? If I do add the triggers, how do I prevent deadlocks and contention on the "Sync Notification" table? Since I don't care about the most recent date being exact during high-contention periods, is there a way I can have SQL try to update the date but just skip it if some other process has locked it?
The only "application-level" choice I have is to TELNET to the Goldengate monitor and ask for the replica lag, then screen scrape the results. I'm open to that, but I'd like to do something SQL-side if it's more feasible.
Is this for an automated job or something you want to look at every now and then? If the latter, then you could use a transaction log examination tool (Redgate Log Rescue, ApexSQL Log, probably others).
Another option open to you is to look at sysindexes (SQL Server 2000: dbo.sysindexes; 2005: sys.sysindexes). The column rowmodctr (to quote MSDN) "Counts the total number of inserted, deleted, or updated rows since the last time statistics were updated for the table". It may not return everything you need to know, but, provided you've got covering indexes, it would give an indication of how many changes there have been, and where, if sampled on a regular basis.
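A hedged sketch of that sampling query (the compatibility view's rowmodctr value is not guaranteed to be exact in 2005 and later, so treat it as an indicator rather than an exact count):
-- One row per user table: compare successive snapshots of rowmodctr to see
-- where changes have been happening.
SELECT OBJECT_NAME(i.id) AS table_name,
       i.rowmodctr
FROM sys.sysindexes AS i
WHERE i.indid IN (0, 1)                              -- heap or clustered index
  AND OBJECTPROPERTY(i.id, 'IsUserTable') = 1
ORDER BY i.rowmodctr DESC;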
You can check SELECT * FROM ::fn_dblog(@startLSN, NULL) and see whether any LOP_MODIFY_ROW operation has occurred since the last check (i.e. since the last LSN you checked).
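For illustration, a hedged sketch of that check (fn_dblog is undocumented, so the column and operation names below come from common usage and may change between versions):
-- Data-modification records in the active portion of the transaction log.
-- Filter on [Current LSN] greater than the LSN recorded on the previous check
-- to narrow it to "since the last check".
SELECT [Current LSN], Operation, AllocUnitName
FROM fn_dblog(NULL, NULL)    -- NULL, NULL = scan the whole active log
WHERE Operation IN ('LOP_INSERT_ROWS', 'LOP_MODIFY_ROW', 'LOP_DELETE_ROWS');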

OpenQuery to DB2/AS400 from SQL Server 2000 causing locks

Every morning we have a process that issues numerous queries (~10,000) to DB2 on an AS400/iSeries/i6 (whatever IBM calls it nowadays). In the last 2 months, the operators have been complaining that our queries lock a couple of files, preventing them from completing their nightly processing. The queries are very simplistic, e.g.
Select [FieldName] from OpenQuery([LinkedServerName], 'Select [FieldName] from [LibraryName].[FileName] where [SomeField] = [SomeParameter]')
I am not an expert on the iSeries side of the house and was wondering if anyone had any insight on lock escalation from an AS400/DB2 perspective. The ID that is causing the lock has been confirmed to be the ID we registered our linked server as, and we know it's most likely us because the [Library] and [FileName] are consistent with the query we are issuing.
This has just started happening recently. Is it possible that our select statements are causing the AS400 to escalate locks? The problem is that the locks are not being released without manual intervention.
Try adding "FOR READ ONLY" to the query then it won't lock records as you retrieve them.
Writes to the files on the AS/400 side from an RPG/COBOL/JPL job program will cause a file lock (by default, I think). The job will be unable to get this lock while you are reading. The solution we used was: don't read the files while jobs are running. We created a big schedule sheet in Excel and put all the SQL Server and AS/400 jobs on it in time slots, with color coding for importance and server. That way there are no conflicts, and no out-of-date extract files either.
You might have Commitment Control causing a lock for a Repeatable Read. Check the SQL Server ODBC connection associated with <linkedServerName> to change the commitment control.
