Database Table lock - sql-server

I'm using SQL Server 2000 for my application, and the application uses N tables.
My application has a wrapper around SQL Server called Database server, which runs as a 24/7 Windows service.
I have enabled the integrity check option in the SQL maintenance plan. One time while this task was running, one of my tables became locked and was never unlocked.
As a result, my database transaction history was lost.
Please suggest how to solve this problem.

What if you have a client-side command timeout, and the locks are your own locks resulting from the DBCC?
Your code will time out waiting for the DBCC to finish, but any locks it has already taken are not rolled back.
A command timeout tells SQL Server to simply stop processing. To release the locks you need to either ROLLBACK on the connection or close the connection.
Options:
Use SET XACT_ABORT ON in the SQL: Do I really need to use “SET XACT_ABORT ON”? (SO)
On a client error, try to roll back yourself (literally IF @@TRANCOUNT > 0 ROLLBACK TRAN); see the sketch below.
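A minimal T-SQL sketch of both options; the table name here is hypothetical:
-- Option 1: with XACT_ABORT ON, a run-time error or a client timeout (attention)
-- aborts the batch and rolls back the open transaction, releasing its locks
SET XACT_ABORT ON;
BEGIN TRAN;
UPDATE dbo.TransactionHistory SET Processed = 1 WHERE Processed = 0;
COMMIT TRAN;
-- Option 2: a statement the client can issue on the same connection after an error,
-- so any transaction that was left open (and its locks) is rolled back
IF @@TRANCOUNT > 0 ROLLBACK TRAN;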

Related

Is there any way to rollback transactions in SSIS for SQL Server 2012?

Cannot successfully execute an SSIS package with BEGIN TRAN functionality.
I'm at a loss with an SSIS package I inherited. It contains:
1 Script Task
3 Execute SQL tasks
5 Data flow tasks (each contains a number of merges, lookups, data inserts and other transformations)
1 File System task.
All of these are encapsulated in a Foreach loop container. I've been tasked with modifying the package so that if any of the steps within the control/data flow fails, the entire thing is rolled back. Now I've tried two different approaches to accomplish this:
I. Using Distributed Transactions.
I ensured that:
MSDTC was running on the target server and on the executing client (screenshot enclosed)
msdtc.exe was added as an exception to the server and client firewalls
Inbound and outbound rules were set for both server and client to allow DTC connections.
ForeachLoop Container TransactionLevel: Required
All other tasks TransactionLevel: Supported
My OLEDB Connection has RetainSameConnection set to TRUE and I'm using SQL Server Authentication with Save Password checked
When I execute the package, it fails right after the script task (first step).
After spending an entire week trying to figure out a workaround, I decided to try to accomplish my goal using 3 Execute SQL Tasks:
BEGIN TRAN before the ForeachLoop Container
COMMIT TRAN after the ForeachLoop Container with a Success Constraint
ROLLBACK TRAN after the ForeachLoop Container with a Failure constraint
In this case, the ForeachLoop container and all other tasks have the TransactionLevel property set to Supported. Now the problem is that the package executes up to the fourth data flow task and hangs there forever. After logging into SQL Server and checking the running sessions, I noticed sys.sp_describe_first_result_set;1 as the head-blocker session.
Doing some research, I found it could be related to a few TRUNCATE statements in some of my data flow tasks, which can take a schema lock. I went ahead and changed the ValidateExternalMetadata property to False for all tasks within my data flows and changed my TRUNCATE statements to DELETE statements instead. I re-ran the package, and it still hangs in the same spot with the same head blocker.
As an alternative, I tried creating a second OLEDB connection to the same database, assigned that new OLEDB connection to my BEGIN, ROLLBACK and COMMIT SQL tasks with RetainSameConnection set to TRUE, and set RetainSameConnection to FALSE (and also tried TRUE) on the original OLEDB connection (the one used by the data flow tasks). This worked in the sense that the package appeared to execute (it ran, and COMMIT TRAN executed fine). I then ran it again with a forced error to make it fail, and the ROLLBACK TRAN task executed successfully. However, when I queried the affected tables, the transaction hadn't rolled back: all new records were inserted and old ones were updated (the BEGIN TRAN was clearly started on a different connection and hence didn't affect the package's workflow).
I'm not sure what else to try at this point. Any help would be truly appreciated; I'm about to go nuts with this!
P.S. Additionally, all objects have DelayValidation set to True, and the SQL Server version is 2012.

How to prevent SQL Server transactions from getting stuck?

I am using remote connections to connect a client and a server. After 6 months of working smoothly, a transaction got stuck, probably because the connection was cut while the transaction was running.
How can I prevent a transaction from getting stuck when the connection is lost?
Isn't SQL Server supposed to cancel the transaction if it doesn't finish within some amount of time?
UPDATE:
I am using the default SQL Server isolation level (Read Committed). I tried SET XACT_ABORT ON as suggested, but no luck; the problem remains. This is the sequence of events to replicate the issue:
Set a breakpoint in the middle of the transaction and start debugging.
Once the transaction reaches the breakpoint, disconnect the computer from the network (simulating an abnormal disconnection).
Continue debugging the process and wait for the .NET SqlClient to throw the error (no network).
Plug the PC back into the network (simulating that the connection has returned).
SQL Server does not finish or roll back the transaction, so the tables used in the first half of the transaction remain locked.
You need to SET XACT_ABORT ON: when XACT_ABORT is ON and a Transact-SQL statement raises a run-time error, the entire transaction is terminated and rolled back.
Check out this link for more information:
https://learn.microsoft.com/en-us/sql/t-sql/statements/set-xact-abort-transact-sql?view=sql-server-2017

Idle in transaction: How to get the query that caused it

My application executes many queries, and I am sure all connections are closed properly. pgAdmin shows that many queries have gone "idle in transaction", and eventually the DB becomes unresponsive. Is there a way to get the query that caused a session to become 'idle in transaction'? Or any other tool that can track it? Postgres 8.1 is used.
Edit: A connection pool is used. Also, the 'idle in transaction' state cleared after a couple of minutes. Then, if any connection is opened, how does this get cleared?
If you check the information in the Postgres documentation regarding this:
idle in transaction (waiting for client inside a BEGIN block), or a command type name such as SELECT. Also, waiting is attached if the server process is presently waiting on a lock held by another server process
I would suggest the following things:
enable logging of "long queries" using the log_min_duration_statement and log_lock_waits options in the Error Reporting and Logging section of postgresql.conf
check the Lock Management parameters of postgresql.conf, the deadlock_timeout option in particular
check the Lock Monitoring article on the Postgres Wiki and the pg_locks view in Postgres (see the sample query below)
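As a sketch, a query like the following ties "idle in transaction" sessions to the locks they hold; the column names match the 8.1-era catalogs (procpid, current_query), so adjust them on newer versions (pid, state, query):
-- sessions stuck inside an open transaction, and the relations they have locked
SELECT a.procpid, a.usename, a.query_start,
       l.relation::regclass AS locked_relation, l.mode
FROM pg_stat_activity a
JOIN pg_locks l ON l.pid = a.procpid
WHERE a.current_query = '<IDLE> in transaction';
Note that 8.1 only shows this placeholder text, not the statement that left the transaction open, so statement logging (log_min_duration_statement above) is still the way to capture the offending query.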
This is a clear signal that something about closing transactions and closing sessions is wrong in your application; the queries themselves work fine. Check your application for unexpected exceptions, failures, and the like. Some applications are pretty buggy, and this is usually a serious problem: orphaned transactions block VACUUM and block reusing connections.

Locked by wait type OLEDB with SQL Server 2012

I have a batch process that creates a linked server over huge Excel files to fetch data into SQL Server 2012.
Sometimes the process is blocked with the wait type OLEDB.
I can't find the root cause, but my biggest problem is that I can't kill the process that has the OLEDB wait type.
I have tried
KILL spid
It doesn't work; the SPID is never killed:
SPID 90: transaction rollback in progress. Estimated rollback completion: 0%. Estimated time remaining: 0 seconds.
ALTER DATABASE [DataMigration] SET SINGLE_USER WITH ROLLBACK IMMEDIATE
But it never finishes (it sits in PRINT_ROLLBACK_PROGRESS).
If I look into the process, I can see the file that causes the block, but I don't know how to solve the issue.
SELECT * FROM sysprocesses WHERE spid = 90
Is there any way to kill this process without restarting the servers?
How can I avoid the OLEDB wait type? The same file in the same location usually works fine; the process hangs only sometimes.
This is quite simple. OLEDB as a wait type often indicates a wait on another server, which is also the reason why you can't kill the process. OLEDB is often (though not always) the wait type for connections between SQL Server instances.
You kill the connection on your server, but if the process is running on another instance/linked server, it keeps running there; the KILL only takes effect after the process on the other instance has finished.
So much for the bad news. The good news is that you can easily find the query running on the other linked server using this query:
SELECT spid, waitresource
FROM sys.sysprocesses
WHERE spid = <yourKilledSpid>
Just filter for the SPID you are trying to kill. The waitresource column will indicate the remote server, including the SPID on the remote server. Go to the remote server and kill that SPID there too. Your connection will immediately be killed/rolled back. Hopefully this solves your issue.
You can additionally take a look at the waiting tasks. Maybe you'll see something helpful, like a blocking resource, in there:
SELECT *
FROM sys.dm_os_waiting_tasks
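For example, a sketch narrowing it down to the blocked session from the question (the SPID is just the one used above):
SELECT session_id, wait_type, wait_duration_ms, blocking_session_id, resource_description
FROM sys.dm_os_waiting_tasks
WHERE session_id = 90;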

Extreme wait-time when taking a SQL Server database offline

I'm trying to perform some offline maintenance (dev database restore from live backup) on my dev database, but the 'Take Offline' command via SQL Server Management Studio is performing extremely slowly - on the order of 30 minutes plus now. I am just about at my wits' end, and I can't seem to find any references online as to what might be causing the speed problem or how to fix it.
Some sites have suggested that open connections to the database cause this slowdown, but the only application that uses this database is my dev machine's IIS instance, and the service is stopped - there are no more open connections.
What could be causing this slowdown, and what can I do to speed it up?
After some additional searching (new search terms inspired by gbn's answer and u07ch's comment on KMike's answer) I found this, which completed successfully in 2 seconds:
ALTER DATABASE <dbname> SET OFFLINE WITH ROLLBACK IMMEDIATE
(Update)
When this still fails with the following error, you can fix it as inspired by this blog post:
ALTER DATABASE failed because a lock could not be placed on database 'dbname' Try again later.
you can run the following command to find out who is keeping a lock on your database:
EXEC sp_who2
And use whatever SPID you find in the following command:
KILL <SPID>
Then run the ALTER DATABASE command again. It should now work.
There is most likely a connection to the DB from somewhere (a rare example: an asynchronous statistics update)
To find connections, use sys.sysprocesses
USE master
SELECT * FROM sys.sysprocesses WHERE dbid = DB_ID('MyDB')
To force disconnections, use ROLLBACK IMMEDIATE
USE master
ALTER DATABASE MyDB SET SINGLE_USER WITH ROLLBACK IMMEDIATE
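If even single-user mode blocks, a hypothetical sketch like this (built on the sys.sysprocesses query above, for SQL Server 2008 or later) generates and runs a KILL for every session in the database:
DECLARE @kill nvarchar(max) = N'';
SELECT @kill = @kill + N'KILL ' + CAST(spid AS nvarchar(10)) + N'; '
FROM sys.sysprocesses
WHERE dbid = DB_ID('MyDB')    -- hypothetical database name
  AND spid <> @@SPID;         -- don't try to kill our own session
EXEC sys.sp_executesql @kill;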
Do you have any open SQL Server Management Studio windows that are connected to this DB?
Put it in single user mode, and then try again.
In my case, after waiting so long for it to finish, I lost patience and simply closed Management Studio. Before exiting, it showed the success message: the DB was offline. The files were then available to rename.
Execute the stored procedure
sp_who2
This will allow you to see if there are any blocking locks; killing them should fix it.
In SSMS: right-click on the SQL Server icon, Activity Monitor. Open Processes. Find the connected processes. Right-click on the process, Kill.
In my case I had looked at some tables in the DB prior to executing this action. My user account was holding an active connection to this DB in SSMS. Once I disconnected from the server in SSMS (leaving the 'Take database offline' dialog box open) the operation succeeded.
Anytime you run into this type of thing, you should always think of your transaction log. The ALTER DATABASE statement with ROLLBACK IMMEDIATE indicates this to be the case. Check this out: http://msdn.microsoft.com/en-us/library/ms189085.aspx
Bone up on checkpoints, etc. You need to decide whether the transactions in your log are worth saving and then pick the recovery mode to run your db in accordingly. There's really no reason for you to have to wait, but also no reason for you to lose data: you can have both.
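As a sketch of how to check and change that mode (the database name here is hypothetical):
SELECT name, recovery_model_desc FROM sys.databases WHERE name = 'MyDB';
ALTER DATABASE [MyDB] SET RECOVERY SIMPLE;   -- or FULL / BULK_LOGGED, depending on what you decided above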
Closing the instance of SSMS (SQL Server Management Studio) from which the request was made solved the problem for me.
To get around this, I stopped the website that was connected to the DB in IIS, and the frozen 'take db offline' panel immediately became unfrozen.
Also, close any query windows you may have open that are connected to the database in question ;)
I tried all the suggestions below and nothing worked.
1. EXEC sp_who
   KILL <SPID>
2. ALTER DATABASE SET SINGLE_USER WITH ROLLBACK IMMEDIATE
3. ALTER DATABASE SET OFFLINE WITH ROLLBACK IMMEDIATE
   Result: both of the above commands were also stuck.
4. Right-click the database -> Properties -> Options
   Set Database Read-Only to True
   Click 'Yes' at the dialog warning that SQL Server will close all connections to the database.
   Result: the window was stuck on executing.
As a last resort, I restarted the SQL Server service from Configuration Manager and then ran ALTER DATABASE SET OFFLINE WITH ROLLBACK IMMEDIATE. It worked like a charm.
In SSMS, set the database to read-only then back. The connections will be closed, which frees up the locks.
In my case there was a website that had open connections to the database. This method was easy enough:
Right-click the database -> Properties -> Options
Set Database Read-Only to True
Click 'Yes' at the dialog warning SQL Server will close all connections to the database.
Re-open Options and turn read-only back off
Now try renaming the database or taking it offline.
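For reference, a T-SQL sketch of the same read-only trick (the database name here is hypothetical):
ALTER DATABASE [MyDB] SET READ_ONLY WITH ROLLBACK IMMEDIATE;   -- rolls back open transactions and closes existing connections
ALTER DATABASE [MyDB] SET READ_WRITE;                          -- turn read-only back off
ALTER DATABASE [MyDB] SET OFFLINE;                             -- the offline/rename should now go through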
For me, I just had to go into the Job Activity Monitor and stop two things that were processing. Then it went offline immediately. In my case though I knew what those 2 processes were and that it was ok to stop them.
In my case, the database was related to an old Sharepoint install. Stopping and disabling related services in the server manager "unhung" the take offline action, which had been running for 40 minutes, and it completed immediately.
You may wish to check if any services are currently utilizing the database.
Next time, from the Take Offline dialog, remember to check the 'Drop All Active Connections' checkbox. I was also on SQL_EXPRESS on local machine with no connections, but this slowdown happened for me unless I checked that checkbox.
SSMS, especially if running it from your own desktop remotely and not directly within the database server, can be a reason for the long delays in detaching a database. For some reason SSMS may not be able to disconnect any existing "connections" to the database.
We found the process was almost instant when we did it directly from the database server itself. And in fact it killed the attempt from my own desktop SSMS session, and it "took over" and detached the database.
Nothing else suggested here worked.
Thanks
In my case I stopped the Tomcat server, and the DB immediately went offline.
