DDL commands are AutoCommit in SQL server, what does it mean? - sql-server

On most pages I have read that "DDL commands are auto-committed in SQL Server". If I am not wrong, this statement simply means that we don't need an explicit COMMIT command for DDL commands.
Then why...
1) ALTER TABLE EMP ADD Age INT;
UPDATE EMP SET Age = 20;
fails, saying "Invalid column name 'Age'"
2) BEGIN TRAN
ALTER TABLE EMP ADD Age INT;
ROLLBACK
can be rolled back successfully.
Maybe I am wrong about the concept of autocommit; please explain it with an example where it actually has an effect.
Thanks for any help.

Autocommit Transactions:
A connection to an instance of the Database Engine operates in autocommit mode until a BEGIN TRANSACTION statement starts...
So, your second example doesn't apply. Further down:
In autocommit mode, it sometimes appears as if an instance of the Database Engine has rolled back an entire batch instead of just one SQL statement. This happens if the error encountered is a compile error, not a run-time error. A compile error prevents the Database Engine from building an execution plan, so nothing in the batch is executed.
Which is what your first example deals with.
So, neither one is actually dealing with Autocommit transactions.
So, let's take a statement like:
ALTER TABLE EMP ADD Age INT;
If you have an open connection in Autocommit mode, execute the above, and it completes without errors, then you will find that this connection has no open transactions, and the changes are visible to any other connection immediately.
If you have an open connection in Implicit Transactions mode, execute the above, and it completes without errors, then you will find that this connection has an open transaction. Other connections will be blocked on any operations that require a schema lock on EMP, until you execute either COMMIT or ROLLBACK.
If you have an open connection, in which you have executed BEGIN TRANSACTION, execute the above, and it completes without errors - then you'll be in the same situation as for Implicit Transactions. However, having COMMITed or ROLLBACKed, your connection will revert to either Autocommit mode or Implicit Transactions mode (whichever was active before the call to BEGIN TRANSACTION).
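The three situations above can be sketched as a script, assuming a table named EMP already exists (the extra column names here are made up for illustration):

```sql
-- Autocommit (the default): the ALTER commits on its own
-- and is immediately visible to other connections.
ALTER TABLE EMP ADD Age INT;

-- Implicit transactions mode: the ALTER opens a transaction
-- that stays open until you end it yourself.
SET IMPLICIT_TRANSACTIONS ON;
ALTER TABLE EMP ADD Weight INT;  -- a transaction is now open
SELECT @@TRANCOUNT;              -- returns 1
COMMIT;                          -- or ROLLBACK; until then, other
                                 -- connections block on schema locks
SET IMPLICIT_TRANSACTIONS OFF;

-- Explicit transaction: same blocking behavior until COMMIT/ROLLBACK,
-- then the connection reverts to its previous mode.
BEGIN TRANSACTION;
ALTER TABLE EMP ADD Height INT;
ROLLBACK;                        -- the column is never added
```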

If you have an explicit BEGIN TRAN, like in your second example, then you have to commit it (you can roll back as well).
If you don't specify it explicitly, as in your first example, then it is autocommitted.
Autocommit is the default mode in SQL Server, and it can be turned off if required.

If you split the statements into separate batches, it will work, because the UPDATE is then compiled after the new column already exists:
ALTER TABLE EMP ADD Age INT;
GO
UPDATE EMP SET Age = 20;

Related

SELECT statement is not blocked by an existing exclusive table lock

For testing, I am trying to simulate a condition in which a query from our web application to our SQL Server backend would time out. The web application is configured so this happens if the query runs longer than 30 seconds. I felt the easiest way to do this would be to take and hold an exclusive lock on the table that the web application wants to query. As I understand it, an exclusive lock should prevent any additional locks (even the shared locks taken by a SELECT statement).
I used the following methodology:
CREATE A LONG-HELD LOCK
Open a first query window in SSMS and run
BEGIN TRAN;
SELECT * FROM MyTable WITH (TABLOCKX);
WAITFOR DELAY '00:02:00';
ROLLBACK;
(see https://stackoverflow.com/a/25274225/2824445 )
CONFIRM THE LOCK
I can EXEC sp_lock and see results with ObjId matching MyTable, Type of TAB, Mode of X
TRY TO GET BLOCKED BY THE LOCK
Open a second query window in SSMS and run SELECT * FROM MyTable
I would expect this to sit and wait, not returning any results until after the lock is released by the first query. Instead, the second query returns with full results immediately.
STUFF I TRIED
In the second query window, if I SET TRANSACTION ISOLATION LEVEL SERIALIZABLE, then the second query waits until the first completes as expected. However, the point is to simulate a timeout in our web application, and I do not have any easy way to alter the transaction isolation level of the web application's connections away from the default of READ COMMITTED.
In the first window, I tried modifying the table's values inside the transaction. In this case, when the second query returns immediately, the values it shows are the unmodified values.
Figured it out. We had READ_COMMITTED_SNAPSHOT turned on, which is how the second query was able to return the previous, unmodified values in part 2 of "Stuff I tried". I was able to determine this with SELECT is_read_committed_snapshot_on FROM sys.databases WHERE name = 'MyDatabase'. Once it was turned off with ALTER DATABASE MyDatabase SET READ_COMMITTED_SNAPSHOT OFF, I began to see the expected behavior in which the second query would wait for the first to complete.
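The check and the change from that answer, as a single snippet (the database name is the asker's; the `WITH ROLLBACK IMMEDIATE` clause is an addition here, commonly needed because the ALTER waits for exclusive access to the database):

```sql
-- Check whether read-committed snapshot is enabled
SELECT is_read_committed_snapshot_on
FROM sys.databases
WHERE name = 'MyDatabase';

-- Turn it off so plain READ COMMITTED readers block on the X lock
-- instead of reading the last committed row versions
ALTER DATABASE MyDatabase SET READ_COMMITTED_SNAPSHOT OFF
WITH ROLLBACK IMMEDIATE;  -- kicks other sessions out; use with care
```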

SQL Transaction and external call deadlock

For some reason we have a code like this (inside a procedure), on SQL Server 2008 R2:
update foo set value = 'xxx' where id = 123
exec usp_analyzeFoo @id = 123
where usp_analyzeFoo calls out to a web service that connects to the database on its own and tries to update table foo by itself. The web service uses native code and connects to hardware (which is why I didn't want this code running inside SQL Server), works something out, and updates foo. But it does so from a different connection, outside the current transaction.
It has worked until somebody wrapped these calls in a transaction and now the update command locks the table foo and the web service (which connects through its own database connection) is blocked by this transaction and times out.
Is there a reasonable solution for this problem? Something along calling the web service after the current transaction is over, scheduling it several seconds later (it's not critical) or something similar?
You can call that SP with a job, as Justicator commented, or you can put the calls in two separate transactions.
If you don't explicitly wrap them in a transaction, each operation will run in its own autocommit transaction:
update -- autocommit tran
exec SP -- autocommit tran
When you wrapped them, you meant to do both operations in one atomic transaction, and that does not work because the SP calls a third-party web service that tries to run another transaction of its own.
begin tran
update
exec SP -- the third-party call will be blocked by that update, because it's not committed yet
commit tran -- never reached, since the exec does not complete: it's a deadlock
You can separate the transactions if you wish; they will work like they did before that (bad) wrap.
begin tran
update
commit tran
begin tran
exec SP
commit tran
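Another hedged option, along the lines of the "call it several seconds later" idea from the question: record the work inside the transaction and let a SQL Agent job drain it afterwards, outside any user transaction. A minimal sketch; the queue table `fooAnalyzeQueue` and its columns are hypothetical names, not from the original code:

```sql
-- Inside the procedure / transaction: record the work instead of doing it
BEGIN TRAN;
UPDATE foo SET value = 'xxx' WHERE id = 123;
INSERT INTO fooAnalyzeQueue (fooId, queuedAt) VALUES (123, GETDATE());
COMMIT TRAN;

-- Body of a SQL Agent job that runs every few seconds and drains
-- the queue; the web service now sees the committed update
DECLARE @id INT;
SELECT TOP (1) @id = fooId FROM fooAnalyzeQueue ORDER BY queuedAt;
IF @id IS NOT NULL
BEGIN
    EXEC usp_analyzeFoo @id = @id;
    DELETE FROM fooAnalyzeQueue WHERE fooId = @id;
END;
```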

Why is my SQL transaction not executing a delete at the end?

I've got a simple SQL command that is supposed to read all of the records in from a table and then delete them all. Because there's a chance someone else could be writing to this table at the exact moment, I want to lock the table so that I'm sure that everything I delete is also everything I read.
BEGIN TRAN T1;
SELECT LotID FROM fsScannerIOInvalidCachedLots WITH (TABLOCK, HOLDLOCK);
DELETE FROM fsInvalidCachedLots;
COMMIT TRAN T1;
The really strange thing is, this USED to work. It worked for a while through testing, but now I guess something has changed because it's reading everything in, but it's not deleting any of the records. Consequently, SQL Server is spinning up high CPU usage because when this runs the next time it takes significantly longer to execute, which I assume has something to do with the lock.
Any idea what could be going on here? I've tried both TABLOCK and TABLOCKX
Update: Oh yea, something I forgot to mention, I can't query that table until after the next read the program does. What I mean is, after that statement is executed in code (and the command and connection are disposed of) if I try to query that table from within Management Studio it just hangs, which I assume means it's still locked. But then if I step through the calling program until I hit the next database connection, the moment after the first read the table is no longer locked.
Your SELECT retrieves data from a table named fsScannerIOInvalidCachedLots, but the delete is from a different table named fsInvalidCachedLots.
If you run this query with SET XACT_ABORT OFF, the transaction will not be aborted by the error from the invalid table name. In fact, SELECT @@TRANCOUNT will show you that there is an active transaction, and SELECT XACT_STATE() will return 1, meaning that it is active and no error has occurred.
On the other hand, with SET XACT_ABORT ON, the transaction is aborted. SELECT @@TRANCOUNT will return 0, and SELECT XACT_STATE() will return 0 (no active transaction).
See @@TRANCOUNT and XACT_STATE() on MSDN for more information about them.
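The difference described above can be checked with a small script (table name from the question; the DELETE targets the misspelled, non-existent table, and the expected values in the comments follow the behavior described in this answer):

```sql
SET XACT_ABORT OFF;
GO
BEGIN TRAN;
GO
DELETE FROM fsInvalidCachedLots;  -- error: invalid object name
GO
SELECT @@TRANCOUNT;               -- 1: the transaction is still active
SELECT XACT_STATE();              -- 1: active and committable
ROLLBACK;
GO

SET XACT_ABORT ON;
GO
BEGIN TRAN;
GO
DELETE FROM fsInvalidCachedLots;  -- same error...
GO
SELECT @@TRANCOUNT;               -- 0: the transaction was aborted
SELECT XACT_STATE();              -- 0: no active transaction
GO
```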

Will SQL Server roll back single statement after it was terminated?

I have a statement with a single UPDATE command. If I manually terminate it, will all the results be rolled back?
If you kill the connection on which the update was issued, or otherwise manage to cancel the query, the update will be rolled back. Every DML statement in SQL runs within the context of a transaction - SQL Server will automatically create one if one doesn't exist, and commit it after the statement completes.
If no errors occur, then this statement cannot be rolled back:
Update table
Set MyCol = 'foo'
Where MyOtherCol = 'bar'
However, what you can do in SQL is run the following statements together:
begin transaction
Update table
Set MyCol = 'foo'
Where MyOtherCol = 'bar'
Then perform any checks that you may need to do. If everything is OK, then you can run the following:
commit transaction
If you need to cancel the update, you can run this:
rollback transaction
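In a script, this check-then-commit pattern can also be written with TRY/CATCH so the rollback happens automatically on error. A sketch only: `MyTable` stands in for the placeholder table name above, and the row-count check is a hypothetical example of a sanity check (THROW requires SQL Server 2012 or later):

```sql
BEGIN TRANSACTION;
BEGIN TRY
    UPDATE MyTable
    SET MyCol = 'foo'
    WHERE MyOtherCol = 'bar';

    -- hypothetical sanity check: bail out if too many rows were touched
    IF @@ROWCOUNT > 1000
        THROW 50000, 'Too many rows updated', 1;

    COMMIT TRANSACTION;
END TRY
BEGIN CATCH
    IF @@TRANCOUNT > 0
        ROLLBACK TRANSACTION;
    THROW;  -- re-raise the original error to the caller
END CATCH;
```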

Do DB locks require transactions?

Is it true that "Every statement (select/insert/delete/update) has an isolation level regardless of transactions"?
I have a scenario in which I have one set of update statements inside a transaction (READ COMMITTED),
and another set not in a transaction (SELECT statements).
In this case, while the first set is executing, the other waits.
If I set READ_COMMITTED_SNAPSHOT for the DB, a deadlock occurs.
ALTER DATABASE Amelio SET ALLOW_SNAPSHOT_ISOLATION ON
ALTER DATABASE Amelio SET READ_COMMITTED_SNAPSHOT ON
To solve this problem, do I need to put "Select" statements in TransactionScope?
On SQL Server, every statement runs inside an implicit or explicit transaction: explicit if wrapped with BEGIN/COMMIT/ROLLBACK TRANSACTION, implicit (autocommit) if nothing like this is issued.
Start your snapshot before the update query starts. Otherwise you give SQL Server no chance to prepare the changed rows in tempdb, and the update query still holds the lock.
Another way, without snapshot isolation, is to use SELECT <columns> FROM <table> WITH (NOLOCK), which tells SQL Server to read the rows no matter what (aka READ UNCOMMITTED). As it is a table hint, it changes the isolation level even with your settings. It can work if you are not bothered about which state of the row is queried; however, caution is needed when evaluating the data received.
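The hint from the last paragraph looks like this in practice; the table and column names here are hypothetical placeholders:

```sql
-- Reads rows without taking shared locks, so it is not blocked by
-- the writers' open transaction; it may return uncommitted ("dirty")
-- data that is later rolled back.
SELECT Id, Status
FROM dbo.MyItems WITH (NOLOCK)
WHERE Status = 'pending';
```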
