Why or when would you set XACT_ABORT to OFF? - sql-server

I know what XACT_ABORT is. There is a lot of info on why to set it to ON. My question is: why would someone set it to OFF? The point of a transaction is that either everything is done or nothing is done, so why would someone set this to OFF? Why Microsoft even chose OFF as the default is strange (I know that the default inside triggers is ON).

After some digging I can only think of one decent reason why you would: you want the transaction to continue processing regardless of any errors that are flagged. It might be that you want to deal with the errors later on rather than having the entire transaction rolled back.
There are also certain times when having XACT_ABORT ON isn't required for data modification statements, but a requirement is different from a preference.
Source: MSDN
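As a hedged sketch of that use case (the dbo.Accounts table and its CHECK constraint are hypothetical, invented for illustration): with XACT_ABORT OFF, most statement-level errors abort only the failing statement, so the transaction stays open and you can decide per statement how to handle the error.

SET XACT_ABORT OFF;
BEGIN TRANSACTION;
UPDATE dbo.Accounts SET Balance = Balance - 100 WHERE AccountId = 1;
-- Suppose this statement violates a CHECK constraint: with XACT_ABORT OFF
-- the error aborts only this statement and the batch keeps running.
UPDATE dbo.Accounts SET Balance = -999 WHERE AccountId = 2;
IF @@ERROR <> 0
    PRINT 'Second update failed; handling it here instead of losing everything.';
COMMIT TRANSACTION;  -- work from the successful statements is still committed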

Related

Why would I ever want to have SET NOCOUNT OFF?

I'm new to SQL, and in my research on best performance I see advice everywhere to SET NOCOUNT ON in all of my queries to improve performance. I understand that the rows-affected message is "most of the time" unnecessary data transmission, but my question is: when would you want that message, if ever? Why is it OFF by default? Can I set it to ON by default?
Let me explain why I use SET NOCOUNT ON and SET NOCOUNT OFF a lot.
I usually interact with the database by loading script files. I set SQL Server Management Studio to display the output as text.
Why do I do this, and why do I use SET NOCOUNT?
I set the output to text because then I can see very easily what the queries have returned.
I might be sending 20 or 30 queries to the database, and if I return the results as a grid it is really hard to know what has happened, as there are so many tabs to look through.
And why use SET NOCOUNT?
Well, the main reason is that I've probably deleted some rows, and with SET NOCOUNT OFF I can very quickly see whether I've deleted 10,246 rows when I was only expecting to delete 7.
So basically, loading and running commands from a script file, setting the output to text, and using SET NOCOUNT OFF tells me very quickly whether my commands have done what I expected.
I agree that if you're not running commands from a script file and setting output to text, the setting does seem pointless.
For the use case I describe, though, it really is useful.
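A small sketch of that workflow (dbo.Orders is a hypothetical table): run from a script file with output set to text, the rows-affected messages act as a quick sanity check.

SET NOCOUNT OFF;  -- the default: report rows affected
DELETE FROM dbo.Orders WHERE Status = 'Cancelled';
-- Messages pane shows e.g. "(7 rows affected)"; if it says 10246 instead,
-- something is wrong with the WHERE clause.
SET NOCOUNT ON;   -- suppress the chatter once you trust the statement
DELETE FROM dbo.Orders WHERE Status = 'Expired';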

Scripting using a transaction

I have a question but I can never get a clear answer. Any stored procedure that used a transaction that I looked at, up until my recent job, always had a commit plus a rollback in case of error. However, I have seen a lot of code at my new job that just has a BEGIN TRANSACTION and then a COMMIT at the end, with no rollback. I understand why you would use a transaction with a rollback, but why would you begin a transaction with no rollback? Is it so that when you run the code you lock the table up so no values can be changed while your code is updating? If so, why would you not want the added security of a rollback in case something goes wrong? Is this proper use of the transaction statement? Any thoughts or ideas would be great!
For Example:
BEGIN TRANSACTION [Tran1]
INSERT INTO [Test].[dbo].[T1]
([Title], [AVG])
VALUES ('Tidd130', 130), ('Tidd230', 230)
UPDATE [Test].[dbo].[T1]
SET [Title] = N'az2' ,[AVG] = 1
WHERE [dbo].[T1].[Title] = N'az'
COMMIT TRANSACTION [Tran1]
GO
Shouldn't this code be using rollback syntax for proper use of the BEGIN TRANSACTION statement?
The idea is that if that set of statements needs to be "all or nothing", wrapping the lot in a transaction is the way to ensure that's what happens. You're not seeing an explicit rollback because that's not what they're guarding against. Imagine the following scenario with your contrived example:
The insert happens
The server crashes (or the log fills up or some other external reason why things can't continue) before the update can happen
If they're both wrapped in the same transaction, the insert won't be reflected in the table data, which is the desired behavior.
When transactions are not explicitly declared, SQL Server will automatically BEGIN and COMMIT a TRANSACTION for each command. This frees up each command's lock as soon as the command executes.
When executing multiple commands inside a single transaction (as in the example you posted), locks from all commands are held until the transaction is committed.
Depending on the desired behavior, the script you posted may be correct. However, I would be careful to ensure that the developer did not mistakenly believe the transaction would be automatically rolled back on error. If that behavior is desired, you do indeed need an explicit ROLLBACK or SET XACT_ABORT ON, as in the sketch below.
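A minimal sketch of both options, reusing the [Test].[dbo].[T1] example from the question:

SET XACT_ABORT ON;  -- option 1: any runtime error dooms and rolls back the transaction
BEGIN TRY
    BEGIN TRANSACTION [Tran1];
    INSERT INTO [Test].[dbo].[T1] ([Title], [AVG])
    VALUES ('Tidd130', 130), ('Tidd230', 230);
    UPDATE [Test].[dbo].[T1] SET [Title] = N'az2', [AVG] = 1
    WHERE [Title] = N'az';
    COMMIT TRANSACTION [Tran1];
END TRY
BEGIN CATCH
    IF @@TRANCOUNT > 0
        ROLLBACK TRANSACTION;  -- option 2: an explicit rollback in the handler
    THROW;  -- re-raise so the caller still sees the error (SQL Server 2012+)
END CATCH;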
You use a transaction when you need the outcome to be atomic; you see this a lot in financial procedures where you care deeply about the ACID consistency of the data. Otherwise it is not necessary and introduces a great deal of locking overhead. There are good questions here and here that go into great depth.
Edit
The takeaway point is: if the procedure is all-or-nothing and must either fully succeed or fully fail, the correct decision is to use a transaction. If the procedure is not all-or-nothing, such as a simple insert or update, using a transaction is (a) unnecessary and (b) can introduce undue performance overhead due to the additional locking.

Committing Transaction

http://msdn.microsoft.com/en-us/library/ms189797.aspx
In this link they are committing a transaction within the CATCH clause when XACT_STATE() = 1. I don't get it: if there is an error, why are they committing? Even if the problem is in the SELECT statement and there is no big deal committing it, why not just roll it back?
Thanks
The link is demonstrating its use, that's all.
Saying that, it may be that in more complex code you want to do a partial commit rather than rolling back the entire transaction. However, you may not be able to (for example, when SET XACT_ABORT ON is used, as in the example).
It's just demonstration code to show that SET XACT_ABORT ON; makes it impossible to commit a transaction in which an error occurred.
As an example where you might want to commit a transaction after an error, consider logging code. You typically want the log entries to be committed when possible, even if the new order insert resulted in a primary key violation.
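A hedged sketch of that pattern (dbo.Orders and dbo.ErrorLog are hypothetical tables, and it assumes XACT_ABORT is OFF so the error does not doom the transaction):

BEGIN TRY
    BEGIN TRANSACTION;
    INSERT INTO dbo.ErrorLog (Msg) VALUES ('Attempting order insert');
    INSERT INTO dbo.Orders (OrderId) VALUES (42);  -- may hit a PK violation
    COMMIT TRANSACTION;
END TRY
BEGIN CATCH
    IF XACT_STATE() = 1  -- still committable: keep the log entries
    BEGIN
        INSERT INTO dbo.ErrorLog (Msg) VALUES (ERROR_MESSAGE());
        COMMIT TRANSACTION;
    END
    ELSE IF XACT_STATE() = -1  -- doomed: rolling back is the only option
        ROLLBACK TRANSACTION;
END CATCH;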

Lost Update Anomaly in SQL Server UPDATE Command

I am very much confused.
I have a transaction in ReadCommitted Isolation level. Among other things I am also updating a counter value in it, something similar to below:
Update tblCount set counter = counter + 1
My application is a desktop application, and this transaction happens to occur quite frequently and concurrently. We recently noticed an error where sometimes the counter value doesn't get updated or is missed. We also insert one record on each counter update, so we are sure the records have been inserted, but somehow the counter fails to update. This happens once in 2,000 simultaneous transactions.
I would seriously doubt that it is a lost update anomaly I am facing, because if you look at the command above, it just updates the counter from its own value: if I have started a transaction and the transaction has reached this statement, it should have locked the row. This should not cause a lost update, yet it's happening somehow.
Could it be that this UPDATE command works in two parts? Like first it reads the counter value (during which it doesn't take an exclusive lock) and then writes the newly calculated value (when it does take an exclusive lock)?
Please help, I have got really confused.
The update command does not work in two parts. It only works in one.
There's something else going on, and my first guess would be that your transaction is rolling back for another reason. Out of those 2,000 transactions, for example, one may be rolling back (especially if you're doing a ton of things concurrently), in which case it didn't succeed at all.
That update may not have been what caused the problem, either - you may have deadlocks involved due to other transactions, and they may be failing before the update command (or during the update command).
I'd zoom out and ask questions about the transaction's error handling. Are you doing everything in try/catch blocks? Are you capturing error levels when transactions fail? If not, you'll need to capture a trace with Profiler to find out what's going on.
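As a sketch of that error handling (tblCount comes from the question; dbo.TxnErrors is a hypothetical log table), a TRY/CATCH wrapper would surface deadlocks (error 1205) or lock timeouts instead of losing them silently:

BEGIN TRY
    BEGIN TRANSACTION;
    UPDATE tblCount SET counter = counter + 1;
    -- ... the accompanying record insert from the question would go here ...
    COMMIT TRANSACTION;
END TRY
BEGIN CATCH
    IF @@TRANCOUNT > 0
        ROLLBACK TRANSACTION;
    -- Record the failure; this insert runs outside the rolled-back transaction.
    INSERT INTO dbo.TxnErrors (ErrNum, ErrMsg)
    VALUES (ERROR_NUMBER(), ERROR_MESSAGE());
END CATCH;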
Are you sure that the SQL is always succeeding? What I mean is, could it be something like an occasional lock timeout? Are you handling SQL exceptions in your .NET code in a way that will make you aware of them (i.e. a pop-up message or a log entry)?

SQL Server SET IMPLICIT_TRANSACTIONS OFF and other options

I am still learning SQL Server and recently came across a SELECT query in a stored procedure that was causing a very slow fill of a DataSet in C#. At first I thought this was to do with .NET, but then I found a suggestion to put this in the stored procedure:
set implicit_transactions off
This seems to cure it, but I would like to know why. I have also seen other options such as:
set nocount off
set arithabort on
set concat_null_yields_null on
set ansi_nulls on
set cursor_close_on_commit off
set ansi_null_dflt_on on
set ansi_padding on
set ansi_warnings on
set quoted_identifier on
Does anyone know where to find good info on what each of these does and what is safe to use when I have stored procedures set up just to query data for viewing?
I should note, just to stop the usual use/don't use stored procedures debate: these queries are complex SELECT statements used by multiple programs in multiple languages, so a stored procedure is the best place for them.
Edit: I got my answer. I didn't end up fully reviewing all the options, but I did find that
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
sped up the complex queries dramatically. I am not worried about dirty reads in this instance.
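For context, a sketch of where that setting typically lives (the procedure and table below are hypothetical); READ UNCOMMITTED trades dirty reads for not blocking behind writers:

CREATE PROCEDURE dbo.GetDashboardCounts
AS
BEGIN
    SET NOCOUNT ON;
    SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;  -- reverts when the proc returns
    SELECT COUNT(*) FROM dbo.Orders;  -- may read uncommitted rows
END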
This is the page out of SQL Server Books Online (BOL) that you want. It explains all the SET statements that can be used in a session.
http://msdn.microsoft.com/en-us/library/ms190356.aspx
Ouch, someone, somewhere is playing with fire big-time.
I have never had a production scenario where I had to enable implicit transactions. I always open transactions when I need them and commit them when I am done. The problem with implicit transactions is that it's really easy to "leak" an open transaction, which can lead to horrible issues. What this setting means is: "please open a transaction for me the first time I run a statement if there is no transaction open, and don't worry about committing it".
For example have a look at the following examples:
set implicit_transactions on
go
select top 10 * from sysobjects
And
set implicit_transactions off
go
begin tran
select top 10 * from sysobjects
They both do the exact same thing; however, in the second example it's pretty clear that someone forgot to commit the transaction. This can get very complicated to track down if the setting is enabled in some obscure place.
The best place to get documentation for all the SET statements is the old trusty SQL Server Books Online. It, together with a bit of experimentation in Query Analyzer, is usually all you need to get a grasp of most settings.
I would strongly recommend you find out who is setting implicit transactions on, find out why they are doing it, and remove the setting if it's not really required. Also, you must confirm that whoever uses this setting commits their implicitly opened transactions.
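A few standard ways to check whether a session has leaked an open transaction (all documented commands, shown here as a quick sketch):

SELECT @@TRANCOUNT;  -- > 0 means the current session has an open transaction
DBCC OPENTRAN;       -- reports the oldest active transaction in the current database
SELECT session_id, transaction_id
FROM sys.dm_tran_session_transactions;  -- open transactions across all sessions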
What was probably going on is that you had an open transaction that was blocking part of your stored proc, and somewhere a timeout was occurring, raising an error that was handled in code; when that timeout happens your stored proc continues running. My guess is that the delay is usually exactly 30 seconds.
I think you need to look deeper into your stored procedure. I don't think SET IMPLICIT_TRANSACTIONS is really what sped up your procedure; I think it's probably a coincidence.
One thing that may be worth looking at is what is passed from the client to the server, by using the Profiler.
We had an odd situation where the default SET options for the ADO connection were causing an SP to take ages to run from the client. We resolved it by looking at exactly what the server was receiving from the client, complete with default SET options, compared to what was sent when executing from SSMS. We then made the client pass the same SET statements as those sent by SSMS.
This may be way off track, but it is a useful method to use when the SP executes in a timely fashion on the server but not from the client.
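One quick way to compare the two environments is to dump the SET options in effect for each session and diff the output:

DBCC USEROPTIONS;  -- lists the SET options active on the current connection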
