What is the equivalent to the following SQL Server statements in DB2?
Begin Transaction
Commit Transaction
Rollback Transaction
The answer is actually a little more complicated than indicated here. True, transactions are ANSI-standardized, and DB2 supports them, but which syntax works depends on the flavor of DB2.
DB2 for z/OS can be a very different beast from the other variants (LUW, for Linux/Unix/Windows, being the most common). At the risk of sliding into a rant, this makes the idea of talking about "DB2" almost pointless: you are really talking about some specific variant of IBM's database, and what works in one can be completely invalid in another. I will assume that whatever flavor the OP was using was not the z/OS one, since the BEGIN TRANSACTION answer was accepted.
For those of you who stumble across this trying to use transactions with DB2 for z/OS, here is the rundown: DB2 for the mainframe does not have explicit transactions. There is no BEGIN TRANSACTION or any other comparable construct. Transactions (usually referred to as units of work in the docs) are begun implicitly and committed or rolled back explicitly (usually--many GUI tools, like Toad, have an autocommit feature that can sneak up on you once in a while).
From the 9.1 z/OS SQL reference manual (page 28; available at http://www-01.ibm.com/support/docview.wss?uid=swg27011656#manuals):
"A unit of work is initiated when an application process is initiated. A unit of work
is also initiated when the previous unit of work is ended by something other than
the end of the application process. A unit of work is ended by a commit operation,
a full rollback operation, or the end of an application process. A commit or rollback
operation affects only the database changes made within the unit of work it ends."
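So in a script against DB2 for z/OS, a unit of work simply starts with your first statement, and you only ever end it explicitly. A minimal sketch (table and column names are hypothetical):
UPDATE MYTABLE SET MYCOL = 'VAL' WHERE MYKEY = 1;  -- the unit of work starts implicitly here
COMMIT;                                            -- ends the unit of work and makes the change permanent
-- or, to undo everything since the last COMMIT:
-- ROLLBACK;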
The closest thing you get when writing scripts is to manually specify savepoints. They look like this:
SAVEPOINT A ON ROLLBACK RETAIN CURSORS;
UPDATE MYTABLE SET MYCOL = 'VAL' WHERE 1 = 1;
ROLLBACK WORK TO SAVEPOINT A;
Superficially, these resemble explicit transactions, but they are not. Instead, they really are just points in time within a single implicit transaction. For many purposes, they may suffice, but it is important to be aware of the conceptual differences.
See here for more info. But basically
BEGIN TRANSACTION
COMMIT TRANSACTION
ROLLBACK
If you use an IDE like IntelliJ IDEA (or others), you cannot explicitly start a transaction. In other words, you cannot type 'begin transaction' in the console of your IDE.
But you can disable 'Auto-commit' (and re-enable it later) and then type 'commit' or 'rollback' into the console.
In IDEA there is also a button to 'commit' and a button to 'rollback'.
Have a look at the screen dump attached.
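For example, with Auto-commit disabled, a console session might look roughly like this (table name is hypothetical):
UPDATE mytable SET mycol = 'val' WHERE id = 1;
COMMIT;   -- or ROLLBACK; to discard the change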
Related
I have some scripts including DDL and DML transactions that use an Informix 14 database that I want to run some tests against.
If the tests fail they will often leave the database in an inconsistent state that needs to be manually resolved before the tests can be run again.
I would like to automate this so that manual intervention is not required before running the tests again.
So, is it possible to use rollback and savepoint without locking the database and run some tests on the database?
If you execute BEGIN WORK (or just BEGIN), then everything you do afterwards will be part of the same transaction until you explicitly execute COMMIT WORK (or just COMMIT) or ROLLBACK WORK (or just ROLLBACK) — or until your program exits without explicitly ending the transaction. If your program terminates unexpectedly or carelessly (without explicitly completing the transaction), the transaction will be rolled back.
The transaction will manage all the DDL and DML operations — with the sole exception of some caveats with the TRUNCATE TABLE statement. There are some restrictions on what you can do with the truncated table — basically, it cannot be modified again until the transaction completes.
Of course, this assumes your database is logged. You can't have transactional control in an unlogged database.
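So a test run can be wrapped up roughly like this (table and column names are hypothetical), assuming the database is logged:
BEGIN WORK;
CREATE TABLE test_scratch (id INTEGER, val VARCHAR(20));
INSERT INTO test_scratch VALUES (1, 'abc');
-- ... run the tests against these changes ...
ROLLBACK WORK;   -- leaves the database as it was before the test run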
Let's say I have a stored procedure that is doing 3 inserts. To make sure everything is working fine, I add a begin and commit tran in the stored procedure.
Then from the code side (.NET, C#), the programmer is also creating a transaction.
What will be the best approach for that?
Having it in both places?
Having that in the C# code only?
Having that in the stored procedure only?
It's better to only do it in the stored procedure for a number of reasons:
The procedure can keep better control of when to begin and commit the transaction.
It can also control the isolation level, which should usually be set before the transaction starts.
It keeps database code close to the database (somewhat subjective).
If the connection is severed and the server does not notice, a transaction opened by the client may not be committed or rolled back for some time, which can leave locks hanging and cause a huge chain of blocking.
The client starting and committing the transaction requires two extra round-trips over the network, which in a low-latency app may be problematic. (SET NOCOUNT ON should be used for the same reason.) The transaction and associated locking are also extended for that time, causing further blocking problems.
Do use SET XACT_ABORT ON: in case of exceptions it causes an automatic rollback and prevents them from leaving hanging transactions.
It may still make sense to use client-side transactions, especially with distributed transactions.
Having transactions in both client code and the procedure is silly and wasteful. Choose one or the other option and stick to it.
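As a rough sketch of the procedure-only approach described above (procedure, table, and column names are hypothetical):
CREATE PROCEDURE dbo.usp_InsertThreeRows
AS
BEGIN
    SET NOCOUNT ON;
    SET XACT_ABORT ON;  -- a runtime error aborts the batch and rolls the transaction back

    BEGIN TRY
        BEGIN TRANSACTION;

        INSERT INTO dbo.TableA (Col1) VALUES ('a');
        INSERT INTO dbo.TableB (Col1) VALUES ('b');
        INSERT INTO dbo.TableC (Col1) VALUES ('c');

        COMMIT TRANSACTION;
    END TRY
    BEGIN CATCH
        IF @@TRANCOUNT > 0
            ROLLBACK TRANSACTION;
        THROW;  -- re-raise so the client still sees the error
    END CATCH
END;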
I have some SQL statements in a batch that I want to profile for performance. To that end, I have created a stored procedure that logs execution times.
However, I also want to be able to roll back the changes the main batch performs, while still retaining the performance logs.
The alternative is to run the batch, copy the performance data to another database, restore the database from backup, re-apply all the changes made that I want to profile, plus any more, and start again. That is rather more time-consuming than not including the act of logging in the transaction.
Let us say we have this situation:
DECLARE @StartTime DATETIME2
BEGIN TRANSACTION
SET @StartTime = SYSDATETIME()
-- Do stuff here
UPDATE ABC SET x = dbo.fn_LongRunningFunction(x)
EXECUTE usp_Log 'Do stuff', @StartTime
SET @StartTime = SYSDATETIME()
-- Do more stuff here
EXEC usp_LongRunningSproc
EXECUTE usp_Log 'Do more stuff', @StartTime
ROLLBACK
How can I persist the results that usp_Log saves to a table without rolling them back along with the changes that take place elsewhere in the transaction?
It seems to me that ideally usp_Log would somehow not enlist itself into the transaction that may be rolled back.
I'm looking for a solution that can be implemented in the most reliable way, with the least coding or work possible, and with the least impact on the performance of the script being profiled.
EDIT
The script that is being profiled is extremely time-consuming - taking from an hour to several days - and I need to be able to see intermediate profiling results before the transaction completes or is rolled back. I cannot afford to wait for the end of the batch before being able to view the logs.
You can use a table variable for this. Table variables, like normal variables, are not affected by ROLLBACK. You would need to insert your performance log data into a table variable, then insert it into a normal table at the end of the procedure, after all COMMIT and ROLLBACK statements.
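A minimal sketch of that pattern, assuming a permanent log table named dbo.PerformanceLog (hypothetical), and reusing the names from the question:
DECLARE @StartTime DATETIME2 = SYSDATETIME();
DECLARE @PerfLog TABLE (Step NVARCHAR(100), StartTime DATETIME2, EndTime DATETIME2);

BEGIN TRANSACTION;
    UPDATE ABC SET x = dbo.fn_LongRunningFunction(x);                   -- the work being profiled
    INSERT INTO @PerfLog VALUES (N'Do stuff', @StartTime, SYSDATETIME());
ROLLBACK;                                                                -- undoes the UPDATE, but not the table variable

INSERT INTO dbo.PerformanceLog (Step, StartTime, EndTime)
SELECT Step, StartTime, EndTime FROM @PerfLog;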
It might sound a bit overkill (given the purpose), but you can create a CLR stored procedure which will take over the progress logging, and open a separate connection inside it for writing log data.
Normally, it is recommended to use the context connection within CLR objects whenever possible, as it simplifies many things. In your particular case, however, you wish to disentangle from the context (especially from the current transaction), so a regular connection is the way to go.
Caveat emptor: if you never dabbled with CLR programming within SQL Server before, you may find the learning curve a bit too steep. That, and the amount of server reconfiguration (both the SQL Server instance and the underlying OS) required to make it work might also seem to be prohibitively expensive, and not worth the hassle. Still, I would consider it a viable approach.
So, as Roger mentions above, SQLCLR is one option. However, since SQLCLR is not permitted in your environment, you are out of luck there.
In SQL Server 2017 there is another option: the SQL Server extensibility framework and its support for Python.
You can use this to have Python code which calls back into your SQL Server instance and executes the usp_log procedure.
Another, rather obscure, option is to bind other sessions to the long-running transaction for monitoring.
At the beginning of the transaction call sp_getbindtoken and display the bind token.
Then in another session call sp_bindsession, and you can examine the intermediate state of the transaction.
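A rough sketch of how that might look (sp_getbindtoken and sp_bindsession are deprecated, and the log table name is hypothetical):
-- Session 1: the long-running transaction
DECLARE @token VARCHAR(255);
BEGIN TRANSACTION;
EXEC sp_getbindtoken @token OUTPUT;
SELECT @token AS BindToken;          -- copy this value to the monitoring session
-- ... long-running, profiled work ...

-- Session 2: monitoring, using the token from session 1
EXEC sp_bindsession 'paste-the-token-here';
SELECT * FROM dbo.PerformanceLog;    -- sees the uncommitted log rows inside the transaction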
Or you can read the logs with (NOLOCK).
Or you can use RAISERROR WITH LOG to send debug messages to the client and mirror them to the SQL log (a rough sketch follows below).
Or you can use custom user-configurable trace events, and monitor them in SQL Trace or XEvents.
Or you can use a Loopback linked server configured to not propagate the transaction.
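Of these, the RAISERROR route is probably the quickest to try; a rough sketch (note that WITH LOG requires ALTER TRACE permission):
RAISERROR (N'Finished step: Do stuff', 10, 1) WITH LOG, NOWAIT;  -- low severity: sent to the client immediately and written to the error log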
I administer a Sybase ASE (15.0.7) database that runs on Solaris (11). I am pretty new specifically to Sybase ASE, but I have pretty good overall knowledge of working with databases such as SQL Server. Lately, while doing tasks such as uploading programmers' scripts, I was told not to do it with the ASE isql utility and to go straight to the command-line utility (isql), because it would lose part of the data otherwise. I was pretty confused as to how it could possibly lose anything while handing a script to the DB, and I tried to discuss this with the folks at work, saying that it sounds pretty weird.
None of us are heavily experienced Sybase admins, and they could not give me any reasoned answers on the matter. They just claim that ASE isql is a no-no.
Could that really be true?
This is absolutely not true. The Sybase command-line utility 'isql' is used very intensely by Sybase customers.
I think the confusion may come from the fact that isql does not perform 'autocommit', as is common in client tools for many other databases.
As a result, when you start an explicit transaction (BEGIN TRANSACTION) in the default unchained transaction mode, or when you run in chained transaction mode, and you then exit 'isql' without committing, the transaction will not have been committed, so the ASE server rolls it back. This may be interpreted as 'data being lost', but that's not what really happens.
So, in ASE you should explicitly COMMIT a transaction, or it will eventually be rolled back.
(Just for completeness: in the default unchained transaction mode, if you don't use BEGIN TRANSACTION, then each DML command commits immediately when it completes. That's not the same as autocommit, although it is sometimes called that.)
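A small sketch of the effect in the default unchained mode (table name is hypothetical):
begin transaction
insert into mytable (id, val) values (1, 'x')
go
-- quitting isql at this point ends the session with the transaction still open,
-- so ASE rolls the insert back; to keep the row, commit explicitly first:
commit transaction
go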
I just read in Wikipedia, that SQL is inherently transactional.
I took this to mean that every statement made to a SQL DBMS is treated as a transaction by default. Is this correct?
An example I can think of that might make this question relevant would be if you considered an update statement like this:
UPDATE Employee SET salary=salary * 1.1 WHERE type='clerk';
If this were being processed and there was some failure that caused the DBMS to shut down, then on restart, in a transactional sense, wouldn't the records that were updated be rolled back?
At least in SQL Server, if you start a transaction with
BEGIN TRAN
and you don't commit or roll back the transaction, it will ask you (if you try to close the query window) whether you want to commit it. If something caused everything to crash while a transaction was still open (meaning nothing had closed it), it would not be considered committed.
Your question demonstrates another reason why many developers will issue a COMMIT TRAN only if there are no errors, so that by default every transaction rolls back.
Disclaimer here: I am ONLY referring to SQL Server - I cannot say this would hold true for other SQL databases.
The answer is no. SQL is a language; what you describe is ACID behaviour. Though many database systems behave that way, it is still perfectly possible to create one that uses SQL as its language and allows statements to be partially executed.