SQL Server: How to make a transaction commit fail?

To test error handling in an application, I'm looking for a way to make a transaction commit result in an error.
The application is written in C and uses ODBC to talk to a SQL Server 2017 data source. The application starts a database transaction and executes arbitrary SQL (which I can change for the sake of the test). Then it commits the transaction (using ODBC's SQLEndTran()). I want to build a test that verifies the error handling of the commit.
Is there an easy and reliable way to make the commit fail, e.g. by executing some specific SQL script before the commit, or by changing the database or data source settings?
EDIT / clarification: What I need to fail is the transaction commit itself (specifically, the SQLEndTran() call must complete with an error). Any SQL executed before that should complete successfully.

If you are able to time it correctly in a testing framework, you can do a few things (see the sketch after the list):
1. Kill the session from a separate connection in the testing framework.
2. Change the firewall configuration to emulate a network error.
3. Switch the database to single-user mode or stop the SQL Server service.
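For options 1 and 3, a minimal T-SQL sketch of what the test harness could run from a separate connection (the login, session id, and database name are hypothetical):
-- Option 1: find and kill the application's session so its pending work fails.
SELECT session_id, login_name, host_name
FROM sys.dm_exec_sessions
WHERE login_name = 'app_login';   -- hypothetical login
KILL 86;                          -- the session_id found above
-- Option 3: force the database into single-user mode, breaking other sessions.
ALTER DATABASE MyAppDb SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
-- Afterwards, restore normal access:
ALTER DATABASE MyAppDb SET MULTI_USER;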

The easiest way is to force a divide-by-zero error:
declare @SomeVal int = 0
set @SomeVal = 2 / @SomeVal
--EDIT--
Since I guess you want the commit itself to fail, you could simply add a rollback right before the commit. The error would then be raised by the commit statement.
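A minimal sketch of that idea, assuming the test injects the ROLLBACK through the same connection just before the application calls SQLEndTran() (table and column names are hypothetical):
BEGIN TRANSACTION;
UPDATE dbo.SomeTable SET SomeCol = 1;  -- hypothetical work done by the application
ROLLBACK TRANSACTION;                  -- injected by the test
COMMIT TRANSACTION;                    -- now fails with error 3902:
-- "The COMMIT TRANSACTION request has no corresponding BEGIN TRANSACTION."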

Related

Is there any way to rollback transactions in SSIS for SQL Server 2012?

Cannot successfully execute an SSIS package with BEGIN TRAN functionality.
I'm at a loss with an SSIS package I inherited. It contains:
1 Script Task
3 Execute SQL tasks
5 Data Flow tasks (each containing a number of merges, lookups, data inserts, and other transformations)
1 File System task.
All of these are encapsulated in a Foreach loop container. I've been tasked with modifying the package so that if any of the steps within the control/data flow fails, the entire thing is rolled back. Now I've tried two different approaches to accomplish this:
I. Using Distributed Transactions.
I ensured that:
MSDTC was running on target server and executing client (screenshot enclosed)
msdtc.exe was added as an exception to server and client firewall
Inbound and outbound rules were set for both server and client to allow DTC connections.
ForeachLoop Container TransactionLevel: Required
All other tasks TransactionLevel: Supported
My OLEDB Connection has RetainSameConnection set to TRUE and I'm using SQL Server Authentication with Save Password checked
When I execute the package, it fails right after the script task (first step).
After spending an entire week trying to figure out a workaround, I decided to try to accomplish my goal using 3 Execute SQL tasks instead:
BEGIN TRAN before the ForeachLoop Container
COMMIT TRAN after the ForeachLoop Container with a Success Constraint
ROLLBACK TRAN after the ForeachLoop Container with a Failure constraint
In this case, the ForeachLoop container and all other tasks have the TransactionLevel property set to Supported. Now here, the problem is that the package executes up to the fourth data flow task and hangs there forever. After logging into SQL Server and checking the running sessions, I noticed sys.sp_describe_first_result_set;1 as the head-blocker session.
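One way to find such a head blocker (a sketch using the standard DMVs) is:
SELECT session_id, blocking_session_id, wait_type, wait_resource
FROM sys.dm_exec_requests
WHERE blocking_session_id <> 0;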
Doing some research, I found it could be related to a few TRUNCATE statements in some of my data flow tasks, which could cause a schema lock. I went ahead and changed the ValidateExternalMetadata property to False for all tasks within my data flows and changed my TRUNCATE statements to DELETE statements instead. I re-ran the package and it still hangs in the same spot with the same head blocker.
As an alternative, I tried creating a second OLEDB connection to the same database, assigned that new connection to my BEGIN, ROLLBACK and COMMIT SQL tasks with RetainSameConnection set to True, and set RetainSameConnection to False (and also tried True) on the original OLEDB connection (the one used by the data flow tasks). This worked in the sense that the package appeared to execute (it ran and the COMMIT TRAN task completed fine). I then ran it again with a forced error to make it fail, and the ROLLBACK TRAN task executed successfully. However, when I queried the affected tables, the transaction had not been rolled back: all new records were inserted and old ones were updated (the BEGIN TRAN was clearly issued on a different connection and hence didn't affect the package's workflow).
I'm not sure what else to try at this point. Any help would be truly appreciated; I'm about to go nuts with this!
P.S. Additionally, all objects have DelayValidation set to True, and the SQL Server version is 2012.

How to stop a running query?

I use RODBC to send queries to a SQL Server. Sometimes they take too much time to run, so I need to cancel them.
Clicking the red "stop" button in RStudio yields this error message:
R is not responding to your request to interrupt processing so to stop
the current operation you may need to terminate R entirely.
Terminating R will cause your R session to immediately abort. Active
computations will be interrupted and unsaved source file changes and
workspace objects will be discarded.
Do you want to terminate R now?
And if I click yes my session is indeed terminated. (note: using Rgui instead of RStudio doesn't make things better)
However:
when I use another program (named "Query ExPlus") to connect to this same SQL Server, I have a similar stop button, and clicking it instantly interrupts the query, without any crash.
when I connect to a PostgreSQL database using the RPostgres package I can also interrupt the query at any time.
These two points lead me to think that there should be a way to solve my problem. What can I do?
So far my workaround is:
library(RODBC)
library(R.utils)
# abort the query if it takes longer than 120 seconds
withTimeout(mydf <- sqlQuery(myconnection, myquery), timeout = 120)
Note: I don't have permission to kill queries from the database side.
I've just stumbled upon the odbc package. It lets you interrupt a query at any time.
Basic usage goes like this:
library(DBI)
myconnection <- dbConnect(odbc::odbc(),
                          driver   = "SQL Server",
                          server   = "my_server_IP_address",
                          database = "my_DB_name",
                          uid      = "my_user_id",
                          pwd      = "my_password")
dbGetQuery(myconnection, myquery)
I don't have a deep understanding of what happens behind the scenes, but from what I've seen so far in my personal use, this package has other advantages over RODBC:
noticeably faster
gets the column types from the DB instead of guessing them
no stringsAsFactors or as.is arguments necessary
Most SQL Server users use SQL Server Management Studio (which is free and can be downloaded from Microsoft) to connect to SQL Server; commands can also be executed from the command line via a tool called SQLCMD.
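For example, a minimal SQLCMD invocation (server and database names are placeholders; -E uses Windows authentication):
sqlcmd -S my_server -d my_database -E -Q "select @@spid;"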
If you can determine the session id that the SQL command is being run in, you can kill the session, which will stop any executing commands. SQL Server will still need time (could be a 'long' time) to roll back any changes made during the execution of the command.
Terminating a session can (depending on the software) take a while to communicate to SQL Server that the session has been terminated. When I connected to DB2 from SQL Server using linked servers, DB2 would buffer the terminate command, and it would frequently take up to an hour for DB2 to realize the session had been terminated.
To determine which session you are running in, you can try:
select @@spid;
Once you have the spid (let's say 86), you can then issue (assuming you have permission to do so):
kill 86;
but as Microsoft notes:
Terminates a user process that is based on the session ID or unit of work (UOW). If the specified session ID or UOW has a lot of work to undo, the KILL statement may take some time to complete, particularly when it involves rolling back a long transaction.
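While such a rollback is in progress, KILL can also report how far along it is (session id hypothetical):
kill 86 with statusonly;
-- reports an estimated rollback completion percentage and time remaining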
Try closing your query tab in SQL Server Management Studio.
A pop-up will then appear:
This query is currently executing. Do you want to cancel this query?
Choose "yes" to cancel it.
Try setting up your connection prior to the query:
sql = odbcConnect('Database name')
Then use the same connection to run your query:
mydf <- sqlQuery(sql, " myquery ")
Note: The running time depends on both the database and the R server, but setting up the connection this way should resolve the termination problem.

What happens in the case of a Postgres Transaction Rollback command not reaching the database?

Section 3.4 of the Postgres documentation covers transactions.
I thought a transaction worked according to the following rules:
The client sends a BEGIN statement to the database server on a connection. Call this connection "connection_one".
The client sends whatever queries they want to the database server. All of these queries are sent via "connection_one".
If at any time the connection (in this example "connection_one") is lost before a COMMIT statement reaches the database server, the database server rolls back to the state before the BEGIN statement.
If a COMMIT statement is issued and received by the database server, the changes are saved and the transaction block has completed. (A plain SQL sketch of this flow follows.)
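For illustration, the assumed flow in plain SQL (table name hypothetical):
BEGIN;                            -- start the transaction on connection_one
UPDATE accounts SET balance = 0;  -- work sent over the same connection
-- if connection_one is lost here, rule 3 says the server rolls back
COMMIT;                           -- only once this arrives are the changes saved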
It looks like the above is not the case, though. My confusion is that it looks like I have to actually issue a ROLLBACK command and have it reach the database server in order for partial changes not to be saved. Is this really the case, or am I missing something? If it is the case, is there any way I can get the above behavior, or is there some reason I would not want it? My concern is: what if the connection is lost before I am able to ROLLBACK?
Thanks.

Oracle database rollback after update

After upgrading from 11g to 12c, we noticed some weird behaviour. When we update a table, the data is rolled back even though we issued a commit and there was no error.
Anyone with similar experience?
This is never supposed to happen: once the database receives a COMMIT request, it must either (1) fulfill the COMMIT request or (2) return an error AND roll back the transaction. Oracle 12c has an API called "Transaction Guard" that is supposed to notify you if a commit is successful. Here is the URL for that API:
https://docs.oracle.com/database/121/CNCPT/transact.htm#CNCPT89217
Even if you are not using this API, if the transaction reaches the Oracle database, it should either succeed or you should receive a listener error or an ORA- error.
Are you sure that:
you are using a client that does not roll back if part of the data for a transaction (e.g. one field in a data grid) is not filled out?
your Oracle client is compatible with 12c?

cannot connect to database after putting tasks in sequence container

I had a package that worked perfectly until I decided to put some of its tasks inside a sequence container (more on why I wanted to do that: How to make a SSIS transaction in my case?).
Now I keep getting an error:
[Execute SQL Task] Error: Failed to acquire connection "MyDatabase". Connection may not be configured correctly or you may not have the right permissions on this connection.
Why could this be happening, and how do I fix it?
I started writing my own examples to reply to your question. Then I remember that I met Matt Mason when I talked at a SQL Saturday in New Hampshire. He is the Microsoft Program Manager for SSIS.
Since I spent three years between 2009 and 2011 writing nothing but ETL code, I figured Matt would have an article out there.
http://www.mattmasson.com/2011/12/design-pattern-avoiding-transactions/
Here is a high level summary of the approaches and the error you found.
[ERROR]
The error you found is related to MSDTC having issues. MSDTC must be configured and working correctly; common culprits are firewalls. Check out this post:
http://social.msdn.microsoft.com/Forums/sqlserver/en-US/3a5c847e-9c7e-4628-b857-4e6edaa7936c/sql-task-transaction-required?forum=sqlintegrationservices
[SOLUTION 1] - Use transactions at the package, task or container level.
Some data providers do not support MSDTC, and some tasks do not support transactions. This approach may also hurt performance, since you are adding a new layer to support two-phase commits.
http://technet.microsoft.com/en-us/library/aa213066(v=sql.80).aspx
[SOLUTION 2] - Use the following tasks.
A - BEGIN TRAN (EXECUTE SQL)
B - YOUR DATA FLOW
C - TEST THE RETURN CODE
1 - GOOD = COMMIT (EXECUTE SQL)
2 - FAILURE = ROLLBACK (EXECUTE SQL)
You must have the RetainSameConnection property set to True on the connection.
This forces all calls through one session (SPID). All transaction management is now on the server.
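A minimal sketch of the statements those three Execute SQL tasks could run (the success/failure routing itself is done with SSIS precedence constraints):
-- A: before the data flow
BEGIN TRANSACTION;
-- C1: on the success path
COMMIT TRANSACTION;
-- C2: on the failure path
IF @@TRANCOUNT > 0 ROLLBACK TRANSACTION;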
[SOLUTION 3] - Write all your code so that it is restartable. This does not mean you go out and use checkpoints.
One solution is to always use upserts: insert new data, update old data. Deletes are only a flag in a table. This pattern allows a failed job to be executed many times while still arriving at the same final state.
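For illustration, a T-SQL upsert via MERGE (table and column names are hypothetical):
MERGE dbo.Target AS t
USING dbo.Staging AS s
    ON t.BusinessKey = s.BusinessKey
WHEN MATCHED THEN
    UPDATE SET t.Payload = s.Payload,
               t.IsDeleted = s.IsDeleted   -- "deletes" stay a soft flag
WHEN NOT MATCHED THEN
    INSERT (BusinessKey, Payload, IsDeleted)
    VALUES (s.BusinessKey, s.Payload, s.IsDeleted);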
Another solution is to handle all error rows by placing them into a hospital table for manual inspection, correction, and insertion.
Why not use a database snapshot (which tracks just the changed pages)? Take a snapshot before the ETL job. If an error occurs, restore the database from the snapshot. The last step is to drop the snapshot to clean house.
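A sketch of that pattern; the database name, logical data-file name, and path are all hypothetical, and the logical name must match the source database's data file:
-- before the ETL job
CREATE DATABASE MyDb_Snapshot ON
    (NAME = MyDb_Data, FILENAME = 'D:\Snapshots\MyDb_Snapshot.ss')
AS SNAPSHOT OF MyDb;
-- on failure, revert the database to the snapshot
RESTORE DATABASE MyDb FROM DATABASE_SNAPSHOT = 'MyDb_Snapshot';
-- clean up
DROP DATABASE MyDb_Snapshot;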
In short, I hope these ideas are enough to help you out.
While the transaction option is nice, it does have some drawbacks. If you need an example, just ping me.
Sincerely
J
What package protection level are you using? Don't Save Sensitive? Encrypt Sensitive with User Key? I'd recommend changing it to Encrypt Sensitive with Password and entering a password; that way the stored password won't disappear.
Have you tried testing the connection to the database in the connection manager?
