SQL Server says no active transaction even though I created one

Using Microsoft.EntityFrameworkCore version 6.0.5 with .NET 6, I'm doing multiple database operations inside a transaction. When I call CommitAsync, it fails saying:
Cannot issue SAVE TRANSACTION when there is no active transaction.
How is there no active transaction when I've clearly created one?
var transaction = await Context.Database.BeginTransactionAsync(cancellationToken);
try
{
    var newEntity = ...
    // ... multiple database operations ...
    await Context.SaveChangesAsync(cancellationToken);
    // ... other DB operations that now make use of the just-created objects ...
    await Context.SaveChangesAsync(cancellationToken);
    await transaction.CommitAsync(cancellationToken);
    return newEntity;
}
catch (Exception)
{
    await transaction.RollbackAsync(cancellationToken);
    throw;
}
I'm creating multiple objects in different tables that reference each other, and calling other people's code that might do a save operation of its own, hence the transaction.

Strangely, I tracked this down to SQL Server corruption. I had renamed a table, and for some reason, if I saved changes to that table, SQL Server internally tried to use the old name. I dropped the table and recreated it, and then I had no more transaction issues.
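For context on the error message itself: as I understand it, EF Core 5 and later automatically create a savepoint when SaveChanges runs inside an explicitly started transaction, which is where the SAVE TRANSACTION statement in the error comes from. Below is a minimal sketch of the same savepoint API used manually; the savepoint name is illustrative and this is not the asker's fix:

// Hedged sketch: manual savepoint management on an EF Core transaction.
// "AfterFirstSave" is an arbitrary name chosen for illustration.
await using var transaction = await Context.Database.BeginTransactionAsync(cancellationToken);

await Context.SaveChangesAsync(cancellationToken);
await transaction.CreateSavepointAsync("AfterFirstSave", cancellationToken);

// ... further operations; on failure, roll back only to the savepoint ...
await transaction.RollbackToSavepointAsync("AfterFirstSave", cancellationToken);

await transaction.CommitAsync(cancellationToken);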

Related

SQL Server - Records being deleted randomly?

I have an ASP.NET MVC app I've developed for a client. MVC 5, EF 6.
The web server and db server are separate. All of a sudden a bunch of data was deleted from the db server. Users were performing no delete functions. Is there any scenario that would cause this? The records seem to be random. No stored procs, triggers, etc. running. The app was working fine for months. Is there any scenario where SQL Server (2014) would delete records in a table? No errors were displayed to the user.
**** UPDATE ****
The only "delete" related code that I rolled out recently was this...
[Authorize]
public class WorkResultsController : Controller
{
    private readonly ABC_WorkingContext db = new ABC_WorkingContext();

    public ActionResult DeleteEvent(int id, bool? performRedirectAfterDelete = true)
    {
        if (!WorkFormServices.IsEditOrDeleteEnabled(id)) return this.HttpNotFound();
        var @event = this.db.Events.Find(id);
        try
        {
            // first remove any Work questions related to this event
            var workResultsToDelete = this.db.WorkResults.Where(e => e.EventId == id).ToList();
            foreach (var row in workResultsToDelete) this.db.WorkResults.Remove(row);
            this.db.Events.Remove(@event);
            this.db.SaveChanges();
            if (performRedirectAfterDelete == true) return this.RedirectToAction("Index", "WorkScheduler");
            return this.Json(
                new { success = true, responseText = "Delete success!" },
                JsonRequestBehavior.AllowGet);
        }
        catch (Exception)
        {
            return this.Json(
                new { success = false, responseText = "Delete failed!!" },
                JsonRequestBehavior.AllowGet);
        }
    }
}
I want to delete only WorkResults records related to the specific ID. So, I believe this is working correctly. Do you see any unintended deletes that could happen?
I agree with Min - a DB won't just delete data. This is more than likely a code (app or DB side) issue or a breach of some kind.
I would check:
App code - is there a bad SQL call/statement (related to the tables you're missing data from) that could have deleted more than it should?
Stored procs, triggers - same as above; a SQL mistake could wreak havoc
Table relationships - are any unwanted cascade deletes set up?
EF - are there any unwanted cascades set up between entities? (see the sketch after this list)
Logins - for sanity, change the passwords for the logins your app uses. This could be a breach; it's hard to tell without seeing the pattern of missing data.
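On the EF point, here is a minimal sketch of what to look for. The navigation property names are assumptions based on the posted controller, not the asker's actual model: EF 6 enables cascade delete by convention for required one-to-many relationships, so deleting an Event could silently delete its dependent rows.

// Hedged EF 6 sketch: turn off cascade delete for one relationship,
// or remove the convention entirely, so every delete must be explicit.
protected override void OnModelCreating(DbModelBuilder modelBuilder)
{
    modelBuilder.Entity<Event>()
        .HasMany(e => e.WorkResults)   // assumed navigation property
        .WithRequired(w => w.Event)    // assumed inverse property
        .WillCascadeOnDelete(false);

    // Alternatively, disable the convention for the whole model:
    modelBuilder.Conventions.Remove<OneToManyCascadeDeleteConvention>();
}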
First, no commercial DB deletes random data by itself. If it really deleted its clients' data, its maker would be sued by those clients.
So there are DELETE queries somewhere, or someone executed a DELETE directly in SQL Server Management Studio. You can monitor DB queries: check them, find which query deletes your data, and ask your DBA or DB operator whether they executed anything.
In my experience, there is no such thing as "there is no such query".

Safe to leave database open at all times? Is crash detection needed?

Right now, to make sure that I'm writing to a correct and functional database at all times, I'm calling the following createDB function every time I access the database for data insertion (which happens at a 15-second interval):
protected void createDB(String dbName) {
    // clear the singleton if forcing a switch
    try {
        mdbHelper = new DaoMaster.DevOpenHelper(mcontext, dbName, null);
        database = mdbHelper.getWritableDatabase();
        mdaoMaster = new DaoMaster(database);
        dbSessionInstance = mdaoMaster.newSession();
    } catch (Exception e) {
        System.out.println(TAG + e.getMessage());
    }
}
This will attempt to create a database with the name given to dbName if it doesn't already exist; if it exists, then it'll just use it.
There are a few scenarios I would like to consider:
1) When the database app crashes, do I have to do anything to make sure the database doesn't get messed up?
2) Does it do more good or harm if I regularly and often call createDB?
3) Just in case, how do I safely close the database in the event that something goes wrong?
When an app crashes, there's nothing it can do because it is no longer running.
As long as you keep the database open, it still exists.
Trying to (re)create the DB repeatedly while your app is running makes sense only if you suspect that somebody else is deleting the DB.
The boundary for guaranteed durability is not the database connection but the transaction.
In SQLite, transactions are designed so that the database is in a consistent state even if the app crashes during a transaction (i.e., uncommitted transactions are rolled back when the DB is opened afterwards).

Is there an automatic way to generate a rollback script when inserting data with LINQ2SQL?

Let's assume we have a bunch of LINQ2SQL InsertOnSubmit statements against a given DataContext. If the SubmitChanges call is successful, is there any way to automatically generate a list of SQL commands (or even LINQ2SQL statements) that could undo everything that was submitted at a later time? It's like executing a rollback even though everything worked as expected.
Note: The destination database will either be Oracle or SQL Server, so if there is specific functionality for both databases that will achieve this, I'm happy to use that as well.
Clarification:
I do not want the "rollback" to happen automatically as soon as the inserts have successfully completed. I want to have the ability to "undo" the INSERT statements via DELETE (or some other means) up to 24 hours (for example) after the original program finished inserting data. We can ignore any possible referential integrity issues that may come up.
Assume a Table A with two columns: Id (autogenerated unique id) and Value (string)
If the LINQ2SQL code performs two inserts
INSERT INTO A (Value) VALUES ('a') -- creates new row with Id = 1
INSERT INTO A (Value) VALUES ('z') -- creates new row with Id = 2
<< time passes>>
At some point later I would want to be able "undo" this by executing
DELETE FROM A WHERE Id = 1
DELETE FROM A WHERE Id = 2
or something similar. I want to be able to generate the DELETE statements to match the INSERT ones. Or use some functionality that would let me capture a transaction and perform a rollback later.
We cannot just 'reset the database' to a certain point in time either as other changes not initiated by our program could have taken place since.
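One hedged way to get close to this with LINQ to SQL (a sketch only; YourDataContext, the A entity, and its As table are hypothetical stand-ins matching the example above): snapshot the pending inserts from the change set before SubmitChanges, then after it succeeds read the autogenerated Ids back off those entities to build a matching DELETE script that can be run any time later.

// Requires System.IO and System.Linq.
using (var context = new YourDataContext())
{
    context.As.InsertOnSubmit(new A { Value = "a" });
    context.As.InsertOnSubmit(new A { Value = "z" });

    // Snapshot the pending inserts before submitting.
    var pendingInserts = context.GetChangeSet().Inserts.OfType<A>().ToList();

    context.SubmitChanges();  // the autogenerated Ids are populated on the entities now

    // Build the inverse script and keep it somewhere durable for later.
    var undoScript = pendingInserts
        .Select(row => string.Format("DELETE FROM A WHERE Id = {0};", row.Id));
    File.WriteAllLines("undo.sql", undoScript);
}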
It is actually quite easy to do this, because you can pass a SqlConnection into the LINQ to SQL DataContext on construction. Just run this connection in a transaction and roll that transaction back as soon as you're done.
Here's an example:
string output;
using (var connection = new SqlConnection("your conn.string"))
{
    connection.Open();
    using (var transaction = connection.BeginTransaction())
    {
        using (var context = new YourDataContext(connection))
        {
            // This next line is needed in .NET 3.5.
            context.Transaction = transaction;

            var writer = new StringWriter();
            context.Log = writer;

            // *** Do your stuff here ***
            context.SubmitChanges();

            output = writer.ToString();
        }
        transaction.Rollback();
    }
}
I am always required to provide a RollBack script to our QA team for testing before any change script can be executed in PROD.
Example: Files are sent externally with a bunch of mappings between us, the recipient and other third parties. One of these third parties wants to change, on an agreed date, the mappings between the three of us.
The exec script would maybe update some existing records, delete some now-redundant ones, and insert some new ones - SCOPE_IDENTITY used in subsequent relational setup, etc.
If, for some reason, after we have all executed our changes and the file transport is fired up, we see errors not encountered in UAT, we might multilaterally decide to roll back the changes. Hence the rollback script.
SQL Server has this info from the moment you BEGIN TRAN until you COMMIT TRAN or ROLLBACK TRAN. I guess your question is the same as mine - can you output that info as a script?
Why do you need this?
Maybe you should explore the flashback capabilities of Oracle, which make it possible to travel back in time: you can reset the content of a table or a database to how it was at a specific moment in time (or at a specific system change number).
See: http://www.oracle.com/technology/deploy/availability/htdocs/Flashback_Overview.htm

Does SQLite support transactions across multiple databases?

I've done some searching and also read the FAQ on the SQLite site, no luck finding an answer to my question.
It could very well be that my database approach is flawed, but at the moment, I would like to store my data in multiple SQLite3 databases, so that means separate files. I am very worried about data corruption due to my application possibly crashing, or a power outage in the middle of changing data in my tables.
In order to ensure data integrity, I basically need to do this:
begin transaction
modify table(s) in database #1
modify table(s) in database #2
commit, or rollback if error
Is this supported by SQLite? Also, I am using sqlite.net, specifically the latest version, which is based on SQLite 3.6.23.1.
UPDATE
One more question -- is this something people would usually add to their unit tests? I always unit test databases, but have never had a case like this. And if so, how would you do it? It's almost like you have to pass another parameter to the method like bool test_transaction, and if it's true, throw an exception between database accesses. Then test after the call to make sure the first set of data didn't make it into the other database. But maybe this is something that's covered by the SQLite tests, and should not appear in my test cases.
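For the unit-testing question, one hedged approach (a sketch assuming System.Data.SQLite and NUnit; the file and table names are made up, and the tables are assumed to already exist and be empty): simulate the crash by rolling back between the two writes, rather than threading a bool test_transaction parameter through production code, then assert that neither database kept its row.

[Test]
public void MultiDatabaseTransaction_RollsBackBothFiles()
{
    using (var cnn = new SQLiteConnection("Data Source=first.db"))
    {
        cnn.Open();
        using (var cmd = cnn.CreateCommand())
        {
            cmd.CommandText = "ATTACH DATABASE 'second.db' AS second";
            cmd.ExecuteNonQuery();
        }

        using (var tx = cnn.BeginTransaction())
        using (var cmd = cnn.CreateCommand())
        {
            cmd.Transaction = tx;
            cmd.CommandText = "INSERT INTO main.first_table (value) VALUES ('x')";
            cmd.ExecuteNonQuery();
            cmd.CommandText = "INSERT INTO second.second_table (value) VALUES ('y')";
            cmd.ExecuteNonQuery();
            tx.Rollback();  // stand-in for the simulated crash
        }

        using (var cmd = cnn.CreateCommand())
        {
            cmd.CommandText = "SELECT COUNT(*) FROM main.first_table";
            Assert.AreEqual(0, Convert.ToInt32(cmd.ExecuteScalar()));
            cmd.CommandText = "SELECT COUNT(*) FROM second.second_table";
            Assert.AreEqual(0, Convert.ToInt32(cmd.ExecuteScalar()));
        }
    }
}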
Yes, transactions work across different SQLite databases, and even between SQLite and SQL Server. I have tried it a couple of times.
Some links and info
From here - Transaction between different data sources.
Since the SQLite ADO.NET 2.0 provider supports transaction enlistment, it is possible to perform a transaction spanning not only several SQLite data sources but also other database engines such as SQL Server.
Example:
using (DbConnection cn1 = new SQLiteConnection(" ... "))
using (DbConnection cn2 = new SQLiteConnection(" ... "))
using (DbConnection cn3 = new System.Data.SqlClient.SqlConnection(" ... "))
using (TransactionScope ts = new TransactionScope())
{
    cn1.Open(); cn2.Open(); cn3.Open();
    DoWork1(cn1);
    DoWork2(cn2);
    DoWork3(cn3);
    ts.Complete();
}
How to attach a new database:
SQLiteConnection cnn = new SQLiteConnection("Data Source=C:\\myfirstdatabase.db");
cnn.Open();
using (DbCommand cmd = cnn.CreateCommand())
{
    cmd.CommandText = "ATTACH DATABASE 'c:\\myseconddatabase.db' AS [second]";
    cmd.ExecuteNonQuery();
    cmd.CommandText = "SELECT COUNT(*) FROM main.myfirsttable INNER JOIN second.mysecondtable ON main.myfirsttable.id = second.mysecondtable.myfirstid";
    object o = cmd.ExecuteScalar();
}
Yes, SQLite explicitly supports multi-database transactions (see https://www.sqlite.org/atomiccommit.html#_multi_file_commit for technical details). However, there is a fairly large caveat. If the database file is in WAL mode, then:
Transactions that involve changes against multiple ATTACHed databases are atomic for each individual database, but are not atomic across all databases as a set.
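Given that caveat, it may be worth checking the journal mode before relying on multi-file atomicity. A small sketch using System.Data.SQLite (the path reuses the earlier example):

using (var cnn = new SQLiteConnection("Data Source=C:\\myfirstdatabase.db"))
{
    cnn.Open();
    using (var cmd = cnn.CreateCommand())
    {
        cmd.CommandText = "PRAGMA journal_mode;";
        // Returns e.g. "delete" or "wal"; "wal" means the caveat above applies.
        var mode = (string)cmd.ExecuteScalar();
        Console.WriteLine(mode);
    }
}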

How to execute inserts and updates outside a transaction in T-SQL

I have stored procedures in SQL Server T-SQL that are called from .NET within a transaction scope.
Within my stored procedure, I am doing some logging to some auditing tables. I insert a row into the auditing table, and then later on in the transaction fill it up with more information by means of an update.
What I am finding, is that if a few people try the same thing simultaneously, 1 or 2 of them will become transaction deadlock victims. At the moment I am assuming that some kind of locking is occurring when I am inserting into the auditing tables.
I would like to execute the inserts and updates to the auditing tables outside of the transaction I am executing, so that the auditing will occur anyway, even if the transaction rolls back. I was hoping that this might stop any locks occurring, allowing more than one person to execute the procedure at once.
Can anyone help me do this in T-SQL?
Thanks,
Rich
Update - I have since found that the auditing was unrelated to the transaction deadlock, thanks to Josh's suggestion of using SQL Profiler to track down the source of the deadlock.
TransactionScope supports Suppress:
using (TransactionScope scope = new TransactionScope())
{
    // Transactional code...

    // Call a SQL stored procedure (but suppress the transaction)
    using (TransactionScope suppress = new TransactionScope(TransactionScopeOption.Suppress))
    {
        using (SqlConnection conn = new SqlConnection(...))
        {
            conn.Open();
            SqlCommand sqlCommand = conn.CreateCommand();
            sqlCommand.CommandType = CommandType.StoredProcedure;
            sqlCommand.CommandText = "MyStoredProcedure";
            int rows = (int)sqlCommand.ExecuteScalar();
        }
    }
    scope.Complete();
}
But I would have to question why logging/auditing should run outside of the transaction: if the transaction is rolled back, you will still have committed auditing/logging records, and that's probably not what you want.
You haven't provided much information as to how you are logging. Does your audit table have foreign keys pointing back to your main active tables? If so, remove the foreign keys (assuming the audit records only come from 'known' applications).
You could save your audits to a table variable (table variables are not affected by transactions) and then, at the end of your SP (outside the scope of the transaction), insert the rows into the audit table.
However, it sounds like you are trying to fix the symptoms rather than the problem. You may want to track down the deadlocks and fix them.
I had a similar requirement where I needed to log errors into an error log table, but found that rollbacks were wiping them out.
I solved this problem by popping previously inserted error records into a table variable, calling ROLLBACK, then pushing back (inserting) the records into the table.
It works like a charm, but the code is messy on account of it having to be inline. You can't put the ROLLBACK into a stored procedure, otherwise you will get a "Transaction count after EXECUTE..." error.
Why are you updating the auditing table? If you only did inserts, you might help prevent lock escalation. Also, have you examined the deadlock trace to determine exactly what you were deadlocking on?
You can do this by enabling trace flag 1204, or by running SQL Profiler. This will give you detailed information that will let you know what kind of deadlock it is (locks, threads, parallel, etc.).
Check out this article on Detecting and Ending Deadlocks.
One other way to do auditing is to decouple it from the business transaction completely by sending all logging events to a queue at the application tier. This minimizes the impact logging has on your business transaction, but it is probably a very large change for an existing application.
