I'm trying to update a record in a database through C# code. I found a solution that I think should work using SaveChanges. However, I'm getting an error from my catch statement that says: "An error occurred while starting a transaction on the provider connection. See the inner exception for details." I'm looking for either a way to fix it or a way to make my catch statement give better details about what the problem actually is.
This is my code.
using var orderContext =
    new OrderContext(Resources.SqlAuthenticationConnectionString);

foreach (OrderRecord order in orders)
{
    var query =
        from o in orderContext.OrderRecords
        where o.ID == order.ID
        select o;

    foreach (OrderRecord record in query)
    {
        record.HeatLotNumber = order.HeatLotNumber;
        record.OrderNumber = order.OrderNumber;
        record.ShimCenterMaterial = order.ShimCenterMaterial;

        try
        {
            orderContext.SaveChanges();
        }
        catch (Exception e)
        {
            MessageBox.Show(e.Message);
        }
    }
}
Looks like I didn't look hard enough. Here's what my problem was: the SaveChanges call needs to be outside the foreach loop.
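For reference, a minimal sketch of the corrected version (the only changes from the code above are moving SaveChanges out of the loops and showing the full exception detail, including inner exceptions, in the catch):

using var orderContext =
    new OrderContext(Resources.SqlAuthenticationConnectionString);

foreach (OrderRecord order in orders)
{
    var query =
        from o in orderContext.OrderRecords
        where o.ID == order.ID
        select o;

    foreach (OrderRecord record in query)
    {
        record.HeatLotNumber = order.HeatLotNumber;
        record.OrderNumber = order.OrderNumber;
        record.ShimCenterMaterial = order.ShimCenterMaterial;
    }
}

try
{
    // One SaveChanges for the whole batch instead of one per record.
    orderContext.SaveChanges();
}
catch (Exception e)
{
    // ToString() includes the inner exception chain and stack trace,
    // which is where the real provider error usually hides.
    MessageBox.Show(e.ToString());
}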
I migrated my code from Web API 2 to .NET 5, and now I have a problem when executing a non-query. In the old code I had:
public void CallSp()
{
    var connection = dataContext.GetDatabase().Connection;
    var initialState = connection.State;
    try
    {
        if (initialState == ConnectionState.Closed)
            connection.Open();

        connection.Execute("mysp", commandType: CommandType.StoredProcedure);
    }
    catch
    {
        throw;
    }
    finally
    {
        if (initialState == ConnectionState.Closed)
            connection.Close();
    }
}
This was working fine. After I migrated the code, I'm getting the following exception:
BeginExecuteNonQuery requires the command to have a transaction when the connection assigned to the command is in a pending local transaction. The Transaction property of the command has not been initialized.
So, just before calling Execute I added:
var ct = dataContext.GetDatabase().CurrentTransaction;
var tr = ct.UnderlyingTransaction;
And passed the transaction to Execute. Alas, CurrentTransaction is null, so the above change can't be used.
So then I tried to create a new transaction by doing:
using var tr = dataContext.GetDatabase().BeginTransaction();
And this second change throws a different exception complaining that SqlConnection cannot use parallel transactions.
So now I've gone from originally having no problem at all to having neither an existing transaction I can pass along nor the ability to create a new one.
How can I make Dapper happy again?
How can I make Dapper happy again?
Dapper has no opinion here whatsoever; what is unhappy is your data provider. It sounds like somewhere, somehow, your dataContext has an ADO.NET transaction active on the connection. I can't tell you where, how, or why. But: while a transaction is active on a connection, ADO.NET providers tend to be pretty fussy about having that same transaction explicitly specified on all commands that are executed on the connection. This could be because you are somehow sharing the same connection between multiple threads, or it could simply be that something with the dataContext has an incomplete transaction somewhere.
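If you do track down where that transaction is opened, the usual fix is to hand the same transaction to Dapper explicitly. A minimal sketch, reusing the GetDatabase()/Connection/CurrentTransaction wrapper names from the question (they are assumptions about your dataContext, not Dapper APIs):

var database = dataContext.GetDatabase();
var connection = database.Connection;

// CurrentTransaction was null in your test, so the real task is finding whoever
// actually started a transaction on this connection and getting hold of it here.
var current = database.CurrentTransaction;
var dbTransaction = current?.UnderlyingTransaction;

// Dapper's Execute accepts an IDbTransaction; passing it satisfies the provider's
// requirement that commands on a connection with a pending transaction declare it.
connection.Execute(
    "mysp",
    commandType: CommandType.StoredProcedure,
    transaction: dbTransaction);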
I am currently working on an API using ASP.NET Core Web API along with Entity Framework Core 2.1 and a SQL Server database. The API is used to transfer money between two accounts, A and B. Given the nature of account B, which is an account that accepts payments, a lot of concurrent requests might be executed at the same moment. As you know, if this is not managed well, it can result in some users not seeing their payments arrive.
Having spent days trying to handle this concurrency correctly, I can't figure out what the best approach is. For the sake of simplicity I created a test project trying to reproduce the concurrency issue.
In the test project, I have two routes, request1 and request2. Each one performs a transfer to the same user: the first for an amount of 10 and the second for 20. I put a Thread.Sleep(10000) in the first one, as follows:
[HttpGet]
[Route("request1")]
public async Task<string> request1()
{
    using (var transaction = _context.Database.BeginTransaction(System.Data.IsolationLevel.Serializable))
    {
        try
        {
            Wallet w = _context.Wallets.Where(ww => ww.UserId == 1).FirstOrDefault();
            Thread.Sleep(10000);
            w.Amount = w.Amount + 10;
            w.Inserts++;
            _context.Wallets.Update(w);
            _context.SaveChanges();
            transaction.Commit();
        }
        catch (Exception ex)
        {
            transaction.Rollback();
        }
    }
    return "request 1 executed";
}

[HttpGet]
[Route("request2")]
public async Task<string> request2()
{
    using (var transaction = _context.Database.BeginTransaction(System.Data.IsolationLevel.Serializable))
    {
        try
        {
            Wallet w = _context.Wallets.Where(ww => ww.UserId == 1).FirstOrDefault();
            w.Amount = w.Amount + 20;
            w.Inserts++;
            _context.Wallets.Update(w);
            _context.SaveChanges();
            transaction.Commit();
        }
        catch (Exception ex)
        {
            transaction.Rollback();
        }
    }
    return "request 2 executed";
}
After executing request1 and then request2 in a browser, the first transaction is rolled back due to:
InvalidOperationException: An exception has been raised that is likely due to a transient failure. Consider enabling transient error resiliency by adding 'EnableRetryOnFailure()' to the 'UseSqlServer' call.
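The resiliency the message refers to would be configured where the context is registered; something like this (a sketch, assuming a standard UseSqlServer registration in Startup; the context and connection string names are made up):

services.AddDbContext<AppDbContext>(options =>
    options.UseSqlServer(
        Configuration.GetConnectionString("Default"),
        sqlOptions => sqlOptions.EnableRetryOnFailure()));

// Note: with an explicit BeginTransaction like the one above, the retrying execution
// strategy also expects the work to be wrapped in Database.CreateExecutionStrategy().Execute(...).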
I could also retry the transaction, but isn't there a better way? Using locks?
Serializable, being the most isolated level and also the most costly one, is described in the documentation as follows:
No other transactions can modify data that has been read by the current transaction until the current transaction completes.
This means no other transaction can update data that has been read by another transaction, which is working as intended here, since the update in the request2 route waits for the first transaction (request1) to commit.
The problem is that we also need to block reads by other transactions once the current transaction has read the wallet row. To solve this I need locking, so that when the first select statement in request1 executes, every later transaction has to wait for the first one to finish before it can select the correct value. Since EF Core has no built-in support for locking hints, I need to execute a SQL query directly, so when selecting the wallet I'll add a row lock to the selected row:
//this locks the wallet row with id 1
//and also the default transaction isolation level is enough
Wallet w = _context.Wallets.FromSql("select * from wallets with (XLOCK, ROWLOCK) where id = 1").FirstOrDefault();
Thread.Sleep(10000);
w.Amount = w.Amount + 10;
w.Inserts++;
_context.Wallets.Update(w);
_context.SaveChanges();
transaction.Commit();
Now this works perfectly: even after executing multiple requests, the combined result of the transfers is correct. In addition, I'm using a transaction table that holds every money transfer made, along with its status, to keep a record of each transaction; in case something goes wrong I'm able to recompute every wallet's amount from this table.
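For completeness, the transfer-history table mentioned above is just a plain entity; a minimal sketch (all names and types here are assumptions, not code from the project):

public class WalletTransfer
{
    public int Id { get; set; }
    public int WalletId { get; set; }
    public decimal Amount { get; set; }

    // e.g. "Pending", "Completed", "Failed" -- used to recompute wallet balances
    // if something goes wrong mid-transfer.
    public string Status { get; set; }
    public DateTime CreatedAtUtc { get; set; }
}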
Now there are other ways of doing it like:
Stored procedures: but I want my logic to stay at the application level
Making a synchronized method to handle the database logic: that way all the database requests are executed on a single thread. I read a blog post that advises this approach, but we may end up using multiple servers for scalability
Maybe I'm not searching well, but I can't find good material on handling pessimistic concurrency with Entity Framework Core; even while browsing GitHub, most of the code I've seen doesn't use locking.
Which brings me to my question: is this the correct way of doing it?
Cheers and thanks in advance.
My suggestion is to catch DbUpdateConcurrencyException and use entry.GetDatabaseValues() and entry.OriginalValues.SetValues(databaseValues) in your retry logic. There is no need to lock the DB.
Here is the sample from the EF Core documentation page:
using (var context = new PersonContext())
{
    // Fetch a person from database and change phone number
    var person = context.People.Single(p => p.PersonId == 1);
    person.PhoneNumber = "555-555-5555";

    // Change the person's name in the database to simulate a concurrency conflict
    context.Database.ExecuteSqlCommand(
        "UPDATE dbo.People SET FirstName = 'Jane' WHERE PersonId = 1");

    var saved = false;
    while (!saved)
    {
        try
        {
            // Attempt to save changes to the database
            context.SaveChanges();
            saved = true;
        }
        catch (DbUpdateConcurrencyException ex)
        {
            foreach (var entry in ex.Entries)
            {
                if (entry.Entity is Person)
                {
                    var proposedValues = entry.CurrentValues;
                    var databaseValues = entry.GetDatabaseValues();

                    foreach (var property in proposedValues.Properties)
                    {
                        var proposedValue = proposedValues[property];
                        var databaseValue = databaseValues[property];

                        // TODO: decide which value should be written to database
                        // proposedValues[property] = <value to be saved>;
                    }

                    // Refresh original values to bypass next concurrency check
                    entry.OriginalValues.SetValues(databaseValues);
                }
                else
                {
                    throw new NotSupportedException(
                        "Don't know how to handle concurrency conflicts for "
                        + entry.Metadata.Name);
                }
            }
        }
    }
}
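Note that for DbUpdateConcurrencyException to be raised at all, the entity needs a concurrency token. A minimal sketch of what that could look like on the Wallet entity from the question (property names and types are assumptions):

public class Wallet
{
    public int Id { get; set; }
    public int UserId { get; set; }
    public decimal Amount { get; set; }
    public int Inserts { get; set; }

    // Maps to a SQL Server rowversion column; EF Core includes it in the WHERE clause
    // of every UPDATE and throws DbUpdateConcurrencyException when it no longer matches.
    [Timestamp]
    public byte[] RowVersion { get; set; }
}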
You can use a distributed lock mechanism, with Redis for example.
You can also scope the lock to the userId, so the method isn't blocked for other users.
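A rough sketch of what that could look like with StackExchange.Redis (the library choice and key naming are assumptions; any distributed lock implementation works the same way):

var redis = await ConnectionMultiplexer.ConnectAsync("localhost");
var db = redis.GetDatabase();

// Key is scoped to the user, so transfers for different wallets don't block each other.
string lockKey = $"wallet-lock:{userId}";
string lockToken = Guid.NewGuid().ToString();

if (await db.LockTakeAsync(lockKey, lockToken, TimeSpan.FromSeconds(30)))
{
    try
    {
        // Read the wallet, add the amount, SaveChanges -- the same read-modify-write as above.
    }
    finally
    {
        // Release only if we still hold the lock; the token guards against releasing
        // a lock that expired and was taken by someone else.
        await db.LockReleaseAsync(lockKey, lockToken);
    }
}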
Why don't you handle the concurrency problem in the code? Why does it need to be at the DB layer?
You can have a method that updates a given wallet by a given amount and use a simple lock there, like this:
private readonly object walletLock = new object();

public void UpdateWalletAmount(int userId, int amount)
{
    lock (walletLock)
    {
        Wallet w = _context.Wallets.Where(ww => ww.UserId == userId).FirstOrDefault();
        w.Amount = w.Amount + amount;
        w.Inserts++;
        _context.Wallets.Update(w);
        _context.SaveChanges();
    }
}
So your methods will look like this:
[HttpGet]
[Route("request1")]
public async Task<string> request1()
{
    try
    {
        UpdateWalletAmount(1, 10);
    }
    catch (Exception ex)
    {
        // log error
    }
    return "request 1 executed";
}

[HttpGet]
[Route("request2")]
public async Task<string> request2()
{
    try
    {
        UpdateWalletAmount(1, 20);
    }
    catch (Exception ex)
    {
        // log error
    }
    return "request 2 executed";
}
You don't even need to use a transaction in this context.
This exception usually happens when a batch is running or alerts are coming into our Salesforce instance too quickly. When inserting a case, we try to lock down the contact and account associated with the case before the insert, to prevent the 'UNABLE_TO_LOCK_ROW' exception from happening.
Here is the exact exception:
'System.QueryException: Record Currently Unavailable: The record you are attempting to edit, or one of its related records, is currently being modified by another user. Please try again.'
Class.Utility.DoCaseInsertion: line 98, column 1
I've done a lot of research on the 'UNABLE_TO_LOCK_ROW' exception and 'Record Currently Unavailable' exception and I can't seem to find a great solution to this issue.
What I've tried to accomplish is a loop to attempt the insert 10 times, but I'm still getting the 'Record Currently Unavailable' exception. Does anyone else have a suggestion for this?
Below is the code:
public static void DoCaseInsertion(Case myCase) {
    try
    {
        insert myCase;
    }
    catch (System.DmlException ex)
    {
        Boolean repeat = true;
        Integer cnt = 0;
        while (repeat && cnt < 10)
        {
            try
            {
                repeat = false;
                // Lock the related contact and account to overcome the 'UNABLE_TO_LOCK_ROW' issues
                List<Contact> contactList = [SELECT Id FROM Contact WHERE Id = :myCase.ContactId FOR UPDATE];
                List<Account> accountList = [SELECT Id FROM Account WHERE Id = :myCase.AccountId FOR UPDATE];
                insert myCase;
            }
            catch (System.DmlException e)
            {
                repeat = true;
                cnt++;
            }
        }
    }
}
This basically happens when there is a conflicting modification being made by another user or process on a particular record you are trying to access. Mostly it happens when some kind of batch process is running in the background and has locked the particular record you are trying to access (in your case, the Account). To get rid of this problem, check whether any scheduled Apex classes are running in the background against Accounts/Cases and see if there is anything you can do to optimize that code to avoid the conflicting behavior.
I am trying to execute SQL inside a transaction using ServiceStack OrmLite. The code below works with SQLite but not with SQL Server. With SQL Server I get the following error:
ExecuteScalar requires the command to have a transaction when the connection assigned to the command is in a pending local transaction. The Transaction property of the command has not been initialized.
Is there something wrong with this code?
using (var trans = Db.BeginTransaction())
{
    try
    {
        foreach (var myObject in myObjects)
            Db.Insert<MyObject>(myObject);

        trans.Commit();
    }
    catch (Exception ex)
    {
        trans.Rollback();
        throw ex;
    }
}
Someone else put this answer in a comment and then deleted it... so:
BeginTransaction needs to be OpenTransaction
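In other words, a sketch of the corrected snippet from the question (OpenTransaction is OrmLite's own API; unlike the raw ADO.NET BeginTransaction, it keeps track of the transaction so OrmLite assigns it to the commands it creates):

using (var trans = Db.OpenTransaction())
{
    foreach (var myObject in myObjects)
        Db.Insert(myObject);

    // Disposing the transaction without committing rolls it back,
    // so the explicit Rollback in the catch block isn't strictly needed.
    trans.Commit();
}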
I have the following code, which commits a single row to a database table (SQL Server 2008 / .NET 4):
using (var db = new MyDbDataContext(_dbConnectionString))
{
    Action action = new Action();
    db.Actions.InsertOnSubmit(action);
    db.SubmitChanges();
}
Normally everything is fine, but once in a while I get the following exception:
System.InvalidOperationException: This SqlTransaction has completed; it is no longer usable.
at System.Data.SqlClient.SqlTransaction.ZombieCheck()
at System.Data.SqlClient.SqlTransaction.Rollback()
at System.Data.Linq.DataContext.SubmitChanges(ConflictMode failureMode)
There are a number of similar questions on SO, but after reading them I cannot work out the cause.
Could this be simply due to a SQL timeout (the exception occurs close to 25s after the call is made)? Or should I expect a SQL timeout exception in that case?
Does anyone know what else may cause this?
The DataContext.SubmitChanges method has the following lines of code in its body:
// ...
try
{
    if (this.provider.Connection.State == ConnectionState.Open)
    {
        this.provider.ClearConnection();
    }
    if (this.provider.Connection.State == ConnectionState.Closed)
    {
        this.provider.Connection.Open();
        flag = true;
    }
    dbTransaction = this.provider.Connection.BeginTransaction(IsolationLevel.ReadCommitted);
    this.provider.Transaction = dbTransaction;
    new ChangeProcessor(this.services, this).SubmitChanges(failureMode);
    this.AcceptChanges();
    this.provider.ClearConnection();
    dbTransaction.Commit();
}
catch
{
    if (dbTransaction != null)
    {
        dbTransaction.Rollback();
    }
    throw;
}
// ...
When the connection times out, the catch block is executed and the dbTransaction.Rollback(); line throws an InvalidOperationException.
If you had control over the code, you could catch the exception like this:
catch
{
    // Attempt to roll back the transaction.
    try
    {
        if (dbTransaction != null)
        {
            dbTransaction.Rollback();
        }
    }
    catch (Exception ex2)
    {
        // This catch block will handle any errors that may have occurred
        // on the server that would cause the rollback to fail, such as
        // a closed connection.
        Console.WriteLine("Rollback Exception Type: {0}", ex2.GetType());
        Console.WriteLine("    Message: {0}", ex2.Message);
    }
    throw;
}
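If the root cause does turn out to be a command timeout (the roughly 25-second gap mentioned in the question points that way), the LINQ to SQL DataContext also lets you raise the timeout; a minimal sketch (the value is only illustrative):

using (var db = new MyDbDataContext(_dbConnectionString))
{
    // CommandTimeout is in seconds and applies to the commands the context issues,
    // including those generated by SubmitChanges.
    db.CommandTimeout = 120;

    var action = new Action();
    db.Actions.InsertOnSubmit(action);
    db.SubmitChanges();
}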
YES! I had the same issue. The scary answer is that SQL Server sometimes rolls back a transaction on the server side when it encounters an error, and does not pass the error back to the client. YIKES!
Look on the Google Group microsoft.public.dotnet.framework.adonet for "SqlTransaction.ZombieCheck error"; Colberd Zhou [MSFT] explains it very well.
Also see aef123's comment on this SO post.
May I suggest that the connection closes before the transaction commits, and the transaction is then rolled back. Check this article on the MSDN Blog.