Define ExecutionStrategy at configuration level in EF Core - sql-server

I have an application using EF Core connected to Azure SQL. We were facing transient connection failures, and adding EnableRetryOnFailure() was the solution, which I configured as follows:
services.AddEntityFrameworkSqlServer()
    .AddDbContext<jmasdbContext>(options =>
        options.UseSqlServer(
            Configuration.GetConnectionString("DataContext"),
            sqlServerOptionsAction: sqlActions =>
            {
                sqlActions.EnableRetryOnFailure(
                    maxRetryCount: 10,
                    maxRetryDelay: TimeSpan.FromSeconds(5),
                    errorNumbersToAdd: null);
            }),
        ServiceLifetime.Transient);
Now, this fails when we call BeginTransaction, throwing the error below:
"The configured execution strategy
'SqlServerRetryingExecutionStrategy' does not support user-initiated
transactions. Use the execution strategy returned by
'DbContext.Database.CreateExecutionStrategy()' to execute all the
operations in the transaction as a retriable unit."
I looked into the MS docs, and they suggest defining the execution strategy manually and running the transactional work through ExecuteAsync: https://learn.microsoft.com/en-us/dotnet/architecture/microservices/implement-resilient-applications/implement-resilient-entity-framework-core-sql-connections
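In that pattern, all the work that previously ran between BeginTransaction and Commit moves into a delegate passed to the strategy. A minimal sketch of what the docs describe (assuming an injected _context of type jmasdbContext):
var strategy = _context.Database.CreateExecutionStrategy();
await strategy.ExecuteAsync(async () =>
{
    // The strategy may invoke this delegate several times on transient
    // failures, so the transaction must be created inside it.
    using var transaction = await _context.Database.BeginTransactionAsync();

    // ... the operations that previously ran inside the transaction ...

    await _context.SaveChangesAsync();
    transaction.Commit();
});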
This has become a pain, as we have more than 25 places with these transactions.
I tried a custom execution strategy at the DbContext level, but that did not help:
protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
{
    if (!optionsBuilder.IsConfigured && !string.IsNullOrEmpty(ConnectionString))
    {
        optionsBuilder.UseSqlServer(ConnectionString, options =>
        {
            options.ExecutionStrategy(dependencies =>
                new SqlServerRetryingExecutionStrategy(
                    dependencies,
                    maxRetryCount: 3,
                    maxRetryDelay: TimeSpan.FromSeconds(5),
                    errorNumbersToAdd: new List<int> { 4060 }));
        });
    }
}
Is there any way to define this at a global level? We do not want a different strategy for each operation; whenever there is a failure, we want the whole transaction rolled back and restarted from the beginning.
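One way to keep a single global retry strategy without rewriting the body of every call site is a small helper extension that pairs the configured execution strategy with a user transaction. This is a sketch under the assumption that each call site currently wraps its work in BeginTransaction/Commit; RunInRetryableTransactionAsync is a name invented here for illustration, not an EF Core API:
using System;
using System.Threading.Tasks;
using Microsoft.EntityFrameworkCore;

public static class RetryableTransactionExtensions
{
    // Runs 'operation' inside a transaction; the configured
    // SqlServerRetryingExecutionStrategy retries the whole unit, so a
    // failure rolls everything back and starts again from the beginning.
    public static Task RunInRetryableTransactionAsync(this DbContext context, Func<Task> operation)
    {
        var strategy = context.Database.CreateExecutionStrategy();
        return strategy.ExecuteAsync(async () =>
        {
            // Created inside the delegate so each retry gets a fresh transaction.
            using var transaction = await context.Database.BeginTransactionAsync();
            await operation();
            transaction.Commit();
        });
    }
}
Each of the 25+ places then becomes a one-line change:
await _context.RunInRetryableTransactionAsync(async () =>
{
    // ... existing transactional work ...
});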

Related

.NET 7 Distributed Transactions issues

I am developing a small POC application to test .NET 7 support for distributed transactions, since this is a pretty important aspect of our workflow.
So far I've been unable to make it work, and I'm not sure why. It seems to me it's either some kind of bug in .NET 7 or I'm missing something.
In short, the POC is pretty simple; it runs a worker service which does two things:
Saves into a "business database"
Publishes a message on an NServiceBus queue which uses the MSSQL transport.
Without a transaction scope this works fine; however, when adding a transaction scope, I'm asked to turn on support for distributed transactions using:
TransactionManager.ImplicitDistributedTransactions = true;
The executable code in the worker service is as follows:
protected override async Task ExecuteAsync(CancellationToken stoppingToken)
{
    int number = 0;
    try
    {
        while (!stoppingToken.IsCancellationRequested)
        {
            number = number + 1;

            using var transactionScope = TransactionUtils.CreateTransactionScope();

            await SaveDummyDataIntoTable2Dapper($"saved {number}").ConfigureAwait(false);
            await messageSession.Publish(new MyMessage { Number = number }, stoppingToken)
                .ConfigureAwait(false);

            _logger.LogInformation("Publishing message {number}", number);
            _logger.LogInformation("Worker running at: {time}", DateTimeOffset.Now);

            transactionScope.Complete();
            _logger.LogInformation("Transaction complete");

            await Task.Delay(1000, stoppingToken);
        }
    }
    catch (Exception e)
    {
        _logger.LogError("Exception: {ex}", e);
        throw;
    }
}
The transaction scope is created with the following parameters:
public class TransactionUtils
{
    public static TransactionScope CreateTransactionScope()
    {
        var transactionOptions = new TransactionOptions
        {
            IsolationLevel = IsolationLevel.ReadCommitted,
            Timeout = TransactionManager.MaximumTimeout
        };
        return new TransactionScope(TransactionScopeOption.Required, transactionOptions,
            TransactionScopeAsyncFlowOption.Enabled);
    }
}
The code for saving into the database uses the simple Dapper GenericRepository library:
private async Task SaveDummyDataIntoTable2Dapper(string data)
{
    using var scope = ServiceProvider.CreateScope();
    var mainTableRepository =
        scope.ServiceProvider.GetRequiredService<MainTableRepository>();

    await mainTableRepository.InsertAsync(new MainTable
    {
        Data = data,
        UpdatedDate = DateTime.Now
    });
}
I had to use a scope here, since the repository is scoped and the worker is a singleton, so it cannot be injected directly.
I've tried persistence with EF Core as well, with the same results: the transactionScope.Complete() line passes, and then disposing the transaction scope hangs (sometimes it manages to insert a couple of rows, then hangs).
Without the transaction scope everything works fine.
I'm not sure what (if anything) I'm missing here, or whether this simply still does not work in .NET 7.
Note that I have MSDTC enabled on my machine and I'm executing this on Windows 10.
We've been able to solve this by using the following code. With this modification, DTC is actually invoked correctly and works from within .NET 7:
using var transactionScope = TransactionUtils.CreateTransactionScope().EnsureDistributed();
The implementation of the EnsureDistributed extension method is as follows:
public static TransactionScope EnsureDistributed(this TransactionScope ts)
{
    Transaction.Current?.EnlistDurable(DummyEnlistmentNotification.Id,
        new DummyEnlistmentNotification(), EnlistmentOptions.None);
    return ts;
}

internal class DummyEnlistmentNotification : IEnlistmentNotification
{
    internal static readonly Guid Id = new("8d952615-7f67-4579-94fa-5c36f0c61478");

    public void Prepare(PreparingEnlistment preparingEnlistment)
    {
        preparingEnlistment.Prepared();
    }

    public void Commit(Enlistment enlistment)
    {
        enlistment.Done();
    }

    public void Rollback(Enlistment enlistment)
    {
        enlistment.Done();
    }

    public void InDoubt(Enlistment enlistment)
    {
        enlistment.Done();
    }
}
This is a 10-year-old code snippet, yet it works (I'm guessing because .NET Core merely copied and refactored the .NET Framework code for distributed transactions, bugs included).
What it does is create a distributed transaction right away, rather than creating an LTM transaction and then promoting it to DTC when required.
More detailed explanations can be found here:
https://www.davidboike.dev/2010/04/forcibly-creating-a-distributed-net-transaction/
https://github.com/davybrion/companysite-dotnet/blob/master/content/blog/2010-03-msdtc-woes-with-nservicebus-and-nhibernate.md
Ensure you're using Microsoft.Data.SqlClient v5.1+.
Replace all usings of System.Data.SqlClient with Microsoft.Data.SqlClient.
Ensure ImplicitDistributedTransactions is set to true:
TransactionManager.ImplicitDistributedTransactions = true;
using (var ts = new TransactionScope(your options))
{
    TransactionInterop.GetTransmitterPropagationToken(Transaction.Current);

    ... your code ...

    ts.Complete();
}
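Put together, a minimal self-contained sketch of this recipe might look like the following; the connection string and the INSERT statement are placeholders of mine, not from the original post:
using System.Transactions;
using Microsoft.Data.SqlClient; // not System.Data.SqlClient

TransactionManager.ImplicitDistributedTransactions = true;

using (var ts = new TransactionScope(
    TransactionScopeOption.Required,
    new TransactionOptions { IsolationLevel = IsolationLevel.ReadCommitted },
    TransactionScopeAsyncFlowOption.Enabled))
{
    // Forces immediate promotion to a distributed transaction.
    TransactionInterop.GetTransmitterPropagationToken(Transaction.Current);

    using var connection = new SqlConnection("<your connection string>");
    connection.Open(); // enlists in the ambient distributed transaction

    using var command = new SqlCommand("INSERT INTO MainTable (Data) VALUES ('test')", connection);
    command.ExecuteNonQuery();

    ts.Complete();
}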

ABP: How to configure prefetch count when using rabbitmq for distributed events?

In an ABP framework application using RabbitMQ for distributed eventing, how do I configure the prefetch count?
At the moment I have the following issue:
I publish distributed events from my Blazor app. The events are consumed by some worker apps, which then do things like transferring master data from one system to another.
Application 1 fires 3000 events while only one consumer app is running. That one consumer receives all the messages. If I now spawn a second consumer, it does not do anything, because there are no messages left to consume.
After a while (because processing those messages can take quite some time), I run into a lot of timeouts. Because of that, I am currently integrating the outbox pattern, so that it is at least ensured that information does not get lost.
In a scenario where I scale my workers up and down depending on the amount of work, it is necessary to be able to configure this behavior.
I took a look at AbpRabbitMqOptions and AbpRabbitMqEventBusOptions, but sadly could not find a property that matches what I am looking for (adjusting the prefetch).
At the moment my configuration looks something like this:
private void ConfigureDistributedEvents(ServiceConfigurationContext context, IConfiguration configuration)
{
    Configure<AbpRabbitMqOptions>(options =>
    {
    });

    Configure<AbpRabbitMqEventBusOptions>(options =>
    {
    });

    // distributed lock
    context.Services.AddSingleton<IDistributedLockProvider>(sp =>
    {
        var connection = ConnectionMultiplexer
            .Connect(configuration["Redis:Configuration"]);
        return new RedisDistributedSynchronizationProvider(connection.GetDatabase());
    });

    // outbox / inbox
    Configure<AbpDistributedEventBusOptions>(options =>
    {
        options.Outboxes.Configure(config =>
        {
            config.UseDbContext<FooDbContext>();
        });
        options.Inboxes.Configure(config =>
        {
            config.UseDbContext<FooDbContext>();
        });
    });
}
And then there's appsettings.json, but it's standard:
"RabbitMQ": {
"Connections": {
"Default": {
"HostName": "localhost"
}
},
"EventBus": {
"ClientName": "Foo_Queue",
"ExchangeName": "Foo"
}
Is there anything I am missing?
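For what it's worth, more recent ABP versions added a PrefetchCount property to AbpRabbitMqEventBusOptions that maps to RabbitMQ's basic.qos. Whether it is available depends on your ABP version, so treat the following as an assumption to verify against the version you are running:
Configure<AbpRabbitMqEventBusOptions>(options =>
{
    // Assumption: PrefetchCount exists in your ABP version. With a small
    // prefetch, RabbitMQ stops pushing all 3000 messages to the first
    // consumer, so a newly spawned second consumer also receives work.
    options.PrefetchCount = 10;
});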

Correct concurrency handling using EF Core 2.1 with SQL Server

I am currently working on an API using ASP.NET Core Web API along with Entity Framework Core 2.1 and a SQL Server database. The API is used to transfer money between two accounts, A and B. Given the nature of account B, which accepts payments, a lot of concurrent requests might be executed at the same moment. As you know, if this is not managed well, it can result in some users not seeing their payments arrive.
Having spent days trying to achieve correct concurrency handling, I can't figure out what the best approach is. For the sake of simplicity, I created a test project reproducing the concurrency issue.
In the test project, I have two routes, request1 and request2; each performs a transfer to the same user, the first with an amount of 10 and the second with 20. I put a Thread.Sleep(10000) in the first one, as follows:
[HttpGet]
[Route("request1")]
public async Task<string> request1()
{
    using (var transaction = _context.Database.BeginTransaction(System.Data.IsolationLevel.Serializable))
    {
        try
        {
            Wallet w = _context.Wallets.Where(ww => ww.UserId == 1).FirstOrDefault();
            Thread.Sleep(10000);
            w.Amount = w.Amount + 10;
            w.Inserts++;
            _context.Wallets.Update(w);
            _context.SaveChanges();
            transaction.Commit();
        }
        catch (Exception ex)
        {
            transaction.Rollback();
        }
    }
    return "request 1 executed";
}

[HttpGet]
[Route("request2")]
public async Task<string> request2()
{
    using (var transaction = _context.Database.BeginTransaction(System.Data.IsolationLevel.Serializable))
    {
        try
        {
            Wallet w = _context.Wallets.Where(ww => ww.UserId == 1).FirstOrDefault();
            w.Amount = w.Amount + 20;
            w.Inserts++;
            _context.Wallets.Update(w);
            _context.SaveChanges();
            transaction.Commit();
        }
        catch (Exception ex)
        {
            transaction.Rollback();
        }
    }
    return "request 2 executed";
}
After executing request1 and then request2 in a browser, the first transaction is rolled back due to:
InvalidOperationException: An exception has been raised that is likely due to a transient failure. Consider enabling transient error resiliency by adding 'EnableRetryOnFailure()' to the 'UseSqlServer' call.
I can retry the transaction, but isn't there a better way, such as using locks?
Serializable, being the most isolated level and also the most costly, is described in the documentation as follows:
No other transactions can modify data that has been read by the current transaction until the current transaction completes.
That means no other transaction can update data that has been read by another transaction, which works as intended here, since the update in the request2 route waits for the first transaction (request1) to commit.
The problem is that we also need to block reads by other transactions once the current transaction has read the wallet row. To solve this I need locking, so that when the first SELECT statement in request1 executes, all subsequent transactions have to wait for the first transaction to finish before they can select the correct value. Since EF Core has no support for locking, I need to execute a SQL query directly; when selecting the wallet, I add a row lock to the selected row:
// This locks the wallet row with id 1;
// the default transaction isolation level is sufficient.
Wallet w = _context.Wallets.FromSql("select * from wallets with (XLOCK, ROWLOCK) where id = 1").FirstOrDefault();
Thread.Sleep(10000);
w.Amount = w.Amount + 10;
w.Inserts++;
_context.Wallets.Update(w);
_context.SaveChanges();
transaction.Commit();
Now this works perfectly: even after executing multiple requests, the combined result of all transfers is correct. In addition, I am using a transactions table that records every money transfer along with its status, so if something goes wrong I am able to recompute all wallet amounts from this table.
There are other ways of doing it, such as:
Stored procedures: but I want my logic to stay at the application level.
Making a synchronized method to handle the database logic: this way all database requests are executed on a single thread. I read a blog post advising this approach, but we may use multiple servers for scalability.
I don't know if I'm not searching well, but I can't find good material on handling pessimistic concurrency with Entity Framework Core; even browsing GitHub, most of the code I've seen doesn't use locking.
Which brings me to my question: is this the correct way of doing it?
Cheers and thanks in advance.
My suggestion is to catch DbUpdateConcurrencyException, then use entry.GetDatabaseValues() and entry.OriginalValues.SetValues(databaseValues) in your retry logic. There is no need to lock the DB.
Here is the sample from the EF Core documentation page:
using (var context = new PersonContext())
{
    // Fetch a person from database and change phone number
    var person = context.People.Single(p => p.PersonId == 1);
    person.PhoneNumber = "555-555-5555";

    // Change the person's name in the database to simulate a concurrency conflict
    context.Database.ExecuteSqlCommand(
        "UPDATE dbo.People SET FirstName = 'Jane' WHERE PersonId = 1");

    var saved = false;
    while (!saved)
    {
        try
        {
            // Attempt to save changes to the database
            context.SaveChanges();
            saved = true;
        }
        catch (DbUpdateConcurrencyException ex)
        {
            foreach (var entry in ex.Entries)
            {
                if (entry.Entity is Person)
                {
                    var proposedValues = entry.CurrentValues;
                    var databaseValues = entry.GetDatabaseValues();

                    foreach (var property in proposedValues.Properties)
                    {
                        var proposedValue = proposedValues[property];
                        var databaseValue = databaseValues[property];

                        // TODO: decide which value should be written to database
                        // proposedValues[property] = <value to be saved>;
                    }

                    // Refresh original values to bypass next concurrency check
                    entry.OriginalValues.SetValues(databaseValues);
                }
                else
                {
                    throw new NotSupportedException(
                        "Don't know how to handle concurrency conflicts for "
                        + entry.Metadata.Name);
                }
            }
        }
    }
}
You can use a distributed lock mechanism, with Redis for example.
You can also lock per userId, so the method is not locked for other users.
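As an in-process illustration of the per-userId idea (names such as WalletService, MyDbContext, and UpdateWalletAmountAsync are assumptions based on the question's code; this only coordinates requests inside a single server process, so a distributed lock is still needed once you scale out):
using System.Collections.Concurrent;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;

public class WalletService
{
    // One gate per user, so different users never block each other.
    private static readonly ConcurrentDictionary<int, SemaphoreSlim> _userLocks = new();

    private readonly MyDbContext _context;

    public WalletService(MyDbContext context) => _context = context;

    public async Task UpdateWalletAmountAsync(int userId, int amount)
    {
        var gate = _userLocks.GetOrAdd(userId, _ => new SemaphoreSlim(1, 1));
        await gate.WaitAsync();
        try
        {
            var w = _context.Wallets.Where(ww => ww.UserId == userId).FirstOrDefault();
            w.Amount += amount;
            w.Inserts++;
            _context.Wallets.Update(w);
            await _context.SaveChangesAsync();
        }
        finally
        {
            gate.Release();
        }
    }
}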
Why don't you handle the concurrency problem in the code? Why does it need to be in the DB layer?
You can have a method that updates the given wallet by a given amount and use a simple lock there, like this:
private readonly object walletLock = new object();

public void UpdateWalletAmount(int userId, int amount)
{
    lock (walletLock)
    {
        Wallet w = _context.Wallets.Where(ww => ww.UserId == userId).FirstOrDefault();
        w.Amount = w.Amount + amount;
        w.Inserts++;
        _context.Wallets.Update(w);
        _context.SaveChanges();
    }
}
So your methods will look like this:
[HttpGet]
[Route("request1")]
public async Task<string> request1()
{
    try
    {
        UpdateWalletAmount(1, 10);
    }
    catch (Exception ex)
    {
        // log error
    }
    return "request 1 executed";
}

[HttpGet]
[Route("request2")]
public async Task<string> request2()
{
    try
    {
        UpdateWalletAmount(1, 20);
    }
    catch (Exception ex)
    {
        // log error
    }
    return "request 2 executed";
}
You don't even need to use a transaction in this context.

Why does the SQL Server Isolation level revert to default within the same session using EF TransactionScope

I am having an issue using Entity Framework transaction scopes.
After reading the documentation and reviewing multiple examples and suggestions, I implemented transaction scopes around many of the queries in my web application. The issue I am facing is related to isolation levels. I want every query within the TransactionScope to be READ UNCOMMITTED, but for some reason only the first query gets the desired isolation level (READ UNCOMMITTED); all subsequent queries revert back to READ COMMITTED. These queries read a lot of data, and I do not mind dirty reads here.
This is my EF TransactionScope and context (very basic):
var transactionOptions = new TransactionOptions { IsolationLevel = System.Transactions.IsolationLevel.ReadUncommitted };

using (var transactionScope = new TransactionScope(TransactionScopeOption.Required, transactionOptions, TransactionScopeAsyncFlowOption.Enabled))
{
    using (var db = new Context())
    {
        //db.Database.ExecuteSqlCommand("SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;");
        //var SessionID = await db.Database.SqlQuery<short>("SELECT @@SPID").FirstOrDefaultAsync();
        //db.Database.Log = s => System.Diagnostics.Debug.WriteLine(s);

        // QUERY 1
        var list1 = await db.Table1.Include(x => x.ExternalProperty).Where(x => x.Created >= sevenDaysAgo).ToListAsync();

        // QUERY 2
        var list2 = await db.Table1.Include(x => x.ExternalProperty).Where(x => x.Created >= fourteenDaysAgo && x.Created <= eightDaysAgo).ToListAsync();

        // ... doing more stuff here

        transactionScope.Complete();
    }
}
QUERY 1 executes with READ UNCOMMITTED, while, for some reason, QUERY 2 executes with READ COMMITTED. Am I missing something? In my understanding this should not happen, since both queries are within the same TransactionScope.
I used await db.Database.SqlQuery<short>("SELECT @@SPID").FirstOrDefaultAsync() to get the session ID reserved by the context, to make sure that the same session is being used.
I have also tried to set the isolation level manually using db.Database.ExecuteSqlCommand("SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;"), which resulted in the same behavior.
I have searched around, and almost all answers suggest using the above code. For example: THIS ANSWER
Why would this happen, especially since the TransactionScope has not completed yet?
Thanks
Unless someone has a better answer, here is what solved my issue.
I had to manually open and close the connection within the TransactionScope, in order to stop the EF context from returning the connection to the pool between queries.
I used db.Database.Connection.Open(); and db.Database.Connection.Close();.
NOTE that this keeps the connection active until you dispose of the context. Be careful, as it may not be appropriate for your scenario.
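Applied to the snippet from the question, the fix looks roughly like this (a sketch; the Open/Close calls are the only change):
using (var transactionScope = new TransactionScope(TransactionScopeOption.Required, transactionOptions, TransactionScopeAsyncFlowOption.Enabled))
using (var db = new Context())
{
    // Pin one pooled connection for the whole scope, so every query runs
    // on the same session and keeps the READ UNCOMMITTED isolation level.
    db.Database.Connection.Open();
    try
    {
        var list1 = await db.Table1.Include(x => x.ExternalProperty).Where(x => x.Created >= sevenDaysAgo).ToListAsync();
        var list2 = await db.Table1.Include(x => x.ExternalProperty).Where(x => x.Created >= fourteenDaysAgo && x.Created <= eightDaysAgo).ToListAsync();
        transactionScope.Complete();
    }
    finally
    {
        db.Database.Connection.Close();
    }
}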

How do I do nested transactions in NHibernate?

Can I do nested transactions in NHibernate, and how do I implement them? I'm using SQL Server 2008, so support is definitely in the DBMS.
I find that if I try something like this:
using (var outerTX = UnitOfWork.Current.BeginTransaction())
{
    using (var nestedTX = UnitOfWork.Current.BeginTransaction())
    {
        // ... do stuff
        nestedTX.Commit();
    }
    outerTX.Commit();
}
then by the time it reaches outerTX.Commit() the transaction has become inactive, resulting in an ObjectDisposedException on the session's AdoTransaction.
Are we therefore supposed to create nested NHibernate sessions instead? Or is there some other class we should use to wrap the transactions? (I've heard of TransactionScope, but I'm not sure what it is.)
I'm now using Ayende's UnitOfWork implementation (thanks Sneal).
Forgive any naivety in this question; I'm still new to NHibernate.
Thanks!
EDIT: I've discovered that you can use TransactionScope, such as:
using (var transactionScope = new TransactionScope())
{
    using (var tx = UnitOfWork.Current.BeginTransaction())
    {
        // ... do stuff
        tx.Commit();
    }
    using (var tx = UnitOfWork.Current.BeginTransaction())
    {
        // ... do stuff
        tx.Commit();
    }
    transactionScope.Complete();
}
However, I'm not all that excited about this, as it locks us into using SQL Server, and I've also found that if the database is remote you have to worry about having MSDTC enabled: one more component to go wrong. Nested transactions are so useful and easy to do in SQL that I kind of assumed NHibernate would have some way of emulating them...
NHibernate sessions don't support nested transactions.
The following assertion always holds in version 2.1.2:
var session = sessionFactory.OpenSession();
var tx1 = session.BeginTransaction();
var tx2 = session.BeginTransaction();
Assert.AreEqual(tx1, tx2);
You need to wrap it in a TransactionScope to support nested transactions.
MSDTC must be enabled, or you will get this error:
{"Network access for Distributed Transaction Manager (MSDTC) has been disabled. Please enable DTC for network access in the security configuration for MSDTC using the Component Services Administrative tool."}
As Satish suggested, nested transactions are not supported in NHibernate. I've not come across scenarios where nested transactions were needed, but I've certainly faced problems where I had to avoid creating a transaction when another was already active in a different unit of work.
The blog post below provides an example implementation for NHibernate, and should also work for SQL Server:
http://rajputyh.blogspot.com/2011/02/nested-transaction-handling-with.html
I've been struggling with this for a while now, and am going to have another crack at it.
I want to implement transactions in individual service containers, because that makes them self-contained, but then be able to nest a bunch of those service methods within a larger transaction and roll back the whole lot if necessary.
Because I'm using Rhino Commons, I'm now going to try refactoring to use the With.Transaction method. Basically, it allows us to write code as if transactions were nested, though in reality there is only one.
For example:
private Project CreateProject(string name)
{
    var project = new Project(name);

    With.Transaction(delegate
    {
        UnitOfWork.CurrentSession.Save(project);
    });

    return project;
}

private Sample CreateSample(Project project, string code)
{
    var sample = new Sample(project, code);

    With.Transaction(delegate
    {
        UnitOfWork.CurrentSession.Save(sample);
    });

    return sample;
}
private void Test_NoNestedTransaction()
{
    var project = CreateProject("Project 1");
}

private void Test_NestedTransaction()
{
    using (var tx = UnitOfWork.Current.BeginTransaction())
    {
        try
        {
            var project = CreateProject("Project 6");
            var sample = CreateSample(project, "SAMPLE006");
        }
        catch
        {
            tx.Rollback();
            throw;
        }
        tx.Commit();
    }
}
In Test_NoNestedTransaction(), we are creating a project alone, without the context of a larger transaction. In this case, a new transaction is created and committed inside CreateProject, or rolled back if an exception occurs.
In Test_NestedTransaction(), we are creating both a sample and a project. If anything goes wrong, we want both to be rolled back. In reality, the code in CreateSample and CreateProject runs just as if there were no transactions at all; it is entirely the outer transaction that decides whether to roll back or commit, and it does so based on whether an exception is thrown. Really, that's why I'm using a manually created transaction for the outer transaction: so I have control over whether to commit or roll back, rather than just defaulting to rollback-on-exception-else-commit.
You could achieve the same thing without Rhino.Commons by scattering a lot of this sort of thing through your code:
ITransaction tx = null;
if (!UnitOfWork.Current.IsInActiveTransaction)
{
    tx = UnitOfWork.Current.BeginTransaction();
}
_auditRepository.SaveNew(auditEvent);
if (tx != null)
{
    tx.Commit();
}
... and so on. But With.Transaction, despite the clunkiness of needing to create anonymous delegates, does that quite conveniently.
An advantage of this approach over using TransactionScope (apart from not relying on MSDTC) is that there ought to be just a single flush to the database in the final outer-transaction commit, regardless of how many methods have been called in between. In other words, we don't need to write uncommitted data to the database as we go; we're always just writing it to the local NHibernate cache.
In short, this solution doesn't offer ultimate control over your transactions, because it never uses more than one transaction. I guess I can accept that, since nested transactions are by no means universally supported in every DBMS anyway. But now perhaps I can at least write code without worrying about whether we're already in a transaction or not.
That implementation doesn't support nesting; if you want nesting, use Ayende's UnitOfWork implementation. Another problem with the implementation you are using (at least for web apps) is that it holds onto the ISession instance in a static variable.
I just rewrote our UnitOfWork yesterday for these reasons; it was originally based on Gabriel's.
We don't use UnitOfWork.Current.BeginTransaction(); we use UnitOfWork.TransactionalFlush(), which creates a separate transaction at the very end to flush all the changes at once.
using (var uow = UnitOfWork.Start())
{
    var entity = repository.Get(1);
    entity.Name = "Sneal";
    uow.TransactionalFlush();
}
