ServiceStack OrmLite transaction between services (SQL Server)

I'm having trouble with a rather complex save operation inside a ServiceStack service.
To simplify the explanation: the service starts an OrmLite transaction and, within it, calls another service through ResolveService:
public ApplicationModel Post(ApplicationModel request)
{
    using (IDbTransaction tr = Db.OpenTransaction())
    {
        using (var cases = ResolveService<CaseService>())
        {
            request.Case = cases.Post(request.Case);
        }
        Db.Save<Application>(request.Application, true);
        tr.Commit();
    }
    return request;
}
The other service (CaseService) also uses a transaction to perform its logic:
public CaseModel Post(CaseModel request)
{
    using (IDbTransaction tr = Db.OpenTransaction())
    {
        Db.Insert<Case>(request);
        Db.SaveAllReferences<CaseModel>(request);
        tr.Commit();
    }
    return request;
}
In a similar situation with a deeper hierarchy of services calling other services, a "Timeout expired" error is thrown, which so far I have not been able to resolve, although I closely monitored SQL Server for deadlocks.
My question is whether this is the right way of using/sharing OrmLite transactions across services, or whether there is another mechanism?
Thanks in advance.

You shouldn't have nested transactions. Rather than calling across services to perform DB operations, you should extract the shared logic, either into a separate shared Repository or into re-usable extension methods:
public static class DbExtensions
{
    public static void SaveCaseModel(this IDbConnection db,
        CaseModel caseModel)
    {
        db.Insert<Case>(caseModel);
        db.SaveAllReferences<CaseModel>(caseModel);
    }
}
Then your Services can maintain their own transactions whilst being able to share logic, e.g.:
public ApplicationModel Post(ApplicationModel request)
{
    using (var trans = Db.OpenTransaction())
    {
        Db.SaveCaseModel(request.Case);
        Db.Save<Application>(request.Application, true);
        trans.Commit();
    }
    return request;
}
public CaseModel Post(CaseModel request)
{
    using (var trans = Db.OpenTransaction())
    {
        Db.SaveCaseModel(request);
        trans.Commit();
    }
    return request;
}
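For the other option the answer mentions, a separate shared Repository, a minimal sketch could look like this (the class shape and constructor are my own assumptions, not from the answer; it simply mirrors the extension methods above):

public class CaseRepository
{
    private readonly IDbConnection db;

    public CaseRepository(IDbConnection db) => this.db = db;

    public void SaveCaseModel(CaseModel caseModel)
    {
        // Same logic as the extension method: insert the row, then its references.
        db.Insert<Case>(caseModel);
        db.SaveAllReferences(caseModel);
    }
}

Each service would then resolve the repository and wrap its calls in its own transaction, exactly as in the extension-method version.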

Related

.NET 7 Distributed Transactions issues

I am developing a small POC application to test .NET 7 support for distributed transactions, since this is a pretty important aspect of our workflow.
So far I've been unable to make it work, and I'm not sure why. It seems to me it's either some kind of bug in .NET 7 or I'm missing something.
In short, the POC is pretty simple: it runs a WorkerService which does two things:
Saves into a "business database"
Publishes a message on an NServiceBus queue which uses the MSSQL transport.
Without a transaction scope this works fine; however, when adding a transaction scope I'm asked to turn on support for distributed transactions using:
TransactionManager.ImplicitDistributedTransactions = true;
The executable code in the worker service is as follows:
protected override async Task ExecuteAsync(CancellationToken stoppingToken)
{
    int number = 0;
    try
    {
        while (!stoppingToken.IsCancellationRequested)
        {
            number = number + 1;
            using var transactionScope = TransactionUtils.CreateTransactionScope();
            await SaveDummyDataIntoTable2Dapper($"saved {number}").ConfigureAwait(false);
            await messageSession.Publish(new MyMessage { Number = number }, stoppingToken)
                .ConfigureAwait(false);
            _logger.LogInformation("Publishing message {number}", number);
            _logger.LogInformation("Worker running at: {time}", DateTimeOffset.Now);
            transactionScope.Complete();
            _logger.LogInformation("Transaction complete");
            await Task.Delay(1000, stoppingToken);
        }
    }
    catch (Exception e)
    {
        _logger.LogError("Exception: {ex}", e);
        throw;
    }
}
Transaction scope is created with the following parameters:
public class TransactionUtils
{
    public static TransactionScope CreateTransactionScope()
    {
        var transactionOptions = new TransactionOptions();
        transactionOptions.IsolationLevel = IsolationLevel.ReadCommitted;
        transactionOptions.Timeout = TransactionManager.MaximumTimeout;
        return new TransactionScope(TransactionScopeOption.Required, transactionOptions,
            TransactionScopeAsyncFlowOption.Enabled);
    }
}
The code for saving into the database uses the simple Dapper GenericRepository library:
private async Task SaveDummyDataIntoTable2Dapper(string data)
{
    using var scope = ServiceProvider.CreateScope();
    var mainTableRepository =
        scope.ServiceProvider
            .GetRequiredService<MainTableRepository>();
    await mainTableRepository.InsertAsync(new MainTable()
    {
        Data = data,
        UpdatedDate = DateTime.Now
    });
}
I had to use a scope here since the repository is scoped and the worker is a singleton, so it cannot be injected directly.
I've tried persistence with EF Core as well, with the same results:
the transactionScope.Complete() line passes, and then it hangs when disposing of the transaction scope (sometimes it manages to insert a couple of rows, then hangs).
Without the transaction scope everything works fine.
I'm not sure what (if anything) I'm missing here, or whether this simply still does not work in .NET 7.
Note that I have MSDTC enabled on my machine and I'm executing this on Windows 10.
We've been able to solve this by using the following code.
With this modification, DTC is actually invoked correctly and works from within .NET 7.
using var transactionScope = TransactionUtils.CreateTransactionScope().EnsureDistributed();
The EnsureDistributed extension method is implemented as follows:
public static TransactionScope EnsureDistributed(this TransactionScope ts)
{
    Transaction.Current?.EnlistDurable(DummyEnlistmentNotification.Id,
        new DummyEnlistmentNotification(), EnlistmentOptions.None);
    return ts;
}
internal class DummyEnlistmentNotification : IEnlistmentNotification
{
    internal static readonly Guid Id = new("8d952615-7f67-4579-94fa-5c36f0c61478");

    public void Prepare(PreparingEnlistment preparingEnlistment)
    {
        preparingEnlistment.Prepared();
    }

    public void Commit(Enlistment enlistment)
    {
        enlistment.Done();
    }

    public void Rollback(Enlistment enlistment)
    {
        enlistment.Done();
    }

    public void InDoubt(Enlistment enlistment)
    {
        enlistment.Done();
    }
}
This is a 10-year-old code snippet, yet it works (I'm guessing because .NET Core merely copied and refactored the distributed-transactions code from .NET Framework, copying its bugs along with it).
What it does is create a distributed transaction right away, rather than creating a lightweight (LTM) transaction and then promoting it to DTC if required.
A more detailed explanation can be found here:
https://www.davidboike.dev/2010/04/forcibly-creating-a-distributed-net-transaction/
https://github.com/davybrion/companysite-dotnet/blob/master/content/blog/2010-03-msdtc-woes-with-nservicebus-and-nhibernate.md
Ensure you're using Microsoft.Data.SqlClient v5.1+.
Replace all usings of System.Data.SqlClient with Microsoft.Data.SqlClient.
Ensure ImplicitDistributedTransactions is set to true:
TransactionManager.ImplicitDistributedTransactions = true;
using (var ts = new TransactionScope(/* your options */))
{
    TransactionInterop.GetTransmitterPropagationToken(Transaction.Current);
    // ... your code ...
    ts.Complete();
}
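Putting those three steps together, a minimal end-to-end sketch could look like the following. This is a sketch only: the connection string, database and table names are hypothetical, and it assumes MSDTC is running on a Windows machine.

using System;
using System.Transactions;
using Microsoft.Data.SqlClient; // v5.1+, not System.Data.SqlClient

class Demo
{
    static void Main()
    {
        // Required opt-in on .NET 7+ before any distributed transaction is created.
        TransactionManager.ImplicitDistributedTransactions = true;

        var options = new TransactionOptions
        {
            IsolationLevel = IsolationLevel.ReadCommitted,
            Timeout = TransactionManager.DefaultTimeout
        };

        using (var ts = new TransactionScope(TransactionScopeOption.Required, options,
                   TransactionScopeAsyncFlowOption.Enabled))
        {
            // Force promotion to a distributed transaction up front,
            // instead of starting as a lightweight LTM transaction.
            TransactionInterop.GetTransmitterPropagationToken(Transaction.Current);

            using (var connection = new SqlConnection(
                       "Server=.;Database=PocDb;Integrated Security=true;TrustServerCertificate=true"))
            {
                connection.Open(); // enlists in the ambient (now distributed) transaction

                using var command = new SqlCommand(
                    "INSERT INTO MainTable (Data, UpdatedDate) VALUES ('saved', GETDATE())",
                    connection);
                command.ExecuteNonQuery();
            }

            ts.Complete();
        }
    }
}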

MSSQL replication triggers, how to handle conditional HasTrigger in EntityFrameworkCore

I am using Entity Framework Core 7 to implement data access across a number of client databases.
I have recently run into the error 'Could not save changes because the target table has database triggers.' on one of the clients. The error is self-explanatory and I understand how to fix it using HasTrigger.
The problem is that this error occurred because this specific client is replicated and has what I assume are auto-generated triggers: MSmerge_upd, MSmerge_ins, MSmerge_del. Meanwhile, the majority of my clients are not replicated and therefore do not have any of these triggers in their database.
So, what is the correct way to handle replication triggers in Entity Framework Core, particularly when your clients are a mishmash where some are replicated and some are not? Is there a way to check inside IEntityTypeConfiguration whether you are running on a replicated database and conditionally add the replication triggers? Is there some sort of best practice for handling this scenario with the new HasTrigger requirement?
Given that nobody has posted an answer, I will post my workaround for now.
I have created a class called AutoTriggerBuilderEntityTypeConfiguration which attempts to configure all the triggers for a given EF model.
There are some performance implications with this approach, and it could potentially be improved by caching the triggers for all tables across the database, but it's sufficient for my use case.
It looks like this:
public abstract class AutoTriggerBuilderEntityTypeConfiguration<TEntity> : IEntityTypeConfiguration<TEntity>
    where TEntity : class
{
    private readonly string _connectionString;

    public AutoTriggerBuilderEntityTypeConfiguration(string connectionString)
    {
        this._connectionString = connectionString;
    }

    public void Configure(EntityTypeBuilder<TEntity> builder)
    {
        this.ConfigureEntity(builder);

        var tableName = builder.Metadata.GetTableName();
        var tableTriggers = this.GetTriggersForTable(tableName);
        var declaredTriggers = builder.Metadata.GetDeclaredTriggers();

        builder.ToTable(t =>
        {
            foreach (var trigger in tableTriggers)
            {
                if (!declaredTriggers.Any(o => o.ModelName.Equals(trigger, StringComparison.InvariantCultureIgnoreCase)))
                    t.HasTrigger(trigger);
            }
        });
    }

    private IEnumerable<string> GetTriggersForTable(string tableName)
    {
        var result = new List<string>();

        using (var connection = new SqlConnection(this._connectionString))
        using (var command = new SqlCommand(@"SELECT sysobjects.name AS Name FROM sysobjects WHERE sysobjects.type = 'TR' AND OBJECT_NAME(parent_obj) = @TableName", connection)
        {
            CommandType = CommandType.Text
        })
        {
            connection.Open();
            command.Parameters.AddWithValue("@TableName", tableName);

            using (var reader = command.ExecuteReader())
            {
                while (reader.Read())
                    result.Add(reader.GetString(0)); // the query returns a single column: the trigger name
            }
        }

        return result;
    }

    public abstract void ConfigureEntity(EntityTypeBuilder<TEntity> builder);
}
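To show how this base class is meant to be consumed, here is a hypothetical concrete configuration (the Order entity and the way the connection string is passed in are illustrative assumptions, not part of the original workaround):

public class OrderConfiguration : AutoTriggerBuilderEntityTypeConfiguration<Order>
{
    public OrderConfiguration(string connectionString)
        : base(connectionString)
    {
    }

    public override void ConfigureEntity(EntityTypeBuilder<Order> builder)
    {
        // Normal per-entity mapping goes here; trigger discovery and
        // HasTrigger registration happen in the base Configure().
        builder.HasKey(o => o.Id);
        builder.Property(o => o.Reference).HasMaxLength(50);
    }
}

It would then be registered as usual in OnModelCreating, e.g. modelBuilder.ApplyConfiguration(new OrderConfiguration(connectionString));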

Spring @Transactional does not begin new transaction on MS SQL

I'm having trouble with transactions in Spring Boot using the @Transactional annotation. The latest Spring is connected to an MS SQL database.
I have the following service, which periodically executes a transactional method according to some criteria:
@Service
public class SomeService {

    SomeRepository repository;

    public SomeService(SomeRepository someRepository) {
        this.repository = someRepository;
    }

    @Scheduled(fixedDelayString = "${property}") // 10 seconds
    protected void scheduledIteration() {
        if (something) {
            insertDataInNewTransaction(getSomeData());
        }
    }

    @Transactional(propagation = Propagation.REQUIRED, rollbackFor = Exception.class)
    protected void insertDataInNewTransaction(List<Data> data) {
        // insert data to db
        repository.saveAll(data);
        // call verify proc
        repository.verifyData();
    }
}
The algorithm is supposed to process the data, insert it into a table and perform a check (a DB procedure). If the procedure throws an exception, the transaction should be rolled back. I'm sure the procedure does not commit the transaction itself.
The problem I'm facing is that calling the method does not begin a new transaction (or it does, but it's auto-committed). To confirm this, I've tried the following:
@Transactional(propagation = Propagation.REQUIRED, rollbackFor = Exception.class)
protected void insertDataInNewTransaction(List<Data> data) throws Exception {
    int counter = 0;
    for (Data d : data) {
        repository.save(d);
        counter++;
        // test
        if (counter == 10) {
            throw new Exception("test");
        }
    }
}
After the test method is executed, the first 10 rows remain in the table, where they were supposed to be rolled back. During debugging I noticed that calling repository.save() in the loop inserts into the table outside a transaction, because I can see the row from my DB IDE while the debugger is sitting on the next row. This gave me the idea that the problem is caused by auto-commit, which is the MS SQL default. So I tried adding the following properties, but they made no difference:
spring.datasource.hikari.auto-commit=false
spring.datasource.auto-commit=false
Is there anything I'm doing wrong?
If you use Spring proxy AOP, then you need to make the method insertDataInNewTransaction public.
Remember that even if the method is public, if it is invoked from within the same bean the call will not create a new transaction (because it does not go through the Spring proxy).
Short answer:
@Transactional(propagation = Propagation.REQUIRED, rollbackFor = Exception.class)
public void insertDataInNewTransaction(List<Data> data) {
    // insert data to db
    repository.saveAll(data);
    // call verify proc
    repository.verifyData();
}
But if you really need a new, separate transaction, use Propagation.REQUIRES_NEW instead of Propagation.REQUIRED.

Correct concurrency handling using EF Core 2.1 with SQL Server

I am currently working on an API using ASP.NET Core Web API along with Entity Framework Core 2.1 and a SQL Server database. The API is used to transfer money between two accounts, A and B. Given the nature of the B account, which is an account that accepts payments, a lot of concurrent requests might be executed at the same moment. As you know, if this is not well managed it can result in some users not seeing their payments arrive.
Having spent days trying to get the concurrency handling right, I can't figure out what the best approach is. For the sake of simplicity I created a test project to try to reproduce the concurrency issue.
In the test project I have two routes, request1 and request2; each one performs a transfer to the same user, the first with an amount of 10 and the second with 20. I put a Thread.Sleep(10000) in the first one as follows:
[HttpGet]
[Route("request1")]
public async Task<string> request1()
{
    using (var transaction = _context.Database.BeginTransaction(System.Data.IsolationLevel.Serializable))
    {
        try
        {
            Wallet w = _context.Wallets.Where(ww => ww.UserId == 1).FirstOrDefault();
            Thread.Sleep(10000);
            w.Amount = w.Amount + 10;
            w.Inserts++;
            _context.Wallets.Update(w);
            _context.SaveChanges();
            transaction.Commit();
        }
        catch (Exception ex)
        {
            transaction.Rollback();
        }
    }
    return "request 1 executed";
}
[HttpGet]
[Route("request2")]
public async Task<string> request2()
{
    using (var transaction = _context.Database.BeginTransaction(System.Data.IsolationLevel.Serializable))
    {
        try
        {
            Wallet w = _context.Wallets.Where(ww => ww.UserId == 1).FirstOrDefault();
            w.Amount = w.Amount + 20;
            w.Inserts++;
            _context.Wallets.Update(w);
            _context.SaveChanges();
            transaction.Commit();
        }
        catch (Exception ex)
        {
            transaction.Rollback();
        }
    }
    return "request 2 executed";
}
After executing request1 and then request2 in a browser, the first transaction is rolled back due to:
InvalidOperationException: An exception has been raised that is likely due to a transient failure. Consider enabling transient error resiliency by adding 'EnableRetryOnFailure()' to the 'UseSqlServer' call.
I could also retry the transaction, but isn't there a better way, e.g. using locks?
Serializable, being the most isolated level and also the most costly, behaves as stated in the documentation:
No other transactions can modify data that has been read by the current transaction until the current transaction completes.
This means no other transaction can update data that has been read by another transaction, which is working as intended here, since the update in the request2 route waits for the first transaction (request1) to commit.
The problem is that we need to block reads by other transactions once the current transaction has read the wallet row. To solve this I need to use locking, so that when the first SELECT statement in request1 executes, all subsequent transactions have to wait for the first one to finish before they can select the correct value. Since EF Core has no support for locking, I need to execute a SQL query directly, so when selecting the wallet I add a row lock to the selected row:
// This locks the wallet row with id 1;
// the default transaction isolation level is then enough.
Wallet w = _context.Wallets.FromSql("select * from wallets with (XLOCK, ROWLOCK) where id = 1").FirstOrDefault();
Thread.Sleep(10000);
w.Amount = w.Amount + 10;
w.Inserts++;
_context.Wallets.Update(w);
_context.SaveChanges();
transaction.Commit();
Now this works perfectly; even after executing multiple requests, the combined result of all the transfers is correct. In addition, I'm using a transactions table that holds every money transfer made, with its status, to keep a record of each transaction; in case something goes wrong I'm able to recompute every wallet's amount from this table.
Now there are other ways of doing it, like:
Stored procedures: but I want my logic to stay at the application level
Making a synchronized method to handle the database logic: this way all the database requests are executed in a single thread; I read a blog post advising this approach, but maybe we'll use multiple servers for scalability
I don't know if I'm just not searching well, but I can't find good material on handling pessimistic concurrency with Entity Framework Core; even while browsing GitHub, most of the code I've seen doesn't use locking.
Which brings me to my question: is this the correct way of doing it?
Cheers and thanks in advance.
My suggestion is to catch DbUpdateConcurrencyException and use entry.GetDatabaseValues() and entry.OriginalValues.SetValues(databaseValues) in your retry logic. There is no need to lock the DB.
Here is the sample from the EF Core documentation:
using (var context = new PersonContext())
{
    // Fetch a person from database and change phone number
    var person = context.People.Single(p => p.PersonId == 1);
    person.PhoneNumber = "555-555-5555";

    // Change the person's name in the database to simulate a concurrency conflict
    context.Database.ExecuteSqlCommand(
        "UPDATE dbo.People SET FirstName = 'Jane' WHERE PersonId = 1");

    var saved = false;
    while (!saved)
    {
        try
        {
            // Attempt to save changes to the database
            context.SaveChanges();
            saved = true;
        }
        catch (DbUpdateConcurrencyException ex)
        {
            foreach (var entry in ex.Entries)
            {
                if (entry.Entity is Person)
                {
                    var proposedValues = entry.CurrentValues;
                    var databaseValues = entry.GetDatabaseValues();

                    foreach (var property in proposedValues.Properties)
                    {
                        var proposedValue = proposedValues[property];
                        var databaseValue = databaseValues[property];

                        // TODO: decide which value should be written to database
                        // proposedValues[property] = <value to be saved>;
                    }

                    // Refresh original values to bypass next concurrency check
                    entry.OriginalValues.SetValues(databaseValues);
                }
                else
                {
                    throw new NotSupportedException(
                        "Don't know how to handle concurrency conflicts for "
                        + entry.Metadata.Name);
                }
            }
        }
    }
}
You can use a distributed lock mechanism, with Redis for example.
Also, you can lock per userId, so it will not block the method for other users.
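As a rough sketch of that idea using StackExchange.Redis (the key naming, timeout and failure handling here are my own assumptions, not a prescribed pattern):

using System;
using StackExchange.Redis;

public class WalletLockService
{
    private readonly IDatabase _redis;

    public WalletLockService(IConnectionMultiplexer multiplexer)
    {
        _redis = multiplexer.GetDatabase();
    }

    public void WithWalletLock(int userId, Action updateAction)
    {
        var lockKey = $"lock:wallet:{userId}";   // one lock per user, so other users are unaffected
        var owner = Guid.NewGuid().ToString();   // identifies this holder so we only release our own lock

        if (!_redis.LockTake(lockKey, owner, TimeSpan.FromSeconds(10)))
            throw new TimeoutException($"Could not acquire lock for wallet {userId}.");

        try
        {
            updateAction(); // e.g. load the wallet, add the amount, SaveChanges()
        }
        finally
        {
            _redis.LockRelease(lockKey, owner);
        }
    }
}

Unlike an in-process lock, this also works when the API is scaled out across multiple servers, which the question mentions as a concern.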
Why don't you handle the concurrency problem in the code, rather than in the DB layer?
You can have a method that updates the value of a given wallet by a given amount, using a simple lock, like this:
private readonly object walletLock = new object();

public void UpdateWalletAmount(int userId, int amount)
{
    lock (walletLock)
    {
        Wallet w = _context.Wallets.Where(ww => ww.UserId == userId).FirstOrDefault();
        w.Amount = w.Amount + amount;
        w.Inserts++;
        _context.Wallets.Update(w);
        _context.SaveChanges();
    }
}
So your methods will look like this:
[HttpGet]
[Route("request1")]
public async Task<string> request1()
{
    try
    {
        UpdateWalletAmount(1, 10);
    }
    catch (Exception ex)
    {
        // log error
    }
    return "request 1 executed";
}

[HttpGet]
[Route("request2")]
public async Task<string> request2()
{
    try
    {
        UpdateWalletAmount(1, 20);
    }
    catch (Exception ex)
    {
        // log error
    }
    return "request 2 executed";
}
You don't even need to use a transaction in this context.

EJB 3.1 and NIO2: Monitoring the file system

I guess most of us agree that NIO.2 is a fine thing to make use of. Presuming you want to monitor some part of the file system for incoming XML files, that is an easy task now. But what if I want to integrate this into an existing Java EE application, so that I don't have to run another service (the app server AND the one which monitors the file system)?
So I have the heavyweight app server with all the EJB 3.1 stuff, and some kind of service monitoring the file system that takes appropriate action once a file shows up. Interestingly, the appropriate action is to create a message and send it by JMS, so it might be nice to integrate both into the app server.
I tried @Startup, but deployment freezes (and I know that I shouldn't do I/O in there; it was just a try). Anyhow... any suggestions?
You could create a singleton that loads at startup and delegates the monitoring to an @Asynchronous bean:
@Singleton
@Startup
public class Initialiser {

    @EJB
    private FileSystemMonitor fileSystemMonitor;

    @PostConstruct
    public void init() {
        String fileSystemPath = ....;
        fileSystemMonitor.poll(fileSystemPath);
    }
}
Then the asynchronous bean looks something like this:
@Stateless
public class FileSystemMonitor {

    @Asynchronous
    public void poll(String fileSystemPath) {
        WatchService watcher = ....;
        for (;;) {
            WatchKey key = null;
            try {
                key = watcher.take();
                for (WatchEvent<?> event : key.pollEvents()) {
                    WatchEvent.Kind<?> kind = event.kind();
                    if (kind == StandardWatchEventKinds.OVERFLOW) {
                        continue; // If events are lost or discarded
                    }
                    WatchEvent<Path> watchEvent = (WatchEvent<Path>) event;
                    // Process files....
                }
            } catch (InterruptedException e) {
                e.printStackTrace();
                return;
            } finally {
                if (key != null) {
                    boolean valid = key.reset();
                    if (!valid) break; // If the key is no longer valid, the directory is inaccessible, so exit the loop.
                }
            }
        }
    }
}
It might help if you specified which server you're using, but have you considered implementing a JMX-based service? It's a bit more "neutral" than EJB, more appropriate for a background service, and has fewer restrictions.
