JDO exception in Google App Engine transaction

I am getting the following exception while trying to use a transaction in the App Engine datastore:
javax.jdo.JDOUserException: Transaction is still active.
You should always close your transactions correctly using commit() or rollback().
FailedObject:org.datanucleus.store.appengine.jdo.DatastoreJDOPersistenceManager@12bbe6b
at org.datanucleus.jdo.JDOPersistenceManager.close(JDOPersistenceManager.java:277)
The following is the code snippet I used:
List<String> friendIds = getFriends(userId);
Date currentDate = new Date();
PersistenceManager manager = pmfInstance.getPersistenceManager();
try {
    Transaction trans = manager.currentTransaction();
    trans.begin();
    for (String friendId : friendIds) {
        User user = manager.getObjectById(User.class, friendId);
        if (user != null) {
            user.setRecoCount(user.getRecoCount() + 1);
            user.setUpdatedDate(currentDate);
            manager.makePersistent(user);
        }
    }
    trans.commit();
} finally {
    manager.close();
}

And if commit() or makePersistent() fails, where is the call to rollback()?

I was able to reproduce this: it happens if you begin your transaction inside the try block and only close the PersistenceManager in the finally block. You won't get this message if you move your
Transaction trans = manager.currentTransaction();
trans.begin();
above the try block and roll back on failure, like this:
PersistenceManager pm = PMF.get().getPersistenceManager();
Transaction tx = pm.currentTransaction();
tx.begin();
try {
    // do my thing
    tx.commit();
} catch (Exception e) {
    tx.rollback();
} finally {
    pm.close();
}

I think the different 'User' objects do not belong to the same entity group. All datastore operations in a transaction must operate on entities in the same entity group.
You can either begin and commit the transaction inside the loop, so you operate on one entity at a time (see the sketch below), or make sure all of the User objects are in the same entity group.
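A minimal sketch of the one-transaction-per-entity variant, reusing the names from the question (whether committing each User separately is acceptable depends on your consistency requirements):
PersistenceManager manager = pmfInstance.getPersistenceManager();
try {
    for (String friendId : friendIds) {
        Transaction trans = manager.currentTransaction();
        trans.begin();
        try {
            // getObjectById throws JDOObjectNotFoundException rather than returning null
            User user = manager.getObjectById(User.class, friendId);
            user.setRecoCount(user.getRecoCount() + 1);
            user.setUpdatedDate(currentDate);
            trans.commit();
        } finally {
            if (trans.isActive()) {
                trans.rollback();
            }
        }
    }
} finally {
    manager.close();
}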

Related

Spring @Transactional does not begin new transaction on MS SQL

I'm having trouble with transactions in Spring Boot using the @Transactional annotation. The latest Spring is connected to an MS SQL database.
I have the following service, which periodically executes a transactional method according to some criteria:
@Service
public class SomeService {

    SomeRepository repository;

    public SomeService(SomeRepository someRepository) {
        this.repository = someRepository;
    }

    @Scheduled(fixedDelayString = "${property}") // 10 seconds
    protected void scheduledIteration() {
        if (something) {
            insertDataInNewTransaction(getSomeData());
        }
    }

    @Transactional(propagation = Propagation.REQUIRED, rollbackFor = Exception.class)
    protected void insertDataInNewTransaction(List<Data> data) {
        // insert data to db
        repository.saveAll(data);
        // call verify proc
        repository.verifyData();
    }
}
The algorithm is supposed to process the data, insert it into a table and perform a check (a DB procedure). If the procedure throws an exception, the transaction should be rolled back. I'm sure that the procedure does not commit the transaction itself.
The problem I'm facing is that calling the method does not begin a new transaction (or it does, but it's auto-committed), because I've tried the following:
@Transactional(propagation = Propagation.REQUIRED, rollbackFor = Exception.class)
protected void insertDataInNewTransaction(List<Data> data) throws Exception {
    int counter = 0;
    for (Data d : data) {
        repository.save(d);
        counter++;
        // test
        if (counter == 10) {
            throw new Exception("test");
        }
    }
}
After the test method is executed, the first 10 rows remain in the table, even though they were supposed to be rolled back. During debugging I noticed that calling repository.save() in the loop inserts into the table outside a transaction, because I can see the row from the DB IDE while the debugger is sitting on the next line. This gave me the idea that the problem is caused by auto-commit, as that is the MS SQL default. So I tried adding the following properties, but it made no difference:
spring.datasource.hikari.auto-commit=false
spring.datasource.auto-commit=false
Is there anything I'm doing wrong?
If you use Spring proxy AOP, you need to make the method insertDataInNewTransaction public.
Remember that even if the method is public, when it is invoked from within the same bean it will not create a new transaction (because the Spring proxy is not involved in the call).
Short answer:
@Transactional(propagation = Propagation.REQUIRED, rollbackFor = Exception.class)
public void insertDataInNewTransaction(List<Data> data) {
    // insert data to db
    repository.saveAll(data);
    // call verify proc
    repository.verifyData();
}
But if you really need a new, separate transaction, use Propagation.REQUIRES_NEW instead of Propagation.REQUIRED.
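If the self-invocation is the culprit, a common fix is to move the transactional method into its own bean so the scheduled method calls it through a Spring proxy. A minimal sketch; the DataInserter class below is hypothetical, not from the original post:
@Service
public class DataInserter {

    private final SomeRepository repository;

    public DataInserter(SomeRepository repository) {
        this.repository = repository;
    }

    // Invoked through the Spring proxy of another bean, so the
    // transaction is actually opened (and rolled back on exception).
    @Transactional(propagation = Propagation.REQUIRED, rollbackFor = Exception.class)
    public void insertDataInNewTransaction(List<Data> data) {
        // insert data to db
        repository.saveAll(data);
        // call verify proc
        repository.verifyData();
    }
}
SomeService then injects DataInserter and calls dataInserter.insertDataInNewTransaction(getSomeData()) from scheduledIteration(), so the call crosses a bean boundary and @Transactional takes effect.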

Correct concurrency handling using EF Core 2.1 with SQL Server

I am currently working on an API using ASP.NET Core Web API along with Entity Framework Core 2.1 and a SQL Server database. The API is used to transfer money between two accounts, A and B. Given the nature of the B account, which accepts payments, a lot of concurrent requests might be executed at the same moment. As you know, if this is not well managed it can result in some users not seeing their payments arrive.
Having spent days trying to get concurrency right, I can't figure out what the best approach is. For the sake of simplicity I created a test project trying to reproduce the concurrency issue.
In the test project I have two routes, request1 and request2. Each one performs a transfer to the same user; the first has an amount of 10 and the second 20. I put a Thread.Sleep(10000) in the first one, as follows:
[HttpGet]
[Route("request1")]
public async Task<string> request1()
{
    using (var transaction = _context.Database.BeginTransaction(System.Data.IsolationLevel.Serializable))
    {
        try
        {
            Wallet w = _context.Wallets.Where(ww => ww.UserId == 1).FirstOrDefault();
            Thread.Sleep(10000);
            w.Amount = w.Amount + 10;
            w.Inserts++;
            _context.Wallets.Update(w);
            _context.SaveChanges();
            transaction.Commit();
        }
        catch (Exception ex)
        {
            transaction.Rollback();
        }
    }
    return "request 1 executed";
}
[HttpGet]
[Route("request2")]
public async Task<string> request2()
{
    using (var transaction = _context.Database.BeginTransaction(System.Data.IsolationLevel.Serializable))
    {
        try
        {
            Wallet w = _context.Wallets.Where(ww => ww.UserId == 1).FirstOrDefault();
            w.Amount = w.Amount + 20;
            w.Inserts++;
            _context.Wallets.Update(w);
            _context.SaveChanges();
            transaction.Commit();
        }
        catch (Exception ex)
        {
            transaction.Rollback();
        }
    }
    return "request 2 executed";
}
After executing request1 and then request2 in a browser, the first transaction is rolled back due to:
InvalidOperationException: An exception has been raised that is likely due to a transient failure. Consider enabling transient error resiliency by adding 'EnableRetryOnFailure()' to the 'UseSqlServer' call.
I can also retry the transaction, but isn't there a better way? Using locks?
Serializable, being the most isolated level (and the most costly), is described in the documentation as:
No other transactions can modify data that has been read by the current transaction until the current transaction completes.
This means no other transaction can update data that has been read by another transaction, which works as intended here, since the update in the request2 route waits for the first transaction (request1) to commit.
The problem is that we need to block reads by other transactions once the current transaction has read the wallet row. To solve this I need to use locking, so that when the first select statement in request1 executes, all later transactions have to wait for the first transaction to finish before they can select the correct value. Since EF Core has no built-in support for locking, I need to execute a SQL query directly, so when selecting the wallet I add a row lock to the selected row:
//this locks the wallet row with id 1
//and also the default transaction isolation level is enough
Wallet w = _context.Wallets.FromSql("select * from wallets with (XLOCK, ROWLOCK) where id = 1").FirstOrDefault();
Thread.Sleep(10000);
w.Amount = w.Amount + 10;
w.Inserts++;
_context.Wallets.Update(w);
_context.SaveChanges();
transaction.Commit();
Now this works perfectly: even after executing multiple requests, the combined result of all the transfers is correct. In addition, I'm using a transactions table that holds every money transfer made, together with its status, to keep a record of each transfer; in case something goes wrong, I'm able to recompute every wallet's amount from this table.
Now there are other ways of doing it, like:
Stored procedures: but I want my logic to stay at the application level.
Making a synchronized method to handle the database logic: this way all the database requests are executed on a single thread. I read a blog post that advises this approach, but maybe we'll use multiple servers for scalability.
I don't know if I'm just not searching well, but I can't find good material on handling pessimistic concurrency with Entity Framework Core; even while browsing GitHub, most of the code I've seen doesn't use locking.
Which brings me to my question: is this the correct way of doing it?
Cheers and thanks in advance.
My suggestion is to catch DbUpdateConcurrencyException and use entry.GetDatabaseValues() and entry.OriginalValues.SetValues(databaseValues) in your retry logic. There is no need to lock the DB.
Here is the sample from the EF Core documentation page:
using (var context = new PersonContext())
{
    // Fetch a person from database and change phone number
    var person = context.People.Single(p => p.PersonId == 1);
    person.PhoneNumber = "555-555-5555";

    // Change the person's name in the database to simulate a concurrency conflict
    context.Database.ExecuteSqlCommand(
        "UPDATE dbo.People SET FirstName = 'Jane' WHERE PersonId = 1");

    var saved = false;
    while (!saved)
    {
        try
        {
            // Attempt to save changes to the database
            context.SaveChanges();
            saved = true;
        }
        catch (DbUpdateConcurrencyException ex)
        {
            foreach (var entry in ex.Entries)
            {
                if (entry.Entity is Person)
                {
                    var proposedValues = entry.CurrentValues;
                    var databaseValues = entry.GetDatabaseValues();

                    foreach (var property in proposedValues.Properties)
                    {
                        var proposedValue = proposedValues[property];
                        var databaseValue = databaseValues[property];

                        // TODO: decide which value should be written to database
                        // proposedValues[property] = <value to be saved>;
                    }

                    // Refresh original values to bypass next concurrency check
                    entry.OriginalValues.SetValues(databaseValues);
                }
                else
                {
                    throw new NotSupportedException(
                        "Don't know how to handle concurrency conflicts for "
                        + entry.Metadata.Name);
                }
            }
        }
    }
}
You can use a distributed lock mechanism, with Redis for example.
Also, you can lock per userId, so it will not block the method for other users (see the sketch below).
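A minimal sketch of the per-user variant using an in-process lock keyed by userId; the ConcurrentDictionary and the method shown here are illustrative, not from the original code:
// One lock object per userId, so transfers for different users don't block each other.
private static readonly ConcurrentDictionary<int, object> _userLocks =
    new ConcurrentDictionary<int, object>();

public void UpdateWalletAmount(int userId, int amount)
{
    var userLock = _userLocks.GetOrAdd(userId, _ => new object());
    lock (userLock)
    {
        Wallet w = _context.Wallets.Where(ww => ww.UserId == userId).FirstOrDefault();
        w.Amount = w.Amount + amount;
        w.Inserts++;
        _context.Wallets.Update(w);
        _context.SaveChanges();
    }
}
This needs using System.Collections.Concurrent; and, like any in-process lock, it only serializes requests within a single server process; once the API runs on multiple servers, a distributed lock (for example via Redis) is the appropriate tool.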
Why don't you handle the concurrency problem in the code? Why does it need to be in the DB layer?
You can have a method that updates the value of a given wallet by a given amount and use a simple lock there, like this:
private readonly object walletLock = new object();

public void UpdateWalletAmount(int userId, int amount)
{
    lock (walletLock)
    {
        Wallet w = _context.Wallets.Where(ww => ww.UserId == userId).FirstOrDefault();
        w.Amount = w.Amount + amount;
        w.Inserts++;
        _context.Wallets.Update(w);
        _context.SaveChanges();
    }
}
So your methods will look like this:
[HttpGet]
[Route("request1")]
public async Task<string> request1()
{
    try
    {
        UpdateWalletAmount(1, 10);
    }
    catch (Exception ex)
    {
        // log error
    }
    return "request 1 executed";
}

[HttpGet]
[Route("request2")]
public async Task<string> request2()
{
    try
    {
        UpdateWalletAmount(1, 20);
    }
    catch (Exception ex)
    {
        // log error
    }
    return "request 2 executed";
}
You don't even need to use a transaction in this context.

Behavior when Spring Propagation.REQUIRES_NEW is nested in a Propagation.NESTED

I have code like so:
@Transactional(propagation = Propagation.NESTED)
@TransactionConfiguration(transactionManager = "transactionManager", defaultRollback = true)
public class C1 {
    ...
    public void xxx() {
        try {
            obj.someMethod();
        } catch (Exception e) {
            C2.yyy();
        }
    }
}

public class C2 {
    @Transactional(propagation = Propagation.REQUIRES_NEW, readOnly = false)
    public void yyy() {
        ...
    }
}
My assumption is that when obj.someMethod() throws a constraint violation exception, C2.yyy() should still be able to save stuff to the DB.
However, what I see is that when C2.yyy() is called, Postgres reports:
ERROR: current transaction is aborted, commands ignored until end of transaction block
Why should it do that? After all, C2.yyy() should run in a different transaction that is unaffected by the state of whatever happened in the invoking code. No?
Update
On further debugging, here is what I found. Let's say the call stack is like so:
@Transactional(readOnly = false, propagation = Propagation.NESTED)
b1.m1()
    @Transactional(readOnly = false, propagation = Propagation.REQUIRES_NEW)
    b2.m2()
        b3.m3()
            b4.m4()
My assumption was that the DB code in m4() would execute in a new transaction because of the annotation on b2.m2(). But it seems the code in TransactionAspectSupport.java looks at the transaction annotation only on the current method and not up the stack. When it does not find any @Transactional on m4(), it assumes REQUIRED. Isn't this incorrect?
As answered in "DatabaseError: current transaction is aborted, commands ignored until end of transaction block": "This is what Postgres does when a query produces an error and you try to run another query without first rolling back the transaction."
Meaning that your 'obj' call should run in its own transaction and roll back on exception.
Regarding the question in the update: REQUIRES_NEW always creates a new transaction; this is both documented and tested.
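Keep in mind that REQUIRES_NEW, like any Spring transaction attribute, only takes effect when the call goes through the Spring proxy of the target bean. A minimal sketch of the wiring that makes C2.yyy() actually start its own transaction; the field injection shown here is an assumption (C2 is assumed to be registered as a Spring bean, and obj is the same collaborator as in the question):
public class C1 {

    @Autowired
    private C2 c2;   // injected Spring proxy, so its @Transactional advice applies

    @Transactional(propagation = Propagation.NESTED)
    public void xxx() {
        try {
            obj.someMethod();
        } catch (Exception e) {
            // Call through the proxy: the current transaction is suspended and
            // yyy() runs in a brand-new transaction of its own.
            c2.yyy();
        }
    }
}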

ZombieCheck Exception - This SqlTransaction has completed; it is no longer usable -- during simple commit

I have the following code, which commits a single row to a database table (SQL Server 2008 / .NET 4):
using (var db = new MyDbDataContext(_dbConnectionString))
{
    Action action = new Action();
    db.Actions.InsertOnSubmit(action);
    db.SubmitChanges();
}
Normally everything is fine, but once in a while I get the following exception:
System.InvalidOperationException: This SqlTransaction has completed; it is no longer usable.
at System.Data.SqlClient.SqlTransaction.ZombieCheck()
at System.Data.SqlClient.SqlTransaction.Rollback()
at System.Data.Linq.DataContext.SubmitChanges(ConflictMode failureMode)
There are a number of similar questions on SO, but after reading them I cannot work out the cause.
Could this simply be due to a SQL timeout (the exception occurs close to 25 seconds after the call is made)? Or should I expect a SQL timeout exception in that case?
Does anyone know what else may cause this?
The DataContext.SubmitChanges method has the following lines in its body:
// ...
try
{
    if (this.provider.Connection.State == ConnectionState.Open)
    {
        this.provider.ClearConnection();
    }
    if (this.provider.Connection.State == ConnectionState.Closed)
    {
        this.provider.Connection.Open();
        flag = true;
    }
    dbTransaction = this.provider.Connection.BeginTransaction(IsolationLevel.ReadCommitted);
    this.provider.Transaction = dbTransaction;
    new ChangeProcessor(this.services, this).SubmitChanges(failureMode);
    this.AcceptChanges();
    this.provider.ClearConnection();
    dbTransaction.Commit();
}
catch
{
    if (dbTransaction != null)
    {
        dbTransaction.Rollback();
    }
    throw;
}
// ...
When the connection times out, the catch block is executed and the dbTransaction.Rollback() line throws an InvalidOperationException.
If you had control over the code, you could catch the exception like this:
catch
{
    // Attempt to roll back the transaction.
    try
    {
        if (dbTransaction != null)
        {
            dbTransaction.Rollback();
        }
    }
    catch (Exception ex2)
    {
        // This catch block will handle any errors that may have occurred
        // on the server that would cause the rollback to fail, such as
        // a closed connection.
        Console.WriteLine("Rollback Exception Type: {0}", ex2.GetType());
        Console.WriteLine("  Message: {0}", ex2.Message);
    }
    throw;
}
YES! I had the same issue. The scary answer is that SQL Server sometimes rolls back a transaction on the server side when it encounters an error, and does not pass the error back to the client. YIKES!
Look on the Google Group microsoft.public.dotnet.framework.adonet for "SqlTransaction.ZombieCheck error"; Colberd Zhou [MSFT] explains it very well.
Also see aef123's comment on this SO post.
May I suggest that the connection closes before the transaction commits, and the transaction is then rolled back. Check this article on the MSDN blog.
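If the ~25-second timing really does point at a command timeout, one low-risk experiment is to raise the timeout on the DataContext and see whether the ZombieCheck error disappears. A minimal sketch, reusing the types from the question; the 60-second value is an arbitrary example, not a recommendation:
using (var db = new MyDbDataContext(_dbConnectionString))
{
    // Default command timeout is 30 seconds; raise it to rule timeouts out.
    db.CommandTimeout = 60;

    Action action = new Action();
    db.Actions.InsertOnSubmit(action);
    db.SubmitChanges();
}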

how to flush changes to the persistence manager

The scenario is an entity update where I write code as follows:
try {
    pm = PMF.get().getPersistenceManager();
    BloodDonor bloodDonor = pm.getObjectById(BloodDonor.class,
            bean.getBloodDonorSeq());
    bloodDonor.setFirstName(firstName);
} finally {
    try {
        if (pm != null && pm.isClosed() == false)
            pm.close();
    } catch (Exception e) {
        log.severe("Exception in finally of execute of updateDonor");
    }
    log.info("end of updateDonor");
}
So, when it hits the finally block and the pm is closed, the changes are written to the datastore. My question is: in a certain situation, say I want to roll back whatever I did in the try block before closing the pm, how do I do that? In other words, if I want to nullify the effect of the try block, how do I tell the pm to discard all changes?
You should look at transactions to accomplish what you want to do here.
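A minimal sketch, reusing the PMF helper and the BloodDonor entity from the question: begin a transaction, and roll it back in the finally block if it was never committed, so none of the pending changes reach the datastore.
PersistenceManager pm = PMF.get().getPersistenceManager();
Transaction tx = pm.currentTransaction();
try {
    tx.begin();
    BloodDonor bloodDonor = pm.getObjectById(BloodDonor.class,
            bean.getBloodDonorSeq());
    bloodDonor.setFirstName(firstName);
    tx.commit();                     // changes are flushed to the datastore here
} finally {
    if (tx.isActive()) {
        tx.rollback();               // discards everything done since begin()
    }
    pm.close();
}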
