Rolling back changes in a failure situation - google-app-engine

I have a situation where we update an entity, and then, based on certain types of updates, we may update another entity as well.
There are situations where the update to the second entity may fail for some reason and throw an exception.
The question is how to handle this situation, as we would like to roll back the changes made to the first entity.
We cannot defer the update to the first entity until after the second entity's update.
In the current code, as soon as execution reaches the block below, the first entity's changes are committed even if the second entity's update failed. So how do I roll back? I don't think leaving the PersistenceManager open when the second entity's update fails is the right option.
finally {
    try {
        if (pm != null && !pm.isClosed())
            pm.close();
    } catch (Exception e) {
        log.severe("Exception in finally of execute of updateDonor");
        log.severe("Exception class is :" + e.getClass().getName());
        log.severe("Exception is :" + e.getMessage());
        throw new Exception(e.getMessage()
                + " Unable to close persistence manager");
    }
    log.info("end of updateDonor");
}

I'm not sure I fully understand your situation, but would cross-group (XG) transactions, which allow a transaction to be applied to entities from more than one entity group, be what you are looking for? Search for 'cross-group transactions' on this page as well. With an XG transaction, either all changes to the entities encompassed by the transaction go through, or none do.
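As a minimal sketch of what that looks like with the low-level Datastore API (the entity kinds and IDs here are invented for illustration, not from your code):

import com.google.appengine.api.datastore.*;

public class TransferUpdater {
    // Sketch: update two entities from different entity groups atomically
    // using a cross-group (XG) transaction.
    public void updateBoth(long firstId, long secondId) throws EntityNotFoundException {
        DatastoreService ds = DatastoreServiceFactory.getDatastoreService();
        TransactionOptions options = TransactionOptions.Builder.withXG(true);
        Transaction txn = ds.beginTransaction(options);
        try {
            Entity first = ds.get(txn, KeyFactory.createKey("FirstKind", firstId));
            Entity second = ds.get(txn, KeyFactory.createKey("SecondKind", secondId));
            // ... apply your updates to both entities here ...
            ds.put(txn, first);
            ds.put(txn, second);
            txn.commit(); // both writes apply, or neither does
        } finally {
            if (txn.isActive()) {
                txn.rollback(); // any exception above rolls back both changes
            }
        }
    }
}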

Related

Aspect around Transactional service throws UnexpectedRollbackException

I have an aspect around the methods of a @Transactional service.
Unfortunately, when I catch errors in the aspect, another exception is thrown. How can I prevent this error?
I searched for similar questions, but none of the solutions seem adequate for my case.
@Aspect
@Component
public class ServiceGuard {

    @Pointcut("execution(* simgenealogy.service.*.*(..))")
    public void persistence() {}

    @Around("persistence()")
    public Object logPersistence(ProceedingJoinPoint joinPoint) {
        try {
            Object o = joinPoint.proceed();
            return o;
        } catch (ConstraintViolationException constraintException) {
            // (...)
            return null;
        } catch (Throwable throwable) {
            // (...)
            return null;
        }
    }
}
And the error log:
2019-07-29 02:10:37.979 ERROR 11300 --- [ion Thread] s.a.g.s.w.ServiceGuard :
Constraint violation: First Name cannot be empty
2019-07-29 02:10:38.023 ERROR 11300 --- [ion Thread] s.a.g.s.w.ServiceGuard :
Constraint violation: Last Name cannot by empty
Exception in thread "JavaFX Application Thread" org.springframework.transaction.UnexpectedRollbackException: Transaction silently rolled back because it has been marked as rollback-only
at org.springframework.transaction.support.AbstractPlatformTransactionManager.processCommit(AbstractPlatformTransactionManager.java:755)
at org.springframework.transaction.support.AbstractPlatformTransactionManager.commit(AbstractPlatformTransactionManager.java:714)
at org.springframework.transaction.interceptor.TransactionAspectSupport.commitTransactionAfterReturning(TransactionAspectSupport.java:534)
at org.springframework.transaction.interceptor.TransactionAspectSupport.invokeWithinTransaction(TransactionAspectSupport.java:305)
at org.springframework.transaction.interceptor.TransactionInterceptor.invoke(TransactionInterceptor.java:98)
at
You catch exceptions which probably occur for a reason, and then you just return null as the result of the previously executed (via proceed()) method, probably in situations where a non-null return value is expected.
As you do not provide any application code, this is speculative, but I assume that the null return value is then assigned to a data property which ought to have a non-null value (see the constraint violations in your log). More precisely, you set first and last names to null, which causes constraint violations, which in turn cause your transaction to be rolled back because mandatory data fields are not set.
How to fix this? Either return non-null default values for first/last name (sounds strange, though) or let the original exceptions escalate instead of swallowing them and causing follow-up problems.
Bottom line: your exception handling is just broken and needs fixing.
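A sketch of the "let it escalate" variant of the aspect from the question (assuming javax.validation's ConstraintViolationException; the SLF4J logger is my addition for illustration):

import javax.validation.ConstraintViolationException;

import org.aspectj.lang.ProceedingJoinPoint;
import org.aspectj.lang.annotation.Around;
import org.aspectj.lang.annotation.Aspect;
import org.aspectj.lang.annotation.Pointcut;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.stereotype.Component;

@Aspect
@Component
public class ServiceGuard {

    private static final Logger log = LoggerFactory.getLogger(ServiceGuard.class);

    @Pointcut("execution(* simgenealogy.service.*.*(..))")
    public void persistence() {}

    @Around("persistence()")
    public Object logPersistence(ProceedingJoinPoint joinPoint) throws Throwable {
        try {
            return joinPoint.proceed();
        } catch (ConstraintViolationException e) {
            log.error("Constraint violation: {}", e.getMessage());
            throw e; // rethrow: the caller and the transaction manager see the real failure
        }
    }
}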
OK, I found the solution.
@Transactional is also implemented as around advice. The problem was the ordering of the aspects: my guard class wasn't in fact around the transaction. Putting @Order(0) on the guard and @Order(1) on the @Transactional service solved the issue.
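For reference, the transaction advice's precedence can also be set globally instead of per service class; a sketch, assuming Java-config Spring with @EnableTransactionManagement (the @Order(0) stays on the ServiceGuard aspect):

import org.springframework.context.annotation.Configuration;
import org.springframework.transaction.annotation.EnableTransactionManagement;

@Configuration
@EnableTransactionManagement(order = 1) // transaction advice runs inside any @Order(0) aspect
public class TransactionConfig {
}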

Correct concurrency handling using EF Core 2.1 with SQL Server

I am currently working on an API using ASP.NET Core Web API along with Entity Framework Core 2.1 and a SQL Server database. The API is used to transfer money between two accounts, A and B. Given the nature of account B, which accepts payments, many concurrent requests might execute at the same moment. As you know, if this is not managed well, it can result in some users not seeing their payments arrive.
Having spent days trying to get the concurrency handling right, I can't figure out what the best approach is. For the sake of simplicity, I created a test project to reproduce this concurrency issue.
In the test project I have two routes, request1 and request2. Each performs a transfer to the same user; the first for an amount of 10 and the second for 20. I put a Thread.Sleep(10000) in the first one, as follows:
[HttpGet]
[Route("request1")]
public async Task<string> request1()
{
    using (var transaction = _context.Database.BeginTransaction(System.Data.IsolationLevel.Serializable))
    {
        try
        {
            Wallet w = _context.Wallets.Where(ww => ww.UserId == 1).FirstOrDefault();
            Thread.Sleep(10000);
            w.Amount = w.Amount + 10;
            w.Inserts++;
            _context.Wallets.Update(w);
            _context.SaveChanges();
            transaction.Commit();
        }
        catch (Exception ex)
        {
            transaction.Rollback();
        }
    }
    return "request 1 executed";
}

[HttpGet]
[Route("request2")]
public async Task<string> request2()
{
    using (var transaction = _context.Database.BeginTransaction(System.Data.IsolationLevel.Serializable))
    {
        try
        {
            Wallet w = _context.Wallets.Where(ww => ww.UserId == 1).FirstOrDefault();
            w.Amount = w.Amount + 20;
            w.Inserts++;
            _context.Wallets.Update(w);
            _context.SaveChanges();
            transaction.Commit();
        }
        catch (Exception ex)
        {
            transaction.Rollback();
        }
    }
    return "request 2 executed";
}
After executing request1 and then request2 in a browser, the first transaction is rolled back due to:
InvalidOperationException: An exception has been raised that is likely due to a transient failure. Consider enabling transient error resiliency by adding 'EnableRetryOnFailure()' to the 'UseSqlServer' call.
I could retry the transaction, but isn't there a better way, such as using locks?
Serializable is the most isolated level, and the most costly too. As the documentation says:
No other transactions can modify data that has been read by the current transaction until the current transaction completes.
This means no other transaction can update data that has been read by another transaction, which is working as intended here, since the update in the request2 route waits for the first transaction (request1) to commit.
The problem is that we need to block reads by other transactions once the current transaction has read the wallet row. To solve this I need locking, so that when the first SELECT statement in request1 executes, all subsequent transactions have to wait for the first one to finish before they can select the value. Since EF Core has no support for locking hints, I need to execute a SQL query directly; when selecting the wallet, I add a row lock to the selected row:
// This locks the wallet row with id 1;
// the default transaction isolation level is sufficient here.
Wallet w = _context.Wallets.FromSql("select * from wallets with (XLOCK, ROWLOCK) where id = 1").FirstOrDefault();
Thread.Sleep(10000);
w.Amount = w.Amount + 10;
w.Inserts++;
_context.Wallets.Update(w);
_context.SaveChanges();
transaction.Commit();
This now works perfectly: even after executing multiple requests, the combined result of all the transfers is correct. In addition, I am using a transactions table that holds every money transfer made, along with its status, to keep a record of each transfer; in case something goes wrong, I am able to recompute all wallet amounts from this table.
There are other ways of doing this, such as:
Stored procedures: but I want my logic to stay at the application level.
Making a synchronized method to handle the database logic: this way all the database requests are executed in a single thread. I read a blog post that advises this approach, but we may use multiple servers for scalability.
I don't know if I'm not searching well, but I can't find good material on handling pessimistic concurrency with Entity Framework Core. Even while browsing GitHub, most of the code I've seen doesn't use locking.
Which brings me to my question: is this the correct way of doing it?
Cheers and thanks in advance.
My suggestion is to catch DbUpdateConcurrencyException and use entry.GetDatabaseValues() and entry.OriginalValues.SetValues(databaseValues) in your retry logic; there is no need to lock the DB. Note that this optimistic approach only kicks in if the entity has a concurrency token configured (for example a rowversion column); without one, EF Core will not throw DbUpdateConcurrencyException for conflicting updates.
Here is the sample from the EF Core documentation page:
using (var context = new PersonContext())
{
    // Fetch a person from database and change phone number
    var person = context.People.Single(p => p.PersonId == 1);
    person.PhoneNumber = "555-555-5555";

    // Change the person's name in the database to simulate a concurrency conflict
    context.Database.ExecuteSqlCommand(
        "UPDATE dbo.People SET FirstName = 'Jane' WHERE PersonId = 1");

    var saved = false;
    while (!saved)
    {
        try
        {
            // Attempt to save changes to the database
            context.SaveChanges();
            saved = true;
        }
        catch (DbUpdateConcurrencyException ex)
        {
            foreach (var entry in ex.Entries)
            {
                if (entry.Entity is Person)
                {
                    var proposedValues = entry.CurrentValues;
                    var databaseValues = entry.GetDatabaseValues();

                    foreach (var property in proposedValues.Properties)
                    {
                        var proposedValue = proposedValues[property];
                        var databaseValue = databaseValues[property];

                        // TODO: decide which value should be written to database
                        // proposedValues[property] = <value to be saved>;
                    }

                    // Refresh original values to bypass next concurrency check
                    entry.OriginalValues.SetValues(databaseValues);
                }
                else
                {
                    throw new NotSupportedException(
                        "Don't know how to handle concurrency conflicts for "
                        + entry.Metadata.Name);
                }
            }
        }
    }
}
You can use a distributed lock mechanism, with Redis for example. You can also key the lock by userId, so that it does not block the method for other users.
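A sketch of that idea in Java using Redisson (your app is C#, where a distributed-lock library such as RedLock.net plays the same role; the per-user lock key is the point):

import org.redisson.Redisson;
import org.redisson.api.RLock;
import org.redisson.api.RedissonClient;
import org.redisson.config.Config;

public class WalletLocks {
    private final RedissonClient redisson;

    public WalletLocks() {
        Config config = new Config();
        config.useSingleServer().setAddress("redis://127.0.0.1:6379");
        this.redisson = Redisson.create(config);
    }

    // Hold a per-user distributed lock while updating that user's wallet,
    // so concurrent requests for the same user serialize across all servers.
    public void updateWalletAmount(int userId, int amount) {
        RLock lock = redisson.getLock("wallet-lock:" + userId);
        lock.lock();
        try {
            // ... read the wallet, add the amount, save ...
        } finally {
            lock.unlock();
        }
    }
}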
Why don't you handle the concurrency problem in the code? Why does it need to be in the DB layer?
You can have a method that updates the value of a given wallet by a given amount, using a simple lock, like this:
private readonly object walletLock = new object();

public void UpdateWalletAmount(int userId, int amount)
{
    lock (walletLock)
    {
        Wallet w = _context.Wallets.Where(ww => ww.UserId == userId).FirstOrDefault();
        w.Amount = w.Amount + amount;
        w.Inserts++;
        _context.Wallets.Update(w);
        _context.SaveChanges();
    }
}
So your methods will look like this:
[HttpGet]
[Route("request1")]
public async Task<string> request1()
{
    try
    {
        UpdateWalletAmount(1, 10);
    }
    catch (Exception ex)
    {
        // log error
    }
    return "request 1 executed";
}

[HttpGet]
[Route("request2")]
public async Task<string> request2()
{
    try
    {
        UpdateWalletAmount(1, 20);
    }
    catch (Exception ex)
    {
        // log error
    }
    return "request 2 executed";
}
You don't even need to use a transaction in this context.

Transactions and Locking with Appengine

I have code similar to the below, and I'm trying to figure out transaction locking:
DAOT.repeatInTransaction(new Transactable() {
    @Override
    public void run(DAOT daot)
    {
        Points points = daot.ofy().find(Points.class, POINTS_ID);
        // do something with points
        takes_a_very_long_time_delay(); // perhaps 10 secs
        daot.ofy().put(points);
    }
});
The code above is executed from within a Java servlet. The operation is expected to take about 10 seconds. Within that time, I have a test that invokes another servlet that deletes a Points entity. I was expecting the delete operation to fail, or at least to delete the entity only after the transaction above had finished.
However, the entity was deleted while the above code was still executing. In my real application, I added exception handling to throw an exception when trying to access or edit an entity that does not exist.
From there, the application throws an "Entity not found" exception just after I execute the servlet that deletes the entity used by the code above.
Although I am already using GAE transactions, I think I am still missing something, which is why my test fails.
Code for the delete Transaction from withing the Delete servlet:
DAOT.repeatInTransaction(new Transactable() {
    @Override
    public void run(DAOT daot)
    {
        Points points = daot.ofy().find(Points.class, POINTS_ID);
        daot.ofy().delete(points);
    }
});
How can I ensure that a new operation, like a delete of an entity, waits until the current transaction on that entity has finished?
App Engine uses optimistic concurrency, not locking. That is, a transaction on a group of entities will not prevent other processes from modifying those entities while the transaction runs. Instead, when the transaction attempts to commit, it checks whether any modifications were made while the transaction was executing; if so, it discards any changes and runs your function again from the beginning.
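In the low-level Java API, that commit conflict surfaces as a java.util.ConcurrentModificationException. A minimal sketch of the retry-from-the-beginning behaviour described above (the kind name matches your Points entity, but the retry count and structure are illustrative):

import java.util.ConcurrentModificationException;

import com.google.appengine.api.datastore.DatastoreService;
import com.google.appengine.api.datastore.DatastoreServiceFactory;
import com.google.appengine.api.datastore.Entity;
import com.google.appengine.api.datastore.EntityNotFoundException;
import com.google.appengine.api.datastore.KeyFactory;
import com.google.appengine.api.datastore.Transaction;

public class PointsUpdater {
    public void updatePoints(long pointsId) throws EntityNotFoundException {
        DatastoreService ds = DatastoreServiceFactory.getDatastoreService();
        for (int attempt = 0; attempt < 3; attempt++) {
            Transaction txn = ds.beginTransaction();
            try {
                Entity points = ds.get(txn, KeyFactory.createKey("Points", pointsId));
                // ... modify points here ...
                ds.put(txn, points);
                txn.commit(); // throws if another process modified the entity group
                return;
            } catch (ConcurrentModificationException e) {
                // Someone else committed first: loop and rerun from the beginning.
            } finally {
                if (txn.isActive()) {
                    txn.rollback();
                }
            }
        }
    }
}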
I assume you use Objectify to work with the datastore.
First, make sure daot.ofy() returns an Objectify instance with an explicit transaction set (ObjectifyFactory.beginTransaction()) instead of ObjectifyFactory.begin(). Then make sure you use the same Objectify instance for both the find() and delete() calls (as well as for find()/put() pairs).
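A sketch of that pattern, assuming Objectify 3.x (where beginTransaction() hands back an instance bound to a datastore transaction):

import com.googlecode.objectify.Objectify;
import com.googlecode.objectify.ObjectifyService;

public class PointsDeleter {
    public void deletePoints(long pointsId) {
        // One Objectify instance with an explicit transaction;
        // use the same instance for every operation inside it.
        Objectify ofy = ObjectifyService.beginTransaction();
        try {
            Points points = ofy.find(Points.class, pointsId);
            if (points != null) {
                ofy.delete(points);
            }
            ofy.getTxn().commit();
        } finally {
            if (ofy.getTxn().isActive()) {
                ofy.getTxn().rollback();
            }
        }
    }
}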

Google App Engine atomic section?

Say you retrieve a set of records from the datastore (something like: select * from MyClass where reserved='false').
How do I ensure that another user hasn't reserved them in the meantime, i.e. that reserved is still false when I update them?
I've looked in the transaction documentation and was shocked by Google's solution, which is to catch the exception and retry in a loop.
Is there a solution I'm missing? It's hard to believe that there's no way to have an atomic operation in this environment.
(By the way, I could use synchronized inside the servlet, but I think that's not valid as there's no way to ensure that there's only one instance of the servlet object, is there? The same applies to a static-variable solution.)
Any idea on how to solve this?
(Here's the Google solution:
(here's the google solution:
http://code.google.com/appengine/docs/java/datastore/transactions.html#Entity_Groups
look at:
Key k = KeyFactory.createKey("Employee", "k12345");
Employee e = pm.getObjectById(Employee.class, k);
e.counter += 1;
pm.makePersistent(e);
'This requires a transaction because the value may be updated by another user after this code fetches the object, but before it saves the modified object. Without a transaction, the user's request will use the value of counter prior to the other user's update, and the save will overwrite the new value. With a transaction, the application is told about the other user's update. If the entity is updated during the transaction, then the transaction fails with an exception. The application can repeat the transaction to use the new data'
Horrible solution, isn't it?
You are correct that you cannot use synchronized or a static variable.
You are incorrect that it is impossible to have an atomic action in the App Engine environment. (See what atomic means here.) When you do a transaction, it is atomic: either everything happens, or nothing happens. It sounds like what you want is some kind of global locking mechanism. In the RDBMS world, that might be something like "select for update" or setting your transaction isolation level to serializable. Neither of those options is very scalable. Or, as you would say, they are both horrible solutions :)
If you really want global locking in App Engine, you can do it, but it will be ugly and seriously impair scalability. All you need to do is create some kind of CurrentUser entity, where you store the username of the current user who holds the global lock. Before you let a user do anything, you first check that no user is already listed as the CurrentUser, and then write that user's key into the CurrentUser entity. The check and the write have to be in a transaction. This way, only one user will ever be "Current" and therefore hold the global lock.
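A sketch of that check-and-write using the low-level Datastore API (the CurrentUser kind and its owner property are hypothetical, as in the description above):

import com.google.appengine.api.datastore.DatastoreService;
import com.google.appengine.api.datastore.DatastoreServiceFactory;
import com.google.appengine.api.datastore.Entity;
import com.google.appengine.api.datastore.EntityNotFoundException;
import com.google.appengine.api.datastore.Key;
import com.google.appengine.api.datastore.KeyFactory;
import com.google.appengine.api.datastore.Transaction;

public class GlobalLock {
    // Try to acquire the global lock for `username`.
    // Returns false if some other user currently holds it.
    public boolean tryAcquire(String username) {
        DatastoreService ds = DatastoreServiceFactory.getDatastoreService();
        Transaction txn = ds.beginTransaction();
        try {
            Key lockKey = KeyFactory.createKey("CurrentUser", "global-lock");
            Entity lock;
            try {
                lock = ds.get(txn, lockKey);
                if (lock.getProperty("owner") != null) {
                    return false; // somebody else is Current
                }
            } catch (EntityNotFoundException e) {
                lock = new Entity(lockKey); // first ever acquisition
            }
            lock.setProperty("owner", username);
            ds.put(txn, lock);
            // Commit throws ConcurrentModificationException if another
            // request raced us, so at most one acquirer wins.
            txn.commit();
            return true;
        } finally {
            if (txn.isActive()) {
                txn.rollback();
            }
        }
    }
}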
Do you mean like this:
public void func(Data data2) {
    String query = "select from " + ObjectA.class.getName()
            + " where reserved == false";
    List<ObjectA> table = (List<ObjectA>) pm.newQuery(query).execute();
    for (ObjectA row : table) {
        Data data1 = row.getData1();
        row.setWeight(JUtils.CalcWeight(data1, data2));
    }
    Collections.sort(table, new ObjectA.SortByWeight());

    int retries = 0;
    final int NUM_RETRIES = 10;
    for (int i = 0; i < table.size(); i++) {
        retries++;
        pm.currentTransaction().begin(); // <---- BEGIN
        ObjectA obj = pm.getObjectById(ObjectA.class, table.get(i).getKey());
        if (obj.getReserved() == false) // <--- CHECK if still unreserved
            obj.setReserved(true);
        else
            break;
        try {
            pm.currentTransaction().commit();
            break;
        } catch (JDOCanRetryException ex) {
            if (retries == NUM_RETRIES) {
                throw ex;
            }
            i--; // so we retry again on the same object
        }
    }
}

Try Catch block in Siebel

I have a script which writes a set of records to a file. I'm using a try-catch block to handle the exceptions. In the catch block I have code that moves the pointer to the next record, but it is not executing. Basically, I want to skip the bad record and move on to the next one.
while (currentrecord)
{
    try
    {
        writerecord event
    }
    catch
    {
        currentrecord = next record
    }
}
In most languages (unless you're using something very strange), if 'writerecord event' doesn't throw an exception, the catch block will not be called.
Don't you mean:
while(currentrecord) {
try { writerecord event }
catch { log error }
finally { currentrecord = next record}
}
Are you trying to loop through some records that are returned by a query? Do something like this:
var yourBusObject = TheApplication().GetBusObject("Your Business Object Name");
var yourBusComp = yourBusObject.GetBusComp("Your Business Component Name");
// activate fields that you need to access or update here
yourBusComp.ClearToQuery();
// set search specs here
yourBusComp.ExecuteQuery(ForwardOnly);
if (yourBusComp.FirstRecord()) {
do {
try {
// update the fields here
yourBusComp.WriteRecord();
} catch (e) {
// undo any changes so we can go to the next record
// If you don't do this I believe NextRecord() will implicitly save and trigger the exception again.
yourBusComp.UndoRecord();
// maybe log the error here, or just ignore it
}
} while (yourBusComp.NextRecord());
}
You can use a try-finally structure so that whatever is inside the finally block is always executed, regardless of whether the code throws an exception or not. It's often used to clean up resources such as closing files or connections. Without a catch clause, any exception thrown in your try block will abort execution, jump to your finally block, and run that code.
Agreed that 'finally' might be the best bet here, but do we know what the exception actually is? Can you output it in your catch block, so that:
A) you can prove an exception is being thrown (rather than, say, a null being returned or something), and
B) you can make sure the exception you get isn't something that could prevent 'next record' from working as well? (I'm not sure what the 'finally' would achieve in that case; presumably the exception would have to bubble up to the calling code.)
So you're trying to move on to the next record if you failed to commit this one. Robert Muller had it right. To explain...
If the WriteRecord fails, then the business component will still be positioned on the dirty record. Attempting to move to the next record will make the buscomp try to write it again, because of a feature called "implicit saving".
Solution: you'll have to undo the record (UndoRecord) to abandon your failing field changes before moving on to the next one.
