Java: more than one DB connection in a UserTransaction

static void clean() throws Exception {
    final UserTransaction tx = InitialContext.doLookup("UserTransaction");
    tx.begin();
    try {
        final DataSource ds = InitialContext.doLookup(Databases.ADMIN);
        Connection connection1 = ds.getConnection();
        Connection connection2 = ds.getConnection();
        PreparedStatement st1 = connection1.prepareStatement("XXX delete records XXX"); // delete data
        PreparedStatement st2 = connection2.prepareStatement("XXX insert records XXX"); // insert new data with the same primary keys as the deleted data above
        st1.executeUpdate();
        st1.close();
        connection1.close();
        st2.executeUpdate();
        st2.close();
        connection2.close();
        tx.commit();
    } finally {
        if (tx.getStatus() == Status.STATUS_ACTIVE) {
            tx.rollback();
        }
    }
}
I have a web app in which each DAO takes a DataSource and creates its own connection to perform database operations.
Inside one UserTransaction there are two DAO objects doing separate work: the first performs a deletion and the second an insertion. The deletion removes some records so the insertion can take place, because the insertion inserts data with the same primary keys as the deleted records.
I took out the DAO layer and translated the logic into the code above. There is one thing I can't understand: based on the code above, the insertion should fail. The code (inside the UserTransaction) takes two different connections that know nothing about each other, and the deletion obviously hasn't been committed yet, so the second statement (the insertion) should fail with a unique-constraint violation; because the two operations are not on the same connection, the second connection shouldn't be able to see the uncommitted changes. Surprisingly it doesn't fail, and both statements work perfectly.
Can anyone explain this? Is there some configuration that achieves this result, or is my understanding wrong?

Since your application is running in WebLogic Server, the Java EE container is managing the transaction and the connection for you. If you call DataSource#getConnection multiple times within a Java EE transaction, you will get multiple Connection instances joining the same transaction. Usually those connections reach the database over the same session. With Oracle you can check that with the following snippet in a @Stateless EJB:
@Resource(lookup = "jdbc/myDS")
private DataSource ds;

@TransactionAttribute(TransactionAttributeType.REQUIRES_NEW)
@Schedule(hour = "*", minute = "*", second = "42")
public void testDatasource() throws SQLException {
    try (Connection con1 = ds.getConnection();
         Connection con2 = ds.getConnection()) {
        String sessId1 = null, sessId2 = null;
        try (ResultSet rs1 = con1.createStatement().executeQuery("select userenv('SESSIONID') from dual")) {
            if (rs1.next()) sessId1 = rs1.getString(1);
        }
        try (ResultSet rs2 = con2.createStatement().executeQuery("select userenv('SESSIONID') from dual")) {
            if (rs2.next()) sessId2 = rs2.getString(1);
        }
        LOG.log(Level.INFO, "con1={0}, con2={1}, sessId1={2}, sessId2={3}",
                new Object[]{con1, con2, sessId1, sessId2});
    }
}
This results in the following log message:
con1=com.sun.gjc.spi.jdbc40.ConnectionWrapper40@19f32aa,
con2=com.sun.gjc.spi.jdbc40.ConnectionWrapper40@1cb42e0,
sessId1=9347407,
sessId2=9347407
Note that you get different Connection instances with the same session ID.
For more details see e.g. this question.

The only way to do this properly is to use a transaction manager and two-phase commit (XA) drivers for all databases involved in the transaction.
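For illustration, here is a minimal sketch of the original clean() reworked that way. The JNDI names jdbc/firstXADS and jdbc/secondXADS are assumptions; both would have to be configured as XA-capable data sources in the container.
static void cleanWithXA() throws Exception {
    final UserTransaction tx = InitialContext.doLookup("UserTransaction");
    tx.begin();
    try {
        // Assumed JNDI names of two container-managed XA data sources.
        final DataSource ds1 = InitialContext.doLookup("jdbc/firstXADS");
        final DataSource ds2 = InitialContext.doLookup("jdbc/secondXADS");
        try (Connection c1 = ds1.getConnection();
             Connection c2 = ds2.getConnection();
             PreparedStatement st1 = c1.prepareStatement("XXX delete records XXX");
             PreparedStatement st2 = c2.prepareStatement("XXX insert records XXX")) {
            st1.executeUpdate();
            st2.executeUpdate();
        }
        tx.commit(); // the transaction manager runs two-phase commit across both resources
    } finally {
        if (tx.getStatus() == Status.STATUS_ACTIVE) {
            tx.rollback();
        }
    }
}
With XA resources the container keeps both connections enlisted until tx.commit(), so closing the logical connections early no longer commits anything behind the transaction manager's back.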

My guess is that you have autocommit enabled on the connections. This is the default when creating a new connection, as is documented here
https://docs.oracle.com/javase/tutorial/jdbc/basics/transactions.html
System.out.println(connection1.getAutoCommit());
will most likely print true.
You could try
connection1.setAutoCommit(false);
and see if that changes the behavior.
In addition to that, it's not really defined what happens if you call close() on a connection without having issued a commit or rollback beforehand. Therefore it is strongly recommended to issue one of the two before closing the connection; see https://docs.oracle.com/javase/7/docs/api/java/sql/Connection.html#close()
EDIT 1:
If autocommit is false, then it's probably due to the undefined behavior of close(). What happens if you switch the statements?
st2.executeUpdate();
st2.close();
connection2.close();
st1.executeUpdate();
st1.close();
connection1.close();
EDIT 2:
You could also try the "correct" way of doing it:
st1.executeUpdate();
st1.close();
st2.executeUpdate();
st2.close();
tx.commit();
connection1.close();
connection2.close();
If that doesn't fail, then something is wrong with your setup for UserTransactions.

Depending on your database this is quite a normal case.
An object implementing the UserTransaction interface represents a "logical" transaction. It doesn't always map to a real, "physical" transaction that the database engine respects.
For example, there are situations that cause implicit commits (as well as implicit starts) of transactions. In case of Oracle (can't vouch for other DBs), closing a connection is one of them.
From Oracle's docs:
"If the auto-commit mode is disabled and you close the connection
without explicitly committing or rolling back your last changes, then
an implicit COMMIT operation is run".
But there can be other possible reasons for implicit commits: select for update, various locking statements, DDLs, and so on. They are database-specific.
So, back to our code.
The first transaction is committed by closing a connection.
Then another transaction is implicitly started by the DML on the second connection. It inserts non-conflicting changes, and the second connection.close() commits them without a PK violation. tx.commit() never gets a chance to commit anything (and how could it? the connections are already closed).
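Annotated against the original code, and assuming a driver that (like Oracle's) commits implicitly on close, the sequence the database actually sees looks like this:
st1.executeUpdate();   // the delete runs in an implicit database transaction
st1.close();
connection1.close();   // implicit COMMIT: the deleted rows are gone for good
st2.executeUpdate();   // a new implicit transaction starts; the old rows are
                       // already deleted, so there is no PK violation
st2.close();
connection2.close();   // implicit COMMIT of the insert
tx.commit();           // nothing left for the "logical" transaction to commit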
The bottom line: "logical" transaction managers don't always give you the full picture.
Sometimes transactions are started and committed without an explicit reason. And sometimes they are even ignored by a DB.
PS: I assumed you are using Oracle, but the same holds true for other databases as well. For example, see MySQL's list of implicit commit reasons.

If auto-commit mode is disabled and you close the connection
without explicitly committing or rolling back your last changes,
then an implicit COMMIT operation is executed.
Please check the link below for details:
http://in.relation.to/2005/10/20/pop-quiz-does-connectionclose-result-in-commit-or-rollback/

Related

Suspending Azure function until Entity Framework makes changes

I have an Azure function (IoT Hub trigger) that:
selects the top 1 record ordered by time in descending order
compares it with a new record that comes in
writes the incoming record only if it differs from the selected one (some fields are different)
The issue pops up when records come into the Azure function very rapidly: I end up with duplicates in the database. I guess this is because SQL Server doesn't have enough time to apply the changes before the next record arrives, so when the Azure function selects the latest record, it actually receives an outdated one.
I use EF Core.
I believe the issue is not with the function but with the transactional nature of the operation you described. To solve your issue trivially, you can try using a transaction with the highest isolation level:
using (var transaction = new TransactionScope(
    TransactionScopeOption.Required,
    new TransactionOptions
    {
        // With this isolation level all data modifications are sequential
        IsolationLevel = IsolationLevel.Serializable
    }))
{
    using (var connection = new SqlConnection("YOUR CONNECTION"))
    {
        connection.Open();
        try
        {
            // Run a raw ADO.NET command in the transaction
            var command = connection.CreateCommand();
            // Your reading query (just for example's sake)
            command.CommandText = "SELECT TOP 1 * FROM dbo.Whatever";
            var result = command.ExecuteScalar();
            // Run an EF Core command in the transaction
            var options = new DbContextOptionsBuilder<TestContext>()
                .UseSqlServer(connection)
                .Options;
            using (var context = new TestContext(options))
            {
                context.Items.Add(result);
                context.SaveChanges();
            }
            // Commit the transaction if all commands succeed; the transaction will
            // auto-rollback when disposed if either command fails
            transaction.Complete();
        }
        catch (System.Exception)
        {
            // TODO: Handle failure
        }
    }
}
You should adjust the code to your needs, but you get the idea.
That said, I would rather avoid the problem entirely and not modify any records at all, but instead insert them and select the latest afterwards. Transactions are tricky in applications; applied in the wrong place or in the wrong way, they can cause performance degradation and deadlocks.

How should I properly handle CommandTimeout when calling ExecuteSqlCommand

I have some long running commands in stored procedures that are at risk of timing out, and I run them using Context.Database.ExecuteSqlCommand
It would appear that when a command times out, it leaves a lock in the database because the transaction is not rolled back.
I found an explanation for that here: CommandTimeout – How to handle it properly?
Based on the linked example I changed my code to:
Database database = Context.Database;
try
{
    return database.ExecuteSqlCommand(sql, parameters);
}
catch (SqlException e)
{
    //Transactions can stay open after a CommandTimeout,
    //so need to roll back any open transactions
    if (e.Number == -2) //CommandTimeout occurred
    {
        //A single rollback exits all levels of nested transactions,
        //no need to loop.
        database.ExecuteSqlCommand("IF @@TRANCOUNT > 0 ROLLBACK TRAN;");
    }
    throw;
}
However, that threw an exception inside the catch, because the connection is now null:
ArgumentNullException: Value cannot be null.
Parameter name: connection
Following the comments from Annie and usr I changed my code to this:
Database database = Context.Database;
using (var tran = database.BeginTransaction())
{
    try
    {
        int result = database.ExecuteSqlCommand(sql, parameters);
        tran.Commit();
        return result;
    }
    catch (SqlException)
    {
        var debug = database.SqlQuery<Int16>("SELECT @@SPID");
        tran.Rollback();
        throw;
    }
}
I really thought that would do it, but the locks in the database continue to accumulate when I set my CommandTimeout to a really small value to test it out.
I put a breakpoint at the throw, so I know the transaction has been rolled back. The debug variable tells me the session ID, and when I check my locks with this query: SELECT * FROM sys.dm_tran_locks, I find a match for the session ID in request_session_id, but it's a lock that was already there, not one of the new ones, so I'm a bit confused.
So, how should I properly handle CommandTimeout when using ExecuteSqlCommand to ensure locks are released immediately?
I downloaded sp_whoisactive and ran it; the SPID appears to be linked to a query on tables used by Hangfire (I am using Hangfire to run the long-running queries in a background process). So I think that perhaps I am barking up the wrong tree. I did have a problem with locking, but I've rewritten my own queries to avoid locking too many rows and I've disabled lock escalation on the tables where I had a problem. These last locks may be coming from Hangfire and may not be significant; nonetheless, I've decided to go with XACT_ABORT ON for now.

Cassandra data reappearing

Inspired by this, I wrote a simple mutex on Cassandra 2.1.4.
Here is how the lock/unlock (pseudo) code looks:
public boolean lock(String uuid) {
    try {
        Statement stmt = new SimpleStatement("INSERT INTO LOCK (id) VALUES (?) IF NOT EXISTS", uuid);
        stmt.setConsistencyLevel(ConsistencyLevel.QUORUM);
        ResultSet rs = session.execute(stmt);
        if (rs.wasApplied()) {
            return true;
        }
    } catch (Throwable t) {
        Statement stmt = new SimpleStatement("DELETE FROM LOCK WHERE id = ?", uuid);
        stmt.setConsistencyLevel(ConsistencyLevel.QUORUM);
        session.execute(stmt); // DATA DELETED HERE REAPPEARS!
    }
    return false;
}

public void unlock(String uuid) {
    try {
        Statement stmt = new SimpleStatement("DELETE FROM LOCK WHERE id = ?", uuid);
        stmt.setConsistencyLevel(ConsistencyLevel.QUORUM);
        session.execute(stmt);
    } catch (Throwable t) {
    }
}
Now, I am able to recreate at will a situation where a WriteTimeoutException is thrown in lock() during a high-load test. This means the data may or may not have been written. After this my code deletes the lock, and again a WriteTimeoutException is thrown. However, the lock remains (or reappears).
Why is this?
Now I know I can easily put a TTL on this table (for this usecase), but how do I reliably delete that row?
My guess on seeing this code is that it makes a common error in distributed-systems programming: it assumes that, in case of failure, your attempt to correct the failure will succeed.
In the code above you check that the initial write is successful, but you don't make sure that the "rollback" is also successful. This can lead to a variety of unwanted states.
Let's imagine a few scenarios with Replicas A, B and C.
Client creates Lock but an error is thrown. The lock is present on all replicas but the client gets a timeout because that connection is lost or broken.
State of System
A[Lock], B[Lock], C[Lock]
We have an exception on the client and attempt to undo the lock by issuing a delete but this fails with an exception back at the client. This means the system can be in a variety of states.
0 Successful Writes of the Delete
A[Lock], B[Lock], C[Lock]
All quorum requests will see the Lock. There exists no combination of replicas which would show us the Lock has been removed.
1 Successful Writes of the Delete
A[Lock], B[Lock], C[]
In this case we are still vulnerable. Any request which excludes C as part of the quorum call will miss the deletion. If only A and B are polled then we'll still see the lock existing.
2/3 Successful Writes of the Delete (Quorum CL Is Met)
A[Lock or empty], B[], C[]
In this case we have once more lost the connection to the driver but somehow succeeded internally in replicating the delete request. These scenarios are the only ones in which we are actually safe and that future reads will not see the Lock.
Conclusion
One of the tricky things with situations like this is that if you fail to acquire your lock correctly because of network instability, it is also unlikely that your correction will succeed, since it has to work in the exact same environment.
This may be an instance where CAS operations can be beneficial. But in most cases it is better not to use distributed locking at all if you can avoid it.
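If you do go the TTL route mentioned in the question, a sketch of the acquisition path using the same 2.x Java driver API as above might look like this; the 30-second TTL is an assumed value, it just has to exceed any legitimate lock hold time.
// Acquire the lock with a TTL so that a lock whose delete later times out
// still expires on its own instead of lingering forever.
Statement stmt = new SimpleStatement(
        "INSERT INTO LOCK (id) VALUES (?) IF NOT EXISTS USING TTL 30", uuid);
stmt.setConsistencyLevel(ConsistencyLevel.QUORUM);
ResultSet rs = session.execute(stmt);
if (rs.wasApplied()) {
    return true; // lock acquired; it disappears by itself if unlock() never runs
}
return false;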

How to handle unique constraint exception to update row after failing to insert?

I am trying to handle near-simultaneous input to my Entity Framework application. Members (users) can rate things, so I have a table for their ratings, where one column is the member's ID, one is the ID of the thing they're rating, one is the rating, and another is the time they rated it. The most recent rating is supposed to override earlier ratings. When I receive input, I check whether the member has already rated the thing; if they have, I update the rating in the existing row, and if they haven't, I add a new row. I noticed that when input comes in from the same user for the same item at nearly the same time, I end up with two ratings for that user for the same thing.
Earlier I asked this question: How can I avoid duplicate rows from near-simultaneous SQL adds? and I followed the suggestion to add a SQL constraint requiring unique combinations of MemberID and ThingID, which makes sense, but I am having trouble getting this technique to work, probably because I don't know the syntax for doing what I want when the exception occurs. The exception says the constraint was violated; what I would like to do then is abandon the attempted illegal addition of a row with the same MemberID and ThingID, fetch the existing row instead, and simply set its values to this slightly more recent data. However, I have not been able to come up with syntax that does that. I have tried a few things, and I always get an exception when I try to SaveChanges after catching the first exception: either the unique constraint violation comes up again, or I get a deadlock exception.
The latest version I tried was like this:
// Get the member's rating for the thing, or create it.
Member_Thing_Rating memPref = (from mip in _myEntities.Member_Thing_Rating
                               where mip.thingID == thingId
                               where mip.MemberID == memberId
                               select mip).FirstOrDefault();
bool RetryGet = false;
if (memPref == null)
{
    using (TransactionScope txScope = new TransactionScope())
    {
        try
        {
            memPref = new Member_Thing_Rating();
            memPref.MemberID = memberId;
            memPref.thingID = thingId;
            memPref.EffectiveDate = DateTime.Now;
            _myEntities.Member_Thing_Rating.AddObject(memPref);
            _myEntities.SaveChanges();
        }
        catch (Exception ex)
        {
            Thread.Sleep(750);
            RetryGet = true;
        }
    }
    if (RetryGet == true)
    {
        memPref = (from mip in _myEntities.Member_Thing_Rating
                   where mip.thingID == thingId
                   where mip.MemberID == memberId
                   select mip).FirstOrDefault();
    }
}
After writing the above, I also tried wrapping the logic in a function call, because it seems like Entity Framework cleans up database transactions when leaving the scope from which the changes were submitted. So instead of using TransactionScope and managing the exception at the same level as above, I wrapped the whole thing inside a managing function, like this:
bool Succeeded = false;
while (Succeeded == false)
{
    Thread.Sleep(750);
    Exception Problem = AttemptToSaveMemberIngredientPreference(memberId, ingredientId, rating);
    if (Problem == null)
        Succeeded = true;
    else
    {
        Exception BaseEx = Problem.GetBaseException();
    }
}
But this only results in an unending string of exceptions on the unique constraint, handled forever by the higher-level function. I have a 3/4-second delay between attempts, so I am surprised that a conflict can be reported yet nothing is found when I query for the row. I suppose that indicates all of the threads are failing because they run at the same time, and Entity Framework notices them all and fails them all before any succeed. So I suppose there should be a way to respond to the exception by looking at all the submissions and adjusting them? I don't know or see the syntax for that. So again, what is the way to handle this?
Update:
Paddy makes three good suggestions below. I expect his Stored Procedure technique would work around the problem, but I am still interested in the answer to the question. That is, surely one should be able to respond to this exception by manipulating the submission, but I haven't yet found the syntax to get it to insert one row and use the latest value.
To quote Eric Lippert, "if it hurts, stop doing it". If you are anticipating very high volumes and you want to do an 'insert or update', then you may want to consider handling this within a stored procedure instead of using the methods outlined above.
Your problem is coming because there is a small gap between your call to the DB to check for existence and your insert/update.
The sproc could use a MERGE to do the insert or update in a single pass on the table, guaranteeing that you will only see a single row for a rating and that it will be the most recent update you receive.
Note - you can include the sproc in your EF model and call it using similar EF syntax.
Note 2 - Looking at your code, you don't roll back the transaction scope prior to sleeping your thread in the case of an exception. This is a relatively long time to hold a transaction open, particularly when you are expecting very high volumes. You may want to update your code to something like this:
try
{
    memPref = new Member_Thing_Rating();
    memPref.MemberID = memberId;
    memPref.thingID = thingId;
    memPref.EffectiveDate = DateTime.Now;
    _myEntities.Member_Thing_Rating.AddObject(memPref);
    _myEntities.SaveChanges();
    txScope.Complete();
}
catch (Exception ex)
{
    txScope.Dispose();
    Thread.Sleep(750);
    RetryGet = true;
}
This may be why you seem to be suffering from deadlocks when you retry, particularly if you are getting rapid concurrent requests.

Why is qry.Post executed in asynchronous mode?

Recently I ran into a strange problem; see the code snippet below:
var
  sqlCommand: string;
  connection: TADOConnection;
  qry: TADOQuery;
begin
  connection := TADOConnection.Create(nil);
  try
    connection.ConnectionString := 'Provider=Microsoft.Jet.OLEDB.4.0;Data Source=Test.MDB;Persist Security Info=False';
    connection.Open();
    qry := TADOQuery.Create(nil);
    try
      qry.Connection := connection;
      qry.SQL.Text := 'Select * from aaa';
      qry.Open;
      qry.Append;
      qry.FieldByName('TestField1').AsString := 'test';
      qry.Post;
      beep;
    finally
      qry.Free;
    end;
  finally
    connection.Free;
  end;
end;
First, create a new Access database named test.mdb and put it in the directory of this test project; in it, create a new table named aaa with a single text field named TestField1.
Set a breakpoint at the "beep" line, then launch the test application under the IDE in debug mode. When the IDE stops at the breakpoint (qry.Post has been executed), use Microsoft Access to open test.mdb and open table aaa: you will find no changes in table aaa. If you let the IDE continue running by pressing F9, you will find that a new record has been inserted into table aaa; but if you press Ctrl+F2 to terminate the application at the breakpoint, you will find that no record has been inserted. Under normal circumstances, a new record should be inserted into table aaa after qry.Post executes.
Can anyone explain this problem? It has troubled me for a long time. Thanks!
BTW, the IDE is Delphi 2010, and the Access MDB file was created with Microsoft Access 2007 under Windows 7.
Access won't show you records from transactions that haven't been committed yet. At the point where you pause your program, the implicit transaction created by the connection hasn't been committed yet. I haven't experimented, but my guess would be that the implicit transaction is committed after you free the query. So if you pause just after that, you should see your record in MS Access.
After more information from Ryan (see his answer to himself), I did a little more investigating.
Having a primary key (autonumber or otherwise) doesn't seem to affect the behaviour.
Table with autonumber column as primary key
connection.Execute('insert into aaa (TestField1) values (''Test'')');
connection.Execute('select * from aaa');
connection.Execute('delete * from aaa');
beep;
finally
connection.Free;
end;
Stopping on the "select" does not show the new record.
Stopping on the "delete" shows the new record.
Stopping on the "beep" still shows all records in the table even after repeated refresh's.
Stopping on the "connection.Free" shows no more records in the table. Huh?
Stopping on a "select" inserted between the "delete" and the "beep" shows no more records in the table.
Same table
connection.Execute('insert into aaa (TestField1) values (''Test'')');
beep;
connection.Execute('delete * from aaa');
beep;
beep;
Stopping on each statement shows that Access doesn't receive the "command" until at least one other statement has been executed. In other words: the beep after the "Execute" statement must have been processed before the statement is processed by Access (it may take a couple of refreshes to show up; the first refresh isn't always enough). If you stop on the first beep after the "Execute" statement, nothing has happened in Access, and nothing will if you reset the program without executing any other statements.
Stepping into the connection.Execute (with "Use debug DCUs" on): the effect of the executed SQL statement is now visible in Access on return to the beep. Actually, it is visible much earlier. For example, stepping into the "delete" statement, the record becomes marked #deleted somewhere still in the ADODB code.
In fact, when stepping through the adodb code, the record becomes visible in Access when stopped in the OnExecuteComplete handler. Not when stopped on the "begin", but when stopped on the "if Assigned" immediately thereafter. The same applies to the delete statement. The effect becomes visible in Access when stopped on the if statement in the OnExecuteComplete handler in AdoDb.
ADO does have an ExecuteOption to execute statements asynchronously. It wasn't in effect during all this (it's not included by default). And while we are dealing with an out-of-process COM server and with callbacks such as the OnExecuteComplete handler, that handler was executed before returning to the statement right after the ConnectionObject.Execute statement in the TAdoConnection.Execute method in AdoDb.
All in all I think it isn't so much a matter of synchronous or asynchronous execution, but more a matter of when references are released (we are dealing with COM and interface reference counting), or with thread and process timing issues (in app, Access and between them), or with a combination thereof.
And the debugger may just be muddling things more than clarifying them. Would be interesting to see what happens in D2010 with its single thread debugging capabilities, but haven't got it available where I am (now and for the next two weeks).
First, Marjan, thank you for your answer. I am very sure I clicked the refresh button at that time, but still nothing changed.
After many experiments, I found that if I added an auto-increment ID field to the table as a primary key, this strange behaviour would not happen. But even after doing this, there is another strange behaviour, which I will show in the code snippet below:
procedure TForm9.btn1Click(Sender: TObject);
var
  sqlCommand: string;
  connection: TADOConnection;
begin
  connection := TADOConnection.Create(nil);
  try
    connection.ConnectionString := 'Provider=Microsoft.Jet.OLEDB.4.0;Data Source=Test.MDB;Persist Security Info=False';
    connection.Open();
    connection.Execute('insert into aaa (TestField1) values (''Test'')');
    connection.Execute('select * from aaa');
    connection.Execute('delete * from aaa'); // breakpoint 1
    beep; // breakpoint 2
  finally
    connection.Free;
  end;
end;
Put two breakpoints at the "delete" and "beep" lines. When the code stops at breakpoint 1, you can refresh the database and find that the record was inserted. Continue running; when the code stops at breakpoint 2, you will find the record is still there. If at this time you press Ctrl+F2, the record is not deleted. If connection.Execute were a truly synchronous procedure, this should not happen. Sorry for checking your answer so late; I was away for our Dragon Boat Festival.
Marjan, thanks again for your response, but I couldn't accept this behaviour of the connection engine. Today I found something useful on the MSDN website, see:
http://msdn.microsoft.com/en-us/library/ms719649(v=VS.85).aspx
Fortunately, I resolved the problem according to that article. The default value of the property "Jet OLEDB:Implicit Commit Sync" is false, and according to the explanation of this property, false means that implicit transactions are committed in asynchronous mode. So what we can do is set this property to true, using the code snippet below:
connection.Properties.Item['Jet OLEDB:Implicit Commit Sync'].Value := true;
BTW, according to that article, this property can only be set via the Properties collection of the connection object; if it is set in the connection string, an error will occur.
