Custom constraint in EF fails intermittently - async/concurrency issue

I have a controller action like this (ASP.NET Web API):
public HttpResponseMessage<Component> Post(Component c)
{
    // Don't allow equal setup ids within the same installation when the component is of type 5
    if (db.Components.Any(d => d.InstallationId == c.InstallationId
                            && d.SetupId == c.SetupId
                            && d.ComponentTypeId == 5))
    {
        return new HttpResponseMessage<Component>(c, HttpStatusCode.Conflict);
    }

    db.Components.Add(c);
    db.SaveChanges();
    return new HttpResponseMessage<Component>(c, HttpStatusCode.OK);
}
I send a number of POST requests from JavaScript, with two of them being equal:
{SetupId : 7, InstallationId : 1, ComponentTypeId: 5}
I have verified this both by using Fiddler and by stepping through the code on the server.
However, sometimes the check above works as it should and other times it does not. I suspect that since Post is an async action, request #2 sometimes checks the database for duplicates BEFORE the first request has managed to save to the database.
How can I solve this? Is there a way to lock EF operations against the database from the beginning of the Post action until the end? Is that even a good idea?
I have thought of database constraints; however, since this only applies when ComponentTypeId is 5, I'm not sure how to implement that, or if it's even possible.

This is quite difficult to achieve with EF. In plain SQL you would start a transaction and add a table hint to your constraint query to force locking of the records. The problem is that EF doesn't support table hints: you cannot force a LINQ or Entity SQL query to lock a record.
Your options are:
Manual locking in your method. Using, for example, lock will dramatically reduce the throughput of your method, so you will most probably need a cleverer custom implementation that locks per (InstallationId, SetupId) pair.
Using custom SQL or a stored procedure instead of the LINQ query and forcing locking there. UPDLOCK together with HOLDLOCK should work in this case (see the sketch after this list).
Alternatively you can place a unique index on InstallationId, SetupId and ComponentTypeId and simply catch the exception when a concurrent request tries to insert a duplicate record. The problem is that your combination must be unique only in some cases but not in others.

I solved this in the database with the help of this answer: https://stackoverflow.com/a/5149263/94394
A filtered index (a conditional unique constraint), supported in SQL Server 2008:
create unique nonclustered index [funcix_Components_setupid_installationid_RecordStatus]
on [components]([setupid], [Installationid])
where [componenttypeid] = 5
Then I caught DbUpdateException and checked whether the error code matched a unique-index violation:
try
{
    db.Components.Add(c);
    db.SaveChanges();
}
catch (DbUpdateException ex)
{
    // Walk down to the innermost exception, which is the SqlException
    Exception innermostException = ex;
    while (innermostException.InnerException != null)
    {
        innermostException = innermostException.InnerException;
    }
    var sqlException = innermostException as System.Data.SqlClient.SqlException;
    if (sqlException != null && sqlException.Number == 2601) // 2601: duplicate key row in a unique index
    {
        return new HttpResponseMessage<Component>(c, HttpStatusCode.Conflict);
    }
    throw; // anything else is unexpected, so don't swallow it
}
return new HttpResponseMessage<Component>(c, HttpStatusCode.OK);

Related

Common strategy in handling concurrent global 'inventory' updates

To give a simplified example:
I have a database with one table: names, which has 1 million records, each containing a common boy's or girl's name, with more added every day.
I have an application server that takes as input an HTTP request from parents using my website 'Name Chooser'. With each request, I need to pick a name from the DB and return it, and then NOT give that name to another parent. The server is concurrent, so it can handle a high volume of requests, yet it has to respect "unique name per request" and still be highly available.
What are the major components and strategies for an architecture of this use case?
From what I understand, you have two operations: Adding a name and Choosing a name.
I have a couple of questions:
Question 1: Do parents choose names only, or do they also add names?
Question 2: If they add names, does that mean that when a name is added it should also be marked as already chosen?
Assuming that you don't want to make all name-selection requests wait for one another (by locking or queuing them):
One solution to resolve concurrency in the case of choosing a name only is to use an Optimistic Offline Lock.
The most common implementation of this is to add a version field to your table and increment this version whenever you mark a name as chosen. You will need DB support for this, but most databases offer such a mechanism. MongoDB adds a version field to documents by default; for an RDBMS (like SQL) you have to add this field yourself.
You haven't specified what technology you are using, so I will give an example using pseudocode for an SQL DB. For MongoDB you can check how the DB makes these checks for you.
NameRecord {
    id,
    name,
    parentID,
    version,
    isChosen,

    function chooseForParent(parentID) {
        if (this.isChosen) {
            throw Error/Exception;
        }
        this.parentID = parentID;
        this.isChosen = true;
        this.version++;
    }
}

NameRecordRepository {
    function getByName(name) { ... }

    function save(record) {
        var oldVersion = record.version - 1;
        var query = "UPDATE records SET .....
                     WHERE id = {record.id} AND version = {oldVersion}";
        var rowsCount = db.execute(query);
        if (rowsCount == 0) {
            throw ConcurrencyViolation;
        }
    }
}

// somewhere else in an object or module or whatever...
function chooseName(parentID, name) {
    var record = NameRecordRepository.getByName(name);
    record.chooseForParent(parentID);
    NameRecordRepository.save(record);
}
Before this object is saved to the DB, a version comparison must be performed. SQL provides a way to execute a query conditionally and return the count of affected rows. In our case we check whether the version in the database is still the old one before updating; if it's not, that means someone else has updated the record in the meantime.
In this simple case you can even drop the version field and use the isChosen flag in your SQL query like this:
var query = "UPDATE records SET .....
             WHERE id = {record.id} AND isChosen = false";
When adding a new name to the database you will need a unique constraint, which will resolve the concurrency issues on insertion.
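For concreteness, here is a minimal sketch of the same optimistic update in C# with ADO.NET against SQL Server. The table and column names (records, id, parentID, isChosen, version) are assumptions carried over from the pseudocode above:
using System.Data.SqlClient;

// Tries to claim a name for a parent; returns true only if the optimistic
// version check passed, i.e. nobody else updated the row in the meantime.
static bool TryChooseName(SqlConnection conn, int recordId, int parentId, int oldVersion)
{
    const string sql = @"UPDATE records
                         SET parentID = @parentId, isChosen = 1, version = version + 1
                         WHERE id = @recordId AND version = @oldVersion";
    using (var cmd = new SqlCommand(sql, conn))
    {
        cmd.Parameters.AddWithValue("@parentId", parentId);
        cmd.Parameters.AddWithValue("@recordId", recordId);
        cmd.Parameters.AddWithValue("@oldVersion", oldVersion);
        // 0 rows affected means another request won the race: surface that as a conflict.
        return cmd.ExecuteNonQuery() == 1;
    }
}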

Java more than one DB connection in UserTransaction

static void clean() throws Exception {
    final UserTransaction tx = InitialContext.doLookup("UserTransaction");
    tx.begin();
    try {
        final DataSource ds = InitialContext.doLookup(Databases.ADMIN);
        Connection connection1 = ds.getConnection();
        Connection connection2 = ds.getConnection();
        PreparedStatement st1 = connection1.prepareStatement("XXX delete records XXX"); // delete data
        PreparedStatement st2 = connection2.prepareStatement("XXX insert records XXX"); // insert new data with the same primary key as the deleted data above
        st1.executeUpdate();
        st1.close();
        connection1.close();
        st2.executeUpdate();
        st2.close();
        connection2.close();
        tx.commit();
    } finally {
        if (tx.getStatus() == Status.STATUS_ACTIVE) {
            tx.rollback();
        }
    }
}
I have a web app whose DAOs take a DataSource as the object from which to create individual connections for database operations.
So I have a UserTransaction, inside of which two DAO objects perform separate actions: the first does a deletion and the second an insertion. The deletion removes some records to allow the insertion to take place, because the insertion inserts data with the same primary keys.
I took the DAO layer out and translated the logic into the code above. There is one thing I couldn't understand: based on the code above, the insert operation should fail. The code (inside the UserTransaction) takes two different connections that don't know about each other, and the deletion obviously hasn't been committed yet, so the second statement (the insert) should fail with a unique-constraint violation, because the second connection should not be able to see the first connection's uncommitted changes. But amazingly it doesn't fail, and both statements work perfectly.
Can anyone explain this? Is there some configuration that achieves this result, or is my understanding wrong?
Since your application is running in WebLogic Server, the Java EE container is managing the transaction and the connections for you. If you call DataSource#getConnection multiple times inside a Java EE transaction, you get multiple Connection instances that all join the same transaction. Usually those connections connect to the database with the identical session. Using Oracle you can check that with the following snippet in a @Stateless EJB:
@Resource(lookup = "jdbc/myDS")
private DataSource ds;

@TransactionAttribute(TransactionAttributeType.REQUIRES_NEW)
@Schedule(hour = "*", minute = "*", second = "42")
public void testDatasource() throws SQLException {
    try (Connection con1 = ds.getConnection();
         Connection con2 = ds.getConnection();
    ) {
        String sessId1 = null, sessId2 = null;
        try (ResultSet rs1 = con1.createStatement().executeQuery("select userenv('SESSIONID') from dual")) {
            if (rs1.next()) sessId1 = rs1.getString(1);
        }
        try (ResultSet rs2 = con2.createStatement().executeQuery("select userenv('SESSIONID') from dual")) {
            if (rs2.next()) sessId2 = rs2.getString(1);
        }
        LOG.log(Level.INFO, " con1={0}, con2={1}, sessId1={2}, sessId2={3}",
                new Object[]{ con1, con2, sessId1, sessId2 });
    }
}
This results in the following log message:
con1=com.sun.gjc.spi.jdbc40.ConnectionWrapper40@19f32aa,
con2=com.sun.gjc.spi.jdbc40.ConnectionWrapper40@1cb42e0,
sessId1=9347407,
sessId2=9347407
Note that you get different Connection instances with the same session ID.
For more details see e.g. this question.
The only way to do this properly is to use a transaction manager and two-phase-commit XA drivers for all databases involved in the transaction.
My guess is that you have autocommit enabled on the connections. This is the default when creating a new connection, as documented here:
https://docs.oracle.com/javase/tutorial/jdbc/basics/transactions.html
System.out.println(connection1.getAutoCommit());
will most likely print true.
You could try
connection1.setAutoCommit(false);
and see if that changes the behavior.
In addition, it's not really defined what happens if you call close() on a connection without having issued a commit or rollback beforehand. It is therefore strongly recommended to issue one of the two before closing the connection; see https://docs.oracle.com/javase/7/docs/api/java/sql/Connection.html#close()
EDIT 1:
If autocommit is false, then it's probably due to the undefined behavior of close. What happens if you swap the statements?
st2.executeUpdate();
st2.close();
connection2.close();
st1.executeUpdate();
st1.close();
connection1.close();
EDIT 2:
You could also try the "correct" way of doing it:
st1.executeUpdate();
st1.close();
st2.executeUpdate();
st2.close();
tx.commit();
connection1.close();
connection2.close();
If that doesn't fail, then something is wrong with your setup for UserTransactions.
Depending on your database this is quite a normal case.
An object implementing the UserTransaction interface represents a "logical transaction". It doesn't always map to a real "physical" transaction that a database engine respects.
For example, there are situations that cause implicit commits (as well as implicit starts) of transactions. In the case of Oracle (I can't vouch for other DBs), closing a connection is one of them.
From Oracle's docs:
"If the auto-commit mode is disabled and you close the connection
without explicitly committing or rolling back your last changes, then
an implicit COMMIT operation is run".
But there can be other possible reasons for implicit commits: select for update, various locking statements, DDLs, and so on. They are database-specific.
So, back to our code.
The first transaction is committed by closing a connection.
Then another transaction is implicitly started by the DML on the second connection. Its insert no longer conflicts with anything, since the delete has already been committed, and the second connection.close() commits it without a PK violation. tx.commit() never even gets a chance to commit anything (and how could it? the connections are already closed).
The bottom line: "logical" transaction managers don't always give you the full picture.
Sometimes transactions are started and committed without an explicit instruction, and sometimes they are even ignored by the DB.
PS: I assumed you are using Oracle, but the above holds true for other databases as well; see for example MySQL's list of implicit-commit causes.
If auto-commit mode is disabled and you close the connection
without explicitly committing or rolling back your last changes,
then an implicit COMMIT operation is executed.
Please check the link below for details:
http://in.relation.to/2005/10/20/pop-quiz-does-connectionclose-result-in-commit-or-rollback/

How to handle unique constraint exception to update row after failing to insert?

I am trying to handle near-simultaneous input to my Entity Framework application. Members (users) can rate things, so I have a table for their ratings, where one column is the member's ID, one is the ID of the thing they're rating, one is the rating, and another is the time they rated it. The most recent rating is supposed to override the earlier ratings. When I receive input, I check whether the member has already rated the thing: if they have, I update the rating on the existing row, and if they haven't, I add a new row. I noticed that when input comes in from the same user for the same item at nearly the same time, I end up with two ratings for that user for the same thing.
Earlier I asked this question: How can I avoid duplicate rows from near-simultaneous SQL adds? and I followed the suggestion to add a SQL constraint requiring unique combinations of MemberID and ThingID, which makes sense, but I am having trouble getting this technique to work, probably because I don't know the syntax for doing what I want to do when the exception occurs. The exception says the constraint was violated, and what I would like to do then is forget the attempted illegal addition of a row with the same MemberID and ThingID, and instead fetch the existing one and simply set its values to this slightly more recent data. However, I have not been able to come up with a syntax that will do that. I have tried a few things, and I always get an exception when I try to SaveChanges after catching the first exception - either the unique-constraint violation comes up again, or I get a deadlock exception.
The latest version I tried was like this:
// Get the member's rating for the thing, or create it.
Member_Thing_Rating memPref = (from mip in _myEntities.Member_Thing_Rating
                               where mip.thingID == thingId
                               where mip.MemberID == memberId
                               select mip).FirstOrDefault();
bool RetryGet = false;
if (memPref == null)
{
    using (TransactionScope txScope = new TransactionScope())
    {
        try
        {
            memPref = new Member_Thing_Rating();
            memPref.MemberID = memberId;
            memPref.thingID = thingId;
            memPref.EffectiveDate = DateTime.Now;
            _myEntities.Member_Thing_Rating.AddObject(memPref);
            _myEntities.SaveChanges();
        }
        catch (Exception ex)
        {
            Thread.Sleep(750);
            RetryGet = true;
        }
    }
    if (RetryGet == true)
    {
        memPref = (from mip in _myEntities.Member_Thing_Rating
                   where mip.thingID == thingId
                   where mip.MemberID == memberId
                   select mip).FirstOrDefault();
    }
}
}
After writing the above, I also tried wrapping the logic in a function call, because it seems like Entity Framework cleans up database transactions when leaving the scope from which changes were submitted. So instead of using TransactionScope and handling the exception at the same level as above, I wrapped the whole thing inside a managing function, like this:
bool Succeeded = false;
while (Succeeded == false)
{
    Thread.Sleep(750);
    Exception Problem = AttemptToSaveMemberIngredientPreference(memberId, ingredientId, rating);
    if (Problem == null)
        Succeeded = true;
    else
    {
        Exception BaseEx = Problem.GetBaseException();
    }
}
But this only results in an unending string of unique-constraint exceptions, handled forever by the higher-level function. I have a 3/4-second delay between attempts, so I am surprised that a conflict can be reported and yet nothing is found when I query for a row. I suppose that indicates that all of the threads are failing because they run at the same time, and Entity Framework notices them all and fails them all before any succeed. So I suppose there should be a way to respond to the exception by looking at all the submissions and adjusting them? I don't know or see the syntax for that. So again, what is the way to handle this?
Update:
Paddy makes three good suggestions below. I expect his stored procedure technique would work around the problem, but I am still interested in the answer to the question. That is, surely one should be able to respond to this exception by manipulating the submission, but I haven't yet found the syntax that will insert one row and use the latest value.
To quote Eric Lippert, "if it hurts, stop doing it". If you are anticipating very high volumes and you want to do an 'insert or update', then you may want to consider handling this within a stored procedure instead of using the methods outlined above.
Your problem arises because there is a small gap between your call to the DB to check for existence and your insert/update.
The sproc could use a MERGE to do the insert or update in a single pass on the table, guaranteeing that you will only see a single row for a rating and that it will be the most recent update you receive.
Note - you can include the sproc in your EF model and call it using similar EF syntax.
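As a sketch of what that sproc body could look like, executed here as raw SQL through the ObjectContext for brevity. The table and column names (Member_Thing_Rating, MemberID, thingID, Rating, EffectiveDate) are assumptions based on the question, and WITH (HOLDLOCK) is added so the MERGE itself is race-free:
// Atomic insert-or-update: one statement, one pass over the table,
// so no gap exists between the existence check and the write.
_myEntities.ExecuteStoreCommand(
    @"MERGE Member_Thing_Rating WITH (HOLDLOCK) AS target
      USING (SELECT {0} AS MemberID, {1} AS thingID) AS source
          ON target.MemberID = source.MemberID AND target.thingID = source.thingID
      WHEN MATCHED THEN
          UPDATE SET Rating = {2}, EffectiveDate = GETDATE()
      WHEN NOT MATCHED THEN
          INSERT (MemberID, thingID, Rating, EffectiveDate)
          VALUES ({0}, {1}, {2}, GETDATE());",
    memberId, thingId, rating);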
Note 2 - Looking at your code, you don't roll back the transaction scope before sleeping your thread in the exception case. That is a relatively long time to hold a transaction open, particularly when you are expecting very high volumes. You may want to update your code to something like this:
try
{
    memPref = new Member_Thing_Rating();
    memPref.MemberID = memberId;
    memPref.thingID = thingId;
    memPref.EffectiveDate = DateTime.Now;
    _myEntities.Member_Thing_Rating.AddObject(memPref);
    _myEntities.SaveChanges();
    txScope.Complete();
}
catch (Exception ex)
{
    txScope.Dispose();
    Thread.Sleep(750);
    RetryGet = true;
}
This may be why you seem to be suffering from deadlocks when you retry, particularly if you are getting rapid concurrent requests.

Datastore - Resource contention in a single entity group

I am getting lost on the following regarding the Datastore:
It is recommended to denormalize data, since the Datastore does not support join queries. This means that the same information is copied into several entities.
Denormalization means that whenever you have to update data, it must be updated in different entities.
But there is a limit of 1 write/second within a single entity group.
The problem I have is therefore the following:
In order to update records, I open a transaction and then update all the required entities. The entities to be updated are within the same entity group but belong to different kinds.
I am getting a "resource contention" exception.
==> It seems, therefore, that the only way to update denormalized data is outside of a transaction. But doing this is really bad, as some entities could end up updated while other entities wouldn't be.
Am I the only one having this problem? How did you solve it?
Thanks,
Hugues
The (simplified) version of the code is as follows:
Objectify ofy = ObjectifyService.beginTransaction();
try {
    Key<Party> partyKey = new Key<Party>(realEstateKey, Party.class, partyDTO.getId());
    //--------------------------------------------------------------------------
    //-- 1 - We update the party
    //--------------------------------------------------------------------------
    Party party = ofy.get(partyKey);
    party.update(partyDTO);
    //---------------------------------------------------------------------------------------------
    //-- 2 - We update the kinds which have Party as embedded field, all in the same entity group
    //---------------------------------------------------------------------------------------------
    // 2.1 Invoices
    Query<Invoice> q1 = ofy.query(Invoice.class).ancestor(realEstateKey).filter("partyKey", partyKey);
    for (Invoice invoice : q1) {
        invoice.setParty(party);
        ofy.put(invoice);
    }
    // 2.2 Payments
    Query<Payment> q2 = ofy.query(Payment.class).ancestor(realEstateKey).filter("partyKey", partyKey);
    for (Payment payment : q2) {
        payment.setParty(party);
        ofy.put(payment);
    }
    ofy.getTxn().commit();
    return (RPCResults.SUCCESS);
}
catch (Exception e) {
    final Logger log = Logger.getLogger(InternalServiceImpl.class.getName());
    log.severe("Problem while updating party : " + e.getLocalizedMessage());
    return (RPCResults.FAILURE);
}
finally {
    if (ofy.getTxn().isActive()) {
        ofy.getTxn().rollback();
        partyDTO.setCreationResult(RPCResults.FAILURE);
        return (RPCResults.FAILURE);
    }
}
This is happening because multiple requests to update the same entity group are occurring in a short period of time, not because you are updating many entities in the same entity group at once.
Since you have not shown your code, I can assume one of two things is happening:
The method you describe above is not actually using a transaction, and you are running put_multi() with many entities of the same entity group. (If I had to guess, it'd be this.)
You have a high-traffic site and many other updates are occurring simultaneously.
Just in case someone runs into the same issue:
The problem was in party.update(partyDTO), which under some specific conditions was initiating another transaction.
What I learned today is that:
--> Inside a transaction, you are allowed to include multiple puts, even exceeding the rate of 1 write/second per entity group.
--> However, you should take care not to initiate another transaction within your transaction.

How to solve "Batch update returned unexpected row count from update; actual row count: 0; expected: 1" problem?

Getting this every time I attempt to CREATE a particular entity ... just want to know how I should go about figuring out the cause.
I'm using Fluent NHibernate auto-mapping, so perhaps I haven't set a convention appropriately and/or need to override some things in one or more mapping files. I've gone through a number of posts on the web regarding this problem and am having a hard time figuring out exactly why it is happening in my case.
The object I'm saving is pretty simple. It is a "Person" object that references a "Company" entity and has a collection of "Address" entities. UPDATEs work fine on existing Person objects that are already in the database.
Any suggestions?
The error means that the SQL INSERT statement was executed, but the ROWCOUNT returned by SQL Server after it ran was 0, not 1 as expected.
There are several possible causes, from incorrect mappings to UPDATE/INSERT triggers that have row counting turned off.
Your best bet is to profile the SQL statements and see what happens. To do that, either turn on NHibernate SQL logging or use the SQL profiler. Once you have the SQL you may know the cause; if not, try running the SQL manually and see what happens.
I also suggest you post your mapping, as it will help people spot any issues.
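For reference, one way to turn on SQL logging with Fluent NHibernate is ShowSql() when building the session factory. This is a sketch; the Person entity and the connection string variable are assumptions based on the question:
// Echo NHibernate's generated SQL to the console while diagnosing.
var sessionFactory = Fluently.Configure()
    .Database(MsSqlConfiguration.MsSql2008
        .ConnectionString(connectionString)
        .ShowSql())                                   // log generated SQL
    .Mappings(m => m.AutoMappings.Add(AutoMap.AssemblyOf<Person>()))
    .BuildSessionFactory();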
This can happen when trigger(s) execute additional DML (data modification) queries which affect the row counts. My solution was to add the following at the top of my trigger:
SET NOCOUNT ON;
This may occur because of an auto-increment primary key. To solve the problem, do not insert the auto-increment value with the data set; insert the data without the primary key and let the database generate it.
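In Fluent NHibernate terms, that means mapping the key as database-generated, for example with an auto-mapping override (a sketch; Person and its Id property are assumptions based on the question, and the override still has to be registered via UseOverridesFromAssemblyOf when configuring the auto-mapping):
public class PersonMappingOverride : IAutoMappingOverride<Person>
{
    public void Override(AutoMapping<Person> mapping)
    {
        // Let the database assign the identity value; NHibernate reads it
        // back after the INSERT instead of sending one itself.
        mapping.Id(p => p.Id).GeneratedBy.Identity();
    }
}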
When targeting a view with an INSTEAD OF trigger, it can be next to impossible to get the correct row count. After delving a bit into the source, I found out that you can write a custom persister that makes NHibernate ignore the count checks.
public class SingleTableNoResultCheckEntityPersister : SingleTableEntityPersister
{
    public SingleTableNoResultCheckEntityPersister(PersistentClass persistentClass, ICacheConcurrencyStrategy cache, ISessionFactoryImplementor factory, IMapping mapping)
        : base(persistentClass, cache, factory, mapping)
    {
        for (int i = 0; i < this.insertResultCheckStyles.Length; i++)
        {
            this.insertResultCheckStyles[i] = ExecuteUpdateResultCheckStyle.None;
        }
        for (int i = 0; i < this.updateResultCheckStyles.Length; i++)
        {
            this.updateResultCheckStyles[i] = ExecuteUpdateResultCheckStyle.None;
        }
        for (int i = 0; i < this.deleteResultCheckStyles.Length; i++)
        {
            this.deleteResultCheckStyles[i] = ExecuteUpdateResultCheckStyle.None;
        }
    }
}
If you get a row count greater than 1, it's usually because you have duplicate rows in a table that share the same id. Check your table for duplicated records; deleting the duplicates will fix it.
NHibernate.AdoNet.TooManyRowsAffectedException: 'Batch update returned unexpected row count from update; actual row count: 2; expected: 1'