Suspending an Azure Function until Entity Framework makes changes

I have an Azure Function (IoT Hub trigger) that:
- selects the top 1 record ordered by time in descending order
- compares it with the new record that comes in
- writes the incoming record only if it differs from the selected one (some fields are different)
The issue pops up when records arrive at the Azure Function very rapidly: I end up with duplicates in the database. I guess this is because SQL Server hasn't had enough time to commit the change by the time the next record arrives, so when the Azure Function selects the latest record, it actually receives an outdated one.
I use EF Core.
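Presumably the flow looks roughly like this (a sketch only; the Reading entity, the Readings set, and the Time/Value/DeviceId properties are placeholder names, not taken from the question):
// Read the most recent record (placeholder names throughout)
var latest = context.Readings
    .OrderByDescending(r => r.Time)
    .FirstOrDefault();

// Write the incoming record only if it differs from the latest one
if (latest == null || latest.Value != incoming.Value)
{
    context.Readings.Add(incoming);
    context.SaveChanges();
}
Two near-simultaneous invocations can both read the same "latest" row, both decide the incoming record differs, and both insert, which is exactly the duplicate described.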

I believe the issue is not with the function but with the transactional nature of the operation you described. As a straightforward fix, you can try using a transaction with the highest isolation level:
using (var transaction = new TransactionScope(
    TransactionScopeOption.Required,
    new TransactionOptions
    {
        // With this isolation level all data modifications are sequential
        IsolationLevel = IsolationLevel.Serializable
    }))
{
    using (var connection = new SqlConnection("YOUR CONNECTION"))
    {
        connection.Open();
        try
        {
            // Run a raw ADO.NET command in the transaction
            var command = connection.CreateCommand();
            // Your reading query (just for example's sake)
            command.CommandText = "SELECT TOP (1) * FROM dbo.Whatever";
            var result = command.ExecuteScalar();

            // Run an EF Core command in the same transaction
            var options = new DbContextOptionsBuilder<TestContext>()
                .UseSqlServer(connection)
                .Options;
            using (var context = new TestContext(options))
            {
                // Compare `result` with the incoming record and add it only if it differs
                context.Items.Add(new Item { /* populate from the incoming record */ });
                context.SaveChanges();
            }

            // Commit the transaction if all commands succeed; the transaction will
            // auto-rollback when disposed if either command fails
            transaction.Complete();
        }
        catch (System.Exception)
        {
            // TODO: Handle failure
        }
    }
}
You will need to adjust the code to your needs, but you get the idea.
That said, I would rather avoid the problem entirely: don't compare against or modify existing records at all, just insert every record and select the latest one afterwards. Transactions are tricky in application code; applied in the wrong place and in the wrong way, they can cause performance degradation and deadlocks.
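A rough sketch of that insert-only alternative, reusing the same placeholder names as the sketch in the question:
// Append every incoming message unconditionally; there is no read-compare-write race
context.Readings.Add(incoming);
context.SaveChanges();

// Whenever the "current" value is needed, take the most recent row
var current = context.Readings
    .Where(r => r.DeviceId == incoming.DeviceId)
    .OrderByDescending(r => r.Time)
    .FirstOrDefault();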

Related

How should I properly handle CommandTimeout when calling ExecuteSqlCommand

I have some long-running commands in stored procedures that are at risk of timing out, and I run them using Context.Database.ExecuteSqlCommand.
It would appear that when a command times out, it leaves a lock in the database because the transaction is not rolled back.
I found an explanation for that here: CommandTimeout – How to handle it properly?
Based on the linked example I changed my code to:
Database database = Context.Database;
try
{
    return database.ExecuteSqlCommand(sql, parameters);
}
catch (SqlException e)
{
    //Transactions can stay open after a CommandTimeout,
    //so need to rollback any open transactions
    if (e.Number == -2) //CommandTimeout occurred
    {
        //Single rollback exits all levels of nested transactions,
        //no need to loop.
        database.ExecuteSqlCommand("IF @@TRANCOUNT > 0 ROLLBACK TRAN;");
    }
    throw;
}
However, that threw an exception inside the catch, because the connection is now null:
ArgumentNullException: Value cannot be null.
Parameter name: connection
Following the comments from Annie and usr I changed my code to this:
Database database = Context.Database;
using (var tran = database.BeginTransaction())
{
    try
    {
        int result = database.ExecuteSqlCommand(sql, parameters);
        tran.Commit();
        return result;
    }
    catch (SqlException)
    {
        var debug = database.SqlQuery<Int16>("SELECT @@SPID");
        tran.Rollback();
        throw;
    }
}
I really thought that would do it, but the locks in the database continue to accumulate when I set my CommandTimeout to a really small value to test it out.
I put a breakpoint at the throw, so I know the transaction has been rolled back. The debug variable tells me the session id and when I check my locks using this query: SELECT * FROM sys.dm_tran_locks, I find a match for the session id in the request_session_id, but it's a lock that was already there, not one of the new ones, so I'm a bit confused.
So, how should I properly handle CommandTimeout when using ExecuteSqlCommand to ensure locks are released immediately?
I downloaded sp_whoisactive and ran it; the spid appears to be linked to a query on tables used by Hangfire (I am using Hangfire to run the long-running queries in a background process), so I think I may be barking up the wrong tree. I did have a problem with locking, but I've rewritten my own queries to avoid locking too many rows and I've disabled lock escalation on the tables where I had a problem. These last locks may be coming from Hangfire and may not be significant; nonetheless, I've decided to go with XACT_ABORT ON for now.
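For reference, the XACT_ABORT ON route can be as simple as prefixing the batch passed to ExecuteSqlCommand, so that SQL Server rolls back the open transaction itself when the client's timeout aborts the batch. A minimal sketch, assuming the same Context, sql and parameters as in the code above:
// SET XACT_ABORT ON applies to this batch only; if the CommandTimeout attention
// aborts the batch mid-transaction, the server rolls the transaction back
// instead of leaving it (and its locks) open.
return Context.Database.ExecuteSqlCommand("SET XACT_ABORT ON; " + sql, parameters);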

Java more than one DB connection in UserTransaction

static void clean() throws Exception {
    final UserTransaction tx = InitialContext.doLookup("UserTransaction");
    tx.begin();
    try {
        final DataSource ds = InitialContext.doLookup(Databases.ADMIN);
        Connection connection1 = ds.getConnection();
        Connection connection2 = ds.getConnection();
        PreparedStatement st1 = connection1.prepareStatement("XXX delete records XXX"); // delete data
        PreparedStatement st2 = connection2.prepareStatement("XXX insert records XXX"); // insert new data with the same primary key as the data deleted above
        st1.executeUpdate();
        st1.close();
        connection1.close();
        st2.executeUpdate();
        st2.close();
        connection2.close();
        tx.commit();
    } finally {
        if (tx.getStatus() == Status.STATUS_ACTIVE) {
            tx.rollback();
        }
    }
}
I have a web app whose DAOs take a DataSource and use it to create individual connections for database operations.
So I have a UserTransaction, and inside it two DAO objects doing separate actions: the first performs a deletion and the second an insertion. The deletion removes some records so that the insertion can take place, because the insertion inserts data with the same primary keys.
I took the DAO layer out and translated the logic into the code above. There is one thing I can't understand: based on the code above, the insertion should fail. The code (inside the UserTransaction) takes two different connections that don't know about each other, and the deletion obviously hasn't been committed, so the second statement (the insertion) should fail with a unique constraint violation, since the two database operations are not on the same connection and the second connection should not be able to see uncommitted changes. But amazingly, it doesn't fail, and both statements work perfectly.
Can anyone explain this? Is there some configuration that achieves this result, or is my understanding wrong?
Since your application is running in WebLogic Server, the Java EE container is managing the transaction and the connections for you. If you call DataSource#getConnection multiple times in a Java EE transaction, you will get multiple Connection instances joining the same transaction. Usually those connections connect to the database with the identical session. Using Oracle, you can check that with the following snippet in a @Stateless EJB:
@Resource(lookup="jdbc/myDS")
private DataSource ds;

@TransactionAttribute(TransactionAttributeType.REQUIRES_NEW)
@Schedule(hour="*", minute="*", second="42")
public void testDatasource() throws SQLException {
    try ( Connection con1 = ds.getConnection();
          Connection con2 = ds.getConnection();
        ) {
        String sessId1 = null, sessId2 = null;
        try (ResultSet rs1 = con1.createStatement().executeQuery("select userenv('SESSIONID') from dual") ){
            if ( rs1.next() ) sessId1 = rs1.getString(1);
        }
        try (ResultSet rs2 = con2.createStatement().executeQuery("select userenv('SESSIONID') from dual") ){
            if ( rs2.next() ) sessId2 = rs2.getString(1);
        }
        LOG.log( Level.INFO, "con1={0}, con2={1}, sessId1={2}, sessId2={3}",
                 new Object[]{ con1, con2, sessId1, sessId2 } );
    }
}
This results in the following log message:
con1=com.sun.gjc.spi.jdbc40.ConnectionWrapper40@19f32aa,
con2=com.sun.gjc.spi.jdbc40.ConnectionWrapper40@1cb42e0,
sessId1=9347407,
sessId2=9347407
Note that you get different Connection instances with the same session ID.
For more details see e.g. this question.
The only way to do this properly is to use a transaction manager and two-phase-commit (XA) drivers for all databases involved in the transaction.
My guess is that you have autocommit enabled on the connections. This is the default when creating a new connection, as is documented here
https://docs.oracle.com/javase/tutorial/jdbc/basics/transactions.html
System.out.println(connection1.getAutoCommit());
will most likely print true.
You could try
connection1.setAutoCommit(false);
and see if that changes the behavior.
In addition to that, it's not really defined what happens if you call close() on a connection without having issued a commit or rollback beforehand. Therefore it is strongly recommended to issue one of the two before closing the connection; see https://docs.oracle.com/javase/7/docs/api/java/sql/Connection.html#close()
EDIT 1:
If autocommit is false, then it's probably due to the undefined behavior of close(). What happens if you switch the statements?
st2.executeUpdate();
st2.close();
connection2.close();
st1.executeUpdate();
st1.close();
connection1.close();
EDIT 2:
You could also try the "correct" way of doing it:
st1.executeUpdate();
st1.close();
st2.executeUpdate();
st2.close();
tx.commit();
connection1.close();
connection2.close();
If that doesn't fail, then something is wrong with your setup for UserTransactions.
Depending on your database this is quite a normal case.
An object implementing the UserTransaction interface represents a "logical transaction". It doesn't always map to a real, "physical" transaction that a database engine respects.
For example, there are situations that cause implicit commits (as well as implicit starts) of transactions. In the case of Oracle (I can't vouch for other DBs), closing a connection is one of them.
From Oracle's docs:
"If the auto-commit mode is disabled and you close the connection
without explicitly committing or rolling back your last changes, then
an implicit COMMIT operation is run".
But there can be other possible reasons for implicit commits: SELECT FOR UPDATE, various locking statements, DDL statements, and so on. They are database-specific.
So, back to our code.
The first transaction is committed by closing a connection.
Then another transaction is implicitly started by the DML on the second connection. It inserts changes that no longer conflict, and the second connection.close() commits them without a PK violation. tx.commit() won't even get a chance to commit anything (and how could it? the connection is already closed).
The bottom line: "logical" transaction managers don't always give you the full picture.
Sometimes transactions are started and committed without an explicit reason. And sometimes they are even ignored by a DB.
PS: I assumed you use Oracle, but the same holds true for other databases as well; see, for example, MySQL's list of implicit-commit reasons.
If auto-commit mode is disabled and you close the connection without explicitly committing or rolling back your last changes, then an implicit COMMIT operation is executed.
Please check the link below for details:
http://in.relation.to/2005/10/20/pop-quiz-does-connectionclose-result-in-commit-or-rollback/

Entity Framework insert (of one object) slow when table has large number of records

I have a large asp.net mvc application that runs on a database that is rapidly growing in size. When the database is empty, everything works quickly, but one of my tables now has 350K records in it and an insert is now taking 15s. Here is a snippet:
foreach (var packageSheet in packageSheets)
{
    // Create OrderSheets
    var orderSheet = new OrderSheet { Sheet = packageSheet.Sheet };
    // Add Default Options
    orderSheet.AddDefaultOptions();
    orderSheet.OrderPrints.Add(
        new OrderPrint
        {
            OrderPose = CurrentOrderSubject.OrderPoses.Where(op => op.Id == orderPoseId).Single(),
            PrintId = packageSheet.Sheet.Prints.First().Id
        });
    // Create OrderPackageSheets and add it to the order package held in the session
    var orderPackageSheet =
        new OrderPackageSheet
        {
            OrderSheet = orderSheet,
            PackageSheet = packageSheet
        };
    _orderPackageRepository.SaveChanges();
    ...
}
When I call SaveChanges at this point it takes 15s on the first loop iteration; each iteration after that is fast. I have indexed the tables in question, so I believe the database is tuned properly. It's the OrderPackageSheets table that contains 350K rows.
Can anyone tell me how I can optimize this to get rid of the delay?
Thank you!
EF can be slow if you are inserting a lot of rows at the same time.
context.Configuration.AutoDetectChangesEnabled = false; won't do much for you if this really is a web app.
You would need to share your table definition; for instance, you could use the Simple recovery model, which will improve insert performance.
Or, as mentioned, if you need to insert a lot of rows, use bulk inserts.
If the number of records is too high, you can use a stored procedure instead of EF.
If you need to use EF itself, disable automatic change detection on the context using
context.Configuration.AutoDetectChangesEnabled = false;
and save the context after the loop, as sketched below.
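A minimal sketch of that suggestion, assuming direct access to a DbContext (the question goes through _orderPackageRepository, so the `context` variable and the OrderPackageSheets set name are placeholders):
context.Configuration.AutoDetectChangesEnabled = false;
try
{
    foreach (var packageSheet in packageSheets)
    {
        // Build the OrderSheet/OrderPackageSheet exactly as in the question...
        var orderPackageSheet = new OrderPackageSheet
        {
            OrderSheet = new OrderSheet { Sheet = packageSheet.Sheet },
            PackageSheet = packageSheet
        };
        context.OrderPackageSheets.Add(orderPackageSheet);
    }
    // One SaveChanges after the loop: a single change-detection pass and one submit
    context.SaveChanges();
}
finally
{
    // Restore the default so the rest of the request behaves normally
    context.Configuration.AutoDetectChangesEnabled = true;
}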
Check these links
Efficient way to do bulk insert/update with Entity Framework
http://weblog.west-wind.com/posts/2013/Dec/22/Entity-Framework-and-slow-bulk-INSERTs

How to handle unique constraint exception to update row after failing to insert?

I am trying to handle near-simultaneous input to my Entity Framework application. Members (users) can rate things, so I have a table for their ratings, where one column is the member's ID, one is the ID of the thing they're rating, one is the rating, and another is the time they rated it. The most recent rating is supposed to override the earlier ratings. When I receive input, I check to see if the member has already rated a thing or not, and if they have, I just update the rating using the existing row, or if they haven't, I add a new row. I noticed that when input comes in from the same user for the same item at nearly the same time, that I end up with two ratings for that user for the same thing.
Earlier I asked this question: How can I avoid duplicate rows from near-simultaneous SQL adds? and I followed the suggestion to add a SQL constraint requiring unique combinations of MemberID and ThingID, which makes sense, but I am having trouble getting this technique to work, probably because I don't know the syntax for doing what I want to do when an exception occurs. The exception says the constraint was violated, and what I would like to do then is forget the attempted illegal addition of a row with the same MemberID and ThingID, and instead fetch the existing one and simply set its values to this slightly more recent data. However, I have not been able to come up with a syntax that will do that. I have tried a few things, and I always get an exception when I try to SaveChanges after catching the first exception: either the unique constraint violation comes up again, or I get a deadlock exception.
The latest version I tried was like this:
// Get the member's rating for the thing, or create it.
Member_Thing_Rating memPref = (from mip in _myEntities.Member_Thing_Rating
                               where mip.thingID == thingId
                               where mip.MemberID == memberId
                               select mip).FirstOrDefault();
bool RetryGet = false;
if (memPref == null)
{
    using (TransactionScope txScope = new TransactionScope())
    {
        try
        {
            memPref = new Member_Thing_Rating();
            memPref.MemberID = memberId;
            memPref.thingID = thingId;
            memPref.EffectiveDate = DateTime.Now;
            _myEntities.Member_Thing_Rating.AddObject(memPref);
            _myEntities.SaveChanges();
        }
        catch (Exception ex)
        {
            Thread.Sleep(750);
            RetryGet = true;
        }
    }
    if (RetryGet == true)
    {
        memPref = (from mip in _myEntities.Member_Thing_Rating
                   where mip.thingID == thingId
                   where mip.MemberID == memberId
                   select mip).FirstOrDefault();
    }
}
After writing the above, I also tried wrapping the logic in a function call, because it seems like Entity Framework cleans up database transactions when leaving scope from where changes were submitted. So instead of using TransactionScope and managing the exception at the same level as above, I wrapped the whole thing inside a managing function, like this:
bool Succeeded = false;
while (Succeeded == false)
{
    Thread.Sleep(750);
    Exception Problem = AttemptToSaveMemberIngredientPreference(memberId, ingredientId, rating);
    if (Problem == null)
        Succeeded = true;
    else
    {
        Exception BaseEx = Problem.GetBaseException();
    }
}
But this only results in an unending string of exceptions on the unique constraint, being handled forever at the higher-level function. I have a 3/4 second delay between attempts, so I am surprised that there can be a reported conflict yet still there is nothing found when I query for a row. I suppose that indicates that all of the threads are failing because they are running at the same time and Entity Framework notices them all and fails them all before any succeed. So I suppose there should be a way to respond to the exception by looking at all the submissions and adjusting them? I don't know or see the syntax for that. So again, what is the way to handle this?
Update:
Paddy makes three good suggestions below. I expect his Stored Procedure technique would work around the problem, but I am still interested in the answer to the question. That is, surely one should be able to respond to this exception by manipulating the submission, but I haven't yet found the syntax to get it to insert one row and use the latest value.
To quote Eric Lippert, "if it hurts, stop doing it". If you are anticipating very high volumes and you want to do an 'insert or update', then you may want to consider handling this within a stored procedure instead of using the methods outlined above.
Your problem arises because there is a small gap between your call to the DB to check for existence and your insert/update.
The sproc could use a MERGE to do the insert or update in a single pass on the table, guaranteeing that you will only see a single row for a rating and that it will be the most recent update you receive.
Note - you can include the sproc in your EF model and call it using similar EF syntax.
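As a rough illustration of the MERGE idea (not the poster's actual schema: the Rating column name is an assumption, and since _myEntities appears to be an ObjectContext, ExecuteStoreCommand is assumed to be available; SqlParameter needs System.Data.SqlClient), the upsert could look like this, whether it lives in a sproc or is sent as a single command:
// HOLDLOCK keeps the check-and-insert atomic, so concurrent callers serialize on the key
_myEntities.ExecuteStoreCommand(
    @"MERGE dbo.Member_Thing_Rating WITH (HOLDLOCK) AS target
      USING (SELECT @MemberID AS MemberID, @ThingID AS thingID) AS source
          ON target.MemberID = source.MemberID AND target.thingID = source.thingID
      WHEN MATCHED THEN
          UPDATE SET Rating = @Rating, EffectiveDate = @EffectiveDate
      WHEN NOT MATCHED THEN
          INSERT (MemberID, thingID, Rating, EffectiveDate)
          VALUES (@MemberID, @ThingID, @Rating, @EffectiveDate);",
    new SqlParameter("@MemberID", memberId),
    new SqlParameter("@ThingID", thingId),
    new SqlParameter("@Rating", rating),
    new SqlParameter("@EffectiveDate", DateTime.Now));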
Note 2 - Looking at your code, you don't roll back the transaction scope prior to sleeping your thread in the case of an exception. This is a relatively long time to be holding a transaction open, particularly when you are expecting very high volumes. You may want to update your code to something like this:
try
{
    memPref = new Member_Thing_Rating();
    memPref.MemberID = memberId;
    memPref.thingID = thingId;
    memPref.EffectiveDate = DateTime.Now;
    _myEntities.Member_Thing_Rating.AddObject(memPref);
    _myEntities.SaveChanges();
    txScope.Complete();
}
catch (Exception ex)
{
    txScope.Dispose();
    Thread.Sleep(750);
    RetryGet = true;
}
This may be why you seem to be suffering from deadlocks when you retry, particularly if you are getting rapid concurrent requests.

Bulk inserts with LINQ to SQL allowing some failures

So, we have a service that takes a list of updates via a WCF service, which should then be written to the DB. Some of these could potentially fail (1000 updates/inserts per query), and if they do, the requirement is for the rest of the inserts/updates to be written and the failures to be logged for later processing...
The question is how this should be done in LINQ to SQL. At the moment, I am doing something like the following:
TestDataContext dc = new TestDataContext();
Random r = new Random();
for (int i = 0; i < 300; i++)
{
    Table table = new Table { randomNumber = r.Next(150) };
    dc.Tables.InsertOnSubmit(table);
}
try
{
    dc.SubmitChanges();
}
catch (Exception ex)
{
    Console.WriteLine(ex.Message);
}
In this example, Table.randomNumber is the primary key. There will be duplicates in the dataset before it gets written, so when I call dc.SubmitChanges() it throws an exception... Instead of calling dc.SubmitChanges() after each iteration, is there a better way of doing this?
I ended up submitting each row as it was written and catching any errors, roughly as sketched below. This way, I can capture the failures but insert the rest correctly. It's not as slow as I thought it would be, but it's not ideal... mind you, since this is a process that is done "offline" (backend processing), it works for what we need.
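A sketch of that row-by-row version, reusing the TestDataContext/Table types from the question (the `updates` collection and `failed` list are assumptions for illustration):
var failed = new List<Table>();
foreach (var update in updates)
{
    // A fresh context per row so one failure doesn't poison the rest of the batch
    using (var dc = new TestDataContext())
    {
        dc.Tables.InsertOnSubmit(update);
        try
        {
            dc.SubmitChanges();
        }
        catch (Exception)
        {
            // Record the failed row for later processing instead of aborting the batch
            failed.Add(update);
        }
    }
}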
