Commit behavior on multiple VOs[EO Based] in Application Module - oracle-adf

Let's say I have 2 tables bound to 2 different VOs [EO-based] which allow editing of data.
After editing the data in both tables, I call commit on the data control frame.
Now my understanding of the AM is that it is a unit of work: a collective representation of one piece of functionality [let's say, Create PO].
So can there be a scenario where, while processing the commit, the changes performed on one EO get committed to the database but the second EO throws an error, so the changes for the second EO don't go through?
Or is this scenario itself hypothetical, and if any of the VOs associated with the AM throws an error while committing data [not while validating], will the rest of the posted data also be rolled back by the framework?
Kindly explain, or point to a resource on how multiple VOs are handled by the AM while committing.
Regards

Google is your friend here; there is more material on this than you probably need.
A single AM contains the EO caches for the changed rows. Strictly speaking, tables are not bound to VOs; they are bound to EOs. VOs do not commit work; VOs describe the shape of the data and the query required for retrieving it, and the retrieved rows are then held in the EO cache.
Basically, if one commits an AM, all "dirty" EOs are validated and, if they pass validation, are committed. If any of them fails, they all roll back. This behavior can be changed by, for example, splitting the work across separate root AMs, since each root AM has its own transaction.

Related

NHibernate SaveOrUpdate: how to tell whether a row was updated or not (like ROWCOUNT)

I am updating a column in a SQL table, and I want to check whether it was updated successfully or whether it had already been updated so that my query didn't do anything,
the way @@ROWCOUNT works in SQL Server.
In my case, I want to update a column named lockForProcessing: if the row is already being processed, my query will not affect any rows, which means someone else is already processing it; otherwise I will process it.
If I understand you correctly, your problem is a multi-threading / concurrency problem, where the same table may be updated simultaneously.
You may want to have a look at the NHibernate documentation:
Chapter 11. Transactions And Concurrency
The ISession is not threadsafe!
The entity is not stored the moment the code session.SaveOrUpdate() is executed, but typically after transaction.Commit().
Stored and committed are two different things.
The entity is stored after any session.Flush(). Depending on the IsolationLevel, the entity won't be seen by other transactions.
The entity is committed after a transaction.Commit(). A commit also flushes.
Maybe all you need to do is choose the right IsolationLevel when beginning transactions and then read the table row to get the current value:
using (var transaction = session.BeginTransaction(IsolationLevel.Serializable))
{
    var row = session.Get<MyEntity>(id); // read your row (MyEntity and id stand in for your mapped type and key)
    transaction.Commit();
}
Maybe it is easier to create some locking or pipeline mechanism in your application code though. Without knowing more about who is accessing the database (other transactions, sessions, processes?) it is hard to answer more precisely.
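A minimal sketch of that check, assuming a hypothetical Job entity mapped over the table with the lockForProcessing column mapped as a boolean property; NHibernate's ExecuteUpdate() returns the number of rows affected, which plays the role of @@ROWCOUNT here:

using (var tx = session.BeginTransaction())
{
    // only flips the flag if nobody else has; 0 affected rows means another worker already owns it
    int affected = session.CreateQuery(
            "update Job set LockForProcessing = true where Id = :id and LockForProcessing = false")
        .SetParameter("id", jobId)
        .ExecuteUpdate();

    tx.Commit();

    if (affected == 1)
    {
        // we acquired the lock; process the row
    }
}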

How to capture table level data changes in SQL Server 2008 R2?

I have a high volume of data normalized into more than 100 tables. There are multiple applications which change the underlying data in those tables, and I want to raise events on those changes. Possible options that I know of are:
Change Data Capture
Change Tracking
Using Triggers on each table (bad option but possible)
Can someone who has already done this share the best way of doing it?
What I really want in the end is this: if one transaction affected 12 tables out of the 100, I should be able to bubble up one event instead of 12. Assume there are concurrent users changing these tables.
Two options I can think of:
Triggers ARE the right way to capture change events in the DB layer.
Code-wise, I make sure in my app that each table is changed through only one place in the code, regardless of what the change is (I call it a hub for that table, as it channels many different pathways into one place); it becomes very easy to catch change events that way in the code layer.
One possibility is SQL Server Query Notifications: Using Query Notifications
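A minimal sketch of what that looks like from .NET with SqlDependency (the table and columns are made up; notification queries have restrictions, e.g. two-part table names and an explicit column list):

SqlDependency.Start(connectionString);

using (var conn = new SqlConnection(connectionString))
using (var cmd = new SqlCommand("SELECT OrderId, Status FROM dbo.Orders", conn))
{
    var dependency = new SqlDependency(cmd);
    dependency.OnChange += (sender, e) =>
    {
        // fires once when the result set changes; re-subscribe and re-query here
    };

    conn.Open();
    cmd.ExecuteReader().Dispose();   // executing the command registers the subscription
}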
As long as you want to 'batch' multiple changes, I think you should follow the route of Change Data Capture or Change Tracking (depending on whether you just want to know that something changed or what changes happened).
They should be used from a 'polling' procedure, where you poll for changes every few minutes (seconds? milliseconds?) and raise events. The nice thing about this is that as long as you store the last rowversion from the previous poll, for each table, you can check whenever you like for changes since that poll. You don't rely on a real-time trigger approach where, if it is halted, you lose all events forever. The polling can easily be done inside a procedure that checks each table, and you would need only one more table to store the last rowversion per table.
Also, the overhead of this approach would be controlled by you and by how frequently the polling happens.
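A minimal sketch of that polling loop for one table, assuming a hypothetical dbo.Orders table with a rowversion column named RowVer and a small bookkeeping table (dbo.PollState) holding the last version seen per table:

using (var conn = new SqlConnection(connectionString))
{
    conn.Open();

    // the last rowversion this poller processed for the Orders table
    var lastSeen = (byte[])new SqlCommand(
        "SELECT LastRowVersion FROM dbo.PollState WHERE TableName = 'Orders'", conn).ExecuteScalar();

    using (var cmd = new SqlCommand(
        "SELECT OrderId, Status, RowVer FROM dbo.Orders WHERE RowVer > @last ORDER BY RowVer", conn))
    {
        cmd.Parameters.AddWithValue("@last", lastSeen);
        using (var reader = cmd.ExecuteReader())
        {
            while (reader.Read())
            {
                // raise one event per changed row, or collect the rows and raise a single batched event
            }
        }
    }

    // afterwards, write the highest RowVer that was read back to dbo.PollState
}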

How does the DataContext handle concurrency?

I wonder how the DataContext handles concurrency violations.
For example:
two users fetch some data from the database, then one of them changes a row and commits the change; then the other user tries to commit their changes, so a ChangeConflictException should occur. But how does the DataContext know that the data has changed?
By fetching the data again and comparing? Or via some database notification mechanism?
Concurrency control can be done by using either a timestamp column in the database or the UpdateCheck attribute in LINQ to SQL.
MSDN Concurrency Overview (LINQ-to-SQL)
MSDN How to manage change conflicts (LINQ-to-SQL)
The previous values of all columns that aren't being changed and are marked UpdateCheck are included in the WHERE clause of the generated UPDATE statement. If the update affected 1 row, all is well; if it affected 0 rows (e.g. somebody else changed one of those values, hence the UPDATE couldn't locate the row after filtering), you get the ChangeConflictException.
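A minimal sketch of handling that on the caller's side (the DataContext instance db is assumed; the relevant pieces are SubmitChanges with a ConflictMode and the ChangeConflicts collection):

try
{
    db.SubmitChanges(ConflictMode.ContinueOnConflict);
}
catch (ChangeConflictException)
{
    foreach (ObjectChangeConflict conflict in db.ChangeConflicts)
    {
        // re-read the current database values and keep our pending changes on top of them
        conflict.Resolve(RefreshMode.KeepChanges);
    }
    db.SubmitChanges();   // retry with the refreshed original values
}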
Yes it fetches the data again to validate concurrency.
LINQ to SQL employs optimistic concurrency control, which means L2S checks the state of the data as opposed to locking the data. You can specify which columns L2S should use to determine whether the data has changed; by default it compares every column.
See Understanding LINQ to SQL (9) Concurrent Conflict for an in-depth discussion.

Ensuring Database Integrity when Adding and Deleting

As I am developing my database, I am working to ensure data integrity. So, for example, a Book should be deleted when its Author is deleted from the database (but not vice versa), assuming one author per book.
When I set up the foreign key, I did specify a CASCADE, so I feel like this should happen automatically if I perform a delete from LINQ. Is this true? If not, do I need to perform all the deletes on my own, or how is this accomplished?
My second question, which goes along with that, is: does the database ensure that I have all the appropriate information for a row when I add it to the table (e.g. I can't add a book that doesn't have an author), or do I need to ensure this myself in the business logic? What would happen if I tried to do this using LINQ to SQL? Would I get an exception?
Thanks for the help.
A cascading foreign key will cascade the delete automatically for you.
Referential integrity will be enforced by the database; in this case, you should add the Author first and then the Book. If you violate referential integrity, you will get an exception.
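A minimal sketch of the insert order with LINQ to SQL (the Author and Book classes and the db DataContext are made up for illustration); SubmitChanges works out the dependency and inserts the Author row before the Book row that references it, all inside one transaction:

var author = new Author { Name = "A. Writer" };
var book = new Book { Title = "My First Book", Author = author };   // setting the association also sets the FK

db.Authors.InsertOnSubmit(author);
db.Books.InsertOnSubmit(book);

db.SubmitChanges();   // both rows are inserted, or neither is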
It sounds like for your second question you may be interested in using a transaction: for example, you need to add several objects to the database and want to make sure all of them get added, or none. This is what a database transaction accomplishes. And yes, you should do this in your data/business layer; you can do it by adding partial classes to your DataContext classes, for example if your business process states that every user must have an address. This is up to your scenario.
LINQ to SQL automatically uses a transaction provided you stay within a single using block, i.e. you perform everything in that one step.
If you need to perform multiple steps, or to combine LINQ with non-LINQ database actions, then you can use a transaction scope. You need to enable the Distributed Transaction Coordinator (MSDTC) service; this allows transactions across, for example, files and databases.
See TransactionScope
using (TransactionScope scope = new TransactionScope())
{
    // do stuff here
    scope.Complete();
}

NHibernate session.flush() fails but makes changes

We have a SQL Server database table that consists of user id, some numeric value, e.g. balance, and a version column.
We have multiple threads updating this table's value column in parallel, each in its own transaction and session (we're using a session-per-thread model). Since we want all logical transactions to occur, each thread does the following:
1. Load the current row (mapped to a type).
2. Make the change to the value, based on the old value (e.g. add 50).
3. session.Update(obj)
4. session.Flush() (since we're optimistic, we want to make sure we had the correct version value prior to the update)
5. If step 4 (the flush) threw a StaleStateException, refresh the object (with LockMode.Read) and go to step 1.
We only do this a certain number of times per logical transaction; if we can't commit it after X attempts, we reject the logical transaction.
Each such thread commits periodically, e.g. after 100 successful logical transactions, to keep commit-induced I/O at manageable levels. Meaning: we have a single database transaction (per thread) with multiple flushes, at least one per logical change.
What's the problem here, you ask? Well, on commit we see changes belonging to failed logical transactions.
Specifically, if the value was 50 when we went through step 1 (for the first time), and we tried to update it to 100 (but we failed since e.g. another thread changed it to 70), then the value of 50 is committed for this row. Obviously this is incorrect.
What are we missing here?
Well, I do not have a ton of experience here, but one thing I remember reading in the documentation is that if an exception occurs, you are supposed to immediately roll back the transaction and dispose of the session. Perhaps your issue is related to the session being in an inconsistent state?
Also, calling Update in your code here is not necessary. Since you loaded the object in that session, it is already being tracked by NHibernate.
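A minimal sketch of that advice, assuming a hypothetical Balance entity and the sessionFactory, userId and maxAttempts variables from the surrounding code: on a stale-state failure the transaction is rolled back, the session is disposed, and the whole logical transaction is retried on a fresh session rather than flushed again on the same one:

for (int attempt = 0; attempt < maxAttempts; attempt++)
{
    using (var session = sessionFactory.OpenSession())
    using (var tx = session.BeginTransaction())
    {
        try
        {
            var row = session.Get<Balance>(userId);   // step 1: load the current row
            row.Value += 50;                          // step 2: apply the change
            tx.Commit();                              // flush happens here and the version column is checked
            break;                                    // success, stop retrying
        }
        catch (StaleStateException)
        {
            tx.Rollback();                            // discard everything done in this session
            // fall through: the using blocks dispose the session and the loop retries from scratch
        }
    }
}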
If you want to make your changes anyway, why do you bother with row versioning? It sounds like you should get the same result if you simply always update the data and let the last transaction win.
As to why the update becomes permanent, it depends on what the SQL statements for the version check/update look like and on your transaction control, which you left out of the code example. If you turn on the Hibernate SQL logging it will probably become obvious how this is happening.
I'm not an NHibernate guru, but the answer seems simple.
When NHibernate loads an object, it expects it not to change in the DB as long as it's in the NHibernate session cache.
As you mentioned, you have a multi-threaded app.
This is what happens:
1st thread loads an entity
2nd thread loads an entity
1st thread changes the entity
2nd thread changes the entity and finds out that the loaded entity has been changed by something else; being afraid that it has screwed up the changes the 1st thread made, it throws an exception to let the programmer know.
You are missing a locking mechanism. I can't tell much about how to apply that properly and elegantly; maybe an explicit lock inside the transaction would help.
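A minimal sketch of one way to do that, assuming a hypothetical mapped Balance class; the Get overload that takes a LockMode issues the SELECT with an update lock, so the row stays locked in the database until the transaction ends and no other session can change it in between:

using (var tx = session.BeginTransaction())
{
    var row = session.Get<Balance>(userId, LockMode.Upgrade);   // pessimistic lock on this row
    row.Value += 50;
    tx.Commit();                                                 // releases the lock
}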
We had similar problems when we used NHibernate and raw ADO.NET concurrently (luckily, just for querying, at least in production code). All we had to do was force an update of the DB on insert/update so we could actually query something through full-text search for some specific entities.
We also hit StaleStateException in integration tests when we used raw ADO.NET to reset the DB. The NHibernate session stayed alive through a bunch of tests, but every test tried to clean up the DB without NHibernate being aware of it.
Here is the documentation for exceptions and the session:
http://nhibernate.info/doc/nhibernate-reference/best-practices.html
