Lately I have been using the Castle ActiveRecord framework a lot, and it worked fine until I ran into some strange behaviour: onFlushDirty is triggered twice in some situations. It is even triggered when I make a simple query. This really confuses me, because I create auditing data in onFlushDirty, so merely reading data triggers it and saves identical audit records.
How can I avoid this behaviour?
I think I have resolved my issue, but I don't know why this happens.
First of all, the data in my database has null values in some date and integer fields (the data came from a migration from Btrieve).
When I loaded the data into a datagrid, the onFlushDirty event fired. Debugging and inspecting the data, I saw that a date field that was null in the database had a previousState of null but a currentState of '0001-01-01 00:00:00'. I replaced the nulls in the database with '0001-01-01 00:00:00' for the date fields and zero for the integer fields, and the onFlushDirty event never fired again when I loaded the data into the datagrid.
But the question still remains: why does this happen, and is this behaviour correct?
My guess is that you've told Hibernate (in the mapping for that table, or via an annotation on the entity class) that the date field should not be null, so Hibernate sets a default for you; but since that default differs from the value it retrieved from the database, the row is marked as dirty.
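Whatever the root cause, you can also make the interceptor defensive so a spurious dirty event does not write duplicate audit rows. A minimal sketch (Hibernate/Java shown, since that's where the interceptor API comes from; the NHibernate signature is analogous, and the audit write and the default-value check below are placeholders for your own code):

import java.io.Serializable;
import java.util.Objects;
import org.hibernate.EmptyInterceptor;
import org.hibernate.type.Type;

public class AuditInterceptor extends EmptyInterceptor {
    @Override
    public boolean onFlushDirty(Object entity, Serializable id,
                                Object[] currentState, Object[] previousState,
                                String[] propertyNames, Type[] types) {
        for (int i = 0; i < propertyNames.length; i++) {
            Object prev = (previousState == null) ? null : previousState[i];
            Object curr = currentState[i];
            if (effectivelyEqual(prev, curr)) {
                continue; // no real change: nothing to audit
            }
            // Stand-in for the real audit write:
            System.out.printf("audit %s#%s: %s %s -> %s%n",
                    entity.getClass().getSimpleName(), id, propertyNames[i], prev, curr);
        }
        return false; // we did not modify currentState
    }

    private static boolean effectivelyEqual(Object prev, Object curr) {
        // Treat a database NULL that the framework replaced with a default
        // (e.g. '0001-01-01 00:00:00' or zero) as "no real change".
        return Objects.equals(prev, curr) || (prev == null && isSubstitutedDefault(curr));
    }

    private static boolean isSubstitutedDefault(Object value) {
        // Placeholder: recognise your framework's NULL substitutes here,
        // e.g. the minimum date for date fields and zero for integer fields.
        return false;
    }
}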
I've also run into the situation where Hibernate flushes the Session several times before committing, which causes any logic in onFlushDirty to be run more times than I wanted. This was happening because when the session is set to FlushMode.AUTO, the session is flushed prior to retrieval in order to include recent uncommitted changes. This is by design, so I began committing immediately after making changes to be audited.
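If you can't restructure your commits, another hedged option (Hibernate/Java shown; NHibernate exposes the same FlushMode concept) is to flush only at commit time, so queries don't trigger intermediate flushes:

Session session = sessionFactory.openSession();
session.setFlushMode(FlushMode.COMMIT); // the default, FlushMode.AUTO, flushes before queries
Transaction tx = session.beginTransaction();
// ... make the changes to be audited ...
tx.commit(); // the flush, and therefore onFlushDirty, should happen once, here

The trade-off is that queries inside the transaction will not see your uncommitted session changes.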
In an ASP.NET MVC application using EF 6 with SQL Server, when updating a table to change one particular row, it takes a very long time (10 minutes plus, and only sometimes the change ultimately gets through).
However, using the same web page to update any other row in the same table, it's immediate. Also, when I open SQL Server Management Studio and use an update query to update that specific row, it's immediate as well, and so is changing the row through the Edit Top 200 Records functionality.
The table in question holds various statuses used for keeping track of record processing (there are 23 records in the table). It has an ID, which is the primary key (the only column referenced by other tables), plus Name and Description columns. I'm changing the description in the example above.
As the row I'm changing is for the OK status, which is the most used one, the only thing I could come up with is that somehow all records referencing this status are also updated, or at least checked. But besides the fact that this is not exactly how relational databases work, that would still not explain why the update is immediate when I use a query in SSMS. Hence my assumption that this is somehow caused by EF doing or checking something in the background.
Unfortunately this is on a production environment where I have very limited access or debugging options. On the TEST and ACCEPTANCE environments it is working normally.
Any ideas what might cause this behavior?
Thanks, Patrick
Thanks all for taking the time to try and help me out here. I managed to get some debug messages into the controller code, and it turns out that the controller method called by the page submit is not even hit most of the time. I don't see any differences in the generated HTML between the view for the offending record and the views for any of the other records, so it still strikes me as weird that the same page seems to act differently for only one specific record, but at least now I know I have to look for the answer in ASP.NET MVC, and not in EF or the database.
Thanks again!
I'm new to Flink and I'm trying to use it to have a bunch of live views of my application. At least one of the dynamic views I'd like to build would show entries that have not met an SLA -- essentially expired -- and the condition for this would be a simple timestamp comparison. So I basically want an entry to show up in my dynamic table if it has NOT been touched by an event recently. Playing around with Flink 1.6 (I'm constrained to this version by AWS Kinesis) in a dev environment, I'm not seeing Flink re-evaluate a condition unless an event touches that entry.
I've got my dev environment plugged into a Kinesis stream that's sending in live access log events from a web server. This isn't my real use case but it was an easy one to begin testing with. I've written a simple table query that pulls in a request path, its last access time, and computes a boolean flag to indicate whether it hasn't been accessed in the last minute. I'm debugging this via a retract stream connected to PrintSinkFunction so all updates/deletes are printed to my console.
tEnv.registerDataStream("AccessLogs", accessLogs, "username, status, request, responseSize, referrer, userAgent, requestTime, ActionTime.rowtime");
Table paths = tEnv.sqlQuery("SELECT request AS path, MAX(requestTime) as lastTime, CASE WHEN MAX(requestTime) < CURRENT_TIMESTAMP - INTERVAL '1' MINUTE THEN 1 ELSE 0 END AS expired FROM AccessLogs GROUP BY request");
DataStream<Tuple2<Boolean, Row>> retractStream = tEnv.toRetractStream(paths, Row.class);
retractStream.addSink(new PrintSinkFunction<>());
I expect that when I access a page, an Add event is sent to this stream. Then if I wait 1 minute (do nothing), the CASE statement in my table will evaluate to 1, so I should see a Delete and then Add event with that flag set.
What I actually see is that nothing happens until I load that page again. The Delete event actually has the flag set, while the Add event that immediately follows has it cleared again (as it should, since it's no longer "expired").
// add/delete, path, lastAccess, expired
(true,/mypage,2019-05-20 20:02:48.0,0) // first page load, add event
(false,/mypage,2019-05-20 20:02:48.0,1) // second load > 2 mins later, remove event for the entry with expired flag set
(true,/mypage,2019-05-20 20:05:01.0,0) // second load, add event
Edit: The most useful tip I've come across in my searching is to create a ProcessFunction. I think this is something I could make work with my dynamic tables (in some cases I'd end up with intermediate streams to look at computed dates), but hopefully it doesn't have to come to that.
I've gotten the ProcessFunction approach to work, but it required a lot more tinkering than I initially thought it would (a rough sketch follows the list below):
I had to add a field to my POJO that changes in the onTimer() method (could be a date or a version that you simply bump each time)
I had to register this field as part of the dynamic table
I had to use this field in my query in order for the query to get re-evaluated and change the boolean flag (even though I don't actually use its value). I just added it to my SELECT clause.
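For reference, here is a rough sketch of the shape this took (assuming a POJO AccessLog with a mutable version field, and a stream keyed by request so that keyed state and timers are available; all names are illustrative):

import org.apache.flink.api.common.state.ValueState;
import org.apache.flink.api.common.state.ValueStateDescriptor;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.functions.ProcessFunction;
import org.apache.flink.util.Collector;

// Must run on a keyed stream (e.g. accessLogs.keyBy("request")),
// because it uses keyed state and timers.
public class ExpiryBump extends ProcessFunction<AccessLog, AccessLog> {

    private transient ValueState<AccessLog> lastSeen;

    @Override
    public void open(Configuration parameters) {
        lastSeen = getRuntimeContext().getState(
                new ValueStateDescriptor<>("lastSeen", AccessLog.class));
    }

    @Override
    public void processElement(AccessLog event, Context ctx, Collector<AccessLog> out) throws Exception {
        lastSeen.update(event);
        // Wake up one minute after this event's (row)time, so the table gets
        // a new row even if no further events arrive for this key.
        ctx.timerService().registerEventTimeTimer(ctx.timestamp() + 60_000L);
        out.collect(event);
    }

    @Override
    public void onTimer(long timestamp, OnTimerContext ctx, Collector<AccessLog> out) throws Exception {
        AccessLog last = lastSeen.value();
        if (last != null) {
            last.version++; // bump the extra field so the SQL query re-evaluates
            out.collect(last);
        }
    }
}

The output stream of this function is what I then register as the dynamic table, so the bumped version field forces the re-evaluation.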
Your approach looks promising but a comparison with a moving "now" timestamp is not supported by Flink's Table API / SQL (yet).
I would solve this in two steps.
Register the dynamic table in upsert mode, i.e., as a table that is upserted per key (request in your case) based on a version timestamp (requestTime in your case). The resulting dynamic table would hold the latest row for every request.
Run a query with a simple filter predicate like yours that compares the version timestamp of the rows of the dynamic (upsert) table and filters out all rows whose timestamps are too close to now.
Unfortunately, neither of these features (upsert conversion and comparison against the moving "now" timestamp) is available in Flink yet. There is some ongoing work on upsert table conversion, though.
I am using an Oracle ADF page to update data in a table. My entity object is based on the table, but I want the DML (inserts, updates, deletes) to go through a package procedure in the database instead of using the default DML generated by the ADF framework.
To accomplish this, I am following Oracle's documentation, found here: http://docs.oracle.com/cd/E23943_01/web.1111/b31974/bcadveo.htm#ADFFD1129
This all works fine. The problem is that the default ADF DML processing automatically refreshes the entity row after writing it, either with a RETURNING INTO clause or with a separately issued SELECT statement (depending on the value of isUseReturningClause() in the EntityDefImpl object). This is done so that the application front end gets updated in case the row was modified by the database during the DML process (e.g., a BEFORE ROW trigger changes values).
But when I override doDML() to replace the default framework DML with a call to my package procedure, it no longer refreshes automatically, even if isUseReturningClause() returns false.
I tried adding code to my doDML() implementation to requery afterwards, but it didn't work (maybe I didn't do it correctly). Oracle's documentation doesn't say anything about having to do that, though.
Does anyone know how to accomplish this?
Update
I went back to my attempt to have doDML() refresh afterwards by calling doSelect(), and it works. My original attempt didn't work because doSelect() wasn't sending notifications of its changes.
Still, I'm concerned that this isn't how Oracle's documentation says to do it, so I have no idea whether this is correct, a kludge, or a plain bad idea. My original question still stands.
I logged an SR with Oracle. Their response was that if you override doDML() and do not call super.doDML() then you lose the automatic refresh functionality of the framework.
They wouldn't comment on my solution, which was to call doSelect(false) after any inserts or updates in my doDML() override. Their policy is that if you want advice on customizations, you should engage Oracle Consulting.
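For anyone finding this later, the shape of my override is roughly this (a sketch only; callStoredProcedure is the helper from the Oracle documentation linked above, and the package, procedure and attribute names are placeholders):

@Override
protected void doDML(int operation, TransactionEvent e) {
    if (operation == DML_INSERT) {
        callStoredProcedure("my_pkg.insert_row(?,?)",
                new Object[] { getAttribute("Id"), getAttribute("Name") });
    } else if (operation == DML_UPDATE) {
        callStoredProcedure("my_pkg.update_row(?,?)",
                new Object[] { getAttribute("Id"), getAttribute("Name") });
    } else if (operation == DML_DELETE) {
        callStoredProcedure("my_pkg.delete_row(?)",
                new Object[] { getAttribute("Id") });
    }
    // Requery so the entity picks up anything the database changed
    // (e.g. a BEFORE ROW trigger); false means no row lock.
    if (operation == DML_INSERT || operation == DML_UPDATE) {
        doSelect(false);
    }
}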
I faced this case in the ADF world and solved it in a simple way.
It seems like a bug at first, but I can explain what the expected behaviour should be:
values which are set in the background (the Model layer in the MVC approach) should not be editable by the user.
The solution, in one word: add the property disabled="true" to the af:inputText.
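For example (the binding name is illustrative):

<af:inputText value="#{bindings.SomeAttribute.inputValue}" disabled="true"/>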
I have a high volume of data normalized into more than 100 tables. There are multiple applications that change the underlying data in those tables, and I want to raise events on those changes. The possible options I know of are:
Change Data Capture
Change Tracking
Using Triggers on each table (bad option but possible)
Can someone who has already done this share the best way of doing it?
What I really want in the end is this: if one transaction affected 12 tables out of the 100, I should be able to bubble up one event instead of 12. Assume there are concurrent users changing these tables.
Two options I can think of:
Triggers ARE the right way to capture change events in the DB layer
Code-wise, I make sure in my app that each table is changed through only one place in the code, regardless of what the change is. I call that place a hub for the table, as it channels many different pathways into one spot, and it becomes very easy to catch change events that way in the code layer (a rough sketch follows).
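A minimal sketch of such a hub in Java/JDBC (all names are illustrative; the point is just that exactly one code path writes the table, so there is exactly one place to raise events):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.function.Consumer;
import javax.sql.DataSource;

public class OrdersHub {
    private final DataSource ds;
    private final List<Consumer<Long>> listeners = new CopyOnWriteArrayList<>();

    public OrdersHub(DataSource ds) { this.ds = ds; }

    public void onChange(Consumer<Long> listener) { listeners.add(listener); }

    // The only code path in the app that updates Orders.Status.
    public void updateStatus(long orderId, String status) throws SQLException {
        try (Connection c = ds.getConnection();
             PreparedStatement ps = c.prepareStatement(
                     "UPDATE Orders SET Status = ? WHERE Id = ?")) {
            ps.setString(1, status);
            ps.setLong(2, orderId);
            ps.executeUpdate();
        }
        listeners.forEach(l -> l.accept(orderId)); // the single place events are raised
    }
}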
One possibility is SQL Server Query Notifications: Using Query Notifications
As long as you want to 'batch' multiple changes, I think you should follow the route of Change Data Capture or Change Tracking (depending on whether you just want to know that something changed or what changes happened).
They should be used by a 'polling' procedure that polls for changes every few minutes (seconds? milliseconds?) and raises events. The nice thing about this is that, as long as you store the last rowversion of the previous poll for each table, you can check for changes since the last poll whenever you like. You don't depend on a real-time trigger approach which, if it were ever halted, would lose all events forever. The check could easily be implemented in a stored procedure that examines each table, and you would need only one more table to store the last rowversion per table.
Also, the overhead of this approach would be controlled by you and by how frequently the polling happens.
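To illustrate the polling idea, a hedged sketch in Java/JDBC (table, column and helper names are made up; raiseChangeEvent and saveLastVersion stand in for your own event and bookkeeping code):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.Arrays;

// Poll one table for rows changed since the last poll, using its rowversion column.
void pollTable(Connection conn, String table, byte[] lastVersion) throws SQLException {
    byte[] maxSeen = lastVersion;
    try (PreparedStatement ps = conn.prepareStatement(
            "SELECT Id, RowVer FROM " + table + " WHERE RowVer > ?")) {
        ps.setBytes(1, lastVersion);
        try (ResultSet rs = ps.executeQuery()) {
            while (rs.next()) {
                raiseChangeEvent(table, rs.getLong("Id")); // your event mechanism
                byte[] v = rs.getBytes("RowVer");
                if (Arrays.compareUnsigned(v, maxSeen) > 0) { // Java 9+
                    maxSeen = v;
                }
            }
        }
    }
    saveLastVersion(table, maxSeen); // persist for the next poll
}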
We have a SQL Server database table that consists of user id, some numeric value, e.g. balance, and a version column.
We have multiple threads updating this table's value column in parallel, each in its own transaction and session (we're using a session-per-thread model). Since we want all logical transactions to be applied, each thread does the following (a rough sketch follows the list):
load the current row (mapped to a type).
make the change to the value, based on old value. (e.g. add 50).
session.update(obj)
session.flush() (since we're optimistic, we want to make sure we had the correct version value prior to the update)
if step 4 (flush) threw StaleStateException, refresh the object (with LockMode.Read) and go to step 1
we only do this a certain number of times per logical transaction; if we can't commit it after X attempts, we reject the logical transaction
each such thread commits periodically, e.g. after 100 successful logical transactions, to keep commit-induced I/O at manageable levels. Meaning: we have a single database transaction per batch, with multiple flushes, at least one per logical change.
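For concreteness, a rough sketch of one logical transaction (Hibernate/Java syntax for illustration; the entity, the amount, and MAX_ATTEMPTS, the "X" above, are made up):

boolean applyChange(Session session, long id) {
    for (int attempt = 0; attempt < MAX_ATTEMPTS; attempt++) {
        Account acc = (Account) session.get(Account.class, id); // step 1: load the current row
        acc.setBalance(acc.getBalance() + 50);                  // step 2: change based on the old value
        session.update(acc);                                    // step 3
        try {
            session.flush();                                    // step 4: the version is checked here
            return true;
        } catch (StaleStateException e) {
            session.refresh(acc, LockMode.READ);                // step 5: refresh, then retry
        }
    }
    return false; // step 6: reject the logical transaction
}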
What's the problem here, you ask? Well, on commit we see changes to objects whose logical transactions failed.
Specifically, if the value was 50 when we went through step 1 (for the first time), and we tried to update it to 100 but failed (because, e.g., another thread changed it to 70), then the value of 50 is committed for this row. Obviously this is incorrect.
What are we missing here?
Well, I don't have a ton of experience here, but one thing I remember reading in the documentation is that if an exception occurs, you are supposed to immediately roll back the transaction and dispose of the session. Perhaps your issue is related to the session being in an inconsistent state?
Also, calling update in your code here is not necessary. Since you loaded the object in that session, it is already being tracked by NHibernate.
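The recommended shape, as far as I understand it (Hibernate/Java shown; NHibernate is analogous, and the entity is illustrative):

Session session = sessionFactory.openSession();
Transaction tx = null;
try {
    tx = session.beginTransaction();
    Account acc = (Account) session.get(Account.class, accountId);
    acc.setBalance(acc.getBalance() + 50);
    // no session.update(acc) needed: the loaded object is already tracked
    tx.commit(); // flush and version check happen here
} catch (RuntimeException e) {
    if (tx != null) tx.rollback();
    throw e; // if you retry, do it with a brand-new session
} finally {
    session.close();
}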
If you want to make your changes anyway, why do you bother with row versioning? It sounds like you should get the same result if you simply always update the data and let the last transaction win.
As to why the update becomes permanent: it depends on what the SQL statements for the version check/update look like and on your transaction control, which you left out of the code example. If you turn on NHibernate's SQL logging, it will probably become obvious how this is happening.
I'm not an NHibernate guru, but the answer seems simple.
When NHibernate loads an object, it expects the object not to change in the database for as long as it sits in NHibernate's session cache.
As you mentioned, you have a multi-threaded app.
This is what happens:
1st thread loads an entity
2nd thread loads the same entity
1st thread changes the entity
2nd thread changes the entity and finds out that the entity it loaded has been changed by something else; afraid that it would clobber the changes the 1st thread made, it throws an exception to make the programmer aware of that.
You are missing a locking mechanism. I can't tell you much about how to apply one properly and elegantly; maybe an explicit transaction would help.
We had similar problems when we used NHibernate and raw ADO.NET concurrently (luckily just for querying, at least in production code). All we had to do was force the database to be updated on insert/update so we could actually query some specific entities through full-text search.
We also got StaleStateException in integration tests when we used raw ADO.NET to reset the database. The NHibernate session stayed alive through a bunch of tests, but every test tried to clean up the database without NHibernate being aware of it.
Here is the documentation for exceptions in the session:
http://nhibernate.info/doc/nhibernate-reference/best-practices.html