Overridden doDML() in EntityImpl does not refresh attributes - oracle-adf

I am using an Oracle ADF page to update data in a table. My entity object is based on the table, but I want the DML (inserts, updates, deletes) to go through a package procedure in the database instead of using the default DML generated by the ADF framework.
To accomplish this, I am following Oracle's documentation, found here: http://docs.oracle.com/cd/E23943_01/web.1111/b31974/bcadveo.htm#ADFFD1129
This all works fine. The problem is, the default ADF DML processing will automatically refresh the entity row after writing it, either with a RETURNING INTO clause or with a separately issued SELECT statement (depending on the value of isUseReturningClause() in the EntityDefImpl object). This is done so that the application front end gets updated in case the row was modified by the database during the DML process (e.g., a BEFORE ROW trigger changes values).
But, when I override doDML() to replace the default framework DML with a call to my package procedure, it no longer automatically refreshes, even if isUseReturningClause() returns false.
I tried adding code to my doDML() implementation to requery afterwards, but it didn't work (maybe I didn't do it correctly). Oracle's documentation doesn't say anything about having to do that, though.
Does anyone know how to accomplish this?
Update
I went back to my attempt to have doDML() refresh afterwards by calling doSelect(), and it works. My original attempt failed because doSelect() wasn't sending notifications of its changes.
Still, I'm concerned that this isn't how Oracle's documentation says to do it so I have no idea if this is correct or a kludge or a plain bad idea. So, my original question still stands.

I logged an SR with Oracle. Their response was that if you override doDML() and do not call super.doDML() then you lose the automatic refresh functionality of the framework.
They wouldn't comment on my solution, which was to call doSelect(false) after any inserts or updates in my doDML() override. Their policy is that if you want advice on customizations, you should engage Oracle Consulting.
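For reference, here is a stripped-down sketch of what the override ends up looking like. The package procedure MY_PKG.SAVE_ROW and the attribute names are placeholders rather than the real ones, so treat it as an illustration of the approach, not a drop-in implementation:

import java.sql.CallableStatement;
import java.sql.SQLException;
import oracle.jbo.JboException;
import oracle.jbo.server.EntityImpl;
import oracle.jbo.server.TransactionEvent;

public class MyEntityImpl extends EntityImpl {

    @Override
    protected void doDML(int operation, TransactionEvent e) {
        if (operation == DML_INSERT || operation == DML_UPDATE) {
            callSaveProcedure();
            // Re-read the row so values set by database triggers show up in the
            // entity cache; passing false means the row is not locked again.
            doSelect(false);
        } else if (operation == DML_DELETE) {
            callDeleteProcedure();
        }
    }

    private void callSaveProcedure() {
        CallableStatement st = null;
        try {
            // MY_PKG.SAVE_ROW is a placeholder for the real package procedure.
            st = getDBTransaction().createCallableStatement(
                    "BEGIN MY_PKG.SAVE_ROW(:1, :2); END;", 0);
            st.setObject(1, getAttribute("Id"));
            st.setObject(2, getAttribute("Name"));
            st.executeUpdate();
        } catch (SQLException ex) {
            throw new JboException(ex);
        } finally {
            if (st != null) {
                try { st.close(); } catch (SQLException ex) { /* ignore */ }
            }
        }
    }

    private void callDeleteProcedure() {
        // An analogous CallableStatement for a MY_PKG.DELETE_ROW procedure
        // would go here.
    }
}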

In the ADF world I have faced this case, and I solved it in a simple way.
At first sight it looks like a bug here, but I can explain what the expected behaviour should be:
values which are set in the background (in the Model layer of the MVC approach) should not be editable by the user.
The solution in one line: set the property disabled="true" on the af:inputText.

Related

Entity Framework 6 update of table takes very long for one of the records

In an ASP.NET MVC application using EF 6 with SQL Server, when updating a table to change one particular row, it takes a very long time (10 minutes plus, and only sometimes the change ultimately gets through).
However, using the same web page to update any other row in the same table, it's immediate. Also, when I open SQL Server Management Studio and use an update query to update that specific row, it's immediate as well, and so is changing the row through the Edit Top 200 Records functionality.
The table in question holds various statuses used for keeping track of record processing (there are 23 records in the table). It has an ID, which is the primary key (the only column referenced by other tables), plus Name and Description columns. I'm changing the description in the example above.
As the row I'm changing is for the OK status, which is the most used one, the only thing I could come up with is that somehow all records referencing this status are also updated or at least checked, but besides the fact that this is not exactly how relational databases work, that would also still not explain why the update is immediate when I use a query in SSMS. Hence my assumption that this is somehow caused by EF doing or checking something in the background.
Unfortunately this is on a production environment where I have very limited access or debugging options. On the TEST and ACCEPTANCE environments it is working normally.
Any ideas what might cause this behavior?
Thanks, Patrick
Thanks all for taking the time to try and help me out here. I managed to get some debug messages in the controller code, and it turns out that the controller method called by the page submit is not even hit most of the time. I don't see any differences in the generated HTML between the view for the offending record and the views of any of the other records, so it still strikes me as weird that the same page seems to act differently with only 1 specific record, but at least now I know I have to look for the answer in ASP/MVC, and not EF or the db.
Thanks again!

How to avoid committing changes to the database but still select the modified data

I have a database with multiple tables, and the user can change the data in those tables.
My problem is that I don't want anything to change in the database until the user clicks the "save" button, and even when he does, only the table he decided to save should be committed.
In the meantime, though, the user must be able to see all the changes he has made: every SELECT must return the modified data, not the original data.
How can I, on the one hand, not commit the data to the database, and on the other hand show the modified data to the user?
I thought of opening a transaction and not committing it (and using read uncommitted), but then I must not close the connection (if I close it without committing, all the changes are cancelled), and I don't want to leave several connections open.
I also thought of building a list of all the changes and, whenever the user runs a select, searching that list first. But that is very complicated, and I would prefer a simple solution.
Thank you
This is going to be very tricky to handle as you've insisted that you cannot use transactions.
The best I can suggest is to add columns to each table to represent the state - but even then it's going to be tricky to ensure userA sees the pre-change data and userB the changed-but-not-yet-committed data.
Perhaps you could look at using two tables and have a view selecting the pertinent data from both depending on the requirements.
Either way it's a nasty way to go about it and not very performant.
The moment you insisted you couldn't use a transaction is the moment you took away any chance of a simple answer.
A temporary table won't help here (as suggested in another answer) as it's tied to the connection, which you state will be closed. The only alternative is a global temporary table, but that also leads to issues (who creates it, what if you're the last connection to use it, checking whether it exists, etc.).
You can use temporary tables to store the temporary data and then move it over when needed.

For Oracle Database, how to find when a row was inserted? (timestamp) [duplicate]

Can I find out when the last INSERT, UPDATE or DELETE statement was performed on a table in an Oracle database and if so, how?
A little background: The Oracle version is 10g. I have a batch application that runs regularly, reads data from a single Oracle table and writes it into a file. I would like to skip this if the data hasn't changed since the last time the job ran.
The application is written in C++ and communicates with Oracle via OCI. It logs into Oracle with a "normal" user, so I can't use any special admin stuff.
Edit: Okay, "Special Admin Stuff" wasn't exactly a good description. What I mean is: I can't do anything besides SELECTing from tables and calling stored procedures. Changing anything about the database itself (like adding triggers) is sadly not an option if I want to get it done before 2010.
I'm really late to this party but here's how I did it:
SELECT SCN_TO_TIMESTAMP(MAX(ora_rowscn)) from myTable;
It's close enough for my purposes.
Since you are on 10g, you could potentially use the ORA_ROWSCN pseudocolumn. That gives you an upper bound of the last SCN (system change number) that caused a change in the row. Since this is an increasing sequence, you could store off the maximum ORA_ROWSCN that you've seen and then look only for data with an SCN greater than that.
By default, ORA_ROWSCN is actually maintained at the block level, so a change to any row in a block will change the ORA_ROWSCN for all rows in the block. This is probably quite sufficient if the intention is to minimize the number of rows you process multiple times with no changes if we're talking about "normal" data access patterns. You can rebuild the table with ROWDEPENDENCIES which will cause the ORA_ROWSCN to be tracked at the row level, which gives you more granular information but requires a one-time effort to rebuild the table.
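A rough sketch of how that might look in practice (the table name and bind variable are placeholders):

-- Remember the highest SCN seen on this run:
SELECT MAX(ORA_ROWSCN) FROM my_table;

-- On the next run, fetch only rows whose SCN is newer than the stored value.
-- Without ROWDEPENDENCIES the SCN is block-level, so some unchanged rows may
-- still come back; that's the "upper bound" caveat mentioned above.
SELECT * FROM my_table WHERE ORA_ROWSCN > :last_seen_scn;

-- Optional one-time rebuild to get row-level tracking:
-- CREATE TABLE my_table_new ROWDEPENDENCIES AS SELECT * FROM my_table;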
Another option would be to configure something like Change Data Capture (CDC) and to make your OCI application a subscriber to changes to the table, but that also requires a one-time effort to configure CDC.
Ask your DBA about auditing. He can start an audit with a simple command like:
AUDIT INSERT ON user.table
Then you can query the table USER_AUDIT_OBJECT to determine if there has been an insert on your table since the last export.
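For example, something along these lines (this assumes the standard audit trail views; the table name and bind variable are placeholders):

SELECT COUNT(*)
FROM   user_audit_object
WHERE  obj_name    = 'MY_TABLE'
AND    action_name = 'INSERT'
AND    timestamp   > :last_run_time;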
google for Oracle auditing for more info...
SELECT * FROM all_tab_modifications;
Could you run a checksum of some sort on the result and store that locally? Then when your application queries the database, you can compare its checksum and determine if you should import it?
It looks like you may be able to use the ORA_HASH function to accomplish this.
Update: Another good resource: 10g’s ORA_HASH function to determine if two Oracle tables’ data are equal
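A rough illustration of the checksum idea (column names are placeholders; this assumes the concatenated columns represent the row content you care about, with NVL guarding against NULLs):

SELECT SUM(ORA_HASH(id || '|' || NVL(name, '~') || '|' || NVL(description, '~'))) AS table_checksum
FROM   my_table;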
Oracle can watch tables for changes and, when a change occurs, can execute a callback function in PL/SQL or OCI. The callback gets an object that contains a collection of the tables which changed, each with a collection of the rowids which changed and the type of action (insert, update, or delete).
So you don't even go to the table, you sit and wait to be called. You'll only go if there are changes to write.
It's called Database Change Notification. It's much simpler than CDC as Justin mentioned, but both require some fancy admin stuff. The good part is that neither of these require changes to the APPLICATION.
The caveat is that CDC is fine for high volume tables, DCN is not.
If auditing is enabled on the server, simply use
SELECT *
FROM ALL_TAB_MODIFICATIONS
WHERE TABLE_NAME IN ()
You would need to add a trigger on insert, update, delete that sets a value in another table to sysdate.
When you run the application, it would read that value and save it somewhere so that the next time it runs it has a reference to compare against.
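A minimal sketch of that idea (Oracle syntax; my_table and the tracking table name are made up for illustration):

CREATE TABLE my_table_last_change (last_change DATE);
INSERT INTO my_table_last_change VALUES (SYSDATE);

CREATE OR REPLACE TRIGGER my_table_track_change
AFTER INSERT OR UPDATE OR DELETE ON my_table
BEGIN
  UPDATE my_table_last_change SET last_change = SYSDATE;
END;
/

-- The batch job then reads the stored date and compares it to the value it
-- saved on its previous run:
SELECT last_change FROM my_table_last_change;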
Would you consider that "Special Admin Stuff"?
It would be better to describe what you're actually doing so you get clearer answers.
How long does the batch process take to write the file? It may be easiest to let it go ahead and then compare the file against a copy of the file from the previous run to see if they are identical.
If anyone is still looking for an answer, they can use the Oracle Database Change Notification feature that comes with Oracle 10g. It requires the CHANGE NOTIFICATION system privilege. You can register listeners that receive a notification back in the application when the data changes.
Please use the below statement
select * from all_objects ao where ao.OBJECT_TYPE = 'TABLE' and ao.OWNER = 'YOUR_SCHEMA_NAME'

Fire triggers on SELECT

I'm new to triggers and I need to fire a trigger when selecting values from a database table in SQL Server. I have tried firing triggers on insert, update and delete. Is there any way to fire a trigger when selecting values?
There are only two ways I know of that you can do this, and neither is a trigger.
You can use a stored procedure to run the query and log the query, along with any other information you'd like to know, to a table.
You can use the audit feature of SQL Server.
I've never used the latter, so I can't speak of the ease of use.
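If you go the stored procedure route, a minimal sketch might look something like this (SQL Server syntax; the table, procedure and column names are invented for illustration):

CREATE TABLE dbo.SelectLog (
    LogId     INT IDENTITY(1,1) PRIMARY KEY,
    LoginName SYSNAME       NOT NULL,
    LoggedAt  DATETIME2     NOT NULL DEFAULT SYSUTCDATETIME(),
    Filter    NVARCHAR(200) NULL
);
GO

CREATE PROCEDURE dbo.GetCustomers
    @City NVARCHAR(100) = NULL
AS
BEGIN
    SET NOCOUNT ON;

    -- Record who asked and what they asked for...
    INSERT INTO dbo.SelectLog (LoginName, Filter)
    VALUES (SUSER_SNAME(), @City);

    -- ...then return the data as usual.
    SELECT CustomerId, Name, City
    FROM   dbo.Customer
    WHERE  (@City IS NULL OR City = @City);
END
GO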
No, there is no provision for a trigger on a SELECT operation. As suggested in an earlier answer, write a stored procedure which takes the parameters that are fed to the SELECT query, and call this procedure along with the desired SELECT query.
SpectralGhost's answer assumes you are trying to do something like a security audit of who or what has looked at which data.
But it strikes me that if you are new enough to SQL not to know that a SELECT trigger is conceptually daft, you may be trying to do something else, in which case you're really talking about locking rather than auditing - i.e. once one process has read a particular record you want to prevent other processes from accessing it (or possibly some other related records in a different table) until the transaction is either committed or rolled back. In that case, triggers are definitely not your solution (they rarely are). See BOL on transaction control and locking.

How to track data changes in a database table

What is the best way to track changes in a database table?
Imagine you have an application in which users (in the context of the application, not DB users) are able to change data which is stored in some database table. What's the best way to keep a history of all those changes, so that you can show which user changed which data, when, and how?
In general, if your application is structured into layers, have the data access tier call a stored procedure on your database server to write a log of the database changes.
In languages that support such a thing aspect-oriented programming can be a good technique to use for this kind of application. Auditing database table changes is the kind of operation that you'll typically want to log for all operations, so AOP can work very nicely.
Bear in mind that logging database changes will create lots of data and will slow the system down. It may be sensible to use a message-queue solution and a separate database to perform the audit log, depending on the size of the application.
It's also perfectly feasible to use stored procedures to handle this, although there may be a bit of work involved passing user credentials through to the database itself.
You've got a few issues here that don't relate well to each other.
At the basic database level you can track changes by having a separate table that gets an entry added to it via triggers on INSERT/UPDATE/DELETE statements. That's the general way of tracking changes to a database table.
The other thing you want is to know which user made the change. Generally your triggers wouldn't know this. I'm assuming that if you want to know which user changed a piece of data then its possible that multiple users could change the same data.
There is no right way to do this, you'll probably want to have a separate table that your application code will insert a record into whenever a user updates some data in the other table, including user, timestamp and id of the changed record.
Make sure to use a transaction so you don't end up with cases where the update gets done without the insert (or, if you do them in the opposite order, the insert without the update).
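A minimal sketch of that pattern (SQL Server syntax; the tables and variables are made up for illustration):

DECLARE @PersonId INT = 42,                          -- values would come from the application
        @NewName  NVARCHAR(100) = N'New name',
        @UserName SYSNAME = SUSER_SNAME();

BEGIN TRANSACTION;

-- The actual data change...
UPDATE dbo.Person
SET    Name = @NewName
WHERE  PersonId = @PersonId;

-- ...and its audit record, committed (or rolled back) together.
INSERT INTO dbo.PersonChangeLog (PersonId, ChangedBy, ChangedAt)
VALUES (@PersonId, @UserName, SYSUTCDATETIME());

COMMIT TRANSACTION;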
One method I've seen quite often is to have audit tables. Then you can show just what changed, what it changed from and to, or whatever your heart desires :) Then you could write up a trigger to do the actual logging. Not too painful if done properly...
No matter how you do it, though, it kind of depends on how your users connect to the database. Are they using a single application user via a security context within the app, are they connecting using their own accounts on the domain, or does the app just have everyone connecting with a generic sql-account?
If you aren't able to get the user info from the database connection, it's a little more of a pain. And then you might look at doing the logging within the app, so if you have a process called "CreateOrder" or whatever, you can log to the Order_Audit table or whatever.
Doing it all within the app opens yourself up a little more to changes made from outside of the app, but if you have multiple apps all using the same data and you just wanted to see what changes were made by yours, maybe that's what you wanted... <shrug>
Good luck to you, though!
--Kevin
In researching this same question, I found a discussion here very useful. It suggests having a parallel table set for tracking changes, where each change-tracking table has the same columns as what it's tracking, plus columns for who changed it, when, and if it's been deleted. (It should be possible to generate the schema for this more-or-less automatically by using a regexed-up version of your pre-existing scripts.)
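For illustration, a change-tracking twin for a hypothetical Person table might look like this (SQL Server syntax; names are made up):

CREATE TABLE dbo.Person_History (
    PersonId   INT           NOT NULL,            -- same columns as dbo.Person...
    Name       NVARCHAR(100) NULL,
    ChangedBy  SYSNAME       NOT NULL,             -- ...plus who changed it,
    ChangedAt  DATETIME2     NOT NULL DEFAULT SYSUTCDATETIME(),  -- when,
    IsDeleted  BIT           NOT NULL DEFAULT 0    -- and whether it was a delete
);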
Suppose I have a Person Table with 10 columns which include PersonSid and UpdateDate. Now, I want to keep track of any updates in Person Table.
Here is the simple technique I used:
Create a person_log table
create table person_log(date datetime2, sid int);
Create a trigger on Person table that will insert a row into person_log table whenever Person table gets updated:
create trigger tr on dbo.Person
for update
as
insert into person_log(date, sid) select UpdateDate, PersonSID from inserted
After any updates, query person_log table and you will be able to see personSid that got updated.
Same you can do for Insert, delete.
The above example is for SQL Server; let me know in case of any queries, or use this link:
https://web.archive.org/web/20211020134839/https://www.4guysfromrolla.com/webtech/042507-1.shtml
A trace log in a separate table (with an ID column, possibly with timestamps)?
Are you going to want to undo the changes as well - perhaps pre-create the undo statement (a DELETE for every INSERT, an (un-) UPDATE for every normal UPDATE) and save that in the trace?
Try this open-source component:
https://tabledependency.codeplex.com/
TableDependency is a generic C# component used to receive notifications when the content of a specified database table changes.
If all changes come from PHP, you can use a class to log every INSERT/UPDATE/DELETE before running the query. It can save the action, table, column, newValue, oldValue, date, system (if needed), IP, UserAgent, columnReference, operatorReference and valueReference. All tables/columns/actions that need to be logged are configurable.
