I am getting "ORA-28115: policy with check option violation" while executing a stored procedure.It specifically throws error on an INSERT query within the SP.With the same request body(data) and same SP,it works in other Enviornment's DB but fails in other one with this error.Can anyone point towards what all can I do to debug and solve the issue ?
Most likely someone has created a VPD policy on this table, which (in this case) means you can only add a row that will ultimately be visible to you based on the policy assigned.
Take a look in the xxx_POLICIES views (USER_POLICIES, ALL_POLICIES, DBA_POLICIES) to see what has been created.
Docs on DBMS_RLS are here: https://docs.oracle.com/en/database/oracle/oracle-database/19/arpls/DBMS_RLS.html
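For example, assuming your table is called MYTABLE (a placeholder), a query along these lines should show any policies attached to it; ALL_POLICIES covers tables you can access, while DBA_POLICIES requires elevated privileges:

SELECT * FROM all_policies WHERE object_name = 'MYTABLE';

Check the CHK_OPTION column in the output: it shows whether the policy is applied with the check option, which is what raises ORA-28115 on INSERT.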
I have an application which is connected to an external web service. The web service sends messages with an ID to the Laravel application. Within the controller I check if the ID of the message already exists in the database. If not, I store the message with the ID; if it exists, I skip the message.
Unfortunately, the web service sometimes sends a message with the same ID multiple times within the same second. It's an external service, so I have no control over it.
The problem is that the messages arrive so fast that the database has not finished saving one message before the next one reaches the controller. As a result, the check whether the ID already exists fails, and the application tries to save the same message once more. This leads to an exception, because I have a unique constraint on the ID column.
What is the best strategy to handle this? Using a queue is not a good solution, because the messages are time-critical; the queue is even slower and would lead to a message jam/congestion.
Any idea or help is appreciated a lot! Thanks!
You can send INSERT IGNORE statements to your database:
INSERT IGNORE INTO messages (...) VALUES (...)
or
INSERT INTO messages (...) VALUES (...) ON DUPLICATE KEY UPDATE id=id
You can try updating on duplicate key. That is a way I have used in the past to get around issues like this. Not sure if it's the perfect solution, but it is definitely an option. I assume you are using MySQL.
https://dev.mysql.com/doc/refman/8.0/en/insert-on-duplicate.html
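For completeness, here is a minimal sketch of the second approach, assuming a hypothetical messages table with a unique key on the external message ID (all names here are illustrative):

CREATE TABLE messages (
    id INT AUTO_INCREMENT PRIMARY KEY,
    external_id VARCHAR(64) NOT NULL,
    body TEXT,
    UNIQUE KEY uq_messages_external_id (external_id)
);

-- a duplicate delivery of the same external_id becomes a no-op
-- instead of raising a duplicate-key error
INSERT INTO messages (external_id, body)
VALUES ('abc-123', 'message payload')
ON DUPLICATE KEY UPDATE id = id;

The duplicate check then happens atomically inside the database, so two requests arriving in the same second can no longer race each other.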
Error:Apex trigger AccountAddressTrigger caused an unexpected exception, contact your administrator: AccountAddressTrigger: execution of AfterUpdate caused by: System.FinalException: Record is read-only: Trigger.AccountAddressTrigger: line 6, column 1
I got this error while solving the challenge "Create an Apex trigger for Account that matches Shipping Address Postal Code with Billing Address Postal Code based on a custom field."
I believe I have written the correct logic, but I still get the error.
Your question is really poor; post the code you've written so far and/or a link to that challenge (what is it, a Trailhead task? Homework? A job interview?)
My guess is that your trigger should operate as "before insert, before update", not "after". Before triggers are for all kinds of validations and field prepopulation, and one of their notable features is that you don't need to explicitly write update myrecords; - you get the save to the database for free. After triggers are more for side effects like creating related records, anything that makes sense only after the record's Id has been generated.
I am using an Oracle ADF page to update data in a table. My entity object is based on the table, but I want the DML (inserts, updates, deletes) to go through a package procedure in the database instead of using the default DML generated by the ADF framework.
To accomplish this, I am following Oracle's documentation, found here: http://docs.oracle.com/cd/E23943_01/web.1111/b31974/bcadveo.htm#ADFFD1129
This all works fine. The problem is, the default ADF DML processing will automatically refresh the entity row after writing it, either with a RETURNING INTO clause or with a separately issued SELECT statement (depending on the value of isUseReturningClause() in the EntityDefImpl object). This is done so that the application front end gets updated in case the row was modified by the database during the DML process (e.g., by a BEFORE ROW trigger changing values).
But, when I override doDML() to replace the default framework DML with a call to my package procedure, it no longer automatically refreshes, even if isUseReturningClause() returns false.
I tried adding code to my doDML() implementation to requery afterwards, but it didn't work (maybe I didn't do it correctly). Oracle's documentation doesn't say anything about having to do that, either.
Does anyone know how to accomplish this?
Update
I went back to my attempt to have doDML() refresh afterwards by calling doSelect(), and it works. My original attempt didn't work because doSelect() wasn't sending notifications of its changes.
Still, I'm concerned that this isn't how Oracle's documentation says to do it, so I have no idea whether this is correct, a kludge, or a plain bad idea. So, my original question still stands.
I logged an SR with Oracle. Their response was that if you override doDML() and do not call super.doDML() then you lose the automatic refresh functionality of the framework.
They wouldn't comment on my solution, which was to call doSelect(false) after any inserts or updates in my doDML() override. Their policy is that if you want advice on customizations, you should engage Oracle Consulting.
In the ADF world I faced this case and solved it in a simple way.
At first it seems like a bug here, but I can explain what the expected behaviour should be:
values which are set in the background (in the Model layer, in the MVC approach) should not be editable by the user.
The solution in one word: add the property disabled="true" to the af:inputText component.
I'm doing an integration on a community platform called Telligent. I'm using a 3rd-party add-on called BlogML to import blog posts from an XML file (in BlogML format) into my local Telligent site. The Telligent platform comes with many classes in their SDK so that I can programmatically add content, such as blog posts. E.g.
myWeblogService.AddPost(myNewPostObject);
The BlogML app I'm using essentially parses the XML and creates blog post objects then adds them to the site using code like the above sample line. After about 40 post imports I get a SQL error:
Exception Details: System.Data.SqlClient.SqlException:
String or binary data would be truncated.
The statement has been terminated.
I believe this error means that I'm trying to insert too much data into a db field that has a max size limit. Unfortunately, I cannot tell which field this is an issue for. I ran the SQL Server Profiler while doing the import but I cannot seem to see what stored procedure the error is occurring on. Is there another way to use the profiler or another tool to see exactly what stored procedure and even what field the error is being caused by? Are there any other tips to get more information about where specifically to look?
Oh the joys of 3rd-party tools...
You are correct in that the exception is due to trying to stuff too much data into a character/binary based field. Running a trace should definitely allow you to see which procedure/statement is throwing the exception if you are capturing the correct events; those you'd want to capture include:
1. SQL:BatchStarting
2. SQL:BatchCompleted
3. SQL:StmtStarting
4. SQL:StmtCompleted
5. RPC:Starting
6. RPC:Completed
7. SP:Starting
8. SP:Completed
9. SP:StmtStarting
10. SP:StmtCompleted
11. Exception
If you know for certain it is a stored procedure that includes the faulty code, you could do away with capturing items 1-4. Be sure you capture all associated columns in the trace as well (this should be the default if you are running a trace using the Profiler tool). The Exception class will include the actual error in your trace, which should allow you to see the immediately preceding statement within the same SPID that threw the exception. You must include the starting events in addition to the completed events, as an exception will prevent the associated completed events from firing in the trace.
If you can filter your trace to a particular database, application, host name, etc. that will certainly make it easier to debug if you are on a busy server, however if you are on an idle server you may not need to bother with the filtering.
Assuming you are using SQL Server 2005+, the trace will include a column called 'EventSequence', which is basically an incrementing value ordered by the sequence in which events fire. Once you run the trace and capture the output, find the 'Exception' event that fired (if you are using Profiler, the row will be colored red), then you should be able to simply find the most recent SP:StmtStarting or SQL:StmtStarting event for the same SPID that occurred before the Exception.
Here is a screenshot of a trace I captured reproducing an event similar to yours:
You can see the exception line in Red, and the line highlighted is the immediate preceding SP:StmtStarting event that fired prior to the exception for the same SPID. If you want to find what stored procedure this statement is a part of, look for the values in the ObjectName and/or ObjectId columns.
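As a supplement to the trace, one rough way to narrow down the offending column is to list the character-based columns and their maximum lengths, then compare them against the longest values in your import data. A sketch (add a filter on TABLE_NAME if you already suspect a particular table):

SELECT TABLE_NAME, COLUMN_NAME, DATA_TYPE, CHARACTER_MAXIMUM_LENGTH
FROM INFORMATION_SCHEMA.COLUMNS
WHERE DATA_TYPE IN ('char', 'nchar', 'varchar', 'nvarchar')
ORDER BY CHARACTER_MAXIMUM_LENGTH;

Columns with small maximum lengths near the top of the list are the usual suspects for truncation errors.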
You can also get this error from silly mistakes.
For example, if you are trying to insert a string like:
String reqName="Food Non veg /n";
here the "/n" is the culprit. Remove the "/n" from the string to get rid of this error.
I hope this will help someone.
So, I have two database instances: one is for development in general, and the other was copied from development for unit tests.
Something changed in the development database that I can't figure out, and I don't know how to see what is different.
When I try to delete from a particular table, with for example:
delete from myschema.mytable where id = 555
I get the following normal response from the unit test DB indicating no row was deleted:
SQL0100W No row was found for FETCH, UPDATE or DELETE; or the result of a query is an empty table. SQLSTATE=02000
However, the development database fails to delete at all with the following error:
DB21034E The command was processed as an SQL statement because it was not a valid Command Line Processor command. During SQL processing it returned: SQL0440N No authorized routine named "=" of type "FUNCTION" having compatible arguments was found. SQLSTATE=42884
My best guess is that some trigger or view was added or changed and is causing the problem, but I have no idea how to go about finding it... has anyone had this problem, or does anyone know how to figure out what the root of the problem is?
(note that this is a DB2 database)
Hmm, applying the great oracle to this question, I came up with:
http://bytes.com/forum/thread830774.html
It seems to suggest that another table has a foreign key pointing at the problematic one; when that FK on the other table is dropped, the delete should work again. (Presumably you can re-create the foreign key afterwards as well.)
Does that help any?
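For what it's worth, you should be able to find the foreign keys pointing at your table by querying the DB2 catalog, something like this (MYSCHEMA and MYTABLE are placeholders):

SELECT constname, tabschema, tabname
FROM syscat.references
WHERE reftabschema = 'MYSCHEMA'
AND reftabname = 'MYTABLE';

Each row is a referencing (child) table whose constraint you could try dropping and re-creating.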
You might have an open transaction on the dev DB... that gets me sometimes on SQL Server.
Is the type of id compatible with 555? Or has it been changed to a non-integer type?
Alternatively, does the 555 argument somehow go missing (e.g. if you are using JDBC and the prepared statement did not get its arguments set before executing the query)?
Can you add more to your question? That error sounds like the SQL statement parser is very confused about your statement. Can you do a SELECT on that table for the row where id = 555?
You could try running a RUNSTATS and REORG TABLE on that table, those are supposed to sort out wonky tables.
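For reference, those commands look something like this when run from the DB2 command line processor (schema and table names are placeholders):

RUNSTATS ON TABLE myschema.mytable WITH DISTRIBUTION AND DETAILED INDEXES ALL
REORG TABLE myschema.mytable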
#castaway
A select with the same "where" condition works just fine, just not delete. Neither runstats nor reorg table have any affect on the problem.
#castaway
We actually just solved the problem, and indeed it is just what you said (a coworker found that exact same page too).
The solution was to drop foreign key constraints and re-add them.
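In case it helps anyone else, the statements were along these lines (constraint, table, and column names here are purely illustrative):

-- drop the corrupted foreign key on the referencing (child) table
ALTER TABLE myschema.childtable DROP CONSTRAINT fk_child_mytable;
-- re-create it pointing at the same parent table
ALTER TABLE myschema.childtable
ADD CONSTRAINT fk_child_mytable FOREIGN KEY (mytable_id)
REFERENCES myschema.mytable (id);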
Another post on the subject:
http://www.ibm.com/developerworks/forums/thread.jspa?threadID=208277&tstart=-1
That post indicates that the problem is referential constraint corruption, and that it is actually, or supposedly anyway, fixed in a later release of DB2 V9 (which we are not yet using).
Thanks for the help!
Please check:
1. the arguments of your triggers, procedures, functions, etc.
2. the datatypes of those arguments.