I'm having problems creating an INSTEAD OF INSERT trigger and I was hoping someone could point out what I'm doing wrong.
I have a table with employee IDs and functionality IDs. The functionalities with ID 2 or 4 are only available to the employees with ID 1 or 4; any other combination is OK.
I created this trigger but no matter what I insert, the result is always a null value in each column.
This is what I have so far:
CREATE TRIGGER TrigInsertEmployee
ON employee_func
INSTEAD OF INSERT AS
BEGIN
DECLARE @idEmp int, @idFunc int
IF(@idFunc IN (2, 4) AND @idEmp NOT IN (1, 4))
BEGIN
REISERROR('Incorrect functionality', 10, 1)
RETURN
END
INSERT INTO employee_func(idEmp, idFunc) VALUES (@idEmp, @idFunc)
END
Hope someone can help me out!
Thanks
You declared two variables but never set them. When writing an INSERT trigger, SQL exposes a special table called inserted that holds the values the calling process is attempting to insert. See the link below for more information.
http://msdn.microsoft.com/en-us/library/ms191300.aspx
As dazedandconfused mentioned, the INSERTED "special" table is what you need. Here's an example:
CREATE TRIGGER TrigInsertEmployee
ON employee_func
INSTEAD OF INSERT AS
BEGIN
IF EXISTS (SELECT 1 FROM inserted WHERE idFunc IN (2, 4) AND idEmp NOT IN (1, 4))
BEGIN
RAISERROR('Incorrect functionality', 10, 1)
RETURN
END
INSERT INTO employee_func(idEmp, idFunc)
SELECT idEmp, idFunc FROM inserted
END
Also, it may just be a transposition error, but RAISERROR wasn't spelled right in your code.
Finally, for a simple row-based rule like this you could use a CHECK constraint instead:
ALTER TABLE employee_func
ADD CHECK ( NOT(idFunc IN (2, 4) AND idEmp NOT IN (1, 4)) )
I'm just wrapping a "NOT" around your trigger condition here, as the CHECK defines something that's OK whereas your trigger defines something that isn't OK.
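For illustration, a few test inserts against that rule (this assumes the employee_func table from the question, with the CHECK constraint above in place):
INSERT INTO employee_func (idEmp, idFunc) VALUES (1, 2) -- allowed: employee 1 may have functionality 2
INSERT INTO employee_func (idEmp, idFunc) VALUES (3, 5) -- allowed: functionality 5 is not restricted
INSERT INTO employee_func (idEmp, idFunc) VALUES (3, 4) -- rejected by the CHECK: functionality 4 is limited to employees 1 and 4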
In our tool we use triggers on materialized views in order to create log entries (and do some other things) when a transaction is committed.
The code works fine in Oracle 12. In Oracle 19 the old values in that trigger (:old) seem to be lost.
Investigations:
This seems to happen only with the combination of materialized views and triggers. If we set the same trigger on a table, the logs are generated correctly (but then we do not get the transaction awareness which is required).
I have created a MWE and added comments to the DBMS_OUTPUT lines describing what we see in Oracle 12 and Oracle 18/19:
/*Create Test-Table*/
CREATE TABLE MAT_VIEW_TEST (
PK number(10,0) PRIMARY KEY ,
NAME NVARCHAR2(50)
);
/*insert some values*/
insert into MAT_VIEW_TEST values (1, 'Herbert');
insert into MAT_VIEW_TEST values (2, 'Hubert');
commit;
/*Create materialized view (log) in order to set the trigger on it*/
CREATE MATERIALIZED VIEW LOG ON MAT_VIEW_TEST WITH PRIMARY KEY, ROWID including new values;
CREATE MATERIALIZED VIEW MV_MAT_VIEW_TEST
refresh fast on commit
AS select * from MAT_VIEW_TEST;
/*Create trigger to log old and new value*/
CREATE OR REPLACE TRIGGER MAT_VIEW_TRIGGER
BEFORE INSERT OR UPDATE
ON MV_MAT_VIEW_TEST
FOR EACH ROW
DECLARE
old_pk number(10,0);
new_pk number(10,0);
old_name NVARCHAR2(50);
new_name NVARCHAR2(50);
BEGIN
old_pk := :old.pk;
old_name := :old.name;
new_pk := :new.pk;
new_name := :new.name;
DBMS_OUTPUT.PUT_LINE('TEST BEGIN');
DBMS_OUTPUT.PUT_LINE('old p ' || old_pk); /*old is set in Oracle 12, but not in Oracle 18/19*/
DBMS_OUTPUT.PUT_LINE('old n ' || old_name); /*old is set in Oracle 12, but not in Oracle 18/19*/
DBMS_OUTPUT.PUT_LINE('new p ' || new_pk); /*new is set correctly*/
DBMS_OUTPUT.PUT_LINE('new n ' || new_name); /*new is set correctly*/
DBMS_OUTPUT.PUT_LINE('TEST END');
END;
/
/*test the log*/
update MAT_VIEW_TEST set name = 'Test' where pk = 1;
commit;
Any ideas what was changed in Oracle or what we could do to get the old values in our trigger?
I don't have a 12c instance to rerun your tests, but I did run them on 21c, and with the trigger you show, the old values are never shown, neither on insert (which is expected) nor on update (which is what you're complaining about). When I changed the trigger to fire on INSERT OR UPDATE OR DELETE and reran the update, I could see the old values. So the refresh process is converting your UPDATE into a DELETE followed by an INSERT, hence the old values appear when it deletes the old row.
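For reference, a minimal sketch of that adjusted trigger on the MWE above (same objects, just the extra DELETE event and a DELETING branch; the comments describe what is observed, per the answer):
CREATE OR REPLACE TRIGGER MAT_VIEW_TRIGGER
BEFORE INSERT OR UPDATE OR DELETE
ON MV_MAT_VIEW_TEST
FOR EACH ROW
BEGIN
  IF DELETING THEN
    /* the on-commit refresh deletes the old row first, so :old is populated here */
    DBMS_OUTPUT.PUT_LINE('old p ' || :old.pk);
    DBMS_OUTPUT.PUT_LINE('old n ' || :old.name);
  ELSE
    /* on the subsequent insert (or a plain insert) only :new is populated */
    DBMS_OUTPUT.PUT_LINE('new p ' || :new.pk);
    DBMS_OUTPUT.PUT_LINE('new n ' || :new.name);
  END IF;
END;
/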
I have to write a trigger for the tables I made: on insert or update, it has to write a record to a separate log table for whatever is updated or inserted.
The columns in the log table will be as follows:
Done_process (will hold 'Update' or 'Insert')
Person (student number of the person affected)
Before (previous value for an update, blank for an insert)
After (new value for an update, new value for an insert)
This is my student_info table,
CREATE TABLE student_info (
school_id NUMBER,
id_no NUMBER NOT NULL UNIQUE,
name VARCHAR2(50) NOT NULL,
surname VARCHAR2(50) NOT NULL,
city VARCHAR2(50) NOT NULL,
birth_date DATE NOT NULL,
CONSTRAINT student_info_pk PRIMARY KEY(school_id )
);
CREATE TABLE og_log(
done_process VARCHAR2(30),
person VARCHAR2(30),
before VARCHAR2(30),
after VARCHAR2(30)
);
CREATE OR REPLACE TRIGGER og_trigger
BEFORE INSERT OR UPDATE OR DELETE ON student_info
REFERENCING OLD AS OLD NEW AS NEW
FOR EACH ROW
ENABLE
DECLARE
BEGIN
IF INSERTING THEN
INSERT INTO og_log(done_process, person, before, after)
VALUES ('Insert',:new.school_id,:old.name,:new.name);
ELSIF UPDATING THEN
INSERT INTO og_log(done_process, person, before, after)
VALUES ('Update',:new.school_id,:old.name,:new.name);
END IF;
END;
/
When I try to run the code, it gives the following errors:
> Trıgger OG_TRIGGER created.
>
> Error starting at line : 280 in command - ELSIF UPDATING THEN
> Error report - Unknown Command
>
> SP2-0552: Bind variable "NEW" not declared.
>
> 0 rows inserted.
>
> Error starting at line : 283 in command - END IF
> Error report - Unknown Command
>
> SP2-0044: For a list of known commands enter HELP and to leave enter EXIT.
>
> Error starting at line : 284 in command - END
> Error report - Unknown Command
I believe you are creating this trigger for learning purposes and not for a real use case, because what you do in the trigger doesn't really make much sense.
The trigger you have mentioned is not compiling due to syntax problems like where v_id := 20201033.
A WHERE clause is used to compare values, so you should use = instead of :=, which is an assignment operator.
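For example (the statement itself is hypothetical, since the rest of your code isn't shown):
WHERE school_id = v_id   -- comparison in SQL uses =
v_id := 20201033;        -- := is assignment, and is only valid in PL/SQL code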
Besides this problem, a few points still need to be taken care of:
Use an explicit convention for local variable names. For example, you created a local variable v_id while a column of the same name exists in the student_info table. That is not a problem in this case, but it is good practice to make local variables clearly distinct, for example l_v_id.
You have used a SELECT statement inside the trigger, which could lead to a NO_DATA_FOUND error; you should handle that either in the exception section or by using an aggregate function like max(), assuming v_id is a primary key. I doubt you need this SELECT statement at all (you could work with the old and new values directly, using something like coalesce(:old.school_id, :new.school_id), if I understood you correctly), but I will leave that to you to decide and act accordingly.
Considering the above points, the final code will be:
CREATE OR REPLACE TRIGGER og_trigger
BEFORE INSERT OR UPDATE OR DELETE ON student_info
REFERENCING OLD AS OLD NEW AS NEW
FOR EACH ROW
ENABLE
DECLARE
BEGIN
IF INSERTING THEN
INSERT INTO og_log(done_process, person, before, after)
VALUES ('Insert',:new.school_id,:old.city,:new.city);
ELSIF UPDATING THEN
INSERT INTO og_log(done_process, person, before, after)
VALUES ('Update',:new.school_id,:old.city,:new.city);
END IF;
END;
/
Find a demo at db<>fiddle.
EDITED: solving what is probably a tool issue
The issue may be with how the SQL Developer tool is being used; however, there is one last thing I would like you to try:
Step 1:
Drop both of the tables used, by issuing drop commands:
drop table STUDENT_INFO;
drop table og_log;
Step 2:
Open another SQL worksheet using Alt+F10 and do as I have shown in the following image. Please try it and let me know.
I have a table where data does not initially exist until an action is taken that stores all of the client's settings in one go. To illustrate this simply: a button click stores all column values of an (HTML) table into a database table (let's call this dbo.Settings).
So instead of inserting all the default values into dbo.Settings before the user has ever made any changes to their individual settings, I kind of created pseudo data for them that is returned whenever requested, sort of like SELECT-ing the default values:
SELECT
CanView = ISNULL(CanView, 1),
CanRead = ISNULL(CanRead, 1),
CanWrite = ISNULL(CanWrite, 0)
FROM
dbo.Settings AS s
WHERE
UserId = @id
Rather than doing:
IF NOT EXISTS(SELECT * FROM dbo.Settings WHERE UserId = @id)
BEGIN
INSERT INTO dbo.Settings (UserId, CanView, CanRead, CanWrite)
VALUES (@id, 1, 1, 0)
END
The problem with this is that whenever I need to add a new setting column in the future, I have one more procedure to modify in order to add the default value for that column as well -- which I don't like. Using a TRIGGER would be an option, but I wonder what the best practice for managing data like this would be. Or would you do something like this:
CREATE PROC Settings_CreateOrModify
@userId INT,
@canView BIT = NULL,
@canRead BIT = NULL,
@canWrite BIT = NULL
AS
BEGIN
IF EXISTS(SELECT * FROM dbo.Settings WHERE UserId = @userId) BEGIN
UPDATE s
SET
CanView = @canView,
CanRead = @canRead,
CanWrite = @canWrite
FROM
dbo.Settings AS s
WHERE
s.UserId = @userId AND
(@canView IS NULL OR @canView <> s.CanView) AND
(@canRead IS NULL OR @canRead <> s.CanRead) AND
(@canWrite IS NULL OR @canWrite <> s.CanWrite)
END
ELSE BEGIN
INSERT INTO
dbo.Settings(UserId, CanView, CanRead, CanWrite)
SELECT
@userId, @canView, @canRead, @canWrite
END
END
How would you handle data structure like this? Any recommendation or correction would be greatly appreciated. Thanks in advance!
Your SP is a good way to go, and doing it like this is commonly called an "UPSERT".
It also looks to me as if the block:
(@canView IS NULL OR @canView <> s.CanView) AND
(@canRead IS NULL OR @canRead <> s.CanRead) AND
(@canWrite IS NULL OR @canWrite <> s.CanWrite)
is problematic since it causes the UPDATE to run only if ALL parameters changed their value. I don't think that's what you wanted to say. Just SET the three values regardless of what's already there.
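Something along these lines for the UPDATE branch, reusing the parameter names from the question (the ISNULL calls are my assumption for the case where a NULL parameter should mean "keep the current value"; drop them if NULL is supposed to overwrite):
UPDATE dbo.Settings
SET
CanView = ISNULL(@canView, CanView),
CanRead = ISNULL(@canRead, CanRead),
CanWrite = ISNULL(@canWrite, CanWrite)
WHERE
UserId = @userId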
You still end up with three places to change when you add a new setting: The Table, the Upsert and the Defaults.
One very different approach is this:
Apply the defaults to the columns in the table definition.
Whenever you need the values for a new user, do: INSERT INTO dbo.Settings (UserId) VALUES (@userId). The defaults will fill the rest of the columns.
Now you can retrieve the values for ALL users (new or not) in the same way from the table.
Since you already inserted the user in step 2, you know the userid is there already and you can always save the changes via a simple update.
This eliminates the SP and the need of providing the defaults in one extra place.
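For illustration, a sketch of that approach with the column names from the question (the 1/1/0 defaults mirror the ISNULLs in your SELECT):
CREATE TABLE dbo.Settings (
UserId INT NOT NULL PRIMARY KEY,
CanView BIT NOT NULL DEFAULT 1,
CanRead BIT NOT NULL DEFAULT 1,
CanWrite BIT NOT NULL DEFAULT 0
)
-- the first time a user shows up, only the key is needed; the defaults fill the rest
INSERT INTO dbo.Settings (UserId) VALUES (@userId)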
I don't know if that is possible, but I want to copy a bunch of records from a temp table to a normal table. The problem is that some records may violate check constraints so I want to insert everything that is possible and generate error logs somewhere else for the invalid records.
If I execute:
INSERT INTO normal_table
SELECT ... FROM temp_table
nothing would be inserted if any record violates any constraint. I could make a loop and manually insert one by one, but I think the performance would be lower.
PS: if possible, I'd like a solution that works with Oracle 9.
From Oracle 10gR2, you can use the log errors clause:
EXECUTE DBMS_ERRLOG.CREATE_ERROR_LOG('NORMAL_TABLE');
INSERT INTO normal_table
SELECT ... FROM temp_table
LOG ERRORS REJECT LIMIT UNLIMITED;
That's the approach in its simplest form. You can then see what errors you got:
SELECT ora_err_mesg$
FROM err$_normal_table;
More on the CREATE_ERROR_LOG step here.
I think the following bulk-exception approach works from 9i, but I don't have an instance available to test on, so it was actually run on 11gR2.
Update: tested and tweaked (to avoid PLS-00436) in 9i:
declare
type t_temp_table is table of temp_table%rowtype;
l_temp_table t_temp_table;
l_err_code err_table.err_code%type;
l_err_msg err_table.err_msg%type;
l_id err_table.id%type;
cursor c is select * from temp_table;
error_array exception;
pragma exception_init(error_array, -24381);
begin
open c;
loop
fetch c bulk collect into l_temp_table limit 100;
exit when l_temp_table.count = 0;
begin
forall i in 1..l_temp_table.count save exceptions
insert into normal_table
values l_temp_table(i);
exception
when error_array then
for j in 1..sql%bulk_exceptions.count loop
l_id := l_temp_table(sql%bulk_exceptions(j).error_index).id;
l_err_code := sql%bulk_exceptions(j).error_code;
l_err_msg := sqlerrm(-1 * sql%bulk_exceptions(j).error_code);
insert into err_table(id, err_code, err_msg)
values (l_id, l_err_code, l_err_msg);
end loop;
end;
end loop;
end;
/
With all your real columns instead of just id, which is all I've used here for demo purposes:
create table normal_table(id number primary key);
create table temp_table(id number);
create table err_table(id number, err_code number, err_msg varchar2(2000));
insert into temp_table values(42);
insert into temp_table values(42);
Then run the anonymous block above...
select * from normal_table;
ID
----------
42
column err_msg format a50
select * from err_table;
ID ERR_CODE ERR_MSG
---------- ---------- --------------------------------------------------
42 1 ORA-00001: unique constraint (.) violated
This is less satisfactory on a few levels - more coding, slower if you have a lot of exceptions (because of the individual inserts for those), it doesn't show which constraint was violated (or any other error details), and it won't retain the errors if you roll back - though you could call an autonomous transaction to log them if that were an issue, which I doubt here.
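If keeping the error rows across a rollback ever became important, a sketch of such an autonomous-transaction logger could look like this (the procedure name is mine; it writes to the demo err_table defined below):
create or replace procedure log_insert_error(p_id number, p_err_code number, p_err_msg varchar2)
as
pragma autonomous_transaction;
begin
-- committed independently of the caller, so a later rollback keeps the log row
insert into err_table(id, err_code, err_msg)
values (p_id, p_err_code, p_err_msg);
commit;
end;
/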
If you have a small enough volume of data to not want to worry about the limit clause you can simplify it a bit:
declare
type t_temp_table is table of temp_table%rowtype;
l_temp_table t_temp_table;
l_err_code err_table.err_code%type;
l_err_msg err_table.err_msg%type;
l_id err_table.id%type;
error_array exception;
pragma exception_init(error_array, -24381);
begin
select * bulk collect into l_temp_table from temp_table;
forall i in 1..l_temp_table.count save exceptions
insert into normal_table
values l_temp_table(i);
exception
when error_array then
for j in 1..sql%bulk_exceptions.count loop
l_id := l_temp_table(sql%bulk_exceptions(j).error_index).id;
l_err_code := sql%bulk_exceptions(j).error_code;
l_err_msg := sqlerrm(-1 * sql%bulk_exceptions(j).error_code);
insert into err_table(id, err_code, err_msg)
values (l_id, l_err_code, l_err_msg);
end loop;
end;
/
The 9i documentation doesn't seem to be online any more, but this is in a new-features document, and lots of people have written about it - it's been asked about here before too.
If you're specifically interested only in check constraints then one method to think about is to read the definitions of the target check constraints from the data dictionary and apply them as predicates to the query that extracts data from the source table using dynamic sql.
Given:
create table t1 (
col1 number check (col1 between 3 and 10))
You can:
select constraint_name,
search_condition
from user_constraints
where constraint_type = 'C' and
table_name = 'T1'
The result being:
"SYS_C00226681", "col1 between 3 and 10"
From there it's "a simple matter of coding", as they say, and the method will work on just about any version of Oracle. The most efficient method would probably be to use a multitable insert to direct rows to either the intended target table or to an error logging table, based on the result of a CASE expression that applies the check constraint predicates.
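A rough sketch of what that could look like for the t1 example above (t1_err and source_table are hypothetical names; in practice the between 3 and 10 predicate would be read from user_constraints.search_condition and spliced into the statement with dynamic SQL):
INSERT FIRST
  WHEN ok = 1 THEN
    INTO t1 (col1) VALUES (col1)
  ELSE
    INTO t1_err (col1) VALUES (col1)
SELECT col1,
       CASE WHEN col1 between 3 and 10 THEN 1 ELSE 0 END AS ok
FROM source_table;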
I just created a Java file to parse CSV files and save them into an Oracle database, but I need a field ID which acts as a primary key, and I am a bit confused about looping.
I think all you need to do is utilize a sequence (as suggested by Ronnis)
as such
CREATE SEQUENCE FIELD_ID_SEQ START WITH 1 INCREMENT BY 1 NOCYCLE NOCACHE;
/*NOTE THE SEQUENCE, WHILE INCREMENTING, IS NOT GUARANTEED TO BE 1,2,3,4...N ->expect gaps in the #*/
Now, either in your Java app where you are saving the data:
"INSERT INTO TABLE_OF_CSV(FIELD_ID, FIELD_COLA, FIELD_COLB) VALUES(FIELD_ID_SEQ.NEXTVAL, ?, ?)"
(note there is no trailing semicolon inside the JDBC statement string)
OR
Now if you are using a procedure (or a procedure within a package) you can do this (note this returns the primary key back to the calling app)
create procedure insertIntoCSVTable(pCOLA IN TABLE_OF_CSV.FIELD_COLA%TYPE
, pCOLB IN TABLE_OF_CSV.FIELD_COLB%TYPE
, pFIELD_ID OUT TABLE_OF_CSV.FIELD_ID%TYPE)
AS
BEGIN
INSERT INTO TABLE_OF_CSV(FIELD_ID, FIELD_COLA, FIELD_COLB)
VALUES(FIELD_ID_SEQ.NEXTVAL, pCOLA, pCOLB)
RETURNING FIELD_ID
INTO pFIELD_ID
;
END insertIntoCSVTable;
No looping is required, assuming you are already looping in your Java code (i.e. doing a row-by-row insert).
OR
You may use a trigger to insert a new value into the table:
create or replace
TRIGGER TABLE_OF_CSV_TRG BEFORE INSERT ON TABLE_OF_CSV
FOR EACH ROW
BEGIN
<<COLUMN_SEQUENCES>>
BEGIN
IF :NEW.FIELD_ID IS NULL THEN
SELECT FIELD_ID_SEQ.NEXTVAL INTO :NEW.FIELD_ID FROM DUAL;
END IF;
END COLUMN_SEQUENCES;
END;
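With that trigger in place, the Java side can simply leave FIELD_ID out of the insert and let the trigger populate it (the column values here are hypothetical):
INSERT INTO TABLE_OF_CSV(FIELD_COLA, FIELD_COLB) VALUES ('some value', 'another value');
-- FIELD_ID is filled from FIELD_ID_SEQ by TABLE_OF_CSV_TRG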