I have to write a trigger for the tables I made: on insert or update, it has to record a row in a separate log table for the data that was inserted or updated.
Columns in the log table will be like;
Done_process (records 'Insert' or 'Update')
Person (student number of the affected student)
Before (previous value for an update, blank for an insert)
After (new value for an update or an insert)
This is my student_info table:
CREATE TABLE student_info (
school_id NUMBER,
id_no NUMBER NOT NULL UNIQUE,
name VARCHAR2(50) NOT NULL,
surname VARCHAR2(50) NOT NULL,
city VARCHAR2(50) NOT NULL,
birth_date DATE NOT NULL,
CONSTRAINT student_info_pk PRIMARY KEY(school_id )
);
CREATE TABLE og_log(
done_process VARCHAR2(30),
person VARCHAR2(30),
before VARCHAR2(30),
after VARCHAR2(30)
);
CREATE OR REPLACE TRIGGER og_trigger
BEFORE INSERT OR UPDATE OR DELETE ON student_info
REFERENCING OLD AS OLD NEW AS NEW
FOR EACH ROW
ENABLE
DECLARE
BEGIN
IF INSERTING THEN
INSERT INTO og_log(done_process, person, before, after)
VALUES ('Insert',:new.school_id,:old.name,:new.name);
ELSIF UPDATING THEN
INSERT INTO og_log(done_process, person, before, after)
VALUES ('Update',:new.school_id,:old.name,:new.name);
END IF;
END;
/
When I try to run the code it gives the following errors:
> Trigger OG_TRIGGER created.
>
>
> Error starting at line : 280 in command - ELSIF UPDATING THEN Error
> report - Unknown Command
>
> SP2-0552: Bind variable "NEW" not declared.
>
> 0 rows inserted.
>
>
> Error starting at line : 283 in command - END IF Error report -
> Unknown Command
>
> SP2-0044: For a list of known commands enter HELP and to leave enter
> EXIT.
>
> Error starting at line : 284 in command - END Error report - Unknown
> Command
I believe you are creating this trigger for learning purposes rather than for a real use case, because what the trigger does doesn't really make much sense.
The trigger you have mentioned is not compiling due to syntactical problems such as where v_id := 20201033.
A WHERE clause compares values, so you should use = instead of :=, which is an assignment operator.
Besides that problem, a few points still need to be taken care of:
Use an explicit naming convention for local variables. For example, you created a local variable v_id while the same column is also available in the student_info table. It is not a problem in this case, but it is good practice to make local variable names distinct, say l_v_id.
You have used a SELECT statement inside the trigger, which could lead to a NO_DATA_FOUND error; you should handle that either in an exception section or by using an aggregate function such as max() (assuming v_id is the primary key). I doubt you need this SELECT statement at all (you could choose between the old and new values with something like coalesce(:old.school_id, :new.school_id), if I understood you correctly), but I will leave that to you to decide and act accordingly.
Considering the above points, the final code will be:
CREATE OR REPLACE TRIGGER og_trigger
BEFORE INSERT OR UPDATE OR DELETE ON student_info
REFERENCING OLD AS OLD NEW AS NEW
FOR EACH ROW
ENABLE
DECLARE
BEGIN
IF INSERTING THEN
INSERT INTO og_log(done_process, person, before, after)
VALUES ('Insert',:new.school_id,:old.city,:new.city);
ELSIF UPDATING THEN
INSERT INTO og_log(done_process, person, before, after)
VALUES ('Update',:new.school_id,:old.city,:new.city);
END IF;
END;
/
A demo is available on db<>fiddle.
EDITED: addressing what is probably a tool issue
I suspect the issue is with how the SQL Developer tool is being used; however, here is one last thing I would like you to try.
Step 1:
Drop both tables by issuing DROP commands:
drop table STUDENT_INFO;
drop table og_log;
Step 2:
Open another SQL worksheet using Alt+F10 and run the whole script there, as sketched below. Please try and let me know.
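For reference, a minimal smoke test you could run in the fresh worksheet once the two CREATE TABLE statements and the trigger above have executed (the sample values are made up for illustration):
INSERT INTO student_info (school_id, id_no, name, surname, city, birth_date)
VALUES (20201033, 1, 'John', 'Smith', 'Izmir', DATE '2000-01-01');
UPDATE student_info SET name = 'Jane' WHERE school_id = 20201033;
SELECT * FROM og_log;
-- expected: an 'Insert' row with BEFORE null, and an 'Update' row with BEFORE = 'John' and AFTER = 'Jane'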
I don't know if that is possible, but I want to copy a bunch of records from a temp table to a normal table. The problem is that some records may violate check constraints so I want to insert everything that is possible and generate error logs somewhere else for the invalid records.
If I execute:
INSERT INTO normal_table
SELECT ... FROM temp_table
nothing would be inserted if any record violates any constraint. I could make a loop and manually insert one by one, but I think the performance would be lower.
PS: if possible, I'd like a solution that works with Oracle 9.
From Oracle 10gR2, you can use the log errors clause:
EXECUTE DBMS_ERRLOG.CREATE_ERROR_LOG('NORMAL_TABLE');
INSERT INTO normal_table
SELECT ... FROM temp_table
LOG ERRORS REJECT LIMIT UNLIMITED;
That is its simplest form. You can then see what errors you got:
SELECT ora_err_mesg$
FROM err$_normal_table;
More on the CREATE_ERROR_LOG step can be found in the DBMS_ERRLOG documentation.
I think this approach works from 9i, but I don't have an instance available to test on, so this was actually run on 11gR2.
Update: tested and tweaked (to avoid PLS-00436) in 9i:
declare
type t_temp_table is table of temp_table%rowtype;
l_temp_table t_temp_table;
l_err_code err_table.err_code%type;
l_err_msg err_table.err_msg%type;
l_id err_table.id%type;
cursor c is select * from temp_table;
error_array exception;
pragma exception_init(error_array, -24381);
begin
open c;
loop
fetch c bulk collect into l_temp_table limit 100;
exit when l_temp_table.count = 0;
begin
forall i in 1..l_temp_table.count save exceptions
insert into normal_table
values l_temp_table(i);
exception
when error_array then
for j in 1..sql%bulk_exceptions.count loop
l_id := l_temp_table(sql%bulk_exceptions(j).error_index).id;
l_err_code := sql%bulk_exceptions(j).error_code;
l_err_msg := sqlerrm(-1 * sql%bulk_exceptions(j).error_code);
insert into err_table(id, err_code, err_msg)
values (l_id, l_err_code, l_err_msg);
end loop;
end;
end loop;
end;
/
In your case you'd have all your real columns instead of just id, which I've used here purely for demo purposes:
create table normal_table(id number primary key);
create table temp_table(id number);
create table err_table(id number, err_code number, err_msg varchar2(2000));
insert into temp_table values(42);
insert into temp_table values(42);
Then run the anonymous block above...
select * from normal_table;
ID
----------
42
column err_msg format a50
select * from err_table;
ID ERR_CODE ERR_MSG
---------- ---------- --------------------------------------------------
42 1 ORA-00001: unique constraint (.) violated
This is less satisfactory on a few levels - more coding, slower if you have a lot of exceptions (because of the individual inserts for those), doesn't show which constraint was violated (or any other error details), and won't retain the errors if you rollback - though you could call an autonomous transaction to log it if that was an issue, which I doubt here.
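For completeness, a hedged sketch of what that autonomous-transaction logging could look like (the procedure name log_error is made up); you would call it from the bulk-exception loop instead of inserting into err_table directly:
create or replace procedure log_error (p_id       err_table.id%type,
                                       p_err_code err_table.err_code%type,
                                       p_err_msg  err_table.err_msg%type) as
  pragma autonomous_transaction;
begin
  -- runs in its own transaction, so the logged rows survive a rollback
  -- of the main transaction
  insert into err_table (id, err_code, err_msg)
  values (p_id, p_err_code, p_err_msg);
  commit;
end;
/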
If you have a small enough volume of data to not want to worry about the limit clause you can simplify it a bit:
declare
type t_temp_table is table of temp_table%rowtype;
l_temp_table t_temp_table;
l_err_code err_table.err_code%type;
l_err_msg err_table.err_msg%type;
l_id err_table.id%type;
error_array exception;
pragma exception_init(error_array, -24381);
begin
select * bulk collect into l_temp_table from temp_table;
forall i in 1..l_temp_table.count save exceptions
insert into normal_table
values l_temp_table(i);
exception
when error_array then
for j in 1..sql%bulk_exceptions.count loop
l_id := l_temp_table(sql%bulk_exceptions(j).error_index).id;
l_err_code := sql%bulk_exceptions(j).error_code;
l_err_msg := sqlerrm(-1 * sql%bulk_exceptions(j).error_code);
insert into err_table(id, err_code, err_msg)
values (l_id, l_err_code, l_err_msg);
end loop;
end;
/
The 9i documentation doesn't seem to be online any more, but this is in a new-features document, and lots of people have written about it - it's been asked about here before too.
If you're specifically interested only in check constraints, then one method to think about is to read the definitions of the target check constraints from the data dictionary and apply them as predicates to the query that extracts data from the source table, using dynamic SQL.
Given:
create table t1 (
col1 number check (col1 between 3 and 10))
You can:
select constraint_name,
search_condition
from user_constraints
where constraint_type = 'C' and
table_name = 'T1'
The result being:
"SYS_C00226681", "col1 between 3 and 10"
From there it's "a simple matter of coding", as they say, and the method will work on just about any version of Oracle. The most efficient method would probably be to use a multitable insert to direct rows to either the intended target table or to an error logging table based on the result of a CASE statement that applies the check constraint predicates.
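For illustration, a hedged sketch of that multitable insert for the T1 example above; the staging table t1_stage and error table t1_errors are hypothetical, and the check predicate is hard-coded here rather than spliced in from user_constraints with dynamic SQL:
insert all
  when passes_check = 'Y' then
    into t1 (col1) values (col1)
  when passes_check = 'N' then
    into t1_errors (col1, err_msg) values (col1, 'violates check: col1 between 3 and 10')
select col1,
       case when col1 between 3 and 10 then 'Y' else 'N' end as passes_check
from t1_stage;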
I have a table called members
CREATE TABLE netcen.mst_member
(
mem_code character varying(8) NOT NULL,
mem_name text NOT NULL,
mem_cnt_code character varying(2) NOT NULL,
mem_brn_code smallint NOT NULL, -- The branch where the member belongs
mem_email character varying(128),
mem_cell character varying(11),
mem_address text,
mem_typ_code smallint NOT NULL,
CONSTRAINT mem_code PRIMARY KEY (mem_code ))
Each member type has a different sequence for the member code, i.e. for gold members the codes will be
GLD0091, GLD0092,...
and platinum members codes will be
PLT00020, PLT00021,...
I would like the default value for the field mem_code to be a dynamic value depending on the member type selected. How can I use a check constraint to implement that?
Please help; I am using PostgreSQL 9.1.
I have created the following trigger function to construct the string, as Randy suggested, but I still get an error when I insert into the members table.
CREATE OR REPLACE FUNCTION netcen.generate_member_code()
RETURNS trigger AS
$BODY$DECLARE
tmp_suffix text :='';
tmp_prefix text :='';
tmp_typecode smallint ;
cur_setting refcursor;
BEGIN
OPEN cur_setting FOR
EXECUTE 'SELECT typ_suffix,typ_prefix,typ_code FROM mst_member_type WHERE type_code =' || NEW.mem_typ_code ;
FETCH cur_setting into tmp_suffix,tmp_prefix,tmp_typecode;
CLOSE cur_setting;
NEW.mem_code:=tmp_prefix || to_char(nextval('seq_members_'|| tmp_typecode), 'FM0000000') || tmp_suffix;
END$BODY$
LANGUAGE plpgsql VOLATILE
COST 100;
ALTER FUNCTION netcen.generate_member_code()
OWNER TO mnoma;
Where could I be going wrong? I get the following error:
ERROR: relation "mst_member_type" does not exist
LINE 1: SELECT typ_suffix,typ_prefix,typ_code FROM mst_member_type W...
^
QUERY: SELECT typ_suffix,typ_prefix,typ_code FROM mst_member_type WHERE typ_code =1
CONTEXT: PL/pgSQL function "generate_member_code" line 7 at OPEN
I think this is a normalization problem.
The codes you provide are derivable from other information, and therefore really do not belong as independent columns.
You could just store the type in one column and the number in another, then on any query where needed concatenate them to make this combo code.
If you want to persist this denormalized value, you could make a trigger to construct the string on any insert or update, as sketched below.
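For what it's worth, here is a hedged sketch of such a trigger based on the attempt above. It assumes the lookup table is netcen.mst_member_type with a typ_code column (as the error text suggests) and a per-type sequence named netcen.seq_members_<typ_code>; it uses a plain SELECT ... INTO instead of the dynamic cursor, schema-qualifies the table (the likely cause of the "relation does not exist" error), and adds the RETURN NEW that a BEFORE row trigger needs:
CREATE OR REPLACE FUNCTION netcen.generate_member_code()
  RETURNS trigger AS
$BODY$
DECLARE
  tmp_suffix   text;
  tmp_prefix   text;
  tmp_typecode smallint;
BEGIN
  -- schema-qualify the lookup table so it is found regardless of search_path
  SELECT typ_suffix, typ_prefix, typ_code
    INTO tmp_suffix, tmp_prefix, tmp_typecode
    FROM netcen.mst_member_type
   WHERE typ_code = NEW.mem_typ_code;

  NEW.mem_code := tmp_prefix
               || to_char(nextval('netcen.seq_members_' || tmp_typecode), 'FM0000000')
               || tmp_suffix;
  RETURN NEW;  -- a BEFORE row trigger must return the (modified) row
END
$BODY$
  LANGUAGE plpgsql VOLATILE;

CREATE TRIGGER trg_generate_member_code
  BEFORE INSERT ON netcen.mst_member
  FOR EACH ROW
  EXECUTE PROCEDURE netcen.generate_member_code();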
I have a table column that uses an enum type. I wish to update that enum type to have an additional possible value. I don't want to delete any existing values, just add the new value. What is the simplest way to do this?
PostgreSQL 9.1 introduces the ability to ALTER enum types:
ALTER TYPE enum_type ADD VALUE 'new_value'; -- appends to list
ALTER TYPE enum_type ADD VALUE 'new_value' BEFORE 'old_value';
ALTER TYPE enum_type ADD VALUE 'new_value' AFTER 'old_value';
NOTE: if you're using PostgreSQL 9.1 or later, and you are OK with making changes outside of a transaction, see this answer for a simpler approach.
I had the same problem a few days ago and found this post, so my answer may be helpful for someone looking for a solution. :)
If you have only one or two columns that use the enum type you want to change, you can try this. It also lets you change the order of the values in the new type.
-- 1. rename the enum type you want to change
alter type some_enum_type rename to _some_enum_type;
-- 2. create new type
create type some_enum_type as enum ('old', 'values', 'and', 'new', 'ones');
-- 3. rename column(s) which uses our enum type
alter table some_table rename column some_column to _some_column;
-- 4. add new column of new type
alter table some_table add some_column some_enum_type not null default 'new';
-- 5. copy values to the new column
update some_table set some_column = _some_column::text::some_enum_type;
-- 6. remove old column and type
alter table some_table drop column _some_column;
drop type _some_enum_type;
Steps 3-6 should be repeated if there is more than one column.
A possible solution is the following; the precondition is that there are no conflicts in the enum values in use (e.g. when removing an enum value, be sure that this value is not used anymore).
-- rename the old enum
alter type my_enum rename to my_enum__;
-- create the new enum
create type my_enum as enum ('value1', 'value2', 'value3');
-- alter all your enum columns
alter table my_table
alter column my_column type my_enum using my_column::text::my_enum;
-- drop the old enum
drop type my_enum__;
This way the column order will not be changed either.
If you are using Postgres 12 (or later) you can just run ALTER TYPE ... ADD VALUE inside a transaction block (documentation).
If ALTER TYPE ... ADD VALUE (the form that adds a new value to an enum
type) is executed inside a transaction block, the new value cannot be
used until after the transaction has been committed.
So no hacks needed in migrations.
UPD: here is an example (thanks to Nick for it)
ALTER TYPE enum_type ADD VALUE 'new_value';
If you find yourself in a situation where you have to add enum values inside a transaction, e.g. executing the ALTER TYPE statement in a Flyway migration, you will get the error ERROR: ALTER TYPE ... ADD cannot run inside a transaction block (see Flyway issue #350). As a workaround, you can add such values into pg_enum directly (type_egais_units is the name of the target enum):
INSERT INTO pg_enum (enumtypid, enumlabel, enumsortorder)
SELECT 'type_egais_units'::regtype::oid, 'NEW_ENUM_VALUE', ( SELECT MAX(enumsortorder) + 1 FROM pg_enum WHERE enumtypid = 'type_egais_units'::regtype )
Complementing Dariusz's answer:
For Rails 4.2.1, there's this doc section:
== Transactional Migrations
If the database adapter supports DDL transactions, all migrations will
automatically be wrapped in a transaction. There are queries that you
can't execute inside a transaction though, and for these situations
you can turn the automatic transactions off.
class ChangeEnum < ActiveRecord::Migration
disable_ddl_transaction!
def up
execute "ALTER TYPE model_size ADD VALUE 'new_value'"
end
end
Just in case: if you are using Rails and you have several statements, you will need to execute them one by one, like:
execute "ALTER TYPE XXX ADD VALUE IF NOT EXISTS 'YYY';"
execute "ALTER TYPE XXX ADD VALUE IF NOT EXISTS 'ZZZ';"
From Postgres 9.1 Documentation:
ALTER TYPE name ADD VALUE new_enum_value [ { BEFORE | AFTER } existing_enum_value ]
Example:
ALTER TYPE user_status ADD VALUE 'PROVISIONAL' AFTER 'NORMAL'
Disclaimer: I haven't tried this solution, so it might not work ;-)
You should be looking at pg_enum. If you only want to change the label of an existing ENUM, a simple UPDATE will do it.
To add new ENUM values:
First insert the new value into pg_enum. If the new value has to be the last, you're done.
If not (you need a new ENUM value in between existing ones), you'll have to update each distinct value in your table, going from the uppermost to the lowest...
Then you'll just have to rename them in pg_enum in the opposite order.
Illustration
You have the following set of labels:
ENUM ('enum1', 'enum2', 'enum3')
and you want to obtain:
ENUM ('enum1', 'enum1b', 'enum2', 'enum3')
then:
INSERT INTO pg_enum (enumtypid, enumlabel) VALUES (your_enum_oid, 'newenum3');
UPDATE your_table SET enumvalue = 'newenum3' WHERE enumvalue = 'enum3';
UPDATE your_table SET enumvalue = 'enum3' WHERE enumvalue = 'enum2';
then:
UPDATE pg_enum SET enumlabel = 'enum1b' WHERE enumlabel = 'enum2' AND enumtypid = your_enum_oid;
And so on...
I can't seem to post a comment, so I'll just say that updating pg_enum works in Postgres 8.4. For the way our enums are set up, I've added new values to existing enum types via:
INSERT INTO pg_enum (enumtypid, enumlabel)
SELECT typelem, 'NEWENUM' FROM pg_type WHERE
typname = '_ENUMNAME_WITH_LEADING_UNDERSCORE';
It's a little scary, but it makes sense given the way Postgres actually stores its data.
Updating pg_enum works, as does the intermediary column trick highlighted above. One can also use USING magic to change the column's type directly:
CREATE TYPE test AS enum('a', 'b');
CREATE TABLE foo (bar test);
INSERT INTO foo VALUES ('a'), ('b');
ALTER TABLE foo ALTER COLUMN bar TYPE varchar;
DROP TYPE test;
CREATE TYPE test as enum('a', 'b', 'c');
ALTER TABLE foo ALTER COLUMN bar TYPE test
USING CASE
WHEN bar = ANY (enum_range(null::test)::varchar[])
THEN bar::test
WHEN bar = ANY ('{convert, these, values}'::varchar[])
THEN 'c'::test
ELSE NULL
END;
As long as you have no functions that explicitly accept or return that enum, you're good. (pgsql will complain when you drop the type if there are any.)
Also, note that PG9.1 is introducing an ALTER TYPE statement, which will work on enums:
http://developer.postgresql.org/pgdocs/postgres/release-9-1-alpha.html
I can't add a comment in the appropriate place, but ALTER TABLE foo ALTER COLUMN bar TYPE new_enum_type USING bar::text::new_enum_type failed with a default on the column. I had to:
ALTER TABLE foo ALTER COLUMN bar DROP DEFAULT;
and then it worked.
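A hedged sketch of the full sequence, re-adding a default afterwards (the default value shown is illustrative):
ALTER TABLE foo ALTER COLUMN bar DROP DEFAULT;
ALTER TABLE foo ALTER COLUMN bar TYPE new_enum_type USING bar::text::new_enum_type;
ALTER TABLE foo ALTER COLUMN bar SET DEFAULT 'some_value';  -- re-add a default valid for the new type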
Here is a more general but rather fast-working solution, which apart from changing the type itself updates all columns in the database that use it. The method can be applied even if the new version of the ENUM differs by more than one label or misses some of the original ones. The code below replaces my_schema.my_type AS ENUM ('a', 'b', 'c') with ENUM ('a', 'b', 'd', 'e'):
CREATE OR REPLACE FUNCTION tmp() RETURNS BOOLEAN AS
$BODY$
DECLARE
item RECORD;
BEGIN
-- 1. create new type in replacement to my_type
CREATE TYPE my_schema.my_type_NEW
AS ENUM ('a', 'b', 'd', 'e');
-- 2. select all columns in the db that have type my_type
FOR item IN
SELECT table_schema, table_name, column_name, udt_schema, udt_name
FROM information_schema.columns
WHERE
udt_schema = 'my_schema'
AND udt_name = 'my_type'
LOOP
-- 3. Change the type of every column using my_type to my_type_NEW
EXECUTE
' ALTER TABLE ' || item.table_schema || '.' || item.table_name
|| ' ALTER COLUMN ' || item.column_name
|| ' TYPE my_schema.my_type_NEW'
|| ' USING ' || item.column_name || '::text::my_schema.my_type_NEW;';
END LOOP;
-- 4. Delete an old version of the type
DROP TYPE my_schema.my_type;
-- 5. Remove _NEW suffix from the new type
ALTER TYPE my_schema.my_type_NEW
RENAME TO my_type;
RETURN true;
END
$BODY$
LANGUAGE 'plpgsql';
SELECT * FROM tmp();
DROP FUNCTION tmp();
The whole process runs fairly quickly, because if the order of labels persists, no actual change of data will happen. I applied the method on 5 tables using my_type and having 50,000-70,000 rows in each, and the whole process took just 10 seconds.
Of course, the function will raise an exception if labels that are missing from the new version of the ENUM are used somewhere in the data, but in such a situation something should be done beforehand anyway.
For those looking for an in-transaction solution, the following seems to work.
Instead of an ENUM, a DOMAIN is used over type TEXT with a constraint checking that the value is within the specified list of allowed values (as suggested by some comments). The only problem is that no constraint can be added (and thus none modified) on a domain if it is used by any composite type (the docs merely say this "should eventually be improved"). That restriction can be worked around, however, by having the constraint call a function, as follows.
START TRANSACTION;
CREATE FUNCTION test_is_allowed_label(lbl TEXT) RETURNS BOOL AS $function$
SELECT lbl IN ('one', 'two', 'three');
$function$ LANGUAGE SQL IMMUTABLE;
CREATE DOMAIN test_domain AS TEXT CONSTRAINT val_check CHECK (test_is_allowed_label(value));
CREATE TYPE test_composite AS (num INT, word test_domain);
CREATE TABLE test_table (val test_composite);
INSERT INTO test_table (val) VALUES ((1, 'one')::test_composite), ((3, 'three')::test_composite);
-- INSERT INTO test_table (val) VALUES ((4, 'four')::test_composite); -- restricted by the CHECK constraint
CREATE VIEW test_view AS SELECT * FROM test_table; -- just to show that the views using the type work as expected
CREATE OR REPLACE FUNCTION test_is_allowed_label(lbl TEXT) RETURNS BOOL AS $function$
SELECT lbl IN ('one', 'two', 'three', 'four');
$function$ LANGUAGE SQL IMMUTABLE;
INSERT INTO test_table (val) VALUES ((4, 'four')::test_composite); -- allowed by the new effective definition of the constraint
SELECT * FROM test_view;
CREATE OR REPLACE FUNCTION test_is_allowed_label(lbl TEXT) RETURNS BOOL AS $function$
SELECT lbl IN ('one', 'two', 'three');
$function$ LANGUAGE SQL IMMUTABLE;
-- INSERT INTO test_table (val) VALUES ((4, 'four')::test_composite); -- restricted by the CHECK constraint, again
SELECT * FROM test_view; -- note the view lists the restricted value 'four' as no checks are made on existing data
DROP VIEW test_view;
DROP TABLE test_table;
DROP TYPE test_composite;
DROP DOMAIN test_domain;
DROP FUNCTION test_is_allowed_label(TEXT);
COMMIT;
Previously, I used a solution similar to the accepted answer, but it is far from being good once views or functions or composite types (and especially views using other views using the modified ENUMs...) are considered. The solution proposed in this answer seems to work under any conditions.
The only disadvantage is that no checks are performed on existing data when some allowed values are removed (which might be acceptable, especially for this question). (A call to ALTER DOMAIN test_domain VALIDATE CONSTRAINT val_check ends up with the same error as adding a new constraint to the domain used by a composite type, unfortunately.)
Note on a slight modification: I originally thought that CHECK (value = ANY(get_allowed_values())), where the get_allowed_values() function returns the list of allowed values, would not work, but it turned out that was my error; it works too.
As discussed above, the ALTER command cannot be run inside a transaction. The suggested way is to insert into the pg_enum table directly, retrieving the typelem from the pg_type table and calculating the next enumsortorder number.
The following is the code I use. (It checks whether a duplicate value exists before inserting, since there is a constraint between enumtypid and enumlabel.)
INSERT INTO pg_enum (enumtypid, enumlabel, enumsortorder)
SELECT typelem,
'NEW_ENUM_VALUE',
(SELECT MAX(enumsortorder) + 1
FROM pg_enum e
JOIN pg_type p
ON p.typelem = e.enumtypid
WHERE p.typname = '_mytypename'
)
FROM pg_type p
WHERE p.typname = '_mytypename'
AND NOT EXISTS (
SELECT * FROM
pg_enum e
JOIN pg_type p
ON p.typelem = e.enumtypid
WHERE e.enumlabel = 'NEW_ENUM_VALUE'
AND p.typname = '_mytypename'
)
Note that your type name is prepended with an underscore in the pg_type table. Also, the typname needs to be all lowercase in the where clause.
Now this can be written safely into your db migrate script.
If the column is actually backed by a check constraint rather than a native enum type (as, for example, Laravel's enum columns are on PostgreSQL), you can drop and re-create the constraint with the new list of values:
DB::statement("ALTER TABLE users DROP CONSTRAINT users_user_type_check");
$types = ['old_type1', 'old_type2', 'new_type3'];
$result = join( ', ', array_map(function ($value){
return sprintf("'%s'::character varying", $value);
}, $types));
DB::statement("ALTER TABLE users ADD CONSTRAINT users_user_type_check CHECK (user_type::text = ANY (ARRAY[$result]::text[]))");
When using Navicat you can go to types (under view -> others -> types) - get the design view of the type - and click the "add label" button.
I don't know if there is another option, but we can drop a value using:
select oid from pg_type where typname = 'fase';
select * from pg_enum where enumtypid = 24773;
select * from pg_enum where enumtypid = 24773 and enumsortorder = 6;
delete from pg_enum where enumtypid = 24773 and enumsortorder = 6;
Simplest: get rid of enums. They are not easily modifiable, and thus should very rarely be used.
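If you do drop the enum, one hedged alternative (all names here are illustrative) is a plain text column constrained by a lookup table, so adding a new allowed value is a simple INSERT rather than DDL:
CREATE TABLE statuses (status text PRIMARY KEY);
INSERT INTO statuses VALUES ('active'), ('inactive');

CREATE TABLE accounts (
  id     serial PRIMARY KEY,
  status text NOT NULL REFERENCES statuses (status)
);

-- adding a new allowed value later needs no DDL on accounts:
INSERT INTO statuses VALUES ('suspended');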