How can I ensure a new record contains values that refer to a schema, table, and column that currently exist in the database?
For example, given a table:
CREATE TEMP TABLE "column_reference" (
"gid" SERIAL PRIMARY KEY
, "val" INTEGER
, "schema" TEXT
, "table" TEXT
, "column" TEXT
);
how can I ensure schema.table.column exists?
I tried a foreign key to information_schema.columns, but, of course, foreign keys to views are disallowed.
It also appears from the columns view definition that I would need several tables to get the schema, table, and column names, so I can't create a single foreign key to the source tables.
My current workaround is to manually create a __columns table from the information_schema.columns view and reference it instead, roughly as sketched below. This works given the control I happen to have over this project at this point in time, but I am looking for a permanent, dynamic solution.
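For reference, the workaround looks roughly like this (only a sketch; the exact layout is an assumption, and the copied table goes stale unless it is refreshed after DDL changes):
-- TEMP only to match the example table above: a temporary table may only
-- reference other temporary tables with a foreign key.
CREATE TEMP TABLE __columns AS
SELECT table_schema::text, table_name::text, column_name::text
FROM   information_schema.columns;

ALTER TABLE __columns
    ADD PRIMARY KEY (table_schema, table_name, column_name);

ALTER TABLE column_reference
    ADD FOREIGN KEY ("schema", "table", "column")
    REFERENCES __columns (table_schema, table_name, column_name);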
Is there a different constraint or method I could use?
You can create a trigger function that checks what you want, and associate this function with a trigger which is fired BEFORE an INSERT or an UPDATE of the table:
This could be your trigger function:
CREATE FUNCTION column_reference_check()
RETURNS trigger
LANGUAGE plpgsql
AS
$BODY$
begin
    /* Check for the existence of the required column */
    if EXISTS (
        SELECT *
        FROM   information_schema.columns
        WHERE  table_schema = new.schema
        AND    table_name   = new.table
        AND    column_name  = new.column )
    then
        /* Everything OK */
        return new ;
    else
        /* This is approx. what would happen if you had a constraint */
        RAISE EXCEPTION 'Trying to insert non-matching (%, %, %)', new.schema, new.table, new.column ;
        /* As an alternative, you could instead just
               return NULL ;
           The row would then *not* be inserted, but execution would continue */
    end if ;
end ;
$BODY$;
To associate this function with a trigger, you'd use:
CREATE TRIGGER column_reference_check_trg
BEFORE INSERT OR UPDATE OF "schema", "table", "column"
ON column_reference
FOR EACH ROW
EXECUTE PROCEDURE column_reference_check();
Now you can try the following INSERT, which should succeed:
INSERT INTO column_reference
VALUES (2, 1, 'pg_catalog', 'pg_statistic', 'starelid');
But if you try this one:
INSERT INTO column_reference
VALUES (-1, 1, 'false_schema', 'false_table', 'false_column');
... you get an exception:
ERROR: Trying to insert non-matching (false_schema, false_table, false_column)
CONTEXT: PL/pgSQL function column_reference_check() line 16 at RAISE
I have a stored procedure that updates a type of Star. The database table, starValList, has a foreign key for a table called galaxyValList. That key is galaxyID.
So I need to create a new galaxyID value if it is null or an empty GUID.
So I try this:
IF (@galaxyID IS NULL OR @galaxyID = '00000000-0000-0000-0000-000000000000')
BEGIN
    SELECT @galaxyID = NEWID()
END

UPDATE starValList SET
    [starIRR]  = @starIRR,
    [starDesc] = @starDesc,
    [starType] = @starType,
    [galaxyID] = @galaxyID
WHERE [starID] = @starID;
And it works for the starValList table!
But sometimes it fails with this error:
The UPDATE statement conflicted with the FOREIGN KEY constraint "FK_starValList_galaxyValList". The conflict occurred in database "Astro105", table "dbo.galaxyValList", column 'galaxyID'.
It fails because there may not yet be an entry for that particular galaxy in galaxyValList table.
But I still need the row in galaxyValList because it can be used later.
How can I fix my stored procedure so that it doesn't generate this error?
Thanks!
Use IF EXISTS to check whether the value exists in the table. If it does, do the update. If it doesn't, add whatever logic your requirements call for (for example, create the row) so the value can then be used in the update. Basic example below:
IF (@galaxyID IS NULL OR @galaxyID = '00000000-0000-0000-0000-000000000000')
BEGIN
    SELECT @galaxyID = NEWID()
END

IF NOT EXISTS (SELECT TOP 1 1 FROM galaxyValList WHERE galaxyID = @galaxyID)
BEGIN
    -- the @galaxyID doesn't exist yet; create it so the value can be used in the update below
    INSERT INTO galaxyValList (galaxyID)
    SELECT @galaxyID
END

UPDATE starValList SET
    [starIRR]  = @starIRR,
    [starDesc] = @starDesc,
    [starType] = @starType,
    [galaxyID] = @galaxyID
WHERE [starID] = @starID;
My question is the following:
I have a PostgreSQL database with several tables. I will use three tables as an example of what I want to do.
The first table is called teachers and has the attributes name, id and type, where type can only be "IN" or "EX".
I have a second table called internal teachers with the id of every teacher whose type = "IN", and another table called external teachers with the id of every teacher whose type = "EX".
The thing is that at the moment I have to add the id to the internal teachers and external teachers tables manually. I would like to know whether there is a way to have the database automatically place the id in one table or the other, depending on the type, whenever a teacher's id and type are inserted into the teachers table.
Greetings and thanks!
If you want automatic insertion into the internal and external tables, you can use a trigger like the one below:
Trigger Function
create or replace function trig_fun()
returns trigger as
$$
begin
    if (new.type = 'IN') then
        insert into internal values (new.id);
    end if;
    if (new.type = 'EX') then
        insert into external values (new.id);
    end if;
    return new;
end;
$$
language plpgsql;
and attach it to the AFTER INSERT event of the teachers table:
create trigger trig_on_insert
after insert on teachers
for each row
execute procedure trig_fun();
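A quick check (the teacher names and the internal/external column layout are assumptions):
insert into teachers (id, name, type) values (1, 'Alice', 'IN');
insert into teachers (id, name, type) values (2, 'Bob', 'EX');

select * from internal;   -- contains id 1
select * from external;   -- contains id 2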
You could implement your ‘internal’ and ‘external’ tables as separate views of the teachers table.
create table teachers as
select * from (values ( 1, 'EX')) as z(id, type);
create view external as (select * from teachers where type = 'EX');
create view internal as (select * from teachers where type = 'IN');
The following query:
select * from external;
gives:
(1, 'EX')
Whereas selecting from 'internal' gives an empty result (because there are no internal entries).
http://sqlfiddle.com/#!17/25a5a/1
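For completeness, inserting a row with type = 'IN' makes it show up in the internal view (continuing the example above):
insert into teachers values (2, 'IN');

select * from internal;   -- now returns (2, 'IN')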
There is no Postgres functionality that automatically creates a row in a child table on insert into the parent table. Foreign key constraints enforce integrity when the data changes and come with various options, but not the one you describe.
One option is to write a trigger. You could also be explicit in the query, using Postgres’ returning syntax:
with
t as (
insert into teachers(name, type)
values (?, ?)
returning *
),
x as (
insert into teachers_internal(id)
select id from t where type = 'IN'
)
insert into teachers_external(id)
select id from t where type = 'EX'
I would recommend creating foreign keys too, so the proper action is taken when rows are deleted from the parent table (you probably want to delete the child rows too), for example as sketched below.
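A sketch (the constraint names are arbitrary; it assumes teachers.id is the primary key):
alter table teachers_internal
    add constraint teachers_internal_id_fkey
    foreign key (id) references teachers (id)
    on delete cascade;

alter table teachers_external
    add constraint teachers_external_id_fkey
    foreign key (id) references teachers (id)
    on delete cascade;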
I am trying to run the following merge statement to insert a row:
MERGE sales.Widget
USING (
VALUES ('19668651', 4.75))
AS widg (WidgetId, WidgetCost)
ON 1=0
WHEN NOT MATCHED THEN
INSERT (WidgetId, WidgetCost)
VALUES (widg.WidgetId, widg.WidgetCost)
OUTPUT INSERTED.WidgetId
INTO @inserted;
GO
I am confused by the error I am getting:
The column reference "inserted.WidgetId" is not allowed because it refers to a base table that is not being modified in this statement.
I thought that the inserted table was just an in-memory table of the values being passed in to the merge statement.
Why then would it care if I am modifying a "base" table as long as the value was passed in?
I can clearly tell that this is related to the fact that I have a view with an INSTEAD OF INSERT trigger on it (because it works fine against a normal table).
But why does SQL Server not just return the value that was passed in? (WidgetId in this case.)
Here is the script to reproduce the error:
CREATE SCHEMA sales
GO
-- Create the base table
CREATE TABLE sales.Widget_OLD(
WIDGET_ID int NOT NULL,
WIDGET_COST money NOT NULL,
CONSTRAINT PK_Widget PRIMARY KEY CLUSTERED (WIDGET_ID ASC)
)
GO
-- Create the overlay view
CREATE VIEW sales.Widget AS
SELECT widg.WIDGET_ID AS WidgetId, widg.WIDGET_COST AS WidgetCost
FROM sales.Widget_OLD widg
GO
-- create the instead of insert trigger
CREATE TRIGGER sales.InsertWidget ON sales.Widget
INSTEAD OF INSERT AS
BEGIN
INSERT INTO sales.Widget_OLD (WIDGET_ID, WIDGET_COST)
SELECT Inserted.WidgetId, inserted.WidgetCost
FROM Inserted
END
GO
DECLARE @inserted TABLE (WidgetId varchar(11) NOT NULL);
MERGE sales.Widget
USING (
VALUES ('19668651', 4.75))
AS widg (WidgetId, WidgetCost)
ON 1=0
WHEN NOT MATCHED THEN
INSERT (WidgetId, WidgetCost)
VALUES (widg.WidgetId, widg.WidgetCost)
OUTPUT INSERTED.WidgetId
INTO @inserted;
GO
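-- For contrast (not part of the original repro): the same MERGE ... OUTPUT ... INTO
-- pattern succeeds when it targets the base table directly, since no INSTEAD OF
-- trigger is involved.
DECLARE @inserted2 TABLE (WidgetId varchar(11) NOT NULL);

MERGE sales.Widget_OLD AS tgt
USING (
VALUES (19668651, 4.75))
AS widg (WIDGET_ID, WIDGET_COST)
ON 1=0
WHEN NOT MATCHED THEN
INSERT (WIDGET_ID, WIDGET_COST)
VALUES (widg.WIDGET_ID, widg.WIDGET_COST)
OUTPUT INSERTED.WIDGET_ID
INTO @inserted2;

SELECT WidgetId FROM @inserted2;
GO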
-- Clean up
DROP TRIGGER sales.InsertWidget
DROP VIEW sales.Widget
DROP TABLE sales.Widget_OLD
DROP SCHEMA sales
go
NOTE: This comes from my Entity Framework Core application when I try to do 3+ inserts (see this question for more details). That question is about how to stop EF Core from using MERGE; this one is about understanding what is happening.
Suppose I execute a procedure that drops a table and then recreates it using SELECT INTO.
If that procedure raises an exception after dropping the table, does the drop still take effect or not?
Unless you wrap the statements in a transaction, the table will be dropped, since each statement is treated as its own implicit transaction.
Below are some tests. Assuming a table t11 already exists, run this batch:
create table t1
(
id int not null primary key
)
drop table t11
insert into t1
select 1 union all select 1
Table t11 will be dropped, even though the insert raises an exception (a primary key violation on t1).
One more example:
drop table orderstest
print 'dropped table'
waitfor delay '00:00:05'
select * into orderstest
from Orders
Now, a couple of seconds in (during the WAITFOR delay), kill the session, and you can still see that orderstest has been dropped.
I checked with some statements other than SELECT INTO, and I don't see a reason why SELECT INTO would behave differently; this applies even if you wrap the statements in a stored procedure.
If you want to roll back everything, use a transaction, or better yet, use SET XACT_ABORT ON, as sketched below.
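For example, reusing the orderstest example from above (only a sketch):
set xact_abort on;

begin transaction;
    drop table orderstest;
    select * into orderstest from Orders;
commit transaction;
-- if the SELECT INTO fails, XACT_ABORT rolls back the whole transaction,
-- so the DROP is undone and orderstest is still there.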
Yes, the dropped table will be gone. I have had this issue when I script a new primary key. Depending on the table, it saves all the data to a table variable in memory, drops the table, creates a new one with the new PK, then loads the data. If the data violates the new PK, the statement fails and the table variable is dropped, leaving me with a new table and no data.
My practice is to create the new table with a slightly different name, load the data, swap the two table names, and then, once all the data is confirmed loaded, drop the original table, roughly as sketched below.
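A sketch of that approach (the table names are hypothetical):
-- build and load the replacement table first
select *
into dbo.Star_new
from dbo.Star

-- ... add the new primary key to dbo.Star_new and confirm the data ...

-- swap the names, then drop the original only once everything checks out
begin transaction
    exec sp_rename 'dbo.Star', 'Star_old'
    exec sp_rename 'dbo.Star_new', 'Star'
commit transaction

drop table dbo.Star_old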
I would like to make Postgres choose the first next available id so that no error occurs in the following case:
CREATE TABLE test(
id serial PRIMARY KEY,
name varchar
);
Then:
INSERT INTO test VALUES (2,'dd');
INSERT INTO test (name) VALUES ('aa');
INSERT INTO test (name) VALUES ('bb');
The third insert fails with a duplicate key error, because the sequence also hands out id 2, which was already used by the manual insert.
How can I tell Postgres to insert the record with the next free id?
Generally it's best to never overrule the default in a serial column. If you sometimes need to provide id values manually, replace the standard DEFAULT clause nextval('sequence_name') of the serial column with a custom function that omits existing values.
Based on this dummy table:
CREATE TABLE test (test_id serial PRIMARY KEY, test text);
Function:
CREATE OR REPLACE FUNCTION f_test_test_id_seq(OUT nextfree bigint) AS
$func$
BEGIN
LOOP
    SELECT INTO nextfree  val
    FROM   nextval('test_test_id_seq'::regclass) val  -- use actual name of sequence
    WHERE  NOT EXISTS (SELECT 1 FROM test WHERE test_id = val);

    EXIT WHEN FOUND;
END LOOP;
END
$func$ LANGUAGE plpgsql;
Alter default:
ALTER TABLE test ALTER COLUMN test_id SET DEFAULT f_test_test_id_seq();
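With that default in place, the sequence of inserts from the question goes through (a quick check, assuming a freshly created test table so the sequence starts at 1):
INSERT INTO test (test_id, test) VALUES (2, 'dd');  -- manual id
INSERT INTO test (test) VALUES ('aa');              -- gets 1 from the sequence
INSERT INTO test (test) VALUES ('bb');              -- the sequence yields 2 (already taken), so the default skips to 3

SELECT * FROM test ORDER BY test_id;                -- test_id 1, 2, 3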
It's not strictly a serial any more, but serial is only a convenience feature anyway:
Safely and cleanly rename tables that use serial primary key columns in Postgres?
And if you build this on top of a serial column the SEQUENCE is automatically "owned" by the table column, which is probably a good thing.
This is a slightly faster variant of:
Autoincrement, but omit existing values in the column
Table and sequence name are hard coded here. You could easily parametrize the sequence name (like in the linked answer) and even the table name - and test existence with a dynamic statement using EXECUTE. That would give you a generic function, but the call would be a bit more expensive:
CREATE OR REPLACE FUNCTION f_nextfree(_tbl regclass
, _col text
, _seq regclass
, OUT nextfree bigint) AS
$func$
BEGIN
LOOP
    EXECUTE '
    SELECT val FROM nextval($1) val WHERE NOT EXISTS (
       SELECT 1 FROM ' || _tbl || ' WHERE ' || quote_ident(_col) || ' = val)'
    INTO  nextfree
    USING _seq;

    EXIT WHEN nextfree IS NOT NULL;
END LOOP;
END
$func$ LANGUAGE plpgsql;
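Applied to a second table (assuming it was created like the first one):
CREATE TABLE test2 (test2_id serial PRIMARY KEY, test2 text);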
ALTER TABLE test2 ALTER COLUMN test2_id
SET DEFAULT f_nextfree('test2', 'test2_id', 'test2_test2_id_seq');