I've been struggling to get DbMetal to process my SQLite database. I finally isolated the problem. It won't allow a table to have two foreign key references to the same column.
For example, a SQLite database with these two tables will fail:
CREATE TABLE Person
(
Id INTEGER PRIMARY KEY,
Name TEXT NOT NULL
);
CREATE TABLE Match
(
Id INTEGER PRIMARY KEY,
WinnerPersonId INTEGER NOT NULL REFERENCES Person(Id),
LoserPersonId INTEGER NOT NULL REFERENCES Person(Id)
);
I get this error:
DbMetal: Sequence contains more than one matching element
If I get rid of the second foreign key reference, no error occurs.
So, this works:
CREATE TABLE Match
(
Id INTEGER PRIMARY KEY,
WinnerPersonId INTEGER NOT NULL REFERENCES Person(Id),
LoserPersonId INTEGER NOT NULL
);
But I really need both "person" columns to reference the Person table.
I submitted a bug report for this, but I could use a workaround in the meantime. Any ideas?
I just had the same problem and created a patch. I've also posted it at your bug report. For others, you can find the patch here: http://pastebin.com/VhNptMqp.
Related
I am talking about the normalization of a primary key. So let's say my primary key column is of type nvarchar, which violates the rules of normalization. After removing the primary key constraint and the identity specification from that column, I need to create a new column which will be the new primary key of the table.
My question is: what should happen to the previous primary key?
I've got an answer that sounds like "the column should become a semantic key", but I can't understand this answer.
It's not unusual when designing a database schema to use a SURROGATE primary key. The idea is to give each record a unique and permanent identifier so it can be easily referenced by applications and foreign keys. This key has no meaning. Knowing the surrogate key gives you no information about the content of the record. The user of your application would never see this value.
On the other hand, your record may have a SEMANTIC primary key. This is a unique value that identifies the data and that makes sense to the user.
For example, let's say you have a table of Employees. The employer assigns each employee a unique Employee ID Number. Let's say you store this value as a string. To the user that value serves as the unique identifier that refers to that employee. Meanwhile, your table may have a numeric column that serves as the unique identifier for that record.
create table Employee ( EmployeeRecordID int identity(1,1) primary key,
EmployerAssignedID nvarchar(12),
EmployeeName nvarchar(60),
Salary money )
insert into Employee ( EmployerAssignedID, EmployeeName, Salary ) values
( '#ABC100', 'Fred', 25000.12 ),
( '#AZZ314', 'Mary', 37700.00 ),
( '#MAA719', 'Fran', 34444.04 ),
( '#MZA977', 'Mary', 36000.00 )
As each record is added, SQL Server generates a unique EmployeeRecordID for each record, starting with 1. This is the SURROGATE key. Within your database and within your application, you would use this value to reference the record.
But when your application is communicating with the users, you would use the EmployerAssignedID. This is the SEMANTIC primary key. It makes sense to your users to use this value to search for a particular employee.
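For illustration, here is a hedged sketch of how each key would typically be used in queries (the Timesheet table below is hypothetical and not part of the example):
-- Users search by the SEMANTIC key:
SELECT EmployeeRecordID, EmployeeName, Salary
FROM Employee
WHERE EmployerAssignedID = '#ABC100';
-- Other tables reference the SURROGATE key internally (hypothetical Timesheet table):
CREATE TABLE Timesheet (
  TimesheetID int identity(1,1) primary key,
  EmployeeRecordID int NOT NULL REFERENCES Employee (EmployeeRecordID),
  HoursWorked decimal(5,2) );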
A primary key is no more than a unique index whose key columns can't contain NULL values. Like any index, it can be clustered or nonclustered.
Deleting a clustered index turns the table into a heap, with changes in structure and behaviour. Deleting a nonclustered index just deallocates its space and affects neither the table nor the other indexes on it.
So after the deletion you just have a column (or columns) with unique values, and you can consider them a semantic key until some duplicate values are inserted.
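A minimal T-SQL sketch of the swap the question describes, assuming a hypothetical Product table whose old nvarchar key is ProductCode (all names here are made up):
ALTER TABLE Product DROP CONSTRAINT PK_Product;                           -- drop the old primary key constraint
ALTER TABLE Product ADD ProductRecordID int IDENTITY(1,1) NOT NULL;       -- add the new surrogate column
ALTER TABLE Product ADD CONSTRAINT PK_Product PRIMARY KEY (ProductRecordID);
ALTER TABLE Product ADD CONSTRAINT UQ_Product_Code UNIQUE (ProductCode);  -- old key stays unique as the semantic key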
I'm having a really, really strange issue with postgres. I'm trying to generate GUIDs for business objects in my database, and I'm using a new schema for this. I've done this with several business objects already; the code I'm using here has been tested and has worked in other scenarios.
Here's the definition for the new table:
CREATE TABLE guid.public_obj
(
guid uuid NOT NULL DEFAULT uuid_generate_v4(),
id integer NOT NULL,
CONSTRAINT obj_guid_pkey PRIMARY KEY (guid),
CONSTRAINT obj_id_fkey FOREIGN KEY (id)
REFERENCES obj (obj_id)
ON UPDATE CASCADE ON DELETE CASCADE
)
However when I try to backfill this using the following code, I get a SQL state 23503 claiming that I'm violating the foreign key constraint.
INSERT INTO guid.public_obj (guid, id)
SELECT uuid_generate_v4(), o.obj_id
FROM obj o;
ERROR: insert or update on table "public_obj" violates foreign key constraint "obj_id_fkey"
SQL state: 23503
Detail: Key (id)=(-2) is not present in table "obj".
However, if I do a SELECT on the source table, the value is definitely present:
SELECT uuid_generate_v4(), o.obj_id
FROM obj o
WHERE obj_id = -2;
"0f218286-5b55-4836-8d70-54cfb117d836";-2
I'm baffled as to why postgres might think I'm violating the fkey constraint when I'm pulling the value directly out of the corresponding table. The only constraint on obj_id in the source table definition is that it's the primary key. It's defined as a serial; the select returns it as an integer. Please help!
Okay, apparently the reason this is failing is because unbeknownst to me the table (which, I stress, does not contain many elements) is partitioned. If I do a SELECT COUNT(*) FROM obj; it returns 348, but if I do a SELECT COUNT(*) FROM ONLY obj; it returns 44. Thus, there are two problems: first, some of the data in the table has not been partitioned correctly (there exists unpartitioned data in the parent table), and second, the data I'm interested in is split out across multiple child tables and the fkey constraint on the parent table fails because the data isn't actually in the parent table. (As a note, this is not my architecture; I'm having to work with something that's been around for quite some time.)
The partitioning is by implicit type (there are three partitions, each of which contains rows relating to a specific subtype of obj) and I think the eventual solution is going to be creating GUID tables for each of the subtypes. I'm going to have to handle the stuff that's actually in the obj table probably by selecting it into a temp table, dropping the rows from the obj table, then reinserting them so that they can be partitioned properly.
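A hedged sketch of that cleanup, assuming the existing partitioning triggers on obj route fresh inserts to the correct child table:
BEGIN;
CREATE TEMP TABLE obj_misplaced AS
  SELECT * FROM ONLY obj;                      -- rows physically stuck in the parent table
DELETE FROM ONLY obj;                          -- remove them from the parent only
INSERT INTO obj SELECT * FROM obj_misplaced;   -- re-insert so they get partitioned properly
COMMIT;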
The following code creates a table without raising any errors:
CREATE TABLE test(
ID INTEGER NULL,
CONSTRAINT PK_test PRIMARY KEY(ID)
)
Note that I cannot insert a NULL, as expected:
INSERT INTO test
VALUES(1),(NULL)
ERROR: null value in column "id" violates not-null constraint
DETAIL: Failing row contains (null).
********** Error **********
ERROR: null value in column "id" violates not-null constraint
SQL state: 23502
Detail: Failing row contains (null).
Why can I create a table with a self-contradictory definition? The ID column is explicitly declared as NULLable, yet as part of the PRIMARY KEY it is implicitly not nullable. Does that make sense?
Edit: would it not be better if this self-contradictory CREATE TABLE just failed right there?
Because the PRIMARY KEY makes the included column(s) NOT NULL automatically. I quote the manual here:
The primary key constraint specifies that a column or columns of a
table can contain only unique (non-duplicate), nonnull values.
Technically, PRIMARY KEY is merely a combination of UNIQUE and NOT NULL.
Bold emphasis mine.
I ran a test to confirm that NOT NULL is completely redundant in combination with a PRIMARY KEY constraint (in the current implementation, retested in version 13). The NOT NULL constraint stays even after dropping the PK constraint, regardless of an explicit NOT NULL clause at creation time.
CREATE TABLE foo (foo_id int PRIMARY KEY);
ALTER TABLE foo DROP CONSTRAINT foo_pkey;
db=# \d foo
            Table "public.foo"
 Column |  Type   | Modifiers
--------+---------+-----------
 foo_id | integer | not null      -- stays
db<>fiddle here
Identical behavior if NULL is included in the CREATE TABLE statement.
It still won't hurt to keep NOT NULL redundantly in code repositories if the column is supposed to be NOT NULL. If you later decide to alter the PK constraint, you might forget to mark the column NOT NULL - or forget whether it even was supposed to be NOT NULL.
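For example, spelling it out explicitly does no harm:
CREATE TABLE foo (foo_id int NOT NULL PRIMARY KEY);  -- NOT NULL is redundant here, but documents intent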
There is an item in the Postgres TODO wiki to decouple NOT NULL from the PK constraint. So this might change in future versions:
Move NOT NULL constraint information to pg_constraint
Currently NOT NULL constraints are stored in pg_attribute without any designation of their origins, e.g. primary keys. One manifest problem is that dropping a PRIMARY KEY constraint does not remove the NOT NULL constraint designation. Another issue is that we should probably force NOT NULL to be propagated from parent tables to children, just as CHECK constraints are. (But then does dropping PRIMARY KEY affect children?)
Answer to added question
Would it not be better if this self-contradictory CREATE TABLE just
failed right there?
As explained above, this
foo_id INTEGER NULL PRIMARY KEY
is (currently) 100 % equivalent to:
foo_id INTEGER PRIMARY KEY
because NULL is treated as a noise word in this context.
And we wouldn't want the latter to fail, so this is not an option.
If memory serves, the docs mention that:
the null in create table statements is basically a noise word that gets ignored
the primary key forces a not null and a unique constraint
See:
# create table test (id int null primary key);
CREATE TABLE
# \d test
Table "public.test"
Column | Type | Modifiers
--------+---------+-----------
id | integer | not null
Indexes:
"test_pkey" PRIMARY KEY, btree (id)
If, as @ErwinBrandstetter said, PRIMARY KEY is merely a combination of UNIQUE and NOT NULL, you can use a UNIQUE constraint without NOT NULL instead of a PRIMARY KEY. Example:
CREATE TABLE test(
id integer,
CONSTRAINT test_id_key UNIQUE(id)
);
This way you can do things like:
INSERT INTO test (id) VALUES (NULL);
INSERT INTO test (id) VALUES (NULL);
INSERT INTO test (id) VALUES (NULL);
Speaking about NOT NULL, there are many ways to ensure it.
Speaking not only about PostgreSQL as a relational database engine:
A column constraint.
A table constraint (a single NOT NULL or a complex Boolean expression).
An index definition.
Triggers that change any NULL to "something else".
There may even be other methods.
One suffices. Does not having the others mean we have a contradiction? I do not think so.
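A minimal sketch of the first two options (PostgreSQL syntax, hypothetical table):
CREATE TABLE demo (
  a int NOT NULL,                                -- 1. column constraint
  b int,
  CONSTRAINT b_not_null CHECK (b IS NOT NULL)    -- 2. table-level CHECK expression
);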
A PRIMARY KEY column is forced to be NOT NULL.
The documentation says as shown below:
Adding a primary key will automatically create a unique B-tree index
on the column or group of columns listed in the primary key, and will
force the column(s) to be marked NOT NULL.
I am having a bit of trouble creating a foreign key in my DB. Here is a paraphrased model of what my tables look like:
NOTE
* (PK) NOTE_ID BIGINT
* TITLE VARCHAR(200)
* DATE DATETIME
* SERIES_ID BIGINT
SERIES
* (PK) SERIES_ID BIGINT
* TITLE VARCHAR(200)
* DESCR VARCHAR(1000)
I am trying to create a "has a" relationship between NOTE and SERIES by SERIES_ID. I thought that setting up a foreign key between the two tables by SERIES_ID would be the solution, but when I attempt to create it I get the following error:
ERROR: There are no primary or candidate keys in the referenced table 'dbo.SERIES' that match the referencing column list in the
foreign key 'FK_SERIES_NOTE'. Could not create constraint
I'm using the web database manager that comes with the GoDaddy SQL Server I set up, so I'm not sure what underlying query it's trying to use, or I would post it.
At the end of the day, this is all to create a relationship so that the NHibernate mappings for my Note object will contain a one-to-one relationship to a Series object. I may not even be trying to tackle this the correct way with the foreign key, though.
Am I going about this the correct way?
EDIT:
In an attempt to pare down the tables to a simpler example, I removed what I thought were several non-critical columns. However, I ended up leaving out a field that was actually part of a composite primary key on the SERIES table. So, because I was trying to point the foreign key at only one part of the composite key, it was not allowed.
In the end, I took another look at the structure of my table and found that I don't actually need the other piece of the composite key - and after removing it, the foreign key assignment works great.
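For reference, the fix amounts to something like this hedged sketch (the constraint names and the second key part are hypothetical):
ALTER TABLE SERIES DROP CONSTRAINT PK_SERIES;     -- was PRIMARY KEY (SERIES_ID, OtherKeyPart)
ALTER TABLE SERIES ADD CONSTRAINT PK_SERIES PRIMARY KEY (SERIES_ID);
ALTER TABLE NOTE ADD CONSTRAINT FK_SERIES_NOTE
  FOREIGN KEY (SERIES_ID) REFERENCES SERIES (SERIES_ID);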
If you can, you may try running the following statement in a query analyzer and look at the resulting error message (I guess @Damien_The_Unbeliever is right):
ALTER TABLE NOTE ADD CONSTRAINT FK_SERIES_NOTE
FOREIGN KEY (SERIES_ID) REFERENCES SERIES(SERIES_ID)
--ON DELETE CASCADE
-- uncomment the preceding line if you want a delete on a series
-- to automatically delete all notes in that series
Hope this will help
I'm having trouble figuring out how to create a foreign key constraint. My data model is fixed and out of my control, it looks like this:
CREATE TABLE Enquiry
(Enquiry_Ref INTEGER PRIMARY KEY CLUSTERED, Join_Ref INTEGER, EnquiryDate, EnquiryType...)
CREATE TABLE Contact
(Contact_Ref INTEGER PRIMARY KEY CLUSTERED, Surname, Forenames ....)
CREATE TABLE UniversalJoin
(Join_Ref INTEGER, Contact_Ref INTEGER, Rel_Type INTEGER)
Each Enquiry has exactly one Contact. The link between the two is the UniversalJoin table where
Enquiry.Join_Ref = UniversalJoin.Join_Ref AND
Rel_Type = 1 AND
UniversalJoin.Contact_Ref = Contact.Contact_Ref
The Rel_Type differs depending on what the source table is, so in the case of Enquiry, Rel_Type is 1, but for another table it would be set to N.
My question is how do I create a foreign key constraint to enforce the integrity of this relationship? What I want to say, but can't, is:
CREATE TABLE Enquiry
...
CONSTRAINT FK_Foo
FOREIGN KEY (Join_Ref)
REFERENCES UniversalJoin (JoinRef WHERE Rel_Type=1)
You can't use conditional or filtered foreign keys in SQL Server.
In a case like this, you could add a Rel_Type column to Enquiry, create a multiple-column FK on (Join_Ref, Rel_Type), and set a CHECK constraint on Rel_Type in Enquiry to force it to 1; UniversalJoin needs a unique key on (Join_Ref, Rel_Type) to serve as the FK target - see the sketch below.
However, I think you are trying to have a row with multiple parents, which can't be done.
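A hedged sketch of that approach (the added column, default, and constraint names are hypothetical):
ALTER TABLE UniversalJoin
  ADD CONSTRAINT UQ_UniversalJoin UNIQUE (Join_Ref, Rel_Type);   -- the FK target must be a unique key
ALTER TABLE Enquiry ADD Rel_Type INTEGER NOT NULL
  CONSTRAINT DF_Enquiry_RelType DEFAULT 1
  CONSTRAINT CK_Enquiry_RelType CHECK (Rel_Type = 1);            -- pin enquiries to Rel_Type 1
ALTER TABLE Enquiry ADD CONSTRAINT FK_Enquiry_UniversalJoin
  FOREIGN KEY (Join_Ref, Rel_Type)
  REFERENCES UniversalJoin (Join_Ref, Rel_Type);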
You might rather want to have a look at CHECK Constraints
CHECK constraints enforce domain integrity by limiting the values that are accepted by a column. They are similar to FOREIGN KEY constraints in that they control the values that are put in a column. The difference is in how they determine which values are valid: FOREIGN KEY constraints obtain the list of valid values from another table, and CHECK constraints determine the valid values from a logical expression that is not based on data in another column.
You could use a table trigger on INSERT and UPDATE to implement the equivalent of a FK.
This way you are able to apply conditions, e.g. if the column value = 1, check that the row exists in table A; if the column value = 2, check another table.
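A hedged T-SQL sketch of that idea, using the Enquiry and UniversalJoin tables from the question (the trigger name is made up):
CREATE TRIGGER TR_Enquiry_JoinCheck
ON Enquiry
AFTER INSERT, UPDATE
AS
BEGIN
  -- reject the change if any affected row lacks a matching Rel_Type = 1 join row
  IF EXISTS (SELECT 1
             FROM inserted i
             WHERE NOT EXISTS (SELECT 1
                               FROM UniversalJoin uj
                               WHERE uj.Join_Ref = i.Join_Ref
                                 AND uj.Rel_Type = 1))
  BEGIN
    RAISERROR ('Join_Ref must reference a UniversalJoin row with Rel_Type = 1', 16, 1);
    ROLLBACK TRANSACTION;
  END
END;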