I have a table of identifiers, IntervalFrom and IntervalTo:
Identifier  IntervalFrom  IntervalTo
1           0             2
1           2             4
2           0             2
2           2             4
I already have a trigger to NOT allow the intervals to overlap.
I am looking for a trigger or constraint that will not allow data gaps.
I have searched, and the information I found relates to finding gaps in queries and data rather than preventing them in the first place.
I am unable to find anything in relation to this as a trigger or constraint.
Is this possible using T-SQL?
Thanks in advance.
You can construct a table that is automatically immune to overlaps and gaps:
create table T (
ID int not null,
IntervalFrom int null,
IntervalTo int null,
constraint UQ_T_Previous_XRef UNIQUE (ID, IntervalTo),
constraint UQ_T_Next_XRef UNIQUE (ID, IntervalFrom),
constraint FK_T_Previous FOREIGN KEY (ID, IntervalFrom) references T (ID, IntervalTo),
constraint FK_T_Next FOREIGN KEY (ID, IntervalTo) references T (ID, IntervalFrom)
)
go
create unique index UQ_T_Start on T (ID) where IntervalFrom is null
go
create unique index UQ_T_End on T(ID) where IntervalTo is null
go
Note, this does require a slightly different convention for your first and last intervals - they need to use null rather than 0 or the (somewhat arbitrary) 4.
Note also that modifying data in such a table can be a challenge - if you're inserting a new interval, you also need to update other intervals to accommodate the new one. MERGE is your friend here.
Given the above, we can insert your (modified) sample data:
insert into T (ID, IntervalFrom, IntervalTo) values
(1,null,2),
(1,2,null),
(2,null,2),
(2,2,null)
go
But we cannot insert an overlapping value (this errors):
insert into T(ID, IntervalFrom, IntervalTo) values (1,1,3)
You should also see that the foreign keys prevent gaps from existing in a sequence.
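For example, this insert also errors, because IntervalFrom = 3 has no matching IntervalTo for ID 1, so the row would leave a gap:
insert into T (ID, IntervalFrom, IntervalTo) values (1, 3, 5)
And here is a rough sketch of the MERGE idea mentioned above: splitting ID 1's open-ended interval at the (arbitrary) point 5 by shrinking the existing row and inserting the new tail in a single statement, so the constraints are satisfied at the end of the statement:
merge T as tgt
using (values
    (1, 2, 5),      -- shrink the existing (2, NULL) interval
    (1, 5, null)    -- and add the new tail interval
) as src (ID, IntervalFrom, IntervalTo)
    on tgt.ID = src.ID and tgt.IntervalFrom = src.IntervalFrom
when matched then
    update set IntervalTo = src.IntervalTo
when not matched then
    insert (ID, IntervalFrom, IntervalTo)
    values (src.ID, src.IntervalFrom, src.IntervalTo);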
I have a database in which I have two tables:
CREATE TABLE Transactions (
ID BIGINT IDENTITY(1,1) NOT NULL,
AccountID BIGINT NOT NULL,
Amount BIGINT NOT NULL,
CONSTRAINT PK_Transactions PRIMARY KEY CLUSTERED (ID ASC,AccountID ASC),
CONSTRAINT FK_Transaction_Account FOREIGN KEY (AccountID) REFERENCES Accounts(ID)
);
CREATE TABLE Accounts (
ID BIGINT IDENTITY(1,11) NOT NULL,
Balance BIGINT NOT NULL,
CONSTRAINT PK_Accounts PRIMARY KEY (ID)
);
Transactions are inserted into their table by a stored procedure I wrote, so that two rows are generated when Account 1 transfers 25 "coins" to Account 21:
ID | AccountID | Amount
-------------------------
1 | 1 | -25
-------------------------
1 | 21 | 25
In the above schema, I want the first row to reference the bottom row based on the shared ID, with the AccountID being unequal to the AccountID of the bottom row.
And vice versa.
What i want to do would look something like this:
CONSTRAINT FK_Transaction_Counterpart FOREIGN KEY (ID) REFERENCES Transactions(ID) WHERE thisRow.AccountID != referencedRow.AccountID
I haven't found this possibility in the documentation on the table constraints.
So, both out of curiosity and with the intent to use it, I ask: is this possible? And if yes, how?
Edit:
Answers reflect that this is not possible, and that I should adjust my design or intentions.
I think I will settle for assigning the two transaction rows to each other in the application code.
A traditional foreign key can't be conditional (i.e. no WHERE clause attached). In your case, I'd probably just make sure that the inserts are atomic (in the same transaction) so that there'd be no possibility of only one of them being inserted.
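A minimal sketch of that, assuming the shared transfer ID comes from a sequence rather than the IDENTITY column (an IDENTITY column would not let both rows share an ID without IDENTITY_INSERT); the sequence name here is made up:
DECLARE @TransferID bigint = NEXT VALUE FOR dbo.TransferSeq;  -- hypothetical sequence supplying the shared ID

BEGIN TRY
    BEGIN TRANSACTION;

    INSERT INTO Transactions (ID, AccountID, Amount) VALUES (@TransferID, 1,  -25);
    INSERT INTO Transactions (ID, AccountID, Amount) VALUES (@TransferID, 21,  25);

    COMMIT TRANSACTION;
END TRY
BEGIN CATCH
    IF @@TRANCOUNT > 0 ROLLBACK TRANSACTION;  -- either both rows go in or neither does
    THROW;
END CATCH;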
If the data model you are trying to implement is:
One transaction (ID) has two and only two entries in table Transactions
For the two rows of a given Transaction ID, the AccountIDs cannot be the same
Then one perhaps overly-complex way you could enforce this business rule within the database table structures would be as follows:
Table Accounts, as you have defined
Table Transactions, as you have defined
New table TransactionPair with:
Columns (all are NOT NULL)
ID
LowAccountID
HighAccountID
Constraints
Primary key on ID (only one entry per Transaction ID)
Foreign key on (ID, LowAccountID) into Transactions
Foreign key on (ID, HighAccountID) into Transactions
Check constraint on the row such that LowAccountID < HighAccountID
Process:
Add pair of rows to Transactions table
Add single row to TransactionPair referencing the rows just added
If that row cannot be added, something failed, roll everything back
Seems neat and tidy, but quite possibly overly complex. Your mileage may vary.
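For what it's worth, a sketch of the DDL for that TransactionPair table might look like this (the names are mine, and it leans on the fact that your Transactions primary key is (ID, AccountID)):
CREATE TABLE TransactionPair (
    ID            BIGINT NOT NULL,
    LowAccountID  BIGINT NOT NULL,
    HighAccountID BIGINT NOT NULL,
    CONSTRAINT PK_TransactionPair PRIMARY KEY (ID),
    CONSTRAINT FK_TransactionPair_Low  FOREIGN KEY (ID, LowAccountID)  REFERENCES Transactions (ID, AccountID),
    CONSTRAINT FK_TransactionPair_High FOREIGN KEY (ID, HighAccountID) REFERENCES Transactions (ID, AccountID),
    CONSTRAINT CK_TransactionPair_Accounts CHECK (LowAccountID < HighAccountID)
);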
We have an Oracle database with a table where we store a lot of data.
This table has a primary key, and usually those primary keys are simply created upon insertion of a new row.
But now we need to manually insert data into this table with certain fixed primary keys. There is no way to change those primary keys.
So for example:
Our table has already 20 entries with the primary keys 1 to 20.
Now we need to add data manually with the primary keys 21 to 23.
When someone wants to enter a row using our standard approach, the insert process will fail because of:
Caused by: java.sql.BatchUpdateException: ORA-00001: unique constraint (VDMA.SYS_C0013552) violated
at oracle.jdbc.driver.OraclePreparedStatement.executeBatch(OraclePreparedStatement.java:10500)
at oracle.jdbc.driver.OracleStatementWrapper.executeBatch(OracleStatementWrapper.java:230)
at org.hibernate.jdbc.BatchingBatcher.doExecuteBatch(BatchingBatcher.java:70)
at org.hibernate.jdbc.AbstractBatcher.executeBatch(AbstractBatcher.java:268)
I totally understand this: The database routine (sequence) that is creating the next primary key fails because the next primary key is already taken.
But: How do I tell my sequence to look at the table again and to realize that the next primary key is 24 and not 21 ?
UPDATE
The reason why the IDs need to stay the same is that the records are accessed through a web interface using links that contain the ID.
So either we change the implementation mapping the old IDs to new IDs or we keep the IDs in the database.
UPDATE2
Found a solution: since we are using Hibernate, a single sequence populates all the tables. In the four days I spent looking for an answer, the primary keys went high enough that I can safely import all the data.
How do I tell my sequence to look at the table again and to realize that the next primary key is 24 and not 21 ?
In Oracle, a sequence doesn't know that you intend to use it for any particular table. All the sequence knows is its current value, its increment, its maxvalue, and so on. So you can't tell the sequence to look at a table, but you can tell your stored procedure to check the table and then increment the sequence beyond the maximum value of the primary key. In other words, if you really insist on manually inserting non-sequence values into the primary key, then your code needs to check for non-sequence values in the PK and bring the sequence up to speed before it uses the sequence to generate a new PK.
Here is something simple you can use to bring the sequence up to where it needs to be:
select testseq.nextval from dual;
Each time you run it the sequence increments by 1. Stick it in a for loop and run it until testseq.currval is where you need it to be.
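A minimal PL/SQL sketch of that loop (using the testtab/testseq names from the example below):
DECLARE
    v_max  NUMBER;
    v_next NUMBER := 0;
BEGIN
    SELECT NVL(MAX(testpk), 0) INTO v_max FROM testtab;
    WHILE v_next < v_max LOOP
        SELECT testseq.NEXTVAL INTO v_next FROM dual;  -- bump the sequence by one
    END LOOP;
END;
/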
Having said that, I agree with @a_horse_with_no_name and @EdStevens. If you have to insert rows manually, at least use sequence_name.nextval in the insert instead of a literal like '21'. Like this:
create table testtab (testpk number primary key, testval number);
create sequence testseq start with 1 increment by 1;
insert into testtab values (testseq.nextval, '12');
insert into testtab values (testseq.nextval, '123');
insert into testtab values (testseq.nextval, '1234');
insert into testtab values (testseq.nextval, '12345');
insert into testtab values (testseq.nextval, '123456');
select * from testtab;
testpk testval
2 12
3 123
4 1234
5 12345
6 123456
I am working on a project using DB2 in an attempt to create a small inventory system. Currently my team and I are looking for answers to a roadblock we have encountered while creating tables with auto-generated fields. I have included a sample CREATE statement to give you an understanding of what we are working with:
"CREATE TABLE INVENTORY (
PRODUCT_ID INT NOT NULL Generated always as identity (start with 1 increment by 1 minvalue 1 no maxvalue no cycle no cache order),
PRODUCT_NAME CHAR(100) NOT NULL,
DESCRIPTION CHAR(100),
UNIT_PRICE DECIMAL(6 , 2),
QUANTITY INTEGER
)
DATA CAPTURE NONE;
The issue we are coming across when we insert test data into our database (the DDL runs without errors, and the test data inserts without errors) is that we can view the values of an auto-generated field in the main table, but NOT in an intersection table where that value is used as a foreign key. An example of an intersection table that we are using in relation to our INVENTORY table is below:
CREATE TABLE RECEIVABLES (
ITEM_LINE INT NOT NULL Generated always as identity (start with 10 increment by 1 minvalue 10 no maxvalue no cycle no cache order),
PAYMENT_TYPE CHAR(15),
REMARKS CHAR(70),
RECEIVED_QUANTITY INTEGER,
VENDOR_ID CHAR(4) NOT NULL,
PRODUCT_ID INT,
SHIPMENT_ID INT
)
DATA CAPTURE NONE;
INSERT INTO RECEIVABLES VALUES (DEFAULT,'DISCOVER','Success',9,'1000',DEFAULT,DEFAULT);
When we view the data, we are not seeing any NULL values; we simply aren't seeing ANY values for any of the foreign key fields in the intersection table (in this case, PRODUCT_ID and SHIPMENT_ID).
Is there a particular method to inserting data into an intersection table that will allow the values of the Primary keys to be represented in the intersection table as well?
Thank you very much for your time. I will be actively responding to any questions you may have.
It takes more than naming a column the same to make it a foreign key. In fact, the name doesn't matter. What matters is defining a FK constraint over it:
CREATE TABLE RECEIVABLES (
ITEM_LINE INT NOT NULL Generated always as identity
(start with 10
increment by 1
minvalue 10
no maxvalue
no cycle
no cache order),
PAYMENT_TYPE CHAR(15),
REMARKS CHAR(70),
RECEIVED_QUANTITY INTEGER,
VENDOR_ID CHAR(4) NOT NULL,
PRODUCT_ID INT,
SHIPMENT_ID INT,
CONSTRAINT inv_fk FOREIGN KEY (product_id)
REFERENCES inventory (product_id),
CONSTRAINT shp_fk FOREIGN KEY (shipment_id)
REFERENCES shipments (shipment_id)
) DATA CAPTURE NONE;
I would have expected to see NULL values in the original table after your insert, since neither product_id nor shipment_id in receivables is defined as NOT NULL. Running the same insert against the table I've posted will still give you NULL in the FK columns.
The generated ID is only generated in the primary table. You have to specify the value when inserting into RECEIVABLES.
Now let's say you've just inserted a row into INVENTORY and you want to insert a row referencing it into RECEIVABLES. Before the insert into RECEIVABLES, you have to ask the DB what ID it generated for the row in INVENTORY.
The IDENTITY_VAL_LOCAL() function is one such way to do so...
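A sketch of that pattern (the product values here are made up):
INSERT INTO INVENTORY (PRODUCT_NAME, DESCRIPTION, UNIT_PRICE, QUANTITY)
VALUES ('Widget', 'A sample widget', 9.99, 100);

-- IDENTITY_VAL_LOCAL() returns the identity value assigned by the previous
-- single-row INSERT in this session, i.e. the new PRODUCT_ID
INSERT INTO RECEIVABLES
    (PAYMENT_TYPE, REMARKS, RECEIVED_QUANTITY, VENDOR_ID, PRODUCT_ID, SHIPMENT_ID)
VALUES
    ('DISCOVER', 'Success', 9, '1000', IDENTITY_VAL_LOCAL(), NULL);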
I want to store a single row in a configuration table for my application. I would like to enforce that this table can contain only one row.
What is the simplest way to enforce the single row constraint ?
You make sure one of the columns can only contain one value, and then make that the primary key (or apply a uniqueness constraint).
CREATE TABLE T1(
Lock char(1) not null,
/* Other columns */,
constraint PK_T1 PRIMARY KEY (Lock),
constraint CK_T1_Locked CHECK (Lock='X')
)
I have a number of these tables in various databases, mostly for storing config. It's a lot nicer knowing that, if the config item should be an int, you'll only ever read an int from the DB.
I usually use Damien's approach, which has always worked great for me, but I also add one thing:
CREATE TABLE T1(
Lock char(1) not null DEFAULT 'X',
/* Other columns */,
constraint PK_T1 PRIMARY KEY (Lock),
constraint CK_T1_Locked CHECK (Lock='X')
)
Adding the "DEFAULT 'X'", you will never have to deal with the Lock column, and won't have to remember which was the lock value when loading the table for the first time.
You may want to rethink this strategy. In similar situations, I've often found it invaluable to leave the old configuration rows lying around for historical information.
To do that, you add an extra column creation_date_time (the date/time of insertion or update) and an insert or insert/update trigger that populates it with the current date/time.
Then, in order to get your current configuration, you use something like:
select * from config_table order by creation_date_time desc fetch first row only
(depending on your DBMS flavour).
That way, you still get to maintain the history for recovery purposes (you can institute cleanup procedures if the table gets too big but this is unlikely) and you still get to work with the latest configuration.
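A rough sketch of that extra column and trigger (DB2-flavoured to match the FETCH FIRST syntax above; the table and trigger names are made up). The column default covers inserts, and the BEFORE UPDATE trigger keeps the timestamp current if a row is changed in place:
ALTER TABLE config_table
    ADD COLUMN creation_date_time TIMESTAMP NOT NULL DEFAULT CURRENT TIMESTAMP;

CREATE TRIGGER config_table_stamp
    NO CASCADE BEFORE UPDATE ON config_table
    REFERENCING NEW AS n
    FOR EACH ROW
    SET n.creation_date_time = CURRENT TIMESTAMP;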
You can implement an INSTEAD OF Trigger to enforce this type of business logic within the database.
The trigger can contain logic to check if a record already exists in the table and if so, ROLLBACK the Insert.
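A minimal sketch of such a trigger, assuming SQL Server and a single-row table named AppConfig (both names are placeholders):
CREATE TRIGGER TR_AppConfig_OnlyOneRow
ON AppConfig
INSTEAD OF INSERT
AS
BEGIN
    IF EXISTS (SELECT 1 FROM AppConfig) OR (SELECT COUNT(*) FROM inserted) > 1
    BEGIN
        RAISERROR ('AppConfig may contain only one row.', 16, 1);
        ROLLBACK TRANSACTION;
        RETURN;
    END

    INSERT INTO AppConfig
    SELECT * FROM inserted;  -- pass the single allowed row through
END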
Now, taking a step back to look at the bigger picture, I wonder if perhaps there is an alternative and more suitable way for you to store this information, perhaps in a configuration file or environment variable for example?
I know this is very old, but instead of thinking big, sometimes it's better to think small: use an identity integer, like this:
Create Table TableWhatever
(
keycol int primary key not null identity(1,1)
check(keycol =1),
Col2 varchar(7)
)
This way, each time you try to insert another row, the check constraint fires, because the identity primary key will generate 2 (and higher) for subsequent rows and the check only accepts the value 1.
Here's a solution I came up with for a lock-type table which can contain only one row, holding a Y or N (an application lock state, for example).
Create the table with one column. I put a check constraint on the one column so that only a Y or N can be put in it. (Or 1 or 0, or whatever)
Insert one row in the table, with the "normal" state (e.g. N means not locked)
Then create an INSERT trigger on the table that only has a SIGNAL (DB2) or RAISERROR (SQL Server) or RAISE_APPLICATION_ERROR (Oracle). This makes it so application code can update the table, but any INSERT fails.
DB2 example:
create table PRICE_LIST_LOCK
(
LOCKED_YN char(1) not null
constraint PRICE_LIST_LOCK_YN_CK check (LOCKED_YN in ('Y', 'N') )
);
--- do this insert when creating the table
insert into PRICE_LIST_LOCK
values ('N');
--- once there is one row in the table, create this trigger
CREATE TRIGGER ONLY_ONE_ROW_IN_PRICE_LIST_LOCK
NO CASCADE
BEFORE INSERT ON PRICE_LIST_LOCK
FOR EACH ROW
SIGNAL SQLSTATE '81000' -- arbitrary user-defined value
SET MESSAGE_TEXT='Only one row is allowed in this table';
Works for me.
I use a bit field named IsActive as the primary key.
That way there can be at most 2 rows, and the SQL to get the valid row is:
select * from Settings where IsActive = 1
if the table is named Settings.
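A sketch of what that table might look like (the Settings name comes from above; the other columns are placeholders):
CREATE TABLE Settings (
    IsActive bit NOT NULL,
    /* Other columns */
    CONSTRAINT PK_Settings PRIMARY KEY (IsActive)
);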
The easiest way is to define the ID field as a computed column with the constant value 1 (or any other number), then put a unique index on ID.
CREATE TABLE [dbo].[SingleRowTable](
[ID] AS ((1)),
[Title] [varchar](50) NOT NULL,
CONSTRAINT [IX_SingleRowTable] UNIQUE NONCLUSTERED
(
[ID] ASC
)
) ON [PRIMARY]
You can write a trigger on the insert action on the table: whenever someone tries to insert a new row, have the insert trigger's logic remove that latest row.
Old question, but how about using IDENTITY(MAX, 1) on a small column type?
CREATE TABLE [dbo].[Config](
    [ID] [tinyint] IDENTITY(255,1) NOT NULL,
    [Config1] [nvarchar](max) NOT NULL,
    [Config2] [nvarchar](max) NOT NULL
)
IF NOT EXISTS ( select * from table )
BEGIN
    -- Your insert statement
END
Another approach is to always write the same value into the key column, which will be identical after the first entry in the database. Example:
Student Table:
Id:int
firstname:char
In the entry form, always supply the same value for the Id column; after the first entry, the primary key constraint rejects any further inserts, so the table only ever holds one row.
Hope this helps!
I would like your advice regarding database design.
I have 4 different data elements (tables A, B, C, D), for example:
A - Contents
B - Categories
C - Authors
and
D - Images
Every record in tables A, B, or C can have one or more different images associated with it in table D,
BUT every image in D must be associated with exactly one record in A, B, or C.
This means that images cannot be shared (between the other tables).
My idea was to create a different image table for every data element, using a ONE-to-MANY association.
Example:
Content --> Image-Contents
and
Categories --> Image-Categories
Questions:
Is my database design a good one?
Since the "Image-Contents" and "Image-Categories" tables would have similar properties like "File-Url" or "Image-Title", I was wondering whether a more suitable database design exists.
Thanks for your time
I think you would want a table that maps each of ABC to an image. For example:
Content    ->  ContentImages  ->  Images
---------      -------------      -------
ContentId      ImageId            ImageId
               ContentId

Categories ->  CategoryImages ->  Images
----------     --------------     -------
CategoryId     ImageId            ImageId
               CategoryId

Authors    ->  AuthorImages   ->  Images
--------       ------------       -------
AuthorId       ImageId            ImageId
               AuthorId
It may seem a little cumbersome, but I think this is the normal form.
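A rough DDL sketch of one of those mapping tables (the table and column names mirror the diagram; the referenced Content and Images tables are assumed to exist with those keys):
CREATE TABLE ContentImages (
    ContentId int NOT NULL,
    ImageId   int NOT NULL,
    CONSTRAINT PK_ContentImages PRIMARY KEY (ContentId, ImageId),
    CONSTRAINT FK_ContentImages_Content FOREIGN KEY (ContentId) REFERENCES Content (ContentId),
    CONSTRAINT FK_ContentImages_Images  FOREIGN KEY (ImageId)   REFERENCES Images (ImageId)
);
-- A UNIQUE constraint on ImageId alone would additionally stop one image
-- from being linked to two different Content rows.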
Perhaps the most common way to implement this design is with the "one table per owner type" scheme you mentioned (tables for Images, "Owner A", "Owner A Images", and repeat for owners B, C, etc.). Another common way is one "central" table for Images, with the single owner's Id stored within that table. Your criteria are particularly limiting, in that an image may be associated with one and only one owner, but there are multiple types of owner. Implementing such constraints inside the database is tricky, but implementing them outside of the database is much more difficult and problematic, for all the usual reasons (the application doing the database's work, and what happens when someone modifies the database outside of the dedicated application?).
The following is an example of how these structures and constraints might be implemented within the database. It may appear fussy, detailed, and overly complex, but it will do the job, and once properly implemented you would never have to worry whether or not your data was consistent and valid.
First off, all images are stored in the following table. It must be known what "type" of owner an image may be assigned to; set that in ImageType, and (as per the constraints in the later tables) the image cannot be assigned to any other kind of owner. Ever. (You could also put a CHECK constraint on ImageType to ensure that only valid image types could be loaded into the table.)
CREATE TABLE Image
(
ImageId int not null
,ImageType char(1) not null
,constraint PK_Image
primary key clustered (ImageId, ImageType)
)
Next, build some owner tables. You could have any number of these; I'm just making two for the sake of the example.
CREATE TABLE A
(
AId int not null
constraint PK_A
primary key clustered
)
CREATE TABLE B
(
BId int not null
constraint PK_B
primary key clustered
)
Build the association tables, noting the comments next to the constraint definitions. (This is the overly-fussy part...)
CREATE TABLE Image_A
(
ImageId int not null
constraint PK_Image_A
primary key clustered -- An image can only be assigned to one owner
,AId int not null
,ImageType char(1) not null
constraint DF_Image_A
default 'A'
constraint CK_Image_A__ImageType
check (ImageType in ('A')) -- Always have this set to the type of the owner for this table
,constraint FK_Image_A__A
foreign key (AId) references A (AId) -- Owner must exist
,constraint FK_Image_A__Image
foreign key (ImageId, ImageType) references Image (ImageId, ImageType) -- Image must exist *for this type of owner*
)
-- Same comments for this table
CREATE TABLE Image_B
(
ImageId int not null
constraint PK_Image_B
primary key clustered
,BId int not null
,ImageType char(1) not null
constraint DF_Image_B
default 'B'
constraint CK_Image_B__ImageType
check (ImageType in ('B'))
,constraint FK_Image_B__B
foreign key (BId) references B (BId)
,constraint FK_Image_B__Image
foreign key (ImageId, ImageType) references Image (ImageId, ImageType)
)
Load some sample data
INSERT Image values (1, 'A')
INSERT Image values (2, 'A')
INSERT Image values (3, 'B')
INSERT Image values (4, 'B')
INSERT A values (101)
INSERT A values (102)
INSERT B values (201)
INSERT B values (202)
View the current contents of the tables:
SELECT * from A
SELECT * from B
SELECT * from Image
SELECT * from Image_A
SELECT * from Image_B
And do some tests:
-- Proper fit
INSERT Image_A (ImageId, AId) values (1, 101)
-- Run it again, can only assign once
-- Cannot assign the same image to a second owner of the proper type
INSERT Image_A (ImageId, AId) values (1, 102)
-- Can't assign image to an invalid owner type
INSERT Image_B (ImageId, BId) values (1, 201)
-- Owner can be assigned multiple images
INSERT Image_A (ImageId, AId) values (2, 101)
(This drops the testing tables)
drop table Image
drop table A
drop table B
drop table Image_A
drop table Image_B
(Technically, this is a good example of a variant on the exclusive type/subtype data modelling "problem".)
create table A (IDA int not null, primary key(IDA));
create table B (IDB int not null, primary key(IDB));
create table C (IDC int not null, primary key(IDC));
create table Image(IDI int, A int null, B int null, C int null, Contents image,
foreign key (A) references A(IDA),
foreign key (B) references B(IDB),
foreign key (C) references C(IDC),
check (
(A is not null and B is null and C is null) or
(A is null and B is not null and C is null) or
(A is null and B is null and C is not null)
));
Yes, you're looking in the right direction.
Keep your current setup of the four tables and then create 3 more that hold only metadata that tells you the linking between, for example, the content table and the image tables.
For example, the images-content table will have columns: id, content-id, image-id
And so on.