I've set up my database and application to soft-delete rows. Every table has an is_active column whose value should be either TRUE or NULL. The problem I have right now is that my data is out of sync: unlike a DELETE statement, setting a value to NULL doesn't cascade to rows in other tables that reference the "deleted" row through a foreign key.
I have already taken measures to correct the data by finding inactive rows in the source table and manually setting related rows in other tables to inactive as well. I recognize that I could do this at the application level (I'm using Django/Python for this project), but I feel this should be a database process. Is there a way to use something like PostgreSQL's ON UPDATE foreign-key action so that, when a row has is_active set to NULL, all rows in other tables referencing the updated row as a foreign key automatically have is_active set to NULL as well?
Here's an example:
An assessment has many submissions. If the assessment is marked inactive, all submissions related to it should also be marked inactive.
To my mind, it doesn't make sense to use NULL to represent a Boolean value. The semantics of "is_active" suggest that the only sensible values are True and False. Also, NULL interferes with cascading updates.
So I'm not using NULL.
First, create the "parent" table with both a primary key and a unique constraint on the primary key and "is_active".
create table parent (
p_id integer primary key,
other_columns char(1) default 'x',
is_active boolean not null default true,
unique (p_id, is_active)
);
insert into parent (p_id) values
(1), (2), (3);
Create the child table with an "is_active" column. Declare a foreign key constraint referencing the columns in the parent table's unique constraint (last line in the CREATE TABLE statement above), and cascade updates.
create table child (
p_id integer not null,
is_active boolean not null default true,
foreign key (p_id, is_active) references parent (p_id, is_active)
on update cascade,
some_other_key_col char(1) not null default '!',
primary key (p_id, some_other_key_col)
);
insert into child (p_id, some_other_key_col) values
(1, 'a'), (1, 'b'), (2, 'a'), (2, 'c'), (2, 'd'), (3, '!');
Now you can set a "parent" row's is_active to false, and that will cascade to all referencing tables.
update parent
set is_active = false
where p_id = 1;
select *
from child
order by p_id;
p_id  is_active  some_other_key_col
----  ---------  ------------------
1     f          a
1     f          b
2     t          a
2     t          c
2     t          d
3     t          !
Soft deletes are a lot simpler and have much better semantics if you implement them as valid-time state tables. FWIW, I think the terms soft delete, undelete, and undo are all misleading in this context, and I think you should avoid them.
PostgreSQL's range data types are particularly useful for this kind of work. I'm using date ranges, but timestamp ranges work the same way.
For this example, I'm treating only "parent" as a valid-time state table. That means that invalidating a particular row (soft deleting a particular row) also invalidates all the rows that reference it through foreign keys. It doesn't matter whether they reference it directly or indirectly.
I'm not implementing soft deletes on "child". I can do that, but I think that would make the essential technique unreasonably hard to understand.
create extension btree_gist; -- Necessary for the kind of exclusion
-- constraint below.
create table parent (
p_id integer not null,
other_columns char(1) not null default 'x',
valid_from_to daterange not null,
primary key (p_id, valid_from_to),
-- No overlapping date ranges for a given value of p_id.
exclude using gist (p_id with =, valid_from_to with &&)
);
create table child (
p_id integer not null,
valid_from_to daterange not null,
foreign key (p_id, valid_from_to) references parent on update cascade,
other_key_columns char(1) not null default 'x',
primary key (p_id, valid_from_to, other_key_columns),
other_columns char(1) not null default 'x'
);
Insert some sample data. In PostgreSQL, dates (and therefore date ranges) support the special value 'infinity'. In this context, it means that the row that has the value 1 for "parent"."p_id" is valid from '2015-01-01' until forever.
insert into parent values
(1, 'x', daterange('2015-01-01', 'infinity'));
insert into child values
(1, daterange('2015-01-01', 'infinity'), 'a', 'x'),
(1, daterange('2015-01-01', 'infinity'), 'b', 'y');
This query will show you the joined rows.
select *
from parent p
left join child c
on p.p_id = c.p_id
and p.valid_from_to = c.valid_from_to;
To invalidate a row, update the date range. This row (below) was valid from '2015-01-01' to '2015-01-31'. That is, it was soft deleted on 2015-01-31.
update parent
set valid_from_to = daterange('2015-01-01', '2015-01-31')
where p_id = 1 and valid_from_to = daterange('2015-01-01', 'infinity');
Insert a new valid row for p_id 1, and pick up the child rows that were invalidated on Jan 31.
insert into parent values (1, 'r', daterange(current_date, 'infinity'));
update child set valid_from_to = daterange(current_date, 'infinity')
where p_id = 1 and valid_from_to = daterange('2015-01-01', '2015-01-31');
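A nice property of this scheme is that "what's valid right now" is just a range-containment test. A minimal sketch against the tables above:
-- Rows that are currently valid, i.e. not "soft deleted" as of today.
select *
from parent
where valid_from_to @> current_date;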
Richard T Snodgrass's seminal book Developing Time-Oriented Database Applications in SQL is available free from his university web page.
You can use a trigger:
CREATE OR REPLACE FUNCTION trg_upaft_upd_trip()
RETURNS TRIGGER AS
$func$
BEGIN
UPDATE submission s
SET is_active = NULL
WHERE s.assessment_id = NEW.assessment_id
AND NEW.is_active IS NULL; -- recheck to be sure
RETURN NEW; -- call this BEFORE UPDATE
END
$func$ LANGUAGE plpgsql;
CREATE TRIGGER upaft_upd_trip
BEFORE UPDATE ON assessment
FOR EACH ROW
WHEN (OLD.is_active AND NEW.is_active IS NULL)
EXECUTE PROCEDURE trg_upaft_upd_trip();
Related:
How do I make a trigger to update a column in another table?
Be aware that a trigger has more possible points of failure than an FK constraint with ON UPDATE CASCADE ON DELETE CASCADE.
@Mike added a solution with a multi-column FK constraint which I would consider as an alternative.
Related answer on dba.SE:
Enforcing constraints “two tables away”
Related answer one week later:
Cross table constraints in PostgreSQL
This is more a schema problem than a procedural one.
You may have dodged creating a solid definition of "what constitutes a record". At the moment you have object A that may be referenced by object B, and when A is "deleted" (has its is_active column set to FALSE, or NULL, in your current case) B does not reflect that. It sounds like this is a single table (you only mention rows, not separate classes or tables...) and you have a hierarchical model formed by self-reference. If that is the case, you can think of the problem in a few ways:
Recursive lineage
In this model you have one table that contains all the data in one place, whether it's a parent, a child, etc., and you check the table for recursive references to traverse the tree.
It is tricky to do this properly in an ORM that lacks explicit support for this without accidentally writing routines that either:
iteratively pound the crap out of your DB by making at least one query per node, or
pull the entire table at once and traverse it in application code
It is, however, straightforward to do this in Postgres and let Django access it via a model over an unmanaged view on the lineage query you build. (I wrote a little about this once.) Under this model your query will descend the tree until it hits the first row of the current branch that is marked as not active and stop, thus effectively truncating all the rows below associated with that one (no need for propagating the is_active column!).
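For illustration, here is a minimal sketch of such a lineage query in PostgreSQL, assuming a hypothetical self-referencing table nodes (id, parent_id, is_active) where a root row is its own parent (the convention used in the blog example below); the descent prunes every branch at its first inactive row:
create view active_lineage as
with recursive lineage as (
    select n.id, n.parent_id
    from nodes n
    where n.parent_id = n.id   -- roots are their own parent
      and n.is_active
    union all
    select c.id, c.parent_id
    from nodes c
    join lineage p on c.parent_id = p.id
    where c.id <> c.parent_id  -- don't revisit roots
      and c.is_active          -- stop here: inactive subtrees are never entered
)
select * from lineage;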
If this were, say, a blog entry + comments within the same structure (a fairly common CMS schema) then any row that is its own parent is a primary entity and anything that has a parent that is not itself is a comment. To remove a whole blog post + its children you mark just the blog post's row as inactive; to remove a thread within the comments mark as inactive the comment that begins that thread.
For a blog + comments type feature this is usually the most straightforward way to do things -- though most CMS systems get it wrong (but usually only in ways that matter if you start doing serious data work later; if you're just setting up some place for people to argue on the internet, then Worse is Better).
Recursive lineage + External "record" definition
In this model you have your tree of nodes and your primary entities separated. The primary entities are marked as being active or not, and that attribute is common to all the elements that are related to it within the context of that primary entity (they exist and have a meaning independent of it). This means two tables, one for primary entities, and one for your tree of nodes.
Use this when you have something more interesting going on than simply threaded discussion. For example, a model of components where a tree of things may be aggregated separately into other larger things, and you need to have a way to mark those "other larger things" as active or not independently of the components themselves.
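As a rough sketch of this model (all names hypothetical), the active flag lives only on the primary entity, and the tree nodes merely point at it:
create table entity (
    entity_id integer primary key,
    is_active boolean not null default true
);
create table node (
    node_id   integer primary key,
    entity_id integer not null references entity,  -- the "record" this node belongs to
    parent_id integer references node,             -- tree structure
    body      text
);
-- A node is visible only while its owning entity is active:
-- select n.* from node n join entity e using (entity_id) where e.is_active;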
Further down the rabbit hole...
There are other takes on this idea, but they get increasingly non-trivial, which is probably not suitable. For example, consider a third basic take on this model where the hierarchy structure, the node bodies, and the primary entities are all separated into different tables. One node body might appear in multiple trees by reference, and multiple trees may be considered active or inactive in the context of a single primary entity, etc.
Consider heading this direction if your data is more complex. If you wind up really needing models this far decomposed ("normalized"), then I would caution that any ORM is probably going to wind up being a lot more trouble than it's worth -- you will start running headlong into the problem that ORMs are fundamentally leaky abstractions (1 object can never really equate to 1 table...).
I have a table LotTable that has a PK=LotID, plus columns Name and rate.
I have another table LotTranslate that has a PK=TranslateLotID and an FK=MasterLotID.
Before inserting into LotTable I need to enforce that the PK being inserted is NOT already a PK in LotTranslate.
My question is: should I handle this with an INSTEAD OF INSERT trigger, or delete the offending row afterwards? What is the cleanest, fastest way to check the other table and stop the insert into LotTable if the PK is found in LotTranslate?
Here is my attempt, though I'm not sure this is the right SQL Server way...
CREATE TRIGGER tr_LotsInsert ON LotTable
INSTEAD OF INSERT
AS
BEGIN
SET NOCOUNT ON
INSERT INTO dbo.LotTable
SELECT *
FROM INSERTED
WHERE INSERTED.LotID not in (select TranslateLotID from LotTranslate)
END
I don't recommend using a trigger to enforce this.
What you are describing is actually inheritance, where different objects share a base type. In this case, you have the base concept of a Lot (called the supertype), and two mutually exclusive subtypes, LotTable and LotTranslate. (And for the record, I think it unfortunate that your database has a table with the name Table in it, unless it actually deals with some kind of tables that aren't database objects).
There is a reasonably well-established database design pattern to deal with subtypes and supertypes: creating a parent table that is used as the "base object" in the inheritance pattern, and making the subtype tables all have an FK relationship to it. To additionally enforce the mutual exclusivity, you can add a Type column to all the tables and involve it in the foreign key.
Then, your base table participates with the two tables in a 1-to-zero-or-one relationship. The most important concept to get here is that the LotID is always the same in all the tables and you do not create separate surrogate keys for any table: the base/supertype table contains the same values that are in the child/subtype tables.
Before I show you how to accomplish this, let me mention that in this case it's possible your two tables should really be combined into one, with a simple Type column indicating which kind of Lot it is; that would of course prevent a single Lot from being two types at once. I'm assuming, however, that your two tables have enough differing columns that combining them would be a big waste of NULL values (if only a few columns differ, it may be better to just combine them anyway).
CREATE TABLE dbo.LotBase (
LotID int NOT NULL CONSTRAINT PK_LotBase PRIMARY KEY CLUSTERED,
LotTypeID tinyint NOT NULL
CONSTRAINT FK_LotBase_LotTypeID FOREIGN KEY
REFERENCES dbo.LotType (LotTypeID),
-- A unique constraint needed for FK purposes
CONSTRAINT UQ_LotBase_LotID_LotTypeID
UNIQUE (LotID, LotTypeID)
);
-- Include script here to create a LotType table and populate it with two rows
-- 1 = `Standard Lot` and 2 = `TranslateLot`
INSERT dbo.LotBase (LotID, LotTypeID)
SELECT LotID, 1
FROM dbo.LotTable;
INSERT dbo.LotBase (LotID, LotTypeID)
SELECT TranslateLotID, 2
FROM dbo.LotTranslate;
ALTER TABLE dbo.LotTable ADD LotTypeID tinyint NOT NULL
CONSTRAINT DF_LotTable_LotTypeID DEFAULT (1);
ALTER TABLE dbo.LotTranslate ADD LotTypeID tinyint NOT NULL
CONSTRAINT DF_LotTranslate_LotTypeID DEFAULT (2);
ALTER TABLE dbo.LotTable ADD CONSTRAINT FK_LotTable_LotBase
FOREIGN KEY (LotID, LotTypeID)
REFERENCES dbo.LotBase (LotID, LotTypeID);
ALTER TABLE dbo.LotTranslate ADD CONSTRAINT FK_LotTranslate_LotBase
FOREIGN KEY (TranslateLotID, LotTypeID)
REFERENCES dbo.LotBase (LotID, LotTypeID);
Note that you might want to do the work to get the new LotTypeID columns in the child tables to be situated immediately after the LotID columns, but it is up to you--just be careful because it will require table recreation and you can harm your database if you are not knowledgeable and careful (take backups first!).
One huge benefit of this pattern to not miss is that anywhere in your database you want an FK to a Lot, you can choose to either use one of the child tables or to use the parent table. This constrains your other tables to allow either both or just one of the subtypes. Another benefit to not miss is that you can put common columns between the two tables into the parent table instead of repeated in the children. Finally, you can create a view for each child that exposes the combined parent + child columns just like the original child table.
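For instance, a view for LotTable might look something like this (a sketch; the column list is illustrative, based on the columns mentioned in the question):
CREATE VIEW dbo.LotTableFull
AS
SELECT b.LotID, b.LotTypeID, t.Name, t.rate
FROM dbo.LotBase b
INNER JOIN dbo.LotTable t ON t.LotID = b.LotID;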
Finally, if you persist in going on with the trigger method, you don't have to use an INSTEAD OF trigger. You can just ROLLBACK any transaction that isn't appropriate:
CREATE TRIGGER TR_LotTable_I ON dbo.LotTable FOR INSERT
AS
SET NOCOUNT ON;
SET XACT_ABORT ON;
IF EXISTS (
SELECT *
FROM
Inserted I
INNER JOIN dbo.LotTranslate LT
ON I.LotID = LT.TranslateLotID
) ROLLBACK TRAN;
That's a far better way to handle it (for one thing, you won't have to modify it every time you add a column to your LotTable table). Also, I would recommend that you learn to use (and then consistently use) JOIN syntax instead of the IN syntax you showed. While there is some controversy over this recommendation, in my experience people who use IN instead of JOINs miss some key conceptual learning that comes from figuring out how to turn them into JOINs. There are other practical benefits, too: nested IN queries get abominably hard to understand and maintain, while adding 5 more JOINs doesn't really make a query much harder to understand when formatted well.
This is on Azure.
I have a supertype entity and several subtype entities, the latter of which need to obtain their foreign keys from the primary key of the supertype entity on each insert. In Oracle, I use a BEFORE INSERT trigger to accomplish this. How would one accomplish this in SQL Server / T-SQL?
DDL
CREATE TABLE super (
super_id int IDENTITY(1,1)
,subtype_discriminator char(4) CHECK (subtype_discriminator IN ('SUB1', 'SUB2'))
,CONSTRAINT super_id_pk PRIMARY KEY (super_id)
);
CREATE TABLE sub1 (
sub_id int IDENTITY(1,1)
,super_id int NOT NULL
,CONSTRAINT sub_id_pk PRIMARY KEY (sub_id)
,CONSTRAINT sub_super_id_fk FOREIGN KEY (super_id) REFERENCES super (super_id)
);
I wish for an insert into sub1 to fire a trigger that actually inserts a value into super and uses the super_id generated to put into sub1.
In Oracle, this would be accomplished by the following:
CREATE TRIGGER sub_trg
BEFORE INSERT ON sub1
FOR EACH ROW
DECLARE
v_super_id int; -- Ignore the fact that I could have used super_id_seq.CURRVAL
BEGIN
INSERT INTO super (super_id, subtype_discriminator)
VALUES (super_id_seq.NEXTVAL, 'SUB1')
RETURNING super_id INTO v_super_id;
:NEW.super_id := v_super_id;
END;
Please advise on how I would simulate this in T-SQL, given that T-SQL lacks the BEFORE INSERT capability?
Sometimes a BEFORE trigger can be replaced with an AFTER one, but this doesn't appear to be the case in your situation, for you clearly need to provide a value before the insert takes place. So, for that purpose, the closest functionality would seem to be the INSTEAD OF trigger, as @marc_s has suggested in his comment.
Note, however, that, as the names of these two trigger types suggest, there's a fundamental difference between a BEFORE trigger and an INSTEAD OF one. In both cases the trigger fires before the action that invoked it has taken place, but with an INSTEAD OF trigger the action is never supposed to take place at all: whatever real action you need done must be done by the trigger itself. This is very unlike the BEFORE trigger functionality, where the statement always executes, unless, of course, you explicitly roll it back.
But there's one other issue to address actually. As your Oracle script reveals, the trigger you need to convert uses another feature unsupported by SQL Server, which is that of FOR EACH ROW. There are no per-row triggers in SQL Server either, only per-statement ones. That means that you need to always keep in mind that the inserted data are a row set, not just a single row. That adds more complexity, although that'll probably conclude the list of things you need to account for.
So, it's really two things to solve then:
replace the BEFORE functionality;
replace the FOR EACH ROW functionality.
My attempt at solving these is below:
CREATE TRIGGER sub_trg
ON sub1
INSTEAD OF INSERT
AS
BEGIN
DECLARE @new_super TABLE (
super_id int
);
INSERT INTO super (subtype_discriminator)
OUTPUT INSERTED.super_id INTO @new_super (super_id)
SELECT 'SUB1' FROM INSERTED;
INSERT INTO sub1 (super_id)
SELECT super_id FROM @new_super;
END;
This is how the above works:
The same number of rows as being inserted into sub1 is first added to super. The generated super_id values are stored in temporary storage (a table variable called @new_super).
The newly inserted super_ids are now inserted into sub1.
Nothing too difficult really, but the above will only work if you have no other columns in sub1 than those you've specified in your question. If there are other columns, the above trigger will need to be a bit more complex.
The problem is to assign the new super_ids to every inserted row individually. One way to implement the mapping could be like below:
CREATE TRIGGER sub_trg
ON sub1
INSTEAD OF INSERT
AS
BEGIN
DECLARE @new_super TABLE (
rownum int IDENTITY (1, 1),
super_id int
);
INSERT INTO super (subtype_discriminator)
OUTPUT INSERTED.super_id INTO @new_super (super_id)
SELECT 'SUB1' FROM INSERTED;
WITH enumerated AS (
SELECT *, ROW_NUMBER() OVER (ORDER BY (SELECT 1)) AS rownum
FROM inserted
)
INSERT INTO sub1 (super_id /* , other columns */)
SELECT n.super_id /* , i.other columns */
FROM enumerated AS i
INNER JOIN @new_super AS n
ON i.rownum = n.rownum;
END;
As you can see, an IDENTITY(1,1) column is added to @new_super, so the temporarily stored super_id values are additionally enumerated starting from 1. To provide the mapping between the new super_ids and the new data rows, the ROW_NUMBER function is used to enumerate the INSERTED rows as well. As a result, every row in the INSERTED set can now be linked to a single super_id and thus complemented into a full data row to be inserted into sub1.
Note that the order in which the new super_ids are inserted may not match the order in which they are assigned. I considered that a no-issue. All the new super rows generated are identical save for the IDs. So, all you need here is just to take one new super_id per new sub1 row.
If, however, the logic of inserting into super is more complex and for some reason you need to remember precisely which new super_id has been generated for which new sub row, you'll probably want to consider the mapping method discussed in this Stack Overflow question:
Using merge..output to get mapping between source.id and target.id
While Andriy's proposal will work well for INSERTs of a small number of records, full table scans will be done on the final join, as both 'enumerated' and '@new_super' are not indexed, resulting in poor performance for large inserts.
This can be resolved by specifying a primary key on the #new_super table, as follows:
DECLARE @new_super TABLE (
row_num INT IDENTITY(1,1) PRIMARY KEY CLUSTERED,
super_id int
);
This will result in the SQL optimizer scanning through the 'enumerated' table but doing an indexed join on @new_super to get the new key.
I will do my best to lay this out in text. Essentially, we have an application that tracks actions performed by users. Each action has its own table, since each action has different parameters that need to be stored. This allows us to store those extra attributes and run analytics on the actions across multiple or single users rather easily. The actions are not really associated with each other, other than by which user performed them.
Example:
ActionTableA: Id | UserId | AttributeA | AttributeB
ActionTableB: Id | UserId | AttributeC | AttributeD | AttributeE
ActionTableC: Id | UserId | AttributeF
Next, we need to allocate a value to each action performed by the user and keep a running total of those values.
Example:
ValueTable: Id | UserId | Value | ActionType | ActionId
What would be the best way to link the value in the value table to the actual action performed? We know the action type (A, B, C) - but from a SQL design perspective, I cannot see a good way to have an indexed relationship between the Values of the actions in the ActionsTable and the actual actions themselves. The only thing that makes sense would be to modify the ValueTable to the following:
ValueTable
Id | UserId | Value | ActionType | ActionAId(FK Nullable) | ActionBId(FK Nullable) | ActionCId(FK Nullable)
But the problem I have with this is that only one of the 3 actionId columns would have a value and the rest would be NULL. Additionally, as action types are added, so would the columns in the value table. Lastly, to programmatically find the data, I would either a) have to check the ActionType to get the appropriate column for the Id, or b) scan the row and pick the non-null actionId.
Is there a better way/design or is this just 'the way it is' for this particular issue.
EDIT
Attached is a diagram of the above setup:
Sorry for the clarity issues; typing SQL questions is always challenging. So I think your comment gave me an idea of something... I could have a SystemActions table that essentially has an auto-generated Id:
SystemActions: Id | Type
Then, each ActionTable would have an additional FK to the SystemActions table. Lastly, associate the ValueTable with the SystemActions table as well. This would allow us to tie values to specific actions. I would need to join the action tables to the SystemActions table where
JOIN (((SystemActions.Id = ActionTableA.Id) JOIN (SystemActions.Id = ActionTableB.Id)) JOIN (SystemActions.Id = ActionTableC.Id)
crappy quick sql syntax
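(In more conventional syntax, that might look something like the sketch below, assuming each action table gains a SystemActionId FK column; the column name is illustrative.)
SELECT sa.Id, sa.Type, a.AttributeA, b.AttributeC, c.AttributeF
FROM SystemActions sa
LEFT JOIN ActionTableA a ON a.SystemActionId = sa.Id
LEFT JOIN ActionTableB b ON b.SystemActionId = sa.Id
LEFT JOIN ActionTableC c ON c.SystemActionId = sa.Id;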
Is this what you were alluding to in the answer below? A snapshot of what that could potentially look like:
Your question is a little unclear, but it looks like you should either have a (denormalized) value column in each action table, or have an artificial key in the value table that is keyed to by each of the separate action tables.
You have essentially a supertype/subtype structure, or an exclusive arc. Attributes common to all actions bubble "up" into the supertype (the table "actions"). Columns unique to each subtype bubble "down" into the distinct subtypes.
create temp table actions (
action_id integer not null,
action_type char(1) not null check (action_type in ('a', 'b', 'c')),
user_id integer not null, -- references users, not shown.
primary key (action_id),
-- Required for data integrity. See below.
unique (action_id, action_type)
);
create temp table ActionTableA (
action_id integer primary key,
-- default and check constraint guarantee that only an 'a' row goes
-- in this table.
action_type char(1) not null default 'a' check (action_type = 'a'),
-- FK guarantees that this row matches only an 'a' row in actions.
-- To make this work, you need a UNIQUE constraint on these two columns
-- in the table "actions".
foreign key (action_id, action_type)
references actions (action_id, action_type),
attributeA char(1) not null,
attributeB char(1) not null
);
-- ActionTableB and ActionTableC are similar.
create temp table ValueTable (
action_id integer primary key,
action_type char(1) not null,
-- Since this applies to all actions, FK should reference the supertype,
-- which is the table "actions". You can reference either action_id alone,
-- which has a PRIMARY KEY constraint, or the pair {action_id, action_type},
-- which has a UNIQUE constraint. Using the pair makes some kinds of
-- accounting queries easier and faster.
foreign key (action_id, action_type)
references actions (action_id, action_type),
value integer not null default 0 check (value >= 0)
);
To round this out, build updatable views that join the supertype to each subtype, and have all users use the views instead of the base tables.
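For example, a view for subtype 'a' might be sketched like this (making it updatable then takes INSTEAD OF triggers or rules, which I'm omitting here):
create view actions_a as
select a.action_id, a.user_id, s.attributeA, s.attributeB
from actions a
join ActionTableA s using (action_id)
where a.action_type = 'a';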
I would just have a single table for actions, to be honest. Is there a reason (other than denormalization) for having multiple tables? Especially when it will increase the complexity of your business logic?
Are the attribute columns significant in the context of the schema? Could you compress it into an object storage column "attributes"?
Actions: actionID, type, attributes
I think you need something similar to an audit trail. Can we have a simple design so that all the actions are captured in a single table?
If the way you want it to work is that every time a user performs action A you insert a new row into ActionTableA and a linked row into ValueTable, why not have a value column in each action table? This works only if you insert a new row each time the user performs the action, rather than updating the value when the user performs the same action again. It seems overly complicated to have a separate table for values if the value can be stored in a column. On the other hand, if a "value" is a set of different pieces of data (or if you want all values in one place), then you do need an extra table, but I would still have a foreign key column pointing from the action tables to the value table.
I want to store a single row in a configuration table for my application. I would like to enforce that this table can contain only one row.
What is the simplest way to enforce this single-row constraint?
You make sure one of the columns can only contain one value, and then make that the primary key (or apply a uniqueness constraint).
CREATE TABLE T1(
Lock char(1) not null,
/* Other columns */,
constraint PK_T1 PRIMARY KEY (Lock),
constraint CK_T1_Locked CHECK (Lock='X')
)
I have a number of these tables in various databases, mostly for storing config. It's a lot nicer knowing that, if the config item should be an int, you'll only ever read an int from the DB.
I usually use Damien's approach, which has always worked great for me, but I also add one thing:
CREATE TABLE T1(
Lock char(1) not null DEFAULT 'X',
/* Other columns */,
constraint PK_T1 PRIMARY KEY (Lock),
constraint CK_T1_Locked CHECK (Lock='X')
)
Adding the "DEFAULT 'X'", you will never have to deal with the Lock column, and won't have to remember which was the lock value when loading the table for the first time.
You may want to rethink this strategy. In similar situations, I've often found it invaluable to leave the old configuration rows lying around for historical information.
To do that, you add an extra column, creation_date_time (the date/time of insertion or update), and an insert or insert/update trigger that populates it with the current date/time.
Then, in order to get your current configuration, you use something like:
select * from config_table order by creation_date_time desc fetch first row only
(depending on your DBMS flavour).
That way, you still get to maintain the history for recovery purposes (you can institute cleanup procedures if the table gets too big but this is unlikely) and you still get to work with the latest configuration.
You can implement an INSTEAD OF Trigger to enforce this type of business logic within the database.
The trigger can contain logic to check if a record already exists in the table and if so, ROLLBACK the Insert.
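A minimal sketch of such a trigger in T-SQL, assuming a hypothetical single-row table dbo.Config with no identity column:
CREATE TRIGGER TR_Config_InsteadOfInsert ON dbo.Config
INSTEAD OF INSERT
AS
BEGIN
    SET NOCOUNT ON;
    IF EXISTS (SELECT * FROM dbo.Config)
    BEGIN
        RAISERROR('Config may contain only one row.', 16, 1);
        ROLLBACK TRANSACTION;
        RETURN;
    END;
    -- No row yet: let the insert through.
    INSERT INTO dbo.Config
    SELECT * FROM Inserted;
END;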
Now, taking a step back to look at the bigger picture, I wonder if perhaps there is an alternative and more suitable way for you to store this information, perhaps in a configuration file or environment variable for example?
I know this is very old, but instead of thinking big, sometimes it's better to think small: use an identity integer like this:
Create Table TableWhatever
(
keycol int primary key not null identity(1,1)
check(keycol =1),
Col2 varchar(7)
)
This way, each time you try to insert another row, the check constraint rejects it: the identity primary key generates the value 1 only for the very first row, and every subsequent value violates CHECK (keycol = 1).
Here's a solution I came up with for a lock-type table which can contain only one row, holding a Y or N (an application lock state, for example).
Create the table with one column. I put a check constraint on the one column so that only a Y or N can be put in it. (Or 1 or 0, or whatever)
Insert one row in the table, with the "normal" state (e.g. N means not locked)
Then create an INSERT trigger on the table that only has a SIGNAL (DB2) or RAISERROR (SQL Server) or RAISE_APPLICATION_ERROR (Oracle). This makes it so application code can update the table, but any INSERT fails.
DB2 example:
create table PRICE_LIST_LOCK
(
LOCKED_YN char(1) not null
constraint PRICE_LIST_LOCK_YN_CK check (LOCKED_YN in ('Y', 'N') )
);
--- do this insert when creating the table
insert into PRICE_LIST_LOCK
values ('N');
--- once there is one row in the table, create this trigger
CREATE TRIGGER ONLY_ONE_ROW_IN_PRICE_LIST_LOCK
NO CASCADE
BEFORE INSERT ON PRICE_LIST_LOCK
FOR EACH ROW
SIGNAL SQLSTATE '81000' -- arbitrary user-defined value
SET MESSAGE_TEXT='Only one row is allowed in this table';
Works for me.
I use a bit field named IsActive as the primary key.
So there can be at most 2 rows, and the SQL to get the valid row is:
select * from Settings where IsActive = 1
if the table is named Settings.
The easiest way is to define the ID field as a computed column with the constant value 1 (or any other constant), then create a unique index on ID.
CREATE TABLE [dbo].[SingleRowTable](
[ID] AS ((1)),
[Title] [varchar](50) NOT NULL,
CONSTRAINT [IX_SingleRowTable] UNIQUE NONCLUSTERED
(
[ID] ASC
)
) ON [PRIMARY]
You can write a trigger on the insert action on the table: whenever someone tries to insert a new row, have the trigger logic remove the previously stored row, so only the latest row survives.
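A hedged T-SQL sketch of that trigger (the table name dbo.Config and its ID column are hypothetical): an AFTER INSERT trigger deletes every row other than the newly inserted one.
CREATE TRIGGER TR_Config_KeepLatest ON dbo.Config
AFTER INSERT
AS
BEGIN
    SET NOCOUNT ON;
    -- Remove all rows that are not part of this insert.
    DELETE c
    FROM dbo.Config AS c
    WHERE NOT EXISTS (SELECT * FROM Inserted AS i WHERE i.ID = c.ID);
END;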
Old question, but how about seeding IDENTITY at the maximum value of a small column type?
CREATE TABLE [dbo].[Config](
[ID] [tinyint] IDENTITY(255,1) NOT NULL,
[Config1] [nvarchar](max) NOT NULL,
[Config2] [nvarchar](max) NOT NULL
);
IF NOT EXISTS (SELECT * FROM table) -- "table" being your config table
BEGIN
-- Your insert statement
END
Here we can also use a fixed value that stays the same after the first entry in the database. Example:
Student table:
Id: int
firstname: char
Have the application always supply the same value for the Id column. After the first row is inserted, the primary key constraint rejects any further inserts, so the table holds only one row forever.
Hope this helps!
I have a table my_table with these fields: id_a, id_b.
So this table basically can reference either a row from table_a with id_a, or a row from table_b with id_b. If I reference a row from table_a, id_b is NULL. If I reference a row from table_b, id_a is NULL.
Currently I feel this is the only/best option I have, so in my table (which has a lot more other fields, btw) I will live with the fact that one field is always NULL.
If you care what this is for: if id_a is specified, I'm linking to a "data type" record set in my meta database, which specifies a particular data type, like varchar(40), for example. But if id_b is specified, I'm linking to a relationship definition record set that specifies details about a relationship (whether it's 1:1, 1:n, linking what, with which constraints, etc.). The fields have somewhat better names, of course ;) ...I've just tried to simplify it down to the problem.
Edit: If it matters: MySQL, latest version. But I don't want to constrain my design to MySQL-specific code, as much as possible.
Are there better solutions?
A and B are disjoint subtypes in your model.
This can be implemented like this:
refs (
type CHAR(1) NOT NULL, ref INT NOT NULL,
PRIMARY KEY (type, ref),
CHECK (type IN ('A', 'B'))
)
table_a (
type CHAR(1) NOT NULL, id INT NOT NULL,
PRIMARY KEY (id),
FOREIGN KEY (type, id) REFERENCES refs (type, ref) ON DELETE CASCADE,
CHECK (type = 'A'),
…)
table_b (
type CHAR(1) NOT NULL, id INT NOT NULL,
PRIMARY KEY (id),
FOREIGN KEY (type, id) REFERENCES refs (type, ref) ON DELETE CASCADE,
CHECK (type = 'B'),
…)
mytable (
type CHAR(1) NOT NULL, ref INT NOT NULL,
FOREIGN KEY (type, ref) REFERENCES refs (type, ref) ON DELETE CASCADE,
CHECK (type IN ('A', 'B')),
…)
Table refs contains all instances of both A and B. It serves no purpose other than policing referential integrity; it won't even participate in joins.
Note that MySQL accepts CHECK constraints but (prior to version 8.0.16) does not enforce them. You will need to watch your types.
You also should not delete records from table_a and table_b directly: instead, delete the records from refs, which will trigger ON DELETE CASCADE.
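For example (a sketch): removing instance 42 of subtype B everywhere is a single delete through refs, and, since MySQL won't enforce the CHECKs, a per-table trigger can police the type column instead.
-- Cascades to table_b and mytable via the FKs above.
DELETE FROM refs WHERE type = 'B' AND ref = 42;
-- Emulating CHECK (type = 'B') on table_b:
DELIMITER //
CREATE TRIGGER table_b_type_guard BEFORE INSERT ON table_b
FOR EACH ROW
BEGIN
    IF NEW.type <> 'B' THEN
        SIGNAL SQLSTATE '45000'
        SET MESSAGE_TEXT = 'table_b rows must have type B';
    END IF;
END//
DELIMITER ;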
Create a parent "super-type" table for both A and B. Reference that table in my_table.
Yes, there are better solutions.
However, since you didn't describe what you're allowed to change, it's difficult to know which alternatives could be used.
Principally, this "exclusive-or" kind of key reference means that A and B are actually two subclasses of a common superclass. There are several ways of changing the A and B tables to unify them into a single table.
One of which is to simply merge the A and B table into a big table.
Another of which is to have a superclass table with the common features of A and B as well as a subtype flag that says which subtype it is. This still involves a join with the subclass table, but the join has an explicit discriminator, and can be done "lazily" by the application rather than in the SQL.
I see no problem with your solution. However, I think you should add CHECK constraints to make sure that exactly one of the fields is null.
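A sketch of such a check (standard SQL; remember the caveat above that older MySQL parses but does not enforce CHECK constraints):
ALTER TABLE my_table
    ADD CONSTRAINT chk_exactly_one_ref
    CHECK ((id_a IS NULL) <> (id_b IS NULL));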
You know, it's hard to tell whether there are any better solutions, since you've stripped the question of all vital information. With the tiny amount that's still there, I'd say that most better solutions involve getting rid of my_table.