Please imagine this small database...
Diagram: volunteer database schema (original image link is dead).
Tables
Volunteer   Event         Shift         EventVolunteer
=========   ===========   ===========   ==============
Id          Id            Id            EventId
Name        Name          EventId      VolunteerId
Email       Location      VolunteerId
Phone       Day           Description
Comment     Description   Start
                          End
Associations
Volunteers may sign up for multiple events.
Events may be staffed by multiple volunteers.
An event may have multiple shifts.
A shift belongs to only a single event.
A shift may be staffed by only a single volunteer.
A volunteer may staff multiple shifts.
Check Constraints
1. Can I create a check constraint to enforce that no shift is staffed by a volunteer who is not signed up for that shift's event?
2. Can I create a check constraint to enforce that two overlapping shifts are never staffed by the same volunteer?
The best place to enforce data integrity is the database. Rest assured that some developer, intentionally or not, will find a way to sneak inconsistent stuff into the database if you let them!
Here's an example with check constraints:
CREATE FUNCTION dbo.SignupMismatches()
RETURNS int
AS BEGIN RETURN (
SELECT count(*)
FROM Shift s
LEFT JOIN EventVolunteer ev
ON ev.EventId = s.EventId
AND ev.VolunteerId = s.VolunteerId
WHERE ev.Id is null
) END
go
ALTER TABLE Shift ADD CONSTRAINT chkSignup CHECK (dbo.SignupMismatches() = 0);
go
CREATE FUNCTION dbo.OverlapMismatches()
RETURNS int
AS BEGIN RETURN (
SELECT count(*)
FROM Shift a
JOIN Shift b
ON a.id <> b.id
AND a.Start < b.[End]
AND a.[End] > b.Start
AND a.VolunteerId = b.VolunteerId
) END
go
ALTER TABLE Shift ADD CONSTRAINT chkOverlap CHECK (dbo.OverlapMismatches() = 0);
Here are some tests for the new data integrity checks:
insert into Volunteer (name) values ('Dubya')
insert into Event (name) values ('Build Wall Around Texas')
-- Dubya tries to build a wall, but fails because he's not signed up
insert into Shift (VolunteerID, EventID, Description, Start, [End])
values (1, 1, 'Dubya Builds Wall', '2010-01-01', '2010-01-02')
-- Properly signed up? Good
insert into EventVolunteer (VolunteerID, EventID)
values (1, 1)
insert into Shift (VolunteerID, EventID, Description, Start, [End])
values (1, 1, 'Dubya Builds Wall', '2010-01-01', '2010-01-03')
-- Fails: you can't start the 2nd wall before you've finished the 1st
insert into Shift (VolunteerID, EventID, Description, Start, [End])
values (1, 1, 'Dubya Builds Second Wall', '2010-01-02', '2010-01-03')
Here are the table definitions:
set nocount on
if OBJECT_ID('Shift') is not null
drop table Shift
if OBJECT_ID('EventVolunteer') is not null
drop table EventVolunteer
if OBJECT_ID('Volunteer') is not null
drop table Volunteer
if OBJECT_ID('Event') is not null
drop table Event
if OBJECT_ID('SignupMismatches') is not null
drop function SignupMismatches
if OBJECT_ID('OverlapMismatches') is not null
drop function OverlapMismatches
create table Volunteer (
id int identity primary key
, name varchar(50)
)
create table Event (
Id int identity primary key
, name varchar(50)
)
create table Shift (
Id int identity primary key
, VolunteerId int foreign key references Volunteer(id)
, EventId int foreign key references Event(id)
, Description varchar(250)
, Start datetime
, [End] datetime
)
create table EventVolunteer (
Id int identity primary key
, VolunteerId int foreign key references Volunteer(id)
, EventId int foreign key references Event(id)
, Location varchar(250)
, [Day] datetime
, Description varchar(250)
)
Question 1 is easy: just have your Shift table refer directly to the EventVolunteer table and you're all set.
What I would do is put an auto-incrementing identity column on the EventVolunteer table, with a unique constraint on the (EventId, VolunteerId) pair. Then use the EventVolunteerId identity as the foreign key in the Shift table. This enforces the constraint you'd like fairly simply, whilst normalizing your data somewhat.
I understand this is not the answer to your general question, however I'd see this as the best solution to your specific problem.
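A minimal sketch of that shape, reusing the names from the table definitions above (Shift's own EventId and VolunteerId columns would then be dropped in favour of the new key):
alter table EventVolunteer add constraint uqEventVolunteer unique (EventId, VolunteerId)
go
alter table Shift add EventVolunteerId int foreign key references EventVolunteer(Id)
go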
Edit:
I should have read the question fully. This solution will prevent one volunteer from doing two shifts at the same event, even if they don't overlap. Perhaps moving the shift start and end times to EventVolunteer and putting the time check constraint on that table would suffice, though then you have shift data outside the Shift table, which doesn't seem intuitive to me.
There is a way to do it by using triggers, which I wouldn't recommend. I would recommend not putting your business logic at the database level; the database doesn't need to know who is staffing a certain shift at which time. That logic should be put in your business layer. I would recommend using the repository pattern. Scott Guthrie has a very good chapter describing it in his MVC 1.0 book (link below).
http://weblogs.asp.net/scottgu/archive/2009/03/10/free-asp-net-mvc-ebook-tutorial.aspx
Related
I have a table of identifiers, IntervalFrom and IntervalTo:
Identifier   IntervalFrom   IntervalTo
1            0              2
1            2              4
2            0              2
2            2              4
I already have a trigger to NOT allow the intervals to overlap.
I am looking for a trigger or constraint that will not allow data gaps.
I have searched, and the information I found relates to finding gaps in queries and existing data rather than preventing them in the first place.
I am unable to find anything in relation to this as a trigger or constraint.
Is this possible using T-SQL?
Thanks in advance.
You can construct a table that automatically is immune from overlaps and gaps:
create table T (
ID int not null,
IntervalFrom int null,
IntervalTo int null,
constraint UQ_T_Previous_XRef UNIQUE (ID, IntervalTo),
constraint UQ_T_Next_XRef UNIQUE (ID, IntervalFrom),
constraint FK_T_Previous FOREIGN KEY (ID, IntervalFrom) references T (ID, IntervalTo),
constraint FK_T_Next FOREIGN KEY (ID, IntervalTo) references T (ID, IntervalFrom)
)
go
create unique index UQ_T_Start on T (ID) where IntervalFrom is null
go
create unique index UQ_T_End on T(ID) where IntervalTo is null
go
Note, this does require a slightly different convention for your first and last intervals: they need to use null rather than 0 or the (somewhat arbitrary) 4.
Note also that modifying data in such a table can be a challenge - if you're inserting a new interval, you also need to update other intervals to accommodate the new one. MERGE is your friend here.
Given the above, we can insert your (modified) sample data:
insert into T (ID, IntervalFrom, IntervalTo) values
(1,null,2),
(1,2,null),
(2,null,2),
(2,2,null)
go
But we cannot insert an overlapping value (this errors):
insert into T(ID, IntervalFrom, IntervalTo) values (1,1,3)
You should also see that the foreign keys prevent gaps from existing in a sequence.
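For example, this insert errors too: no existing interval for ID 1 ends at 3 or starts at 4, so the new row would violate both foreign keys and leave a gap on each side:
insert into T (ID, IntervalFrom, IntervalTo) values (1, 3, 4)
And, as noted above, MERGE is the tool for restructuring intervals, because the constraints are only checked once the whole statement has completed. A sketch that splits the open-ended interval (1, 2, null) from the sample data at 5, in a single statement:
merge T as tgt
using (values (1, 2, 5), (1, 5, null)) as src (ID, IntervalFrom, IntervalTo)
on tgt.ID = src.ID and tgt.IntervalFrom = src.IntervalFrom
when matched then
    update set IntervalTo = src.IntervalTo
when not matched then
    insert (ID, IntervalFrom, IntervalTo)
    values (src.ID, src.IntervalFrom, src.IntervalTo);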
I am learning to write unit tests for work. I was advised to use tSQLt FakeTable to test some aspects of a table created by a stored procedure.
In other unit tests, we create a temp table for the stored procedure and then test the temp table. I'm not sure how to work the FakeTable into the test.
EXEC tSQLt.NewTestClass 'TestThing';
GO
CREATE OR ALTER PROCEDURE TestThing.[test API_StoredProc to make sure parameters work]
AS
BEGIN
DROP TABLE IF EXISTS #Actual;
CREATE TABLE #Actual ----Do I need to create the temp table and the Fake table? I thought I might need to because I'm testing a table created by a stored procedure.
(
ISO_3166_Alpha2 NVARCHAR(5),
ISO_3166_Alpha3 NVARCHAR(5),
CountryName NVARCHAR(100),
OfficialStateName NVARCHAR(300),
sovereigny NVARCHAR(75),
icon NVARCHAR(100)
);
INSERT #Actual
(
ISO_3166_Alpha2,
ISO_3166_Alpha3,
CountryName,
OfficialStateName,
sovereigny,
icon
)
EXEC Marketing.API_StoredProc @Username = 'AnyValue', -- varchar(100)
@FundId = 0, -- int
@IncludeSalesForceInvestorCountry = NULL, -- bit
@IncludeRegisteredSalesJurisdictions = NULL, -- bit
@IncludeALLCountryForSSRS = NULL, -- bit
@WHATIF = NULL, -- bit
@OUTPUT_DEBUG = NULL -- bit
EXEC tsqlt.FakeTable @TableName = N'#Actual', -- nvarchar(max) -- How do I differentiate between the faketable and the temp table now?
@SchemaName = N'', -- nvarchar(max)
@Identity = NULL, -- bit
@ComputedColumns = NULL, -- bit
@Defaults = NULL -- bit
INSERT INTO #Actual
(
ISO_3166_Alpha2,
ISO_3166_Alpha3,
CountryName,
OfficialStateName,
sovereigny,
icon
)
VALUES
('AF', 'AFG', 'Afghanistan', 'The Islamic Republic of Afghanistan', 'UN MEMBER STATE', 'test')
SELECT * FROM #actual
END;
GO
EXEC tSQLt.Run 'TestThing';
What I'm trying to do with the code above is basically just to get FakeTable working. I get an error: "FakeTable could not resolve the object name #Actual".
What I ultimately want to test is the parameters of the stored procedure. Only certain entries should be returned if, say, IncludeSalesForceInvestorCountry is set to 1. What should be returned may change over time, so that's why I was advised to use FakeTable.
In your scenario, you don’t need to fake any temp tables, just fake the table that is referenced by Marketing.API_StoredProc and populate it with values that you expect to be returned, and some you don’t. Add what you expect to see in an #expected table, call Marketing.API_StoredProc dumping the results into an #actual table and compare the results with tSQLt.AssertEqualsTable.
A good starting point might be to review how tSQLT.FakeTable works and a real world use case.
As you know, each unit test runs within its own transaction started and rolled back by the tSQLt framework. When you call tSQLt.FakeTable within a unit test, it temporarily renames the specified table and then creates an identically named facsimile of it. The temporary copy allows NULL in every column and has no primary or foreign keys, identity column, check, default or unique constraints (although some of those can be included in the facsimile depending on the parameters passed to tSQLt.FakeTable). For the duration of the test transaction, any object that references the named table will use the fake rather than the real table. At the end of the test, tSQLt rolls back the transaction, the fake table is dropped and the original table is returned to its former state (this all happens automatically). You might ask, what is the point of that?
Imagine you have an [OrderDetail] table which has columns including OrderId and ProductId as the primary key, an OrderStatusId column plus a bunch of other NOT NULL columns. The DDL for this table might look something like this.
CREATE TABLE [dbo].[OrderDetail]
(
OrderDetailId int IDENTITY(1,1) NOT NULL
, OrderId int NOT NULL
, ProductId int NOT NULL
, OrderStatusId int NOT NULL
, Quantity int NOT NULL
, CostPrice decimal(18,4) NOT NULL
, Discount decimal(6,4) NOT NULL
, DeliveryPreferenceId int NOT NULL
, PromisedDeliveryDate datetime NOT NULL
, DespatchDate datetime NULL
, ActualDeliveryDate datetime NULL
, DeliveryDelayReason varchar(500) NOT NULL
/* ... other NULL and NOT NULL columns */
, CONSTRAINT PK_OrderDetail PRIMARY KEY CLUSTERED (OrderId, ProductId)
, CONSTRAINT AK_OrderDetail_AutoIncrementingId UNIQUE NONCLUSTERED (OrderDetailId)
, CONSTRAINT FK_OrderDetail_Order FOREIGN KEY (OrderId) REFERENCES [dbo].[Orders] (OrderId)
, CONSTRAINT FK_OrderDetail_Product FOREIGN KEY (ProductId) REFERENCES [dbo].[Product] (ProductId)
, CONSTRAINT FK_OrderDetail_OrderStatus FOREIGN KEY (OrderStatusId) REFERENCES [dbo].[OrderStatus] (OrderStatusId)
, CONSTRAINT FK_OrderDetail_DeliveryPreference FOREIGN KEY (DeliveryPreferenceId) REFERENCES [dbo].[DeliveryPreference] (DeliveryPreferenceId)
);
As you can see, this table has foreign key dependencies on the Orders, Product, DeliveryPreference and OrderStatus table. Product may in turn have foreign keys that reference ProductType, BrandCategory, Supplier among others. The Orders table has foreign key references to Customer, Address and SalesPerson among others. All of the tables in this chain have numerous columns defined as NOT NULL and/or are constrained by CHECK and other constraints. Some of these tables themselves have more foreign keys.
Now imagine you want to write a stored procedure (OrderDetailStatusUpdate) whose job is to update the order status for a single row on the OrderDetail table. It has three input parameters: @OrderId, @ProductId and @OrderStatusId. Think about what you would need to do to set up a test for this procedure. You would need to add at least two rows to the OrderDetail table, including all the NOT NULL columns. You would also need to add parent records to all the FK-referenced tables, and also to any tables above those in the hierarchy, ensuring that all your inserts comply with all the nullability and other constraints on those tables too. By my count that is at least 11 tables that need to be populated, all for one simple test. And even if you bite the bullet and do all that set-up, at some point in the future someone may (probably will) come along and add a new NOT NULL column to one of those tables, or change a constraint, in a way that causes your test to fail - and that failure actually has nothing to do with your test or the stored procedure you are testing. One of the basic tenets of test-driven development is that a test should have only one reason to fail; I count dozens here.
tSQLt.FakeTable to the rescue.
What is the minimum you actually need to do in order to set up a test for that procedure? You need two rows in the OrderDetail table (one that gets updated, one that doesn't), and the only columns you actually "need" to consider are OrderId and ProductId (the identifying key) plus OrderStatusId, the column being updated. The rest of the columns, whilst important in the overall design, have no relevance to the object under test. In your test for OrderDetailStatusUpdate, you would follow these steps:
1. Call tSQLt.FakeTable 'dbo.OrderDetail'.
2. Create an #expected table (with OrderId, ProductId and OrderStatusId columns) and populate it with the two rows you expect to end up with (one will have the expected OrderStatusId, the other can be NULL).
3. Add two rows to the now-mocked OrderDetail table (OrderId and ProductId only).
4. Call the procedure under test, OrderDetailStatusUpdate, passing the OrderId and ProductId for one of the rows inserted, plus the OrderStatusId you are changing to.
5. Use tSQLt.AssertEqualsTable to compare the #expected table with the OrderDetail table. This assertion will only compare the columns on the #expected table; the other columns on OrderDetail will be ignored.
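For illustration, a minimal sketch of such a test might look like this (the test class name OrderTests is an assumption; dbo.OrderDetail and dbo.OrderDetailStatusUpdate are the hypothetical names used in this example):
EXEC tSQLt.NewTestClass 'OrderTests';
GO
CREATE OR ALTER PROCEDURE OrderTests.[test OrderDetailStatusUpdate sets status on the target row only]
AS
BEGIN
    -- step 1: mock the table; the fake copy has no constraints, so thin rows suffice
    EXEC tSQLt.FakeTable 'dbo.OrderDetail';

    -- step 2: the expected outcome, containing only the columns that matter here
    CREATE TABLE #expected (OrderId int, ProductId int, OrderStatusId int);
    INSERT INTO #expected VALUES (1, 10, 3), (2, 20, NULL);

    -- step 3: minimal set-up rows in the mocked table
    INSERT INTO dbo.OrderDetail (OrderId, ProductId) VALUES (1, 10), (2, 20);

    -- step 4: call the procedure under test
    EXEC dbo.OrderDetailStatusUpdate @OrderId = 1, @ProductId = 10, @OrderStatusId = 3;

    -- step 5: compare; only the columns on #expected are checked
    EXEC tSQLt.AssertEqualsTable '#expected', 'dbo.OrderDetail';
END;
GO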
Creating this test is really quick and the only reason it is ever likely to fail is because something pertinent to the code under test has changed in the underlying schema. Changes to any other columns on the OrderDetail table or any of the parent/grand-parent tables will not cause this test to break.
So the reason for using tSQLt.FakeTable (or any other kind of mock object) is to provide really robust test isolation and simple test data preparation.
How could I set a constraint on a table so that only one of the records has its isDefault bit field set to 1?
The constraint is not at table scope; rather, I need one default per set of rows, specified by a FormID.
Use a unique filtered index
On SQL Server 2008 or higher you can simply use a unique filtered index
CREATE UNIQUE INDEX IX_TableName_FormID_isDefault
ON TableName(FormID)
WHERE isDefault = 1
Where the table is
CREATE TABLE TableName(
FormID INT NOT NULL,
isDefault BIT NOT NULL
)
For example, if you try to insert more than one row with the same FormID and isDefault set to 1, you will get this error:
Cannot insert duplicate key row in object 'dbo.TableName' with unique
index 'IX_TableName_FormID_isDefault'. The duplicate key value is (1).
Source: http://technet.microsoft.com/en-us/library/cc280372.aspx
Here's a modification of Damien_The_Unbeliever's solution that allows one default per FormID.
CREATE VIEW form_defaults
WITH SCHEMABINDING -- required to index the view
AS
SELECT FormID
FROM dbo.whatever
WHERE isDefault = 1
GO
CREATE UNIQUE CLUSTERED INDEX ix_form_defaults on form_defaults (FormID)
GO
But the serious relational folks will tell you this information should just be in another table.
CREATE TABLE form (
    FormID int NOT NULL PRIMARY KEY,
    DefaultWhateverID int FOREIGN KEY REFERENCES Whatever(ID)
)
From a normalization perspective, this would be an inefficient way of storing a single fact.
I would opt to hold this information at a higher level, by storing (in a different table) a foreign key to the identifier of the row which is considered to be the default.
CREATE TABLE [dbo].[Foo](
[Id] [int] NOT NULL,
CONSTRAINT [PK_Foo] PRIMARY KEY CLUSTERED
(
[Id] ASC
) ON [PRIMARY]
) ON [PRIMARY]
GO
CREATE TABLE [dbo].[DefaultSettings](
[DefaultFoo] [int] NULL
) ON [PRIMARY]
GO
ALTER TABLE [dbo].[DefaultSettings] WITH CHECK ADD CONSTRAINT [FK_DefaultSettings_Foo] FOREIGN KEY([DefaultFoo])
REFERENCES [dbo].[Foo] ([Id])
GO
ALTER TABLE [dbo].[DefaultSettings] CHECK CONSTRAINT [FK_DefaultSettings_Foo]
GO
You could use an insert/update trigger.
Within the trigger, after an insert or update, if the count of rows with isDefault = 1 is greater than 1, roll back the transaction.
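A minimal sketch of that trigger, reusing the TableName, FormID and isDefault names from the filtered-index answer above:
CREATE TRIGGER trg_TableName_OneDefault ON TableName
AFTER INSERT, UPDATE
AS
BEGIN
    -- if any FormID now has more than one default row, undo the change
    IF EXISTS (SELECT 1
               FROM TableName
               WHERE isDefault = 1
               GROUP BY FormID
               HAVING COUNT(*) > 1)
    BEGIN
        RAISERROR('Only one default row per FormID is allowed.', 16, 1);
        ROLLBACK TRANSACTION;
    END
END;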
CREATE VIEW vOnlyOneDefault
WITH SCHEMABINDING -- required to index the view
AS
SELECT 1 as Lock
FROM dbo.<underlying table>
WHERE [Default] = 1
GO
CREATE UNIQUE CLUSTERED INDEX IX_vOnlyOneDefault on vOnlyOneDefault (Lock)
GO
You'll need to have the right ANSI settings turned on for this.
I don't know about SQL Server, but if it supports function-based indexes like Oracle does, I hope this can be translated; if not, sorry.
You can create an index like this, supposing the default value is 1234, the column is DEFAULT_COLUMN, and ID_COLUMN is the primary key:
CREATE
UNIQUE
INDEX only_one_default
ON my_table
( DECODE(DEFAULT_COLUMN, 1234, -1, ID_COLUMN) )
This DDL creates a unique index over an expression that yields -1 when DEFAULT_COLUMN is 1234 and ID_COLUMN in any other case. Then, if two rows both have the default value, it raises an exception.
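For what it's worth, a rough SQL Server translation of the same idea (a sketch, using the same hypothetical column names) is a computed column with a unique index on it:
alter table my_table add default_marker as
    case when DEFAULT_COLUMN = 1234 then -1 else ID_COLUMN end

create unique index only_one_default on my_table (default_marker)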
The question implies to me that you have a primary table that has some child records, and one of those child records will be the default record. Using addresses and a separate default table, here is an example of how to make that happen using third normal form. Of course I don't know if it's valuable to answer something that is so old, but it struck my fancy.
--drop table dev.defaultAddress;
--drop table dev.addresses;
--drop table dev.people;
CREATE TABLE [dev].[people](
[Id] [int] identity primary key,
name char(20)
)
GO
CREATE TABLE [dev].[Addresses](
id int identity primary key,
peopleId int foreign key references dev.people(id),
address varchar(100)
) ON [PRIMARY]
GO
CREATE TABLE [dev].[defaultAddress](
id int identity primary key,
peopleId int foreign key references dev.people(id),
addressesId int foreign key references dev.addresses(id))
go
create unique index defaultAddress on dev.defaultAddress (peopleId)
go
create unique index idx_addr_id_person on dev.addresses(peopleid,id);
go
ALTER TABLE dev.defaultAddress
ADD CONSTRAINT FK_Def_People_Address
FOREIGN KEY(peopleID, addressesID)
REFERENCES dev.Addresses(peopleId, id)
go
insert into dev.people (name)
select 'Bill' union
select 'John' union
select 'Harry'
insert into dev.Addresses (peopleid, address)
select 1, '123 someplace' union
select 1,'work place' union
select 2,'home address' union
select 3,'some address'
insert into dev.defaultaddress (peopleId, addressesid)
select 1,1 union
select 2,3
-- so two default addresses exist now
select * from dev.people
join dev.addresses on people.id = addresses.peopleid
left join dev.defaultAddress on defaultAddress.peopleid = people.id and defaultaddress.addressesid = addresses.id
-- try adding another default address to Bill and you get an error
insert into dev.defaultaddress (peopleId, addressesId)
select 1,2
GO
You could do it through an INSTEAD OF trigger or, if you want it as a constraint, create a constraint that references a function that checks for a row that has the default set to 1.
EDIT: oops, needs to be <=
Create table mytable(id1 int, defaultX bit not null default(0))
go
create Function dbo.fx_DefaultExists()
returns int as
Begin
Declare #Ret int
Set #ret = 0
Select #ret = count(1) from mytable
Where defaultX = 1
Return #ret
End
GO
Alter table mytable add
CONSTRAINT [CHK_DEFAULT_SET] CHECK
(([dbo].fx_DefaultExists()<=(1)))
GO
Insert into mytable (id1, defaultX) values (1,1)
Insert into mytable (id1, defaultX) values (2,1)
This is a fairly complex process that cannot be handled through a simple constraint.
We do this through a trigger. However, before you write the trigger you need to be able to answer several things:
Do we want to fail the insert if a default exists, change it to 0 instead of 1, or change the existing default to 0 and leave this one as 1?
What do we want to do if the default record is deleted and other non-default records are still there? Do we make one the default, and if so, how do we determine which one?
You will also need to be very, very careful to make the trigger handle multiple-row processing. For instance, a client might decide that all of the records of a particular type should be the default. You wouldn't change a million records one at a time, so this trigger needs to be able to handle that. It also needs to handle it without looping or the use of a cursor (you really don't want the type of transaction discussed above to take hours, locking up the table the whole time); a set-based sketch follows the test list below.
You also need a very extensive testing scenario for this trigger before it goes live. You need to test:
adding a record with no default when it is the first record for that customer
adding a record with a default when it is the first record for that customer
adding a record with no default when it is not the first record for that customer
adding a record with a default when it is not the first record for that customer
updating a record to have the default when no other record has it (assuming you don't require one record to always be set as the default)
updating a record to remove the default
deleting the record with the default
deleting a record without the default
performing a mass insert with multiple situations in the data, including two records which both have isDefault set to 1, plus all of the situations tested when running individual record inserts
performing a mass update with multiple situations in the data, including two records which both have isDefault set to 1, plus all of the situations tested when running individual record updates
performing a mass delete with multiple situations in the data, including two records which both have isDefault set to 1, plus all of the situations tested when running individual record deletes
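To make this concrete, here is a minimal sketch of such a trigger. The table and column names (dbo.Records, RecordId, CustomerId, isDefault) are hypothetical, and it implements only one of the possible policies from the first question above (the newest flagged row wins). Because it is a single set-based statement, a million-row insert or update is handled without loops or cursors:
CREATE TRIGGER trg_Records_OneDefault ON dbo.Records
AFTER INSERT, UPDATE
AS
BEGIN
    SET NOCOUNT ON;

    -- for every customer that just received one or more default rows,
    -- keep the newest (highest RecordId) and clear the flag on the rest
    UPDATE r
    SET isDefault = 0
    FROM dbo.Records AS r
    JOIN (SELECT CustomerId, MAX(RecordId) AS WinnerId
          FROM inserted
          WHERE isDefault = 1
          GROUP BY CustomerId) AS w
        ON w.CustomerId = r.CustomerId
    WHERE r.isDefault = 1
      AND r.RecordId <> w.WinnerId;
END;
The delete cases still require the business answers above (which record, if any, becomes the new default), so this sketch deliberately does not handle them.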
@Andy Jones gave an answer above closest to mine, but bearing in mind the Rule of Three, I placed the logic directly in the stored proc that updates this table. This was my simple solution. If I need to update the table from elsewhere, I will move the logic to a trigger. The one-default rule applies to each set of records specified by a FormID and a ConfigID:
ALTER proc [dbo].[cpForm_UpdateLinkedReport]
@reportLinkId int,
@defaultYN bit,
@linkName nvarchar(150)
as
if @defaultYN = 1
begin
declare @formId int, @configId int
select @formId = FormID, @configId = ConfigID from csReportLink where ReportLinkID = @reportLinkId
update csReportLink set DefaultYN = 0 where isnull(ConfigID, @configId) = @configId and FormID = @formId
end
update
csReportLink
set
DefaultYN = @defaultYN,
LinkName = @linkName
where
ReportLinkID = @reportLinkId
I have a model that consists of these 3 tables (among others):
Item
    PK id_item
Set
    PK id_set
Subset
    PK id_subset
Each Item MUST belong to one and just one Set (1..N)
You can define zero or more Subsets for each Set (0..N)
Each Item belongs to zero or one subset (0..N)
I've modelled the database adding the following FKs:
Item
    PK id_item
    FK id_set
    FK id_subset
Set
    PK id_set
Subset
    PK id_subset
    FK id_set
I cannot find a way to forbid the database from accepting Items belonging to one Set (A) and to a Subset (B2) that belongs to a different Set (B).
Is there any way to do so? Or is this just bad design/modelling?
This is a SQL Server 2008 database
First, if an Item can belong to a Subset, you must add a foreign key between the Item table and the Subset table.
Second, add a check constraint on the Item table that will raise an error when the id_subset does not belong to the id_set.
To do that, first create a user-defined function to test the values:
CREATE FUNCTION udf_CheckSubSet
(
@id_set int,
@id_subset int
)
RETURNS int
AS
BEGIN
IF @id_subset IS NULL OR EXISTS (
SELECT 1
FROM Subset
WHERE id_subset = @id_subset
AND id_set = @id_set
)
BEGIN
RETURN 1
END
-- logical else
RETURN 0
END
then you create the check constraint:
ALTER TABLE Item
ADD CONSTRAINT cc_Item_subset CHECK (dbo.udf_CheckSubSet(id_set, id_subset) = 1);
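Given sample data in which Subset 1 belongs to Set 1 only, an insert like this should now fail the check (a sketch, assuming the Item column names from the question):
-- fails cc_Item_subset: subset 1 does not belong to set 2
insert into Item (id_item, id_set, id_subset) values (1, 2, 1)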
However, I also suggest creating a stored procedure to insert the item, and testing inside the stored procedure before inserting.
The reason for this is that it's much more expensive (performance-wise) to handle exceptions than to simply test the input before inserting it into the table.
You might be wondering why you even need the check constraint if you already handle the problem with the stored procedure. The answer to this question is that the check constraint will not allow inserting or updating data in the table even if someone tries to do it directly from SSMS, or just writes an insert or update statement.
Disclaimer: while it is possible to implement this kind of constraint using database schema alone, I strongly advise you against using the approach explained below in any real life project.
Academically speaking, in order to do what you want you have to migrate the identifying key from Set via both Set and Subset foreign keys. The schema will look like this:
use master;
go
if db_id('SampleDB') is not null
set noexec on;
go
create database SampleDB;
go
use SampleDB;
go
/*==============================================================*/
/* Table: Sets */
/*==============================================================*/
create table dbo.[Sets] (
[Id] int not null,
[Name] varchar(50) not null,
constraint [PK_Sets] primary key (Id)
)
go
/*==============================================================*/
/* Table: SubSets */
/*==============================================================*/
create table dbo.[SubSets] (
[SetId] int not null,
[SubsetId] int not null,
[Name] varchar(50) not null,
constraint [PK_SubSets] primary key (SetId, SubsetId)
)
go
alter table dbo.SubSets
add constraint FK_SubSets_Sets_SetId foreign key (SetId)
references dbo.Sets (Id)
go
/*==============================================================*/
/* Table: Items */
/*==============================================================*/
create table dbo.[Items] (
[Id] int not null,
[SetId] int not null,
[SubsetId] int null,
[Name] varchar(50) not null,
constraint [PK_Items] primary key (Id)
)
go
alter table dbo.Items
add constraint FK_Items_Sets_SetId foreign key (SetId)
references dbo.Sets (Id)
go
alter table dbo.Items
add constraint FK_Items_SubSets_SetIdSubsetId foreign key (SetId, SubsetId)
references dbo.SubSets (SetId, SubsetId)
go
set noexec off;
go
use master;
go
As you can see, the PK on the dbo.SubSets table is somewhat lame. It serves its purpose, of course, but it could have been made simpler. Another unusual thing is that the SubsetId column in the dbo.Items table participates in 2 foreign keys that point to different tables.
You can insert some data into this schema, and it will be perfectly fine:
insert into dbo.Sets (Id, Name)
values
(1, 'Set 1'),
(2, 'Set 2');
go
insert into dbo.SubSets (SetId, SubsetId, Name)
values
(1, 1, 'Subset 1-1'),
(1, 2, 'Subset 1-2');
go
insert into dbo.Items (Id, SetId, SubsetId, Name)
values
(1, 1, 1, 'Banana'),
(2, 1, 1, 'Plate'),
(3, 1, 2, 'Charger'),
(4, 1, null, 'Toothpick'),
(5, 2, null, 'Cup');
And you will be hit with FK constraint violation when you try to add contradictory data, such as this:
insert into dbo.Items (Id, SetId, SubsetId, Name)
values
(6, 2, 1, 'Fake t-shirt');
Subset 1 does not belong to Set 2, so the command above will not succeed.
Now, why you should never use this design approach unless forced to do so at gunpoint:
Not every business constraint can or should be implemented at the schema level. Actually, writing it down in a stored procedure will be easier to understand, maintain and work with, in most cases.
It relies on rarely used tricks which are very confusing and unexpected to most people, even seasoned database professionals. All of this adds up to the cost of maintenance.
Last but not least, queries that work correctly with this kind of schema will be, how shall I put this, awkward and difficult to write. Also, you will most probably encounter a lot of problems if you try to combine this schema with any kind of ORM. Or maybe not; or maybe they will only manifest themselves once put into production, etc.
I just need some confirmation: is a database designed like this fine or not? And if not, am I doing something wrong here?
I have following tables:
TableA{TableAID,...}
TableB{TableBID,...}
TableC{TableCID,...}
etc.
And I have one table that I use as a kind of 'news feed'. When I add something to any of the tables A, B, C, I also add a row in this table.
Feed{FeedID, TypeID, ReferenceID,...}
FeedID is PK auto increment
TypeID is a number that references the types table; based on this ID I know whether the row refers to table A, B or C.
ReferenceId is the ID of the item in table A, B or C.
A,B,C tables all have different fields.
Now when I want to get the feed data I also need to grab some data from each of these tables to use in the application. In my query to get this, I first join to all the tables (A, B, C) and then use a lot of CASE expressions, like:
...
CASE Feed.TypeId
WHEN 1 THEN tableA.someData
WHEN 2 THEN tableB.someData
WHEN 3 THEN tableC.someData
END AS Data,
...
Without getting into the suitability of this for a specific purpose, your supertype-subtype model is "reversed".
So the DDL looks something like this:
CREATE TABLE Feed (
FeedID integer IDENTITY(1,1) not null
, FeedType char(1) not null
-- Common_Columns_Here
, Common_Column varchar(20)
);
ALTER TABLE Feed ADD CONSTRAINT pk_Feed PRIMARY KEY (FeedID) ;
CREATE TABLE Feed_A (
FeedID integer not null
-- A_Specific_Columns_Here
, A_Specific_Column varchar(20)
);
ALTER TABLE Feed_A ADD
CONSTRAINT pk_Feed_A PRIMARY KEY (FeedID)
, CONSTRAINT fk1_Feed_A FOREIGN KEY (FeedID) REFERENCES Feed(FeedID) ;
CREATE TABLE Feed_B (
FeedID integer not null
-- B_Specific_Columns_Here
, B_Specific_Column varchar(20)
);
ALTER TABLE Feed_B ADD
CONSTRAINT pk_Feed_B PRIMARY KEY (FeedID)
, CONSTRAINT fk1_Feed_B FOREIGN KEY (FeedID) REFERENCES Feed(FeedID) ;
CREATE TABLE Feed_C (
FeedID integer not null
-- C_Specific_Columns_Here
, C_Specific_Column varchar(20)
);
ALTER TABLE Feed_C ADD
CONSTRAINT pk_Feed_C PRIMARY KEY (FeedID)
, CONSTRAINT fk1_Feed_C FOREIGN KEY (FeedID) REFERENCES Feed(FeedID) ;
Now, in order to read from this structure, create a view first
create view vFeed as
select
f.FeedID
, FeedType
, Common_Column
, A_Specific_Column
, B_Specific_Column
, C_Specific_Column
from Feed as f
left join Feed_A as a on (a.FeedID = f.FeedID and f.FeedType = 'A')
left join Feed_B as b on (b.FeedID = f.FeedID and f.FeedType = 'B')
left join Feed_C as c on (c.FeedID = f.FeedID and f.FeedType = 'C')
;
Look what happens when I want to select data which I know is from feed A. Note that FeedType is not specified in this query, only column names which belong to Feed_A (and the common column).
select
FeedID
, Common_Column
, A_Specific_Column
from vFeed;
Notice that the execution plan shows only the Feed and Feed_A tables; the query optimizer eliminated tables _B and _C, so there is no need to touch those two.
In other words, you can ask for specific feed data by simply using only specific columns in a query, and let the optimizer sort everything else out; no need for the CASE ... WHEN ... acrobatics from your example.
As I suggested in my comment (and along with @Andomar's wisdom), I think something like this would work better:
CREATE TABLE dbo.FeedTypes
(
FeedTypeID INT IDENTITY(1,1) PRIMARY KEY,
SomedataA INT,
SomedataB VARCHAR(32),
SomedataC DATETIME
--, ... other columns
);
CREATE TABLE dbo.Feeds
(
FeedID INT IDENTITY(1,1) PRIMARY KEY,
FeedTypeID INT NOT NULL FOREIGN KEY
REFERENCES dbo.FeedTypes(FeedTypeID)
--, ... other columns
);
You could enforce the presence/absence of data in the relevant columns for a given type using complex check constraints or triggers; see the sketch below. But you'd have to have pretty complex logic (as you would in your current model) if a feed can change types easily.
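As a sketch of the check-constraint route (assuming each FeedTypes row should carry exactly one type's data), you could require exactly one of the Somedata columns to be non-null:
ALTER TABLE dbo.FeedTypes ADD CONSTRAINT CK_FeedTypes_OneTypeOnly CHECK (
      (CASE WHEN SomedataA IS NOT NULL THEN 1 ELSE 0 END)
    + (CASE WHEN SomedataB IS NOT NULL THEN 1 ELSE 0 END)
    + (CASE WHEN SomedataC IS NOT NULL THEN 1 ELSE 0 END) = 1
);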
Add all the data you wish to display in the "News Feed" in the Feed table. It is duplicate data, but it will make your life a lot easier in the long run.
It also ensures that your newsfeed stays historically correct. This means that when I update a record in one of the three tables, the "old" feed data stays intact instead of being updated with the new values.