I would like your advice regarding database design.
I have 4 different data elements (tables A, B, C, D), for example:
A - Contents
B - Categories
C - Authors
and
D - Images
Every record in tables A, B, C can have one or more associated images in table D,
BUT every image in D must be associated with exactly one record in A, B, or C.
This means that images cannot be shared between the tables.
My idea was to create a separate image table for each data element, using a ONE-to-MANY association type.
Example:
Content --> Image-Contents
and
Categories --> Image-Categories
Questions:
Is my database design a good one?
Since the tables "Image-Contents" and "Image-Categories" would have similar properties, like "File-Url" or "Image-Title", I was wondering whether a more suitable database design exists.
Thanks for your time
I think you would want a table that maps each of A, B, C to an image. For example:
Content        ->  ContentImages   ->  Images
---------          -------------       ------
ContentId          ImageId             ImageId
                   ContentId

Categories     ->  CategoryImages  ->  Images
----------         --------------      ------
CategoryId         ImageId             ImageId
                   CategoryId

Authors        ->  AuthorImages    ->  Images
-------            ------------        ------
AuthorId           ImageId             ImageId
                   AuthorId
It may seem a little cumbersome, but I think this is the normal form.
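For example, a minimal sketch of what the ContentImages mapping table might look like (table and column names follow the diagram; CategoryImages and AuthorImages would be analogous). Making ImageId the primary key keeps an image linked to at most one content row, though by itself it does not stop the same ImageId from also appearing in CategoryImages:
CREATE TABLE ContentImages
(
    ImageId   int not null,
    ContentId int not null,
    constraint PK_ContentImages primary key (ImageId),
    constraint FK_ContentImages_Content foreign key (ContentId) references Content (ContentId),
    constraint FK_ContentImages_Images  foreign key (ImageId)   references Images  (ImageId)
)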
Perhaps the most common way to implement this design is with the "one table per owner type" scheme you mentioned (tables for Images, "Owner A", "Owner A Images", and so on for owners B, C, etc.). Another common way is a single "central" Images table, with the owner's Id stored within that table. Your criteria are particularly limiting, in that an image may be associated with one and only one owner, but there are multiple types of owner. Implementing such constraints inside the database is tricky, but implementing them outside the database is much more difficult and problematic, for all the usual reasons (the application ends up doing the database's work, and what happens when someone modifies the database outside of the dedicated application?)
The following is an example of how these structures and constraints might be implemented within the database. It may appear fussy, detailed, and overly complex, but it will do the job, and once properly implemented you would never have to worry whether your data was consistent and valid.
First off, all images are stored in the following table. It must be known what "type" of owner an image may be assigned to; set that in ImageType, and (as per the constraints in the later tables) the image cannot be assigned to any other kind of owner. Ever. (You could also put a CHECK constraint on ImageType to ensure that only valid image types can be loaded into the table.)
CREATE TABLE Image
(
ImageId int not null
,ImageType char(1) not null
,constraint PK_Image
primary key clustered (ImageId, ImageType)
)
Next, build some owner tables. You could have any number of these; I'm just making two for the sake of the example.
CREATE TABLE A
(
AId int not null
constraint PK_A
primary key clustered
)
CREATE TABLE B
(
BId int not null
constraint PK_B
primary key clustered
)
Build the association tables, noting the comments next to the constraint definitions. (This is the overly-fussy part...)
CREATE TABLE Image_A
(
ImageId int not null
constraint PK_Image_A
primary key clustered -- An image can only be assigned to one owner
,AId int not null
,ImageType char(1) not null
constraint DF_Image_A
default 'A'
constraint CK_Image_A__ImageType
check (ImageType in ('A')) -- Always have this set to the type of the owner for this table
,constraint FK_Image_A__A
foreign key (AId) references A (AId) -- Owner must exist
,constraint FK_Image_A__Image
foreign key (ImageId, ImageType) references Image (ImageId, ImageType) -- Image must exist *for this type of owner*
)
-- Same comments for this table
CREATE TABLE Image_B
(
ImageId int not null
constraint PK_Image_B
primary key clustered
,BId int not null
,ImageType char(1) not null
constraint DF_Image_B
default 'B'
constraint CK_Image_B__ImageType
check (ImageType in ('B'))
,constraint FK_Image_B__B
foreign key (BId) references B (BId)
,constraint FK_Image_B__Image
foreign key (ImageId, ImageType) references Image (ImageId, ImageType)
)
Load some sample data
INSERT Image values (1, 'A')
INSERT Image values (2, 'A')
INSERT Image values (3, 'B')
INSERT Image values (4, 'B')
INSERT A values (101)
INSERT A values (102)
INSERT B values (201)
INSERT B values (202)
View the current contents of the tables:
SELECT * from A
SELECT * from B
SELECT * from Image
SELECT * from Image_A
SELECT * from Image_B
And do some tests:
-- Proper fit
INSERT Image_A (ImageId, AId) values (1, 101)
-- Run it again, can only assign once
-- Cannot assign the same image to a second owner of the proper type
INSERT Image_A (ImageId, AId) values (1, 102)
-- Can't assign image to an invalid owner type
INSERT Image_B (ImageId, BId) values (1, 201)
-- Owner can be assigned multiple images
INSERT Image_A (ImageId, AId) values (2, 101)
(This drops the testing tables)
drop table Image
drop table A
drop table B
drop table Image_A
drop table Image_B
(Technically, this is a good example of a variant on the exclusive type/subtype data modelling "problem".)
create table A (IDA int not null, primary key(IDA));
create table B (IDB int not null, primary key(IDB));
create table C (IDC int not null, primary key(IDC));
create table Image(IDI int, A int null, B int null, C int null, Contents image,
foreign key (A) references A(IDA),
foreign key (B) references B(IDB),
foreign key (C) references C(IDC),
check (
(A is not null and B is null and C is null) or
(A is null and B is not null and C is null) or
(A is null and B is null and C is not null)
));
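For illustration, a few sample inserts against this schema (the values are made up); the second and third are rejected by the CHECK constraint:
insert into A values (1);
insert into B values (2);
-- OK: the image has exactly one owner (a record in A)
insert into Image (IDI, A, B, C, Contents) values (10, 1, null, null, null);
-- Rejected by the CHECK constraint: an image cannot have two owners at once
insert into Image (IDI, A, B, C, Contents) values (11, 1, 2, null, null);
-- Also rejected: an image must have exactly one owner
insert into Image (IDI, A, B, C, Contents) values (12, null, null, null, null);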
Yes, you're looking in the right direction.
Keep your current setup of the four tables and then create 3 more that hold only metadata that tells you the linking between, for example, the content table and the image tables.
For example, the images-content table will have columns: id, content-id, image-id
And so on.
I have a table of identifiers, IntervalFrom and IntervalTo:

Identifier  IntervalFrom  IntervalTo
1           0             2
1           2             4
2           0             2
2           2             4
I already have a trigger to NOT allow the intervals to overlap.
I am looking for a trigger or constraint that will not allow data gaps.
I have searched, and the information I found relates to gaps in queries and data, rather than preventing them in the first place.
I am unable to find anything in relation to this as a trigger or constraint.
Is this possible using T-SQL?
Thanks in advance.
You can construct a table that is automatically immune to overlaps and gaps:
create table T (
ID int not null,
IntervalFrom int null,
IntervalTo int null,
constraint UQ_T_Previous_XRef UNIQUE (ID, IntervalTo),
constraint UQ_T_Next_XRef UNIQUE (ID, IntervalFrom),
constraint FK_T_Previous FOREIGN KEY (ID, IntervalFrom) references T (ID, IntervalTo),
constraint FK_T_Next FOREIGN KEY (ID, IntervalTo) references T (ID, IntervalFrom)
)
go
create unique index UQ_T_Start on T (ID) where IntervalFrom is null
go
create unique index UQ_T_End on T(ID) where IntervalTo is null
go
Note, this does require a slightly different convention for your first and last intervals - they need to use null rather than 0 or the (somewhat arbitrary) 4.
Note also that modifying data in such a table can be a challenge - if you're inserting a new interval, you also need to update other intervals to accommodate the new one. MERGE is your friend here.
Given the above, we can insert your (modified) sample data:
insert into T (ID, IntervalFrom, IntervalTo) values
(1,null,2),
(1,2,null),
(2,null,2),
(2,2,null)
go
But we cannot insert an overlapping value (this errors):
insert into T(ID, IntervalFrom, IntervalTo) values (1,1,3)
You should also see that the foreign keys prevent gaps from existing in a sequence.
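For example (illustrative values), trying to load a sequence for a new ID that jumps from 2 to 4 fails, because neither foreign key can be satisfied:
-- This errors: for ID 3, IntervalTo = 2 has no row with IntervalFrom = 2,
-- and IntervalFrom = 4 has no row with IntervalTo = 4 - i.e. there is a gap
insert into T (ID, IntervalFrom, IntervalTo) values
(3, null, 2),
(3, 4, null)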
I am working on a project using DB2 in an attempt to create a small inventory system. Currently my team and I are looking for answers to a roadblock we have encountered while creating tables with auto-generated fields. I have included a sample CREATE statement to give you an understanding of what we are working with:
CREATE TABLE INVENTORY (
PRODUCT_ID INT NOT NULL Generated always as identity (start with 1 increment by 1 minvalue 1 no maxvalue no cycle no cache order),
PRODUCT_NAME CHAR(100) NOT NULL,
DESCRIPTION CHAR(100),
UNIT_PRICE DECIMAL(6 , 2),
QUANTITY INTEGER
)
DATA CAPTURE NONE;
The issue we are coming across when we insert test data into our database (no errors when we run the DDL; no errors when we insert the test data) is that we can view the values of an auto-generated field in the main table, but NOT in an intersection table where that value is used as a foreign key. An example of an intersection table that we are using in relation to our INVENTORY table is below:
CREATE TABLE RECEIVABLES (
ITEM_LINE INT NOT NULL Generated always as identity (start with 10 increment by 1 minvalue 10 no maxvalue no cycle no cache order),
PAYMENT_TYPE CHAR(15),
REMARKS CHAR(70),
RECEIVED_QUANTITY INTEGER,
VENDOR_ID CHAR(4) NOT NULL,
PRODUCT_ID INT,
SHIPMENT_ID INT
)
DATA CAPTURE NONE;
INSERT INTO RECEIVABLES VALUES (DEFAULT,'DISCOVER','Success',9,'1000',DEFAULT,DEFAULT);
When we view the data, we are not seeing any NULL values; we simply aren't seeing ANY values for any of the foreign key fields in the intersection table (in this case, PRODUCT_ID and SHIPMENT_ID).
Is there a particular method of inserting data into an intersection table that will allow the values of the primary keys to be represented in the intersection table as well?
Thank you very much for your time. I will be actively responding to any questions you may have.
It takes more than naming a column the same to make it a foreign key. In fact, the name doesn't matter. What matters is defining a FK constraint over it:
CREATE TABLE RECEIVABLES (
ITEM_LINE INT NOT NULL Generated always as identity
(start with 10
increment by 1
minvalue 10
no maxvalue
no cycle
no cache order),
PAYMENT_TYPE CHAR(15),
REMARKS CHAR(70),
RECEIVED_QUANTITY INTEGER,
VENDOR_ID CHAR(4) NOT NULL,
PRODUCT_ID INT,
SHIPMENT_ID INT,
CONSTRAINT inv_fk FOREIGN KEY (product_id)
REFERENCES inventory (product_id),
CONSTRAINT shp_fk FOREIGN KEY (shipment_id)
REFERENCES shipments (shipment_id)
) DATA CAPTURE NONE;
I would have expected to see NULL values in the original table after your insert, since neither PRODUCT_ID nor SHIPMENT_ID in RECEIVABLES is defined as NOT NULL. Running the same insert against the table I've posted will still give you NULL in the FK columns.
The generated ID is only generated in the primary table. You have to specify the value when inserting into RECEIVABLES.
Now let's say you've just inserted a row into INVENTORY and you want to insert a row referencing it into RECEIVABLES. Before the insert into RECEIVABLES, you have to ask the DB what ID it generated for the row in INVENTORY.
The IDENTITY_VAL_LOCAL() function is one such way to do so...
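For example (a sketch; the column values are made up, and SHIPMENT_ID would come from a similar insert into the shipments table):
-- Insert the parent row; DB2 generates PRODUCT_ID
INSERT INTO INVENTORY (PRODUCT_NAME, DESCRIPTION, UNIT_PRICE, QUANTITY)
VALUES ('Sample product', 'Example row', 9.99, 25);

-- IDENTITY_VAL_LOCAL() returns the identity value assigned by the most recent
-- single-row INSERT in this session, i.e. the new PRODUCT_ID
INSERT INTO RECEIVABLES
    (PAYMENT_TYPE, REMARKS, RECEIVED_QUANTITY, VENDOR_ID, PRODUCT_ID, SHIPMENT_ID)
VALUES ('DISCOVER', 'Success', 9, '1000', IDENTITY_VAL_LOCAL(), NULL);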
I have a table that stores information on Vendors called dbo.Vendor. Its has fields like this:
1. VendorID
2. VendorName
3. VendorType
4. AddressLine1
5. EMail
6. Telephone
7. and so on....
This information is common to all vendors. But depending on the type of vendor (VendorType field) I need to collect more specific information. For example, a vendor that is a charity will have a Charity Number, but a vendor that is a law firm will have some kind of legal registration number instead. If a vendor is a cinema then I may need to know seating capacity, which won't apply to other vendors, of course.
Do I really have to create a separate table for each of these vendor types, e.g. dbo.VendorLaw, dbo.VendorCinema? Or can I create all possible fields in the main dbo.Vendor table and leave NULL values where a field does not apply to that vendor? That breaks normalization rules, of course.
Depending on the scope of how much additional optional info you need per vendor type, I would create another two tables: one reference table, which stores all the different types of additional info, and one table that stores all the records (and links to the main table).
CREATE TABLE schema.VendorAdditionalInfo (
autoId serial NOT NULL,
vendorId int,
vendorInfoId int,
vendorInfoText varchar
);
Then create your reference table:
CREATE TABLE schema.VendorInfo (
vendorInfoId serial NOT NULL,
vendorType int,
vendorInfoName text
)
This way you can create any number of records in VendorAdditionalInfo based on what vendor type it is.
EDIT: Example of the info you'd input:
INSERT INTO schema.VendorInfo (vendorType, vendorInfoName)
VALUES
(1, 'Lawyer Registration Number'),
(2, 'Nurse ID Number'),
(3, 'Hot Dog Business License')
Then for your records table you'd enter your info as such:
INSERT INTO schema.VendorAdditionalInfo (vendorId, vendorInfoId, vendorInfoText)
VALUES
(10, 1, 'LAW13245'),
(11, 2, 'NURSE234234'),
(12, 1, 'LAW56156'),
(13, 3, 'HOTDOGBUSINESSLIC23')
Basically - the text field is the field that's unique for each additional info type.
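Retrieving the additional info for one vendor is then a join of the two tables, along these lines (column names as above; vendorId 10 is just the example value):
SELECT ai.vendorId,
       vi.vendorInfoName,
       ai.vendorInfoText
FROM schema.VendorAdditionalInfo ai
JOIN schema.VendorInfo vi ON vi.vendorInfoId = ai.vendorInfoId
WHERE ai.vendorId = 10;  -- e.g. 'Lawyer Registration Number', 'LAW13245'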
I would create the additional tables. This allows you to enforce null/non-null (and other) constraints easily based on the vendor type - and you can even create a superkey in your existing table on (VendorID, VendorType) and a computed column in each vendor-specific table to ensure that e.g. only Cinema vendors have entries in the CinemaVendors table.
CREATE TABLE Vendors (
VendorID int IDENTITY(-47,1) not null,
VendorName varchar(19) not null,
VendorType varchar(11) not null,
AddressLine1 varchar(35) not null,
EMail varchar(312) null,
Telephone varchar(15) null,
constraint PK_Vendors PRIMARY KEY (VendorID),
constraint UQ_Vendor_Types UNIQUE (VendorID,VendorType),
constraint CK_Vendor_Types CHECK (VendorType in ('Law','Cinema'))
)
and
CREATE TABLE CinemaVendors (
VendorID int not null,
VendorType as CONVERT(varchar(11),'Cinema') persisted,
Seating int not null,
BruceWillisMovies int not null,
constraint PK_CinemaVendors PRIMARY KEY (VendorID),
constraint FK_CinemaVendors_Vendors FOREIGN KEY
(VendorID,VendorType)
references Vendors (VendorID,VendorType),
constraint CK_BruceWillisMovies CHECK (BruceWillisMovies > 3)
)
This is far easier to do in separate tables than to have a slew of nullable columns in one single table and then try to enforce all of the actual constraints.
This also addresses the concerns with the EAV model - where we want an int stored for cinema vendors, we're sure that an int has actually been stored.
(It's optional whether you also declare a foreign key between the two above tables based on just the VendorID column. Sometimes I do, sometimes I don't. It's the "real" foreign key, but we use the two column one above to ensure that only Cinema vendors end up in the CinemaVendors table)
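For illustration (the values and the use of SCOPE_IDENTITY() are just an example), adding a cinema vendor and querying it back might look like this:
INSERT INTO Vendors (VendorName, VendorType, AddressLine1)
VALUES ('Odeon', 'Cinema', '1 High Street');

-- SCOPE_IDENTITY() picks up the VendorID generated above;
-- the persisted computed VendorType column supplies 'Cinema' for the two-column FK
INSERT INTO CinemaVendors (VendorID, Seating, BruceWillisMovies)
VALUES (SCOPE_IDENTITY(), 250, 5);

SELECT v.VendorName, c.Seating, c.BruceWillisMovies
FROM Vendors v
JOIN CinemaVendors c ON c.VendorID = v.VendorID;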
I have a database containing different product types. Each type contains fields that differ greatly from the others. The first type of product is classified into three categories, and the second type is also classified into three categories, but the third and the fourth are not classified at all.
Each product can have any number of different properties.
I am using a database model which is basically like the one below (see the link):
https://www.damirsystems.com/static/img/product_model_01.png
I have a huge database, containing about 500,000 products in the product table.
So when I fetch a product from the database with all its attributes, or search for products filtering by attributes, performance suffers badly.
Could anyone suggest a table structure in SQL, additional indexing, or any other feasible solution for this problem? Various e-commerce sites use this kind of database and work fine with huge numbers of different product types.
EDIT: The link to the image (on my site) is blocked, so here is the image: [product model diagram]
The model you link to looks like a partial entity–attribute–value (EAV) model. EAV is very flexible, but it offers poor data integrity, and is cumbersome and usually inefficient. It's not really in the spirit of the relational model. Having worked on some large e-commerce sites, I can tell you that this is not standard or good database design practice in this field.
If you don't have an enormous number of types of product (up to tens, but not hundreds), then you can handle this using one of two common approaches.
The first approach is simply to have a single table for products, with columns for all the attributes that might be needed in each different kind of product. You use whichever columns are appropriate to each kind of product, and leave the rest null. Say you sell books, music, and video:
create table Product (
id integer primary key,
name varchar(255) not null,
type char(1) not null check (type in ('B', 'M', 'V')),
number_of_pages integer, -- book only
duration_in_seconds integer, -- music and video only
classification varchar(2) check (classification in ('U', 'PG', '12', '15', '18')) -- video only
);
This has the advantage of being simple, and of not requiring joins. However, it doesn't do a good job of enforcing integrity on your data (you could have a book without a number of pages, for example), and if you have more than a few types of products, the table will get highly unwieldy.
You can plaster over the integrity problem with table-level check constraints that require each type of product to have values in certain columns, like this:
check ((case when type = 'B' then (number_of_pages is not null) else true end))
(Hat tip to Joe Celko there - I looked up how to do logical implication in SQL, and found an example where he uses this construction to build a very similar check constraint!)
You might even say:
check ((case when type = 'B' then (number_of_pages is not null) else (number_of_pages is null) end))
to ensure that no row has a value in a column not appropriate to its type.
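Taken together, the table-level constraints for this single-table design might look something like the following sketch (PostgreSQL-style boolean CASE expressions assumed; one constraint per type-specific column):
alter table Product add constraint book_columns_match_type
    check ((case when type = 'B' then (number_of_pages is not null)
                 else (number_of_pages is null) end));

alter table Product add constraint media_columns_match_type
    check ((case when type in ('M', 'V') then (duration_in_seconds is not null)
                 else (duration_in_seconds is null) end));

alter table Product add constraint video_columns_match_type
    check ((case when type = 'V' then (classification is not null)
                 else (classification is null) end));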
The second approach is to use multiple tables: one base table holding columns common to all products, and one auxiliary table for each type of product holding columns specific to products of that type. So:
create table Product (
id integer primary key,
type char(1) not null check (type in ('B', 'M', 'V')),
name varchar(255) not null
);
create table Book (
id integer primary key references Product,
number_of_pages integer not null
);
create table Music (
id integer primary key references Product,
duration_in_seconds integer not null
);
create table Video (
id integer primary key references Product,
duration_in_seconds integer not null,
classification varchar(2) not null check (classification in ('U', 'PG', '12', '15', '18'))
);
Note that the auxiliary tables have the same primary key as the main table; their primary key column is also a foreign key to the main table.
This approach is still fairly straightforward, and does a better job of enforcing integrity. Queries will typically involve joins, though:
select
p.id,
p.name
from
Product p
join Book b on p.id = b.id
where
b.number_of_pages > 300;
Integrity is still not perfect, because it's possible to create a row in an auxiliary table corresponding to a row of the wrong type in the main table, or to create rows in multiple auxiliary tables corresponding to a single row in the main table. You can fix that by refining the model further. If you make the primary key a composite key which includes the type column, then the type of a product is embedded in its primary key (a book would have a primary key like ('B', 1001)). You would need to introduce the type column into the auxiliary tables so that they could have foreign keys pointing to the main table, and at that point you could add a check constraint in each auxiliary table that requires the type to be correct. Like this:
create table Product (
type char(1) not null check (type in ('B', 'M', 'V')),
id integer not null,
name varchar(255) not null,
primary key (type, id)
);
create table Book (
type char(1) not null check (type = 'B'),
id integer not null,
number_of_pages integer not null,
primary key (type, id),
foreign key (type, id) references Product
);
This also makes it easier to query the right tables given only a primary key - you can immediately tell what kind of product it refers to without having to query the main table first.
There is still a potential problem of duplication of columns - as in the schema above, where the duration column is duplicated in two tables. You can fix that by introducing intermediate auxiliary tables for the shared columns:
create table Media (
type char(1) not null check (type in ('M', 'V')),
id integer not null,
duration_in_seconds integer not null,
primary key (type, id),
foreign key (type, id) references Product
);
create table Music (
type char(1) not null check (type = 'M'),
id integer not null,
primary key (type, id),
foreign key (type, id) references Product
);
create table Video (
type char(1) not null check (type = 'V'),
id integer not null,
classification varchar(2) not null check (classification in ('U', 'PG', '12', '15', '18')),
primary key (type, id),
foreign key (type, id) references Product
);
You might not think that was worth the extra effort. However, what you might consider doing is mixing the two approaches (single table and auxiliary table) to deal with situations like this, and having a shared table for some similar kinds of products:
create table Media (
type char(1) not null check (type in ('M', 'V')),
id integer not null,
duration_in_seconds integer not null,
classification varchar(2) check (classification in ('U', 'PG', '12', '15', '18')),
primary key (type, id),
foreign key (type, id) references Product,
check ((case when type = 'V' then (classification is not null) else (classification is null) end))
);
That would be particularly useful if there were similar kinds of products that were lumped together in the application. In this example, if your shopfront presents audio and video together, but separately to books, then this structure could support much more efficient retrieval than having separate auxiliary tables for each kind of media.
All of these approaches share a loophole: it's still possible to create rows in the main table without corresponding rows in any auxiliary table. To fix this, you need a second set of foreign key constraints, this time from the main table to the auxiliary tables. This is particularly fun for a couple of reasons: you want exactly one of the possible foreign key relationships to be enforced at once, and the relationship creates a circular dependency between rows in the two tables. You can achieve the former using some conditionals in check constraints, and the latter using deferrable constraints. The auxiliary tables can be the same as above, but the main table needs to grow what I will tentatively call 'type flag' columns:
create table Product (
type char(1) not null check (type in ('B', 'M', 'V')),
id integer not null,
is_book char(1) null check (is_book is not distinct from (case type when 'B' then type else null end)),
is_music char(1) null check (is_music is not distinct from (case type when 'M' then type else null end)),
is_video char(1) null check (is_video is not distinct from (case type when 'V' then type else null end)),
name varchar(255) not null,
primary key (type, id)
);
The type flag columns are essentially repetitions of the type column, one for each potential type, which are set if and only if the product is of that type (as enforced by those check constraints). These are real columns, so values will have to be supplied for them when inserting rows, even though the values are completely predictable; this is a bit ugly, but hopefully not a showstopper.
With those in place, then after all the tables are created, you can form foreign keys using the type flags instead of the type, pointing to specific auxiliary tables:
alter table Product add foreign key (is_book, id) references Book deferrable initially deferred;
alter table Product add foreign key (is_music, id) references Music deferrable initially deferred;
alter table Product add foreign key (is_video, id) references Video deferrable initially deferred;
Crucially, for a foreign key relationship to be enforced, all its constituent columns must be non-null. Therefore, for any given row, because only one type flag is non-null, only one relationship will be enforced. Because these constraints are deferrable, it is possible to insert a row into the main table before the required row in the auxiliary table exists. As long as it is inserted before the transaction is committed, it's all above board.
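To make the moving parts concrete, here is a sketch of the insert order for a book under these deferred constraints (PostgreSQL transaction syntax; the id and name values are made up):
begin;

-- The deferred foreign key to Book lets this row exist temporarily without its Book row
insert into Product (type, id, is_book, is_music, is_video, name)
values ('B', 1001, 'B', null, null, 'An Example Book');

-- The Book row satisfies both directions of the relationship
insert into Book (type, id, number_of_pages)
values ('B', 1001, 320);

commit;  -- all deferred foreign keys are checked here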
I just need some confirmation whether a database designed like this is fine or not, and if not, whether I am doing something wrong here.
I have following tables:
TableA{TableAID,...}
TableB{TableBID,...}
TableC{TableCID,...}
etc.
And I have one table that I use as a kind of 'news feed'. When I add something to any of tables A, B, C, I also add a row to this table.
Feed{FeedID, TypeID, ReferenceID,...}
FeedID is the PK, auto-increment.
TypeID is a number that references a types table; based on this ID I know whether the row refers to table A, B, or C.
ReferenceID is the ID of the item in tables A, B, C.
A, B, C tables all have different fields.
Now when I want to get the feed data I also need to grab some data from each of these tables to use in the application. My query to get this uses a lot of CASE clauses in the SELECT, like:
I first join all the tables (A, B, C) in the query:
...
CASE Feed.TypeId
WHEN 1 THEN tableA.someData
WHEN 2 THEN tableB.someData
WHEN 3 THEN tableC.someData
END AS Data,
...
Without getting into the suitability of this for a specific purpose, your supertype-subtype model is "reversed".
So DDL looks something like
CREATE TABLE Feed (
FeedID integer IDENTITY(1,1) not null
, FeedType char(1) not null
-- Common_Columns_Here
, Common_Column varchar(20)
);
ALTER TABLE Feed ADD CONSTRAINT pk_Feed PRIMARY KEY (FeedID) ;
CREATE TABLE Feed_A (
FeedID integer not null
-- A_Specific_Columns_Here
, A_Specific_Column varchar(20)
);
ALTER TABLE Feed_A ADD
CONSTRAINT pk_Feed_A PRIMARY KEY (FeedID)
, CONSTRAINT fk1_Feed_A FOREIGN KEY (FeedID) REFERENCES Feed(FeedID) ;
CREATE TABLE Feed_B (
FeedID integer not null
-- B_Specific_Columns_Here
, B_Specific_Column varchar(20)
);
ALTER TABLE Feed_B ADD
CONSTRAINT pk_Feed_B PRIMARY KEY (FeedID)
, CONSTRAINT fk1_Feed_B FOREIGN KEY (FeedID) REFERENCES Feed(FeedID) ;
CREATE TABLE Feed_C (
FeedID integer not null
-- C_Specific_Columns_Here
, C_Specific_Column varchar(20)
);
ALTER TABLE Feed_C ADD
CONSTRAINT pk_Feed_C PRIMARY KEY (FeedID)
, CONSTRAINT fk1_Feed_C FOREIGN KEY (FeedID) REFERENCES Feed(FeedID) ;
Now, in order to read from this structure, create a view first
create view vFeed as
select
f.FeedID
, FeedType
, Common_Column
, A_Specific_Column
, B_Specific_Column
, C_Specific_Column
from Feed as f
left join Feed_A as a on (a.FeedID = f.FeedID and f.FeedType = 'A')
left join Feed_B as b on (b.FeedID = f.FeedID and f.FeedType = 'B')
left join Feed_C as c on (c.FeedID = f.FeedID and f.FeedType = 'C')
;
Look what happens when I want to select data which I know is from feed A. Note that FeedType is not specified in this query, only a column name which belongs to Feed_A (plus the common column).
select
FeedID
, Common_Column
, A_Specific_Column
from vFeed;
Notice that the execution plan shows only the Feed and Feed_A tables; the query optimizer eliminated tables _B and _C - no need to touch those two.
In other words, you can ask for a specific feed's data by simply using only its specific columns in a query, and let the optimizer sort everything else out - no need for the CASE ... WHEN ... acrobatics from your example.
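Adding a new entry of type A is then two inserts, something like this (SCOPE_IDENTITY() and the values are just for illustration):
insert into Feed (FeedType, Common_Column) values ('A', 'shared value');

-- SCOPE_IDENTITY() captures the FeedID generated by the insert above
insert into Feed_A (FeedID, A_Specific_Column)
values (SCOPE_IDENTITY(), 'A-only value');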
As I suggested in my comment (and along with @Andomar's wisdom), I think something like this would work better:
CREATE TABLE dbo.FeedTypes
(
FeedTypeID INT IDENTITY(1,1) PRIMARY KEY,
SomedataA INT,
SomedataB VARCHAR(32),
SomedataC DATETIME
--, ... other columns
);
CREATE TABLE dbo.Feeds
(
FeedID INT IDENTITY(1,1) PRIMARY KEY,
FeedTypeID INT NOT NULL FOREIGN KEY
REFERENCES dbo.FeedTypes(FeedTypeID)
--, ... other columns
);
You could enforce the presence/absence of data in the relevant columns for a given type using complex check constraints or triggers. But you'd have to have pretty complex logic (as you would in your current model) if a feed can change types easily.
Add all the data you wish to display in the "News Feed" in the Feed table. It is duplicate data, but it will make your life a lot easier in the long run.
It also ensures that your newsfeed stays historically correct. This means that when I update a record in one of the three tables, the "old" feed data stays intact instead of being updated with the new values.