Database 1:N tables structure, two approaches (one or multiple tables)

Let's assume we have an application with pages, posts and events, and we want each part of this application to have comments. Now let's take a look at the tables for our DB.
1. One comments table, with object and object_id as the foreign key
Page/Post/Event has many comments; the foreign key is the pair object, object_id.
comments table
+----+--------+-----------+-----------+
| id | object | object_id | text      |
+====+========+===========+===========+
|  1 | Page   |         1 | Comment 1 |
|  2 | Post   |         1 | Comment 2 |
|  3 | Event  |         1 | Comment 3 |
+----+--------+-----------+-----------+
2. Multiple comment tables
Page (and likewise Post and Event) has many page comments; the foreign key is page_id.
page_comments table
+----+---------+-----------+
| id | page_id | text      |
+====+=========+===========+
|  1 |       1 | Comment 1 |
+----+---------+-----------+
post_comments table
+----+---------+-----------+
| id | post_id | text      |
+====+=========+===========+
|  1 |       1 | Comment 2 |
+----+---------+-----------+
event_comments table
+----+----------+-----------+
| id | event_id | text      |
+====+==========+===========+
|  1 |        1 | Comment 3 |
+----+----------+-----------+
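For concreteness, here is a sketch of the two designs as DDL. The column types are assumptions, and case 2 presumes a pages table with an id column; only the table and column names come from the example above.

-- Case 1: one polymorphic comments table. Note that the pair
-- (object, object_id) cannot be declared as a real FOREIGN KEY,
-- because it points at different tables depending on the object value.
CREATE TABLE comments (
    id        INT PRIMARY KEY,
    object    VARCHAR(20) NOT NULL,  -- 'Page', 'Post' or 'Event'
    object_id INT NOT NULL,
    text      VARCHAR(1000)
);

-- Case 2: one comment table per parent; post_comments and
-- event_comments are analogous.
CREATE TABLE page_comments (
    id      INT PRIMARY KEY,
    page_id INT NOT NULL REFERENCES pages (id),
    text    VARCHAR(1000)
);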
I have used a specific example, but this applies to any other 1:N tables, and even to M:N (e.g. tags); for a simple showcase this should be good.
We should discuss performance concerns and design pros and cons.
Initial thoughts
case 1 means fewer tables in the DB, is easier to read, and allows reusable application code
case 1 is better when querying all comments at once (case 2 would need a UNION; see the sketch below)
case 2 is better in terms of normalization (3NF), and only there can the foreign keys actually be declared and enforced
case 2 makes it easier to back up (dump) parts of the system, e.g. the pages together with their comments
case 2 should perform better, because fewer rows per table => faster queries
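To illustrate the query point, a sketch of the "all comments" query under each design, using the tables above:

-- Case 1: a single query covers all comments
SELECT object, object_id, text
FROM comments;

-- Case 2: the same result requires a UNION across the per-type tables
SELECT 'Page' AS object, page_id AS object_id, text FROM page_comments
UNION ALL
SELECT 'Post', post_id, text FROM post_comments
UNION ALL
SELECT 'Event', event_id, text FROM event_comments;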

Related

Using tables to categorise resources with

I'm trying to design a database that allows for filtering according to whether a specific resource fits certain categories. I've gotten to the point where I can input data that seems to be how it should be filled out, but I'm not sure how I should pull it out again.
The main resource table looks like this:
Table1 - resources
| resourceID   | AutoNum    |
| title        | short text |
| author       | short text |
| publish date | date       |
| type         | short text |
Table2 - Department Categories
| ID  | AutoNum |
| 1   | Yes/No  |
| 2   | Yes/No  |
| fID | Number  |
Table3 - Categories
| ID   | AutoNum |
| cat  | Yes/No  |
| dog  | Yes/No  |
| bird | Yes/No  |
| fID  | Number  |
I have built a form where you can fill in items for the resource ID and at the same time check off the Yes/No boxes in tables 2 & 3.
I'm copying the primary key ID from table 1 into tables 2 & 3, with referential integrity set to cascade deletes and updates, which I think is the right way to do this.
So far I've implemented a search function for the columns in table 1, and this seems to work fine. However, I am stuck on applying the relevant columns in tables 2 and 3 as filters.
apply search>
[X] - Cats
Checking this box should only return records from table 1 where the relevant column in table 3 has a tick in its Yes/No box.
I hope I have explained this properly; I'm very new to Access and databases, so if you need clarification, just ask.
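For what it's worth, a sketch of the kind of Access SQL that would apply such a filter; it assumes the fID column in Categories holds the resourceID of the related resource (names are taken from the tables above):

SELECT r.*
FROM resources AS r
INNER JOIN Categories AS c
       ON c.fID = r.resourceID
WHERE c.cat = True;

Each additional checked box would add another condition (e.g. AND c.dog = True), and the Department Categories table can be joined and filtered the same way.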

How to design a database for types and categories in Laravel?

As the question states, what is the best way to design a database for types and categories?
Scenario:
I have a number of database tables, e.g. users, feedback, facts and countries, and all these tables have a type attribute. What I've found is that a lot of people tend to just create type tables for each one of these, e.g. user_types, feedback_types, fact_types and country_types.
I'm currently working on a project where I don't want to create a bunch of extra tables just to handle their individual types. Therefore I'm trying to come up with a database design that fits all tables.
My best idea for a solution:
At first I thought I might just create a polymorphic table that has id, type_id, typable_id and typable_type, plus a types table. Then I figured that I would have to specify in the types table which type attribute belongs to which table. Then it hit me: I can create a self-referencing table where the parent row's name is the table name.
E.g.
----------------------------------------------
| id | parent_id | name        | description |
----------------------------------------------
|  1 | null      | feedback    | something   |
----------------------------------------------
|  2 | 1         | general     | something   |
----------------------------------------------
|  3 | 1         | bug         | something   |
----------------------------------------------
|  4 | 1         | improvement | something   |
----------------------------------------------
|  5 | null      | countries   | something   |
----------------------------------------------
|  6 | 5         | europe      | something   |
----------------------------------------------
|  7 | 5         | asia        | something   |
----------------------------------------------
| etc...                                      |
----------------------------------------------
Is this an OK design? I'm thinking a lot about the parent names in this table; I haven't seen anyone else use table names as parents.
From a front-end point of view, it's easier to get the correct types depending on which types you're looking for.
Please give me feedback on this. I'm struggling to find a good design.
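A sketch of that design in SQL, reusing the rows from the table above (the column types are assumptions):

CREATE TABLE types (
    id          INT PRIMARY KEY,
    parent_id   INT NULL REFERENCES types (id),  -- null marks a "table name" root row
    name        VARCHAR(100) NOT NULL,
    description VARCHAR(255)
);

-- All feedback types: find the root row named after the table,
-- then select its children
SELECT child.name, child.description
FROM types AS parent
JOIN types AS child ON child.parent_id = parent.id
WHERE parent.parent_id IS NULL
  AND parent.name = 'feedback';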

What's the fastest way to perform large inserts with foreign key relationships and preprocessing?

I need to regularly import large (hundreds of thousands of lines) tsv files into multiple related SQL Server 2008 R2 tables.
The input file looks something like this (it's actually even more complex and the data is of a different nature, but what I have here is analogous):
January_1_Lunch.tsv
+-------+----------+-------------+---------+
| Diner | Beverage | Food | Dessert |
+-------+----------+-------------+---------+
| Nancy | coffee | salad_steak | pie |
| Joe | milk | soup_steak | cake |
| Pat | coffee | soup_tofu | pie |
+-------+----------+-------------+---------+
Notice that one column contains a character-delimited list that needs preprocessing to split it up.
The schema is highly normalized -- each record has multiple many-to-many foreign key relationships. Nothing too unusual here...
Meals
+----+-----------------+
| id | name |
+----+-----------------+
| 1 | January_1_Lunch |
+----+-----------------+
Beverages
+----+--------+
| id | name |
+----+--------+
| 1 | coffee |
| 2 | milk |
+----+--------+
Food
+----+-------+
| id | name |
+----+-------+
| 1 | salad |
| 2 | soup |
| 3 | steak |
| 4 | tofu |
+----+-------+
Desserts
+----+------+
| id | name |
+----+------+
| 1 | pie |
| 2 | cake |
+----+------+
Each input column is ultimately destined for a separate table.
This might seem an unnecessarily complex schema -- why not just have a single table that matches the input? But consider that a diner may come into the restaurant and order only a drink or a dessert, in which case there would be many null rows. Considering that this DB will ultimately store hundreds of millions of records, that seems like a poor use of storage. I also want to be able to generate reports for just beverages, just desserts, etc., and I figure those will perform much better with separate tables.
The orders are tracked in relationship tables like this:
BeverageOrders
+--------+---------+------------+
| mealId | dinerId | beverageId |
+--------+---------+------------+
| 1 | 1 | 1 |
| 1 | 2 | 2 |
| 1 | 3 | 1 |
+--------+---------+------------+
FoodOrders
+--------+---------+--------+
| mealId | dinerId | foodId |
+--------+---------+--------+
| 1 | 1 | 1 |
| 1 | 1 | 3 |
| 1 | 2 | 2 |
| 1 | 2 | 3 |
| 1 | 3 | 2 |
| 1 | 3 | 4 |
+--------+---------+--------+
DessertOrders
+--------+---------+-----------+
| mealId | dinerId | dessertId |
+--------+---------+-----------+
| 1 | 1 | 1 |
| 1 | 2 | 2 |
| 1 | 3 | 1 |
+--------+---------+-----------+
Note that there are more records for Food because the input contained those nasty little lists that were split into multiple records. This is another reason it helps to have separate tables.
So the question is, what's the most efficient way to get the data from the file into the schema you see above?
Approaches I've considered:
Parse the tsv file line-by-line, performing the inserts as I go. Whether using an ORM or not, this seems like a lot of trips to the database and would be very slow.
Parse the tsv file to data structures in memory, or multiple files on disk, that correspond to the schema. Then use SqlBulkCopy to import each one. While it's fewer transactions, it seems more expensive than simply performing lots of inserts, due to having to either cache a lot of data or perform many writes to disk.
Per "How do I bulk insert two datatables that have an Identity relationship" and "Best practices for inserting/updating large amount of data in SQL Server 2008", import the tsv file into a staging table, then merge into the schema using DB functions to do the preprocessing. This seems like the best option, but I'd think the validation and preprocessing could be done more efficiently in C# or really anything else.
Are there any other possibilities out there?
The schema is still under development so I can revise it if that ends up being the sticking point.
You can import your file into a table with the following structure: Diner, Beverage, Food, Dessert, ID (identity, primary key NOT CLUSTERED, for performance reasons).
After this, simply add the columns Diner_ID, Beverage_ID and Dessert_ID and fill them from your lookup tables (it's simple to group each of the columns, add the missing values to lookup tables such as Beverages, Desserts and Meals, and then update the imported table with the IDs of both existing and newly added records).
The situation with the Food table is more complex because foods can be combined, but the same trick can be used: add the data to your lookup table and, in addition, store the food combinations in an extra temp table (with a unique ID per combination) along with their separation into single dishes.
When the parsing is finished, you will have 3 temp tables:
a table with all your imported data and IDs for all text columns
a table with the distinct food lists (with IDs)
a table with the IDs of the foods in each food combination
From these tables you can insert the parsed values into whatever structure you want.
In this case only 1 (bulk) insert is done to the DB from the code side. All other data manipulation is performed in the DB.
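A sketch of that flow in T-SQL, with Beverage standing in for the other simple columns. The file path is hypothetical, the GO separators assume the script runs batch by batch, and the lookup tables are assumed to have identity id columns:

-- Staging table matching the file; the identity is added afterwards so
-- BULK INSERT's column count still matches the file
CREATE TABLE Staging (
    Diner    VARCHAR(100),
    Beverage VARCHAR(100),
    Food     VARCHAR(200),  -- still the underscore-delimited list
    Dessert  VARCHAR(100)
);

BULK INSERT Staging
FROM 'C:\data\January_1_Lunch.tsv'
WITH (FIELDTERMINATOR = '\t', ROWTERMINATOR = '\n', FIRSTROW = 2);

ALTER TABLE Staging ADD ID INT IDENTITY(1,1) NOT NULL,
                        Beverage_ID INT;
GO
ALTER TABLE Staging ADD CONSTRAINT PK_Staging PRIMARY KEY NONCLUSTERED (ID);
GO

-- Add beverages missing from the lookup table...
INSERT INTO Beverages (name)
SELECT DISTINCT s.Beverage
FROM Staging AS s
WHERE NOT EXISTS (SELECT 1 FROM Beverages AS b WHERE b.name = s.Beverage);

-- ...then resolve the IDs back onto the staging rows
UPDATE s
SET s.Beverage_ID = b.id
FROM Staging AS s
JOIN Beverages AS b ON b.name = s.Beverage;

-- Food needs its underscore-delimited list split first; SQL Server 2008 has
-- no STRING_SPLIT, so a numbers-table or XML-based splitter is needed there.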

Relationships Between Tables in MS Access

I'm completely new to databases and am having some difficulty setting up relationships between 3 tables in MS Access 2013.
The idea is that I have a table with accounts info, a table with calls related to these accounts, and also one table with all the possible call responses. I have tried different combinations between them, but nothing works.
1st table - Accounts : AccountID(PK) | AccountName | Language | Country | Email
2nd table - Calls : CallID(PK) | Account | Response | Comment | Date
3rd table - Responses: ResponseID(PK) | Response
When you have a table, it usually has a Primary Key field that is the main index of the table. To connect it with other tables, you usually set a Foreign Key in the other table.
Let's say you have your Accounts table, and it has the AccountID field as Primary Key. This field is unique (meaning no duplicate values in this field).
Now you have the other table, called Calls, with a Foreign Key field called AccountID that points to the Accounts table.
Essentially you have Accounts with the following data:
AccountID | AccountName | Language | Country | Email
1         | FirstName   | EN       | US      | some#email.com
2         | SecondName  | EN       | US      | some#email.com
Now you have the other table Calls with Many calls
CallID(PK) | AccountID(FK) | ResponseID(FK) | Comment   | Date
1          | 1             | 1              | a comment | 26/10
2          | 1             | 1              | a comment | 26/10
3          | 2             | 3              | a comment | 26/10
4          | 2             | 3              | a comment | 26/10
You can see the one-to-many relationship: one AccountID (in my example AccountID=1) to many Calls (in my example, 2 rows with AccountID=1 as the foreign key, rows 1 & 2), and AccountID=2 also has 2 rows of Calls (rows 3 and 4).
The same goes for the Responses table.
Using this table structure:
Accounts : AccountID(PK) | AccountName | Language | Country | Email
Calls : CallID(PK) | AccountID(FK) | ResponseID(FK) | Comment | Date
Responses: ResponseID(PK) | Response
Accounts.AccountID is referenced by Calls.AccountID. 1:n – many calls are possible for one account, but each call concerns just one account.
Responses.ResponseID is referenced by Calls.ResponseID. 1:n – many calls can get the same response from the prepared set, but each call gets exactly one of them.
To actually define the Relationships in Access, open the Relationships window...
... then follow the detailed instructions here:
How to define relationships between tables in an Access database
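For reference, the same structure sketched as Access DDL. The data types are assumptions; Date is renamed CallDate because Date is a reserved word, and Language is bracketed to be safe:

CREATE TABLE Accounts (
    AccountID   COUNTER PRIMARY KEY,
    AccountName TEXT(100),
    [Language]  TEXT(50),
    Country     TEXT(50),
    Email       TEXT(100)
);

CREATE TABLE Responses (
    ResponseID COUNTER PRIMARY KEY,
    Response   TEXT(255)
);

CREATE TABLE Calls (
    CallID     COUNTER PRIMARY KEY,
    AccountID  LONG CONSTRAINT FK_Calls_Accounts REFERENCES Accounts (AccountID),
    ResponseID LONG CONSTRAINT FK_Calls_Responses REFERENCES Responses (ResponseID),
    Comment    TEXT(255),
    CallDate   DATETIME
);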

Convert table to 2NF - strange table to convert?

I have the table below and am supposed to convert it to 2NF.
I have an answer to this where I have gone:
SKILLS: Employee, Skill
LOCATION: Employee, Current Work Location
I have a feeling I'm wrong with this ^above^ though.
Also, can someone explain the differences between 1NF, 2NF and 3NF? I know 1NF comes first and that you have to break everything up into smaller tables, but I would like a really good description to help me understand better. Thanks.
I am new to learning 2NF, but I have worked the answer out like this. Let me know if this is correct so that I can understand my mistakes and practice more.
Only two tables. Thanks
Employee Table
EmployeeID | Name
1          | Jones
2          | Bravo
3          | Ellis
4          | Harrison
Skills Table
SkillId | Skill
1       | Typing
2       | Shorthand
3       | Whittling
4       | Light Cleaning
5       | Alchemy
6       | Juggling
Location Table
LocationId | Name
1          | 114 Main Street
2          | 73 Industrial Way
EmployeeSkill Table
EmployeeId | LocationId | SkillId | SkillName
1          | 1          | 1       | Typing
1          | 1          | 2       | Shorthand
1          | 1          | 3       | Whittling
2          | 2          | 4       | Light Cleaning
3          | 2          | 5       | Alchemy
3          | 2          | 6       | Juggling
4          | 2          | 4       | Light Cleaning
In the EmployeeSkill table the primary key would be EmployeeId + SkillId (EmployeeId + LocationId alone is not unique here, since employee 1 has three skills at location 1); LocationId then records where they have that skill. Including the SkillName column violates 2NF in this example, since it depends on SkillId alone, part of the key, and is already stored in the Skills table.
This practice is actually used sometimes in database design (it's called "denormalization") in order to reduce joins and so speed up reads of data that is commonly used together.
Usually this is only done in tables used for reporting.
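For comparison, without the SkillName column the same listing comes back with a join (table names from the example above):

SELECT es.EmployeeId, l.Name AS Location, s.Skill
FROM EmployeeSkill AS es
JOIN Skills   AS s ON s.SkillId    = es.SkillId     -- recover the skill name
JOIN Location AS l ON l.LocationId = es.LocationId; -- and the location name

Keeping SkillName in EmployeeSkill saves the first join for read-heavy reporting, at the cost of storing each skill name redundantly.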
