Using a JSON field instead of ManyToManyField?

Suppose reviews can have zero or more tags.
One could implement this using three tables: Review, Tag, and ReviewTagRelation, where ReviewTagRelation has foreign keys to the Review and Tag tables.
Or using two tables, Review and Tag, where Review has a JSON field holding the list of tag IDs.
The traditional approach seems to be the three-table one.
I wonder if it is OK to use the two-table approach when there's no need to reference reviews from tags,
i.e. I only need to know which tags are associated with a given review.
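For concreteness, here is a minimal sketch of the two-table variant being asked about (PostgreSQL-flavoured; the table and column names are assumptions for illustration, not from the original post):

    CREATE TABLE tag (
        tag_id  SERIAL PRIMARY KEY,
        name    TEXT NOT NULL UNIQUE
    );

    CREATE TABLE review (
        review_id  SERIAL PRIMARY KEY,
        body       TEXT NOT NULL,
        tag_ids    JSONB NOT NULL DEFAULT '[]'   -- e.g. [3, 17, 42]
    );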

In my experience it is always best to keep the data in your database normalized, unless there is a clear-cut reason for not doing so that makes sense for your business requirements.
With normalized data you know that, no matter what, you will always be able to write a query that returns exactly what you are looking for; and if for some reason you want to return data as JSON, you can do so in your SELECT query.
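To illustrate that last point, here is a sketch of the normalized three-table counterpart: the review and tag tables as above (minus the tag_ids column), plus a junction table, and a query that returns a review's tags as a JSON array (PostgreSQL syntax; names are illustrative):

    CREATE TABLE review_tag (
        review_id  INT NOT NULL REFERENCES review,
        tag_id     INT NOT NULL REFERENCES tag,
        PRIMARY KEY (review_id, tag_id)
    );

    -- Tags of one review, returned as JSON:
    SELECT r.review_id, json_agg(t.name) AS tags
    FROM review r
    JOIN review_tag rt ON rt.review_id = r.review_id
    JOIN tag t         ON t.tag_id     = rt.tag_id
    WHERE r.review_id = 42
    GROUP BY r.review_id;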

Related

Data Modeling: Is it bad practice to store IDs from various sources in the same column?

I am attempting to merge data from various sources into an existing data model. Each source uses different types of IDs (such as GUID, Salesforce IDs, etc.). For example, if I were to merge data from two different sources, the table may look like the following (where the first two SalesPersonIDs are GUID IDs and the second two are Salesforce IDs):
Is this a bad practice? I could also imagine a table where each ID type was its own column and could be left blank if it was not applicable. Something like the following:
I apologize, I am a bit new to this. Thanks in advance for any insight, I greatly appreciate it!
The big roles of an ID column are to act as a key connecting data in different tables, and to support indexing, so rows can be found quickly and your queries run fast.
The second solution wouldn't work well for these purposes, and will lead to big headaches in queries: every time you want to group by the ID, you'll have to combine the info from two columns in some way and hope you get a correct, unique result every time.
On the one hand, all you might ever need from an ID is for it to be unique. The first solution might be fine in this respect - but are you sure you'll never, ever get data about one SalesPerson from more than one source?
I'd suggest keeping all the IDs in one column, and adding a column to say what kind of ID this is. At least this way you won't lose any information, and you keep your options open for the future.
One thing you might consider is making a separate table of SalesPerson with all their possible IDs, and have this keyed to other (Sales?) data by a unique ID used only in your database.
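A hypothetical sketch of that last suggestion, with one internal surrogate key and every external ID stored alongside its type (all names here are made up for illustration):

    CREATE TABLE sales_person (
        sales_person_id  SERIAL PRIMARY KEY,   -- used only inside this database
        name             TEXT NOT NULL
    );

    CREATE TABLE sales_person_external_id (
        sales_person_id  INT  NOT NULL REFERENCES sales_person,
        id_type          TEXT NOT NULL,        -- e.g. 'GUID', 'SALESFORCE'
        external_id      TEXT NOT NULL,
        PRIMARY KEY (id_type, external_id)
    );

    -- Sales data then references sales_person_id, never the raw source IDs.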

Implementing tag system with multiple tables

I am trying to implement a tag system similar to the one StackOverflow has. Obviously I've read multiple articles, including this answer.
However, my scenario is a little bit different:
there will be a limited number of tags which can only be created by a user with higher privileges (anybody can assign a tag, though). This excludes option #1 (from the SO question I linked above, where each tag is inserted directly into the table's tags column and then queried with LIKE), I guess
there are also multiple tables in the DB which can be tagged (currently five)
The second criterion especially makes it harder, so these are my thoughts:
I could follow option #3: have a tags table with an M:N relationship to each table. However, that would make searching harder (imagine that join as the number of tables grows), and in a search result I also need to tell which table (application module) matched the tag
I could use some kind of polymorphism, but I am pretty new to this concept as it relates to databases, so is this something which fits this problem well?
I use the newest version of PostgreSQL.
Since you are using PostgreSQL, you have the option of some field types which aren't available in other databases, particularly arrays and JSON fields. I did some performance comparisons of the various methods in a blog post. Arrays and JSONB were definitely better options than a tags table for any search which needed to combine multiple tags.
Given that, I would recommend creating a tags column for each table on which you want to have tags, either an array or a JSONB column, depending on your needs. If you need to search over multiple tables, I'd suggest a UNION query instead of having a single monolithic tags table which joins to everything.
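As a rough sketch of the array-column approach (PostgreSQL; all table and tag names here are assumptions for illustration):

    CREATE TABLE article (
        article_id  SERIAL PRIMARY KEY,
        title       TEXT NOT NULL,
        tags        TEXT[] NOT NULL DEFAULT '{}'
    );

    -- A GIN index makes tag-containment searches fast:
    CREATE INDEX idx_article_tags ON article USING GIN (tags);

    -- All articles carrying both tags:
    SELECT * FROM article WHERE tags @> ARRAY['postgres', 'performance'];

    -- Searching across several taggable tables, labelling the source module
    -- (assuming a second taggable table named event):
    SELECT 'article' AS module, article_id AS id FROM article WHERE tags @> ARRAY['postgres']
    UNION ALL
    SELECT 'event', event_id FROM event WHERE tags @> ARRAY['postgres'];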

Parent child design to easily identify child type

In our database design we have a couple of tables that describe different objects which are all of the same basic type. As describing the actual tables and what each column does would take a long time, I'm going to simplify it with a similarly structured example based on a job database.
So say we have the following tables:
These tables have no connections between each other but share identical columns. So the first step was to unify the identical columns and introduce a unique personId:
Now we have the "header" columns in Person, which are linked to the more specific job tables in a 1:1 relation using the personId PK as the FK. In our use case a person can only ever have one job, so the personId is also unique across the Taxi driver, Programmer and Construction worker tables.
While this structure works, we now have the use case where our application gets a personId and wants the data of the respective job table. This leads to the problem that we can't immediately know what kind of job the person with this personId is doing.
A few options we came up with to solve this issue:
Deal with it in the backend
This means leaving the architecture as it is and looking for the right table in the backend code. This could mean looking through every table present and/or constructing a semi-complicated join select in which we have to sift through all columns to find the ones which are filled.
All in all: possible, but it means a lot of unnecessary selects. We would also like to keep such database-oriented logic in the actual database.
Using a Type Field
This means adding a Type column to the Person table, filled for example with numbers to determine the correct child table, like:
So you could put a 0 in Type if it's a taxi driver, a 1 if it's a programmer, and so on...
While this greatly reduces the amount of backend logic, we then have to make sure that the numbers we use in the Type field are known in the backend and never change.
Use separate IDs for each table
That means every job gets its own ID column (which has to be nullable) in Person, like:
Now it's easy to find out which job each person has, because the other IDs are empty.
So my question is: which one of these designs is best practice? Am I missing an obvious solution here?
Bill Karwin gave a good explanation of a similar problem: https://stackoverflow.com/a/695860/7451039
We've now decided to go with the second option, because it seems to come with the fewest drawbacks, as described by the other commenters and posters. As there was no actual answer presenting the second option as a solution, I will try to summarize our reasoning:
Against Option 1:
There is no way to distinguish the type by looking at the parent table. As a result, the backend would have to include all the logic for scanning every child table for the one that contains the ID. While you can compress most of that logic into a single big join select, it would still be much more logic than the other options require.
Against Option 3:
As #yuri-g said, this one is technically not possible, as the separate IDs could not be set up as primary keys. They would have to be nullable, essentially rendering the parent table useless, as one of the reasons for it was to have a unique personId across the tables.
Against a single table containing all columns:
For smaller use cases like the one I described in the question this might be viable, but we are talking about a bunch of tables, each having roughly 2-6 columns. This option would turn into a column mess really quickly.
Against a flat design with a key-value table:
Our properties have completely different data types, different constraints and foreign key relations. All of this would be impossible or difficult in this design.
Against custom database objects containing the child-specific properties:
While this option, which #Matthew McPeak suggested, might be viable for a lot of people, our database design never really used objects, so introducing them into the mix would likely cause more confusion than it would help us.
In favor of the second option:
This option is easy to use in our table-oriented database structure, makes it easy to distinguish the proper child table, and does not need a lot of reworking to introduce. Especially since we already have something similar to a Type table that we can easily use for this purpose.
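For reference, a rough sketch of what that Type-column design might look like in SQL (all names here are illustrative, not the actual schema):

    CREATE TABLE job_type (
        job_type_id  INT PRIMARY KEY,
        description  TEXT NOT NULL          -- 'taxi driver', 'programmer', ...
    );

    CREATE TABLE person (
        person_id    SERIAL PRIMARY KEY,
        name         TEXT NOT NULL,
        job_type_id  INT NOT NULL REFERENCES job_type
    );

    CREATE TABLE programmer (
        person_id  INT PRIMARY KEY REFERENCES person,   -- 1:1 with person
        language   TEXT NOT NULL
    );
    -- ...one such child table per job type.

    -- Query 1: find out which child table to read.
    SELECT job_type_id FROM person WHERE person_id = 7;
    -- Query 2: fetch the job-specific row from the table that type names.
    SELECT * FROM programmer WHERE person_id = 7;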
The third option, as you describe it, is impossible: no RDBMS (at least none I personally know of) would allow you to use NULLs in a PK (even a composite one).
The second is realistic.
And yes, the first would take up to N queries to poll the child tables in order to determine the actual type (where N is the number of types).
Although you won't escape with one query in the second case either: there will always be two of them, because you can't JOIN unless you know what exactly you should be joining.
So basically there are flaws in your design, and you should consider other options.
For example, denormalization: pull the non-shared attributes into the parent table anyway; those fields then become NULL for non-corresponding types.
Or a flexible, flat list of attribute-value pairs related through the primary key (yes, losing schema enforcement is the trade-off).
Or switch to a column-oriented DB: this is a use case for one.

Dynamic database structure

I would like some database/programming suggestions on a specific issue.
I have 5 different people (who live in different parts of the world) that provide me with data. This data is given to me in a wide variety of ways, but following a standard structure layout. However, it's not always harmonized: the data might have extra things that are not in the standard, so I'd like the structure to be as dynamic as possible to accommodate what each person wants to use.
These 5 data sources are then placed inside a central database I host. So basically I have 5 data sources that are formatted following a standard structure, and they are uploaded to my local database.
I want to automate the upload of this data as much as possible for the person providing it, so they can upload new sets of data that are automatically inserted into my local DB.
My questions are:
How should I keep the structure dynamic without having to revisit my standard layout to accommodate new fields of data, or different structure?
How do I make them upload data in a way that is incremental? For example they might be uploading an XML version of their data, my upload code should figure out what already exists.
My final and most important question. Are there better ways of going about this instead of having an upload infrastructure?
How should I keep the structure dynamic without having to revisit my standard layout to accommodate new fields of data, or different structure?
Basically, you pivot the normal database idea of columns and rows.
You have a data name table, which consists of the unique names of the fields of data, and an indicator to tell the import process what type of data is stored, like a date, timestamp, or integer.
You have a data table, which contains the data name id, a sequence number, the data field, and a foreign key to identifying information.
The sequence number is used to differentiate between different values of the same data name.
The data field holds every type of data possible. This would be a large VARCHAR (VARCHAR(MAX) in SQL Server, for example) in most databases. It's up to the upload process to convert dates and numbers to strings.
Your data table will have a foreign key to the rest of the information that identifies who the data field belongs to.
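A minimal sketch of that pivoted (entity-attribute-value) layout, with illustrative names:

    CREATE TABLE data_source (
        source_id  SERIAL PRIMARY KEY,      -- who the data belongs to
        name       TEXT NOT NULL
    );

    CREATE TABLE data_name (
        data_name_id  SERIAL PRIMARY KEY,
        name          TEXT NOT NULL UNIQUE,
        data_type     TEXT NOT NULL         -- 'DATE', 'TIMESTAMP', 'INTEGER', ...
    );

    CREATE TABLE data_value (
        source_id     INT NOT NULL REFERENCES data_source,
        data_name_id  INT NOT NULL REFERENCES data_name,
        sequence_no   INT NOT NULL DEFAULT 1,   -- repeated values of one name
        value         TEXT NOT NULL,            -- everything stored as text
        PRIMARY KEY (source_id, data_name_id, sequence_no)
    );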
How do I make them upload data in a way that is incremental? For example they might be uploading an XML version of their data, my upload code should figure out what already exists.
The short answer is that you can't.
Your upload process has to identify duplicate data and not store it in the database.
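For instance, using the sketch tables above, PostgreSQL's ON CONFLICT clause can silently skip rows that already exist (assuming the key columns uniquely identify an incoming row):

    INSERT INTO data_value (source_id, data_name_id, sequence_no, value)
    VALUES (1, 4, 1, '2024-05-01')
    ON CONFLICT (source_id, data_name_id, sequence_no) DO NOTHING;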
My final and most important question. Are there better ways of going about this instead of having an upload infrastructure?
This is a hard question to answer without knowing more about the type of data you're receiving, but there is software that allows you to load databases without a lot of programming, by defining the input data structure and mapping that structure to your database tables.
This is a very general question, but I think I have a general answer. What I think solves your problem is to model the data so that the properties attached to the master record are not pre-determined. Here is an example involving a phone book application.
The common method, using a non-relational table:
Table PERSON has columns Name, HomePhone, OfficePhone.
All well and good, but what do you do if the occasional person shows up with a mobile phone, more than one mobile phone, a fax phone, etc.?
Instead what you do is:
Table Person has columns Person_ID, Name.
Table Phones has columns Person_ID, Phone_Type, PhoneNumber.
There is a one-to-many relationship between Person and Phones, and there can be any number of phones per person, from zero to a zillion. The tables are JOINed by Person_ID. You have to have business and presentation logic that enumerates the Phone_Type column (or just let it be free-form, which is not as useful but easier).
You can do that for any property, and that is what relational databases are all about. I hope this helps.
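A sketch of that layout in SQL, keeping the author's column names (everything else is assumed for illustration):

    CREATE TABLE Person (
        Person_ID  SERIAL PRIMARY KEY,
        Name       TEXT NOT NULL
    );

    CREATE TABLE Phones (
        Person_ID   INT  NOT NULL REFERENCES Person,
        Phone_Type  TEXT NOT NULL,      -- 'HOME', 'OFFICE', 'MOBILE', 'FAX', ...
        PhoneNumber TEXT NOT NULL
    );

    -- All numbers for one person:
    SELECT p.Name, ph.Phone_Type, ph.PhoneNumber
    FROM Person p
    JOIN Phones ph ON ph.Person_ID = p.Person_ID
    WHERE p.Person_ID = 1;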
As others have said, EAV tables can handle a dynamic structure (be aware of performance issues on large tables, though).
But is it in your interest to have your database fields dictated by the client? You can't write business logic to act upon those new fields, because they don't exist yet and could be anything.
Can you force the client to conform to your model? That lets you know the fields ahead of time and write business logic that acts upon them. It also allows you to write meaningful reports, rather than just pivoted data dumps.

Should the descriptive tags associated with an entity be stored in a separate database table?

I have a Questions model, and just like StackOverflow, each question can be tagged with multiple descriptive tags by a user.
What I'm trying to decide is whether it's necessary for the Tags associated with a question to be stored in a separate table in the database.
Or could I store the Tags as a single field of the Questions table as a list of space-separated strings?
I'm not sure which makes more sense - is there any good reason to separate the data?
Using a comma-separated string for a multi-valued attribute is another SQL Antipattern. :-)
How long does the string need to be? Stated another way: how many tags can a given entry have? (It depends on how long the individual tags are.)
How do you account for strings that contain the separator character? What if a character you currently use as a separator becomes a legitimate character in a tag?
How do you insert or delete elements from the list in SQL? (You have to fetch the whole list into the application, explode the list, filter through it, and re-post it to the database.)
How can you do aggregates like COUNT(*) in SQL?
How do you search efficiently for all entries that share a given tag? (You have to use costly pattern-matching queries.)
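For instance, that last search, assuming the tags live as a space-separated string in a Tags column of Questions (an illustrative schema, not from the original post), ends up as an unindexable pattern match:

    SELECT * FROM Questions
    WHERE ' ' || Tags || ' ' LIKE '% sql %';
    -- Padding with spaces avoids falsely matching 'postgresql',
    -- but the leading wildcard defeats ordinary indexes: full scan.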
The solution is to use a separate table, as most other folks on this thread are advising.
Separating tags into their own table, plus a further table with a many:many relationship between Tags and Questions, is what's known in relational land as "normal form". It makes it easier and faster to perform tasks such as getting all questions tagged with a certain tag, finding the most popular tags, etc.
(Just in case you don't know: a "many:many relationship" here is a table with just two columns, a foreign key into Tags and one into Questions; in practice you usually make that pair of columns the primary key, so the same tag can't be attached to the same question twice.)
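A minimal sketch of that layout (names assumed for illustration):

    CREATE TABLE Questions (
        question_id  SERIAL PRIMARY KEY,
        title        TEXT NOT NULL
    );

    CREATE TABLE Tags (
        tag_id  SERIAL PRIMARY KEY,
        name    TEXT NOT NULL UNIQUE
    );

    CREATE TABLE QuestionTags (
        question_id  INT NOT NULL REFERENCES Questions,
        tag_id       INT NOT NULL REFERENCES Tags,
        PRIMARY KEY (question_id, tag_id)
    );

    -- The aggregates that are painful with a delimited string become trivial:
    SELECT t.name, COUNT(*) AS uses
    FROM QuestionTags qt
    JOIN Tags t ON t.tag_id = qt.tag_id
    GROUP BY t.name
    ORDER BY uses DESC;    -- most popular tags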
I would put the questions in one table, the tags in another, and have a separate table to connect the tags to questions. This is the best way to build that database: it keeps all tags consistent and greatly reduces redundancy.
By separating the data like this, you can be sure that searching for a specific tag will bring back the same items; you don't have to worry about whether the tag is spelled the same throughout all the questions. You can also limit the tag options more easily this way.
You should definitely store the tags in a separate table, it makes everything easier, and that's the whole idea of a 'relational' database.
