New to database design (Postgres) seeking advice - database

I am new to designing databases. I come from a front-end background. I am looking to design a database that stores performance metrics for wind turbines. I was given an Excel file; there are nearly 200 metrics, and you can see the first few in this image.
I can't think of the best way to represent this data in a database. My first thought was to import the table as-is into a database table and add a turbine-id column to it. My second thought was to create a table for each metric and add a turbine-id column to each of those tables. What do you think? What is the best way for me to store the data so that I end up with a performant database? Thank you for your help and input.

One way to do it would be something like this:
TURBINE
    ID_TURBINE   INTEGER  PK
    LATITUDE     DECIMAL
    LONGITUDE    DECIMAL

METRIC
    ID_METRIC    INTEGER  PK
    METRIC_NAME  VARCHAR  UNIQUE
    VALUE_TYPE   VARCHAR  -- allowed values: 'BOOLEAN', 'PERCENTAGE', 'INTEGER', 'DOUBLE', 'STRING'

TURBINE_METRIC
    ID_TURBINE_METRIC  INTEGER  PK
    ID_TURBINE         INTEGER  -- foreign key to TURBINE
    METRIC_NAME        VARCHAR  -- foreign key to METRIC
    BOOLEAN_VALUE      BOOLEAN
    PERCENTAGE_VALUE   DOUBLE
    INTEGER_VALUE      INTEGER
    DOUBLE_VALUE       DOUBLE
    STRING_VALUE       VARCHAR
Flesh this out however you need it to be. I have no idea how long your VARCHAR fields should be, etc, but this allows you flexibility in terms of which metrics you store for each turbine. I suppose you could make the LATITUDE and LONGITUDE metrics as well - I just added them to the TURBINE table to show that there may be fixed info which is best stored as part of the TURBINE table.
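In actual Postgres DDL the outline might look roughly like this; the column lengths are guesses, and the foreign key from TURBINE_METRIC can point at METRIC_NAME because it's UNIQUE:

CREATE TABLE turbine (
    id_turbine  integer PRIMARY KEY,
    latitude    numeric(9,6) NOT NULL,
    longitude   numeric(9,6) NOT NULL
);

CREATE TABLE metric (
    id_metric    integer PRIMARY KEY,
    metric_name  varchar(100) NOT NULL UNIQUE,
    value_type   varchar(20)  NOT NULL
        CHECK (value_type IN ('BOOLEAN', 'PERCENTAGE', 'INTEGER', 'DOUBLE', 'STRING'))
);

CREATE TABLE turbine_metric (
    id_turbine_metric  integer PRIMARY KEY,
    id_turbine         integer      NOT NULL REFERENCES turbine (id_turbine),
    metric_name        varchar(100) NOT NULL REFERENCES metric (metric_name),
    boolean_value      boolean,
    percentage_value   double precision,
    integer_value      integer,
    double_value       double precision,
    string_value       varchar(255)
);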

You want one table to represent turbines (things true of the turbine, like its location) and one or more tables of turbine metrics that arrive over time. If different groups of metrics arrive at different intervals, put them in different tables.
One goal I would have would be to minimize the number of nullable columns. Ideally, every column is defined NOT NULL, and invalid inputs are set aside for review. What is and is not nullable is controlled by promises made by the system supplying the metrics.
That's how it's done: every table has one or more keys that uniquely identify a row, and all non-key columns are information about the entity defined by the row.
It might seem tempting and "more flexible" to use one table of name-value pairs, so you never have to worry about new properties if the feed changes. That would be a mistake, though (a classic one, which is why I mention it). It's actually not more flexible, because changes upstream will require changes downstream, no matter what. Plus, if the upstream changes in ways that aren't detected by the DBMS, they can subtly corrupt your data and results.
By defining as tight a set of rules about the data as possible in SQL, you guard against missing, malformed, and erroneous inputs. Any verification done by the DBMS is verification the application can skip, and verification that no application can bypass.
For example, you're given min/max values for wind speed and so on. These promises can form constraints in the database. If you get negative wind speed, something is wrong. It might be a sensor problem or (more likely) a data alignment error because a new column was introduced or the input was incorrectly parsed. Rather than put the wind direction mistakenly in the wind speed column, the DBMS rejects the input, and someone can look into what went wrong.
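As a sketch of that idea, a readings table along these lines turns the supplier's promises into constraints. The names and limits here are placeholders for whatever your spec actually says, and it re-uses the TURBINE table from the other answer for the foreign key:

CREATE TABLE turbine_reading (
    id_turbine          integer     NOT NULL REFERENCES turbine (id_turbine),
    recorded_at         timestamptz NOT NULL,
    -- limits below are illustrative; copy them from the metric spec
    wind_speed_ms       numeric     NOT NULL CHECK (wind_speed_ms >= 0 AND wind_speed_ms <= 50),
    wind_direction_deg  numeric     NOT NULL CHECK (wind_direction_deg >= 0 AND wind_direction_deg < 360),
    power_output_kw     numeric     NOT NULL CHECK (power_output_kw >= 0),
    PRIMARY KEY (id_turbine, recorded_at)
);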
Don't forget to have fun. You have an opportunity to create a new database in a growing industry, and learn about database technology and theory at the same time. Doesn't happen every day!

Related

Prevent continuously adding columns by postgres json data type

I wanted to ask if there may be a different and better approach than mine.
I have a model entity that can have an arbitrary number of hyperparameters. Depending on the specific model I want to insert as a row into the model table, I may have specific hyperparameters. I do not want to continuously add new columns to my model table for new hyperparameters that I encounter when trying out new models (plus I don't like having a lot of columns that are NULL for many rows). I also want to easily filter models on specific hyperparameter values, e.g. "select * from models where model.hyperparameter_x.value < 0.5". So a many-to-many relationship to a hyperparameter table comes to mind. The issue is that the data type of a hyperparameter can differ, so I cannot define a general value column on the relationship table with a data type that's easily comparable across different models.
So my idea is to define a JSON-typed "value" column in the relationship table to support different value data types (float, array, string, ...). What I don't like about that idea, and what was legitimately criticized by colleagues, is that this can result in chaos within the value column pretty fast, e.g. people inserting data with very different JSON structures for the same hyperparameter. To mitigate this issue, I would introduce a "json_regex_template" column in the hyperparameter table, so that at the API level I can easily validate whether the JSON for a value for hyperparameter x is correctly defined by the user. An additional "json_example" column in the hyperparameter table would further help the user on the other side of the API make correct requests.
This solution would still not guarantee non-chaos at the database level (even though no user should directly insert data without using the API, so I don't think that's a very big deal). And the solution still feels a bit hacky. I would believe that I'm not the first person with this problem, and maybe there is a best practice to solve it?
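For concreteness, a minimal sketch of the relationship table I have in mind (all table and column names here are made up). Postgres can at least enforce a coarse structural check on the jsonb column itself:

CREATE TABLE model_hyperparameter (
    model_id           integer NOT NULL REFERENCES model (id),
    hyperparameter_id  integer NOT NULL REFERENCES hyperparameter (id),
    value              jsonb   NOT NULL,
    PRIMARY KEY (model_id, hyperparameter_id),
    -- coarse guard: only scalars and arrays, no arbitrary nested objects
    CONSTRAINT value_shape CHECK (jsonb_typeof(value) IN ('number', 'string', 'boolean', 'array'))
);

-- Filtering on a numeric hyperparameter, e.g. learning rate below 0.5:
SELECT m.*
FROM model m
JOIN model_hyperparameter mh ON mh.model_id = m.id
JOIN hyperparameter h        ON h.id = mh.hyperparameter_id
WHERE h.name = 'learning_rate'
  AND (mh.value #>> '{}')::numeric < 0.5;  -- #>> '{}' extracts the scalar as text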
Is my aversion to continuously adding columns reasonable? It's probably about 3-5 new columns per month; that may saturate at some point to a lower number, but that's speculative.
I'm aware of this post (Storing JSON in database vs. having a new column for each key), but it's pretty old, so my hope is that there may be new stuff I could use. The model-hyperparameter thing is of course just a small part of my full database model. Changing to a non-relational database is not an option.
Opinions are much appreciated

MS SQL: What is more efficient? Using a junction table or storing everything in a varchar?

Here is a simple question to which I would like an answer:
We have a member table. Each member practices one, many or no sports. Initially we (the developers) created a [member] table, a [sports] table and a [member_sports] table, just as we have always done.
However our client here doesn't like this and wants to store all the sports that the member practices in a single varchar column, separated with a special character.
So if:
1 is football
2 is tennis
3 is ping-pong
4 is swimming
and I like swimming and ping-pong, my favourite sports will be stored into the varchar column as:
x3,x4
Now we don't want to just walk up to the client and claim that his system isn't right. We would like to back it up with proof that the operation to fetch the sports from [member_sports] is more efficient than simply storing the fields as a varchar.
Is there any documentation that can back our claims? Help!
Ask your client if they care about storing accurate information [1] rather than random strings.
Then set them a series of challenges. First, ensure that the sport information is in the correct "domain". For the member_sports table, that is:
sport_id int not null
         ^
         |-- correct type
For their "store everything in a varchar column" solution, I guess you're writing a CHECK constraint. A regex would probably help here but there's no native support for regex in SQL Server - so you're either bodging it or calling out to a CLR function to make sure that only actual int values are stored.
Next, we not only want to make sure that the domain is correct but that the sports are actually defined in your system. For member_sports, that's:
CONSTRAINT FK_Member_Sports_Sports FOREIGN KEY (Sport_ID) references Sports (Sport_ID)
For their "store everything in a varchar column" I guess this is going to be a far more complex CHECK constraint using UDFs to query other tables. It's going to be messy and procedural. Plus if you want to prevent a row from being removed from sports while it's still referenced by any member, you're talking about a trigger on the sports table that has to query every row in members2`.
Finally, let's say that it's meaningless for the same sport to be recorded for a single member multiple times. For member_sports, that is (if it's not the PK):
CONSTRAINT UQ_Member_Sports UNIQUE (Member_ID,Sport_ID)
For their "store everything in a varchar column" it's another horrifically procedural UDF called from a CHECK constraint.
Even if the varchar variant performed better (unlikely since you need to be ripping strings apart and T-SQL's string manipulation functions are notoriously weak (see above re: regex)) for certain values of "performs better", how do they propose that the data is meaningful and not nonsense?
Writing the procedural variants that can also cope with nonsense is an even more challenging endeavour.
In case it's not clear from the above - I am a big fan of Declarative Referential Integrity (DRI). Stating what you want versus focussing on mechanisms is a huge part of why SQL appeals to me. You construct the right DRI and know that your data is always correct (or, at least, as you expect it to be)
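Put together, the junction-table side of each of those challenges is a few declarative lines. The key column names are assumed from your description; adjust to whatever member and sports actually use:

CREATE TABLE member_sports (
    member_id int NOT NULL,
    sport_id  int NOT NULL,
    -- the composite key also covers the uniqueness requirement
    CONSTRAINT PK_Member_Sports PRIMARY KEY (member_id, sport_id),
    CONSTRAINT FK_Member_Sports_Member FOREIGN KEY (member_id) REFERENCES member (member_id),
    CONSTRAINT FK_Member_Sports_Sports FOREIGN KEY (sport_id)  REFERENCES sports (sport_id)
);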
1"The application will always do this correctly" isn't a good answer. If you manage to build an application and related database in which nobody ever writes some direct SQL to fix something, I guess you'll be the first.
But in most circumstances, there's always more than one application, and even if the other application is a direct SQL client only employed by developers, you're already beyond being able to trust that the application will always act correctly. And bugs in applications are far more likely than bugs in SQL database engine's implementations of constraints, which have been tested far more times than any individual application's attempt to enforce constraints.
[2] Let alone the far more likely query: find all members who are associated with a particular sport. A second index on member_sports makes this a trivial query [3]. No indexes help the "it's somewhere in this string" solution and you're looking at a table scan with no indexing opportunities.
[3] Any index that has sport_id first should be able to satisfy such a query.
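For footnote 2, that second index is a one-liner, and the "members for a sport" query becomes an ordinary seek (index name assumed):

CREATE INDEX IX_Member_Sports_Sport ON member_sports (sport_id, member_id);

-- e.g. all members who play ping-pong (sport_id 3)
SELECT member_id FROM member_sports WHERE sport_id = 3;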

Data Validation with ORM

I have a Grails application that is dependent upon an external dataset that gets sucked in periodically. There is no way to validate the data when it comes into the database, because the program that sucks it in isn't written by us.
Once in a while, we get a bad piece of data in the database. For example, a number is "5,5" instead of "5.5". The datatype on the column (since they defined the table) is VARCHAR even though the field should always contain a number. Our application has this column mapped to a FLOAT in the ORM layer, since that is what we expect it to be.
I want to make sure that the application doesn't crash when we get new data, but I am not sure how. Should I map the column to a VARCHAR and then do a conversion to a transient FLOAT column or something like that?
Short-term, yes, mapping the VARCHAR to String in the domain object instead of a numeric type would solve the problem.
In my opinion, a better solution would be to do the periodic import into a different table, and have a process that moves only valid rows to the domain table. You'd probably also want to put the 'bad' rows somewhere, and notify someone to manually fix them.
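A rough sketch of that approach, with invented table and column names; the validity test depends on what "valid" means for your feed:

-- The external program keeps loading into this table exactly as it does today.
CREATE TABLE import_staging (
    source_id  integer,
    raw_value  varchar(50)
);

-- Promote only rows whose value parses as a number; leave the rest behind for review.
INSERT INTO measurements (source_id, value)
SELECT source_id, CAST(raw_value AS float)
FROM import_staging
WHERE raw_value ~ '^[0-9]+(\.[0-9]+)?$';  -- Postgres regex syntax; adjust for your DBMS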

is delimiting data in a database field ok

Is delimiting data in a database field something that would be ok to do?
Something like
create table column_names (
    id int identity (1,1) PRIMARY KEY,
    column_name varchar(5000)
);
and then storing data in it as follows
INSERT INTO column_names (column_name) VALUES ('stocknum|name|price');
No, this is bad:
- In order to create new queries you have to track down how things are stored.
- Queries that join on price or name or stocknum are going to be nasty.
- The database can't assign data types to the data or validate it.
- You can't create constraints on any of this data now.
Basically you're subverting the RDBMS' scheme for handling things and making up your own, so you're limiting how much the RDBMS tools can help you and you've made the system harder to understand for new people.
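For contrast, the conventional shape of that example is one typed column per attribute; the lengths and types here are guesses:

create table products (
    id        int identity (1,1) PRIMARY KEY,
    stocknum  varchar(20)   NOT NULL UNIQUE,
    name      varchar(200)  NOT NULL,
    price     decimal(10,2) NOT NULL CHECK (price >= 0)
);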
The only possible advantage of this kind of system that I can think of is that it can serve as a workaround to avoid dealing with a totally impossible DBA who vetoes all schema changes regardless of merit. Which can happen, unfortunately.
Of course there's an exception to everything. I'm currently on a project with audit-logging requirements that are pretty stringent. The logging is done to a database, and we're using delimited fields to store the data because the application is never going to interact with it: it gets written once and left alone.
Almost certainly not.
It violates principles of normalization. The data stored in a particular row of a particular column should be atomic-- you shouldn't be able to parse the data into smaller component parts.
It makes it substantially more difficult to get acceptable performance. Every piece of code that queries this table will need to know how to parse the data which is generally going to mean that more data needs to be read off disk and potentially sent over the network to the client. Every query that has to parse this data is going to have to be more complex which tends to cause grief for the query optimizer. Concatenated data cannot generally be indexed effectively for searches-- you'd have to do something like a full-text index with custom delimiters rather than a nice standard index on a character string. And if you ever have to update one of the delimited values (i.e. because a product name changes), those updates are going to have to scan every row in the table, parse the data, decide whether to actually update the row, and then update a ton of rows.
It makes the application much more brittle. What happens when someone decides to include a | character in the name attribute, for example? Even if you specify an optional enclosure in the spec (i.e. | is allowed if the entire token is enclosed in double quotes), what fraction of the bits of code that actually parse this column are going to implement and test that correctly?
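To make the indexing and brittleness points concrete, compare a lookup against the delimited column with the same lookup against a properly typed name column ('Widget' and the products table are made-up examples):

-- Delimited column: unindexable scan plus string surgery, and a '%|...|%' pattern
-- breaks as soon as a value contains the delimiter.
SELECT * FROM column_names WHERE column_name LIKE '%|Widget|%';

-- Typed column: an ordinary, indexable predicate.
SELECT * FROM products WHERE name = 'Widget';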

Stringly typed values table in sql, is there a better way to do this? (we're using MSSQL)

We have a table layout with property names in one table, values in a second table, and items in a third. (Yes, we're re-implementing tables in SQL.)
We join all three to get a value of a property for a specific item.
Unfortunately the values can have multiple data types: double, varchar, bit, etc. Currently the consensus is to stringly type all the values and store the type name in the column next to the value.
tblValues
    DataTypeName nvarchar
Is there a better, cleaner way to do this?
Clarifications:
Our requirements state that we must add new "attributes" at run time without modifying the db schema
I would prefer not to use EAV, but that is the direction we are headed right now.
This system currently exists in SQL server using a traditional db design, but I can't see a way to fulfill our requirement of not modifying the db schema without moving to EAV.
There are really only two patterns for implementing an 'EAV model' (assuming that's what you want to do):
1. Implement it as you've described, where you explicitly store the property value type along with the value, and use that to convert the string values stored into the appropriate 'native' types in the application(s) that access the DB.
2. Add a separate column for each possible datatype you might store as a property value. You could also include a column that indicates the property value type, but it wouldn't be strictly necessary.
Solution 1 is a simpler design, but it incurs the overhead of converting the string values stored in the table into the appropriate data type as needed.
Solution 2 has the benefit of storing values as the appropriate native type, but it will necessarily require more, though not necessarily much more, space. This may be moot if there aren't a lot of rows in this table. You may want to add a check constraint that only allows one non-NULL value in the different value columns, or if you're including a type column (so as to avoid checking for non-NULL values in the different value columns), prevent mismatches between the value stored in the type column and which value column contains the non-NULL value.
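A sketch of solution 2 (all names here are invented), with a check constraint requiring that exactly one of the typed value columns is populated:

CREATE TABLE tblPropertyValues (
    ItemID      int NOT NULL,
    PropertyID  int NOT NULL,
    IntValue    int            NULL,
    FloatValue  float          NULL,
    BitValue    bit            NULL,
    StringValue nvarchar(400)  NULL,
    CONSTRAINT PK_PropertyValues PRIMARY KEY (ItemID, PropertyID),
    -- exactly one of the typed columns must hold the value
    CONSTRAINT CK_PropertyValues_OneValue CHECK (
        (CASE WHEN IntValue    IS NOT NULL THEN 1 ELSE 0 END)
      + (CASE WHEN FloatValue  IS NOT NULL THEN 1 ELSE 0 END)
      + (CASE WHEN BitValue    IS NOT NULL THEN 1 ELSE 0 END)
      + (CASE WHEN StringValue IS NOT NULL THEN 1 ELSE 0 END) = 1
    )
);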
As HLGEM states in her answer, this is less preferred than a standard relational design, but I'm more sympathetic to the use of EAV model table designs for data such as application option settings.
Well, don't do that! You lose all the value of having data types if you do. You can't properly constrain them (and will, I guarantee it, get bad data eventually), and you have to cast them back to the proper type to use them in mathematical or date calculations. All in all, a performance loser.
Your whole design will not scale well. Read up on why you don't want to use EAV tables in a relational database. It is not only generally slower but also unusually difficult to query, especially for reporting.
Perhaps a NoSQL database would better suit your needs, or a proper relational design, but not an EAV design. Is it really too hard to figure out what fields each table would really need, or are your developers just lazy? Are you sacrificing performance for flexibility, a flexibility that most users will hate? Especially when it means bad performance? Have you ever tried to do anything with a database designed that way?
