I have been looking for more than two hours, and I have found many articles about table/column naming and other tips, but no exact answer regarding database naming itself. Can you please tell me the best option? And are there real cases where the choice matters?
clothing_store
ClothingStore
clothingStore
or maybe clothing-store
MySQL's built-in databases are named in the first style (information_schema, performance_schema, sys). Does that mean the first option is best?
Generally, go with underscores, since that's easy to read in both upper and lower case. It's also the most commonly used convention in most reference material and existing DBs:
clothing_store
(And CLOTHING_STORE.)
Most DB engines treat table names case-insensitively by default (even if some display them in the case they were created with); note that MySQL's behavior actually depends on the lower_case_table_names setting and the host filesystem. So these two are the same:
ClothingStore
clothingStore
And so are "Clothingstore", "clothingstore", "CLotHinGstOre", etc.
Table names can't contain a hyphen unless they're quoted, since the parser otherwise reads it as an expression, like a - b:
or maybe clothing-store
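A quick sketch of how MySQL reacts (the backtick quoting is MySQL-specific):

CREATE DATABASE clothing-store;     -- fails: parsed as the expression clothing - store
CREATE DATABASE `clothing-store`;   -- legal, but every later query needs backticks too
CREATE DATABASE clothing_store;     -- the underscore version never needs quoting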
You could just call it "store", unless you've got multiple tables of different stores.
I have a legacy in-house human resources web app that I'd like to rebuild using more modern technologies. Doctrine 2 is looking good. But I've not been able to find articles or documentation on how best to organise the Entities for a large-ish database (120 tables). Can you help?
My main problem is the Person table (of course! it's an HR system!). It currently has 70 columns. I want to refactor that to extract several subsets into one-to-one sub tables, which will leave me with about 30 columns. There are about 50 other supporting one-to-many tables called person_address, person_medical, person_status, person_travel, person_education, person_profession etc. More will be added later.
If I put all the Doctrine associations (http://docs.doctrine-project.org/projects/doctrine-orm/en/latest/reference/working-with-associations.html) in the Person entity class, with the set/get/add/remove methods for each, plus the original 30 columns and their methods and some supporting utility functions, then the Person entity is going to be 1000+ lines long and a nightmare to test.
FWIW I plan to create a PersonRepository to handle the common bulk queries, a PersonProfessionRepository for the bulk queries/reports on that sub-table, etc., and Person*Services which will contain some of the more complex business logic where needed. So organising the rest of the app logic is fine: this is a question about how to correctly organise lots of sub-table entities with Doctrine that all have relationships/associations back to one primary table. How do I avoid bloating the Person entity class?
Identifying types of objects
It sounds like you have a nicely normalized database, and I suggest you keep it that way. Removing columns from the people table to create separate tables for one-to-one relations isn't going to help performance or maintainability.
The fact that you recognize several groups of properties in the Person entity might indicate you have found cases for a Value Object. Even some of the one-to-many tables (like person_address) sound more like Value Objects than Entities.
Starting with Doctrine 2.5 (not yet stable at the time of this writing), embedding single Value Objects will be supported. Unfortunately we will have to wait for a future version for support of collections of Value Objects.
In the meantime, you can mimic embedded Value Objects; Ross Tuck has blogged about this.
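On the schema side, the difference is roughly this. A minimal sketch, with invented column names, of the planned one-to-one split versus an embedded Value Object (which typically maps to prefixed columns in the owning table):

-- Variant A: one-to-one sub-table; reading the address always costs a join
CREATE TABLE person_address (
    person_id INT PRIMARY KEY,   -- doubles as the FK back to person(id)
    street    VARCHAR(255),
    city      VARCHAR(100)
);

-- Variant B: embedded Value Object; same logical grouping, no join
CREATE TABLE person (
    id             INT PRIMARY KEY,
    name           VARCHAR(100),
    address_street VARCHAR(255),
    address_city   VARCHAR(100)
);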
Lasagna Code
Your plan of implementing an entity, repository, and service (and maybe controller?) for Person, PersonProfession, etc. sounds like a road to Lasagna Code.
Without extensive knowledge of your domain, I'd say you want a Person aggregate, of which the Person entity is the aggregate root. That aggregate needs a single repository. (But maybe I'm off here and being simplistic; as I said, I don't know your domain.)
Creating a service for Person (and other entities/value objects) indicates data-minded thinking. For services it's better to think in terms of behavior: think of what kinds of tasks you want to perform, and group coherent sets of tasks into services. I suspect that for an HR system you'll end up with many services that revolve around your Person aggregate.
Is Doctrine 2 suitable?
I would say: yes. Doctrine itself has no problems with large amounts of tables and large amounts of columns. But performance highly depends on how you use it.
OLTP vs OLAP
For OLTP systems an ORM can be very helpful. OLTP involves many short transactions, each writing a single aggregate (or a short list of them) to the database.
For OLAP systems an ORM is not suited. OLAP involves many complex analytical queries, usually resulting in large object graphs. For these kinds of operations, native SQL is much more convenient.
Even in the case of OLAP systems, Doctrine 2 can be of help:
You can use DQL queries (instead of native SQL) to leverage your mapping metadata, then use scalar or array hydration to fetch the data.
Doctrine also supports arbitrary joins, meaning you can join entities that are not associated with each other according to the mapping metadata.
And you can make use of the NativeQuery object, with which you can map the results to whatever you want.
I think an HR system is a perfect example of one where you have both OLTP and OLAP: OLTP when adding a new Person to the system, for example; OLAP when producing various reports and analytics.
So there's nothing wrong with using an ORM for transactional operations, while using plain SQL for analytical operations.
Choose wisely
I think the key is to carefully choose when to use what, on a case by case basis.
Hydrating entities is great for transactional operations. Make use of lazily loaded associations to avoid fetching data you're not going to use, but also choose to eager-load certain associations (using DQL) where that makes sense.
Use scalar or array hydration when working with large data sets. Data sets usually grow where you're doing analytical operations, where you don't really need full blown entities anyway.
#Quicker makes a valid point in saying you can create specialized View objects: fetch only the data you need in specific cases and manually mold that data into objects. This is accompanied by his point not to bloat the user interface with options a user with a certain role doesn't need.
A technique you might want to look into is Command Query Responsibility Segregation (CQRS).
As I understand it, you have a fully normalized persons table, and you are now asking how best to split it up.
As long as you do not hit any technical constraints (such as a maximum row size of 64 KB), I find 70 columns definitely not overloaded for a persons table in an HR system. Do yourself a favour and do not segment that information, for the following reasons:
selects potentially become more complex
each extracted table needs one or more extra indexes, which increases your overall memory utilization. This sounds like a minor issue since disk is cheap; however, keep in mind that, via caching, the RAM-to-disk utilization ratio determines your performance to a huge extent
changes become more complex, as extra relations demand extra care
since any edit/update/read view can be restricted to expose only slices of the physical data in the tables, no "cosmetic" pressure arises from the end-user (or even admin) perspective
In summary, subsetting the table causes lots of issues and effort but adds little, if any, value.
Btw, databases are optimized for data storage: millions of rows and a few dozen columns are no-brainers at that end.
Here is something I've wondered about for quite some time and have not seen a real (good) solution for yet. It's a problem I imagine many games have, and one I can't easily think of how to solve (well). Ideas are welcome, but since this is not a concrete problem, don't bother asking for more details: just make them up! (and explain what you made up)
Ok, so, many games have the concept of (inventory) items, and often there are hundreds of different kinds of items with widely varying data structures: some items are very simple ("a rock"), while others can have insane complexity or data behind them ("a book", "a programmed computer chip", "a container with more items"), etc.
Now, programming something like that is easy - just have everything implement an interface, or maybe extend an abstract root item. Since objects in the programming world don't have to look the same on the inside as on the outside, there is really no issue with how much and what kind of private fields any type of item has.
But when it comes to database serialization (binary serialization is of course no problem), you face a dilemma: how would you represent that in, say, a typical SQL database?
Some attempts at a solution that I have seen, none of which I find satisfying:
Binary serialization of the items; the database just holds an ID and a blob.
Pros: takes like 10 seconds to implement.
Cons: basically sacrifices every database feature, is hard to maintain, and is near impossible to refactor.
A table per item type.
Pros: clean, flexible.
Cons: with a wide variety of items come hundreds of tables, and every search for an item has to query them all, since SQL doesn't have the concept of a table/type 'reference'.
One table with a lot of fields that aren't used by every item.
Pros: takes like 10 seconds to implement, still searchable.
Cons: wastes space, hurts performance, and makes it hard to tell from the database which fields are in use.
A few tables with a few 'base profiles' for storage, where similar items get thrown together and use the same fields for different data.
Pros: I've got nothing.
Cons: wastes space, hurts performance, and makes it hard to tell from the database which fields are in use.
What ideas do you have? Have you seen another design that works better or worse?
It depends on whether you need to sort, filter, count, or analyze those attributes.
If you use EAV, you will screw yourself nicely. Try doing reports on an EAV schema.
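To illustrate the pain: against a generic EAV layout (table and column names invented here), even a trivial two-column report needs one self-join per attribute:

SELECT e.id,
       n.val AS name,
       p.val AS price
FROM   entity e
JOIN   attr_value n ON n.entity_id = e.id AND n.attr = 'name'
JOIN   attr_value p ON p.entity_id = e.id AND p.attr = 'price';
-- ...plus one more self-join for every additional column in the report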
The best option is to use Table Inheritance:
PRODUCT
    id (PK)
    type
    att1

PRODUCT_X
    id (PK, FK -> PRODUCT)
    att2
    att3

PRODUCT_Y
    id (PK, FK -> PRODUCT)
    att4
    att5
For attributes that you don't need to search/sort/analyze, use a blob or XML column.
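As a rough DDL sketch of the layout above (names taken from the diagram, types invented):

CREATE TABLE product (
    id   INT PRIMARY KEY,
    type VARCHAR(20) NOT NULL,   -- discriminator: which subtype table to check
    att1 VARCHAR(100)
);

CREATE TABLE product_x (
    id   INT PRIMARY KEY REFERENCES product(id),   -- PK and FK at once
    att2 VARCHAR(100),
    att3 VARCHAR(100)
);

-- Loading a PRODUCT_X is then a single join:
SELECT p.id, p.att1, x.att2, x.att3
FROM   product p
JOIN   product_x x ON x.id = p.id;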
I have two alternatives for you:
One table for the base type and supplemental tables for each “class” of specialized types.
In this schema, properties common to all “objects” are stored in one table, so you have a unique record for every object in the game. For special types like books, containers, usable items, etc, you have another table for each unique set of properties or relationships those items need. Every special type will therefore be represented by two records: the base object record and the supplemental record in a particular special type table.
PROS: You can use column-based features of your database like custom domains, checks, and xml processing; you can have simpler triggers on certain types; your queries differ exactly at the point of diverging concerns.
CONS: You need two inserts for many objects.
Use a “kind” enum field and a JSONB-like field for the special type data.
This is kind of like your #1 or #3, except with some database help. Postgres added JSONB, giving you an improvement over the old EAV pattern. Other databases have a similar complex field type. In this strategy you roll your own mini schema that you stash in the JSONB field. The kind field declares what you expect to find in that JSONB field.
PROS: You can extract the special-type data in your queries; you can add check constraints and have a simple schema to deal with; you can benefit from indexing even though your data is heterogeneous; your queries and inserts are simple.
CONS: Your data types within JSONB-like fields are pretty limited and you have to roll your own validation.
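A rough PostgreSQL sketch of this strategy (the kind values and property names are made up):

CREATE TABLE item (
    id    SERIAL PRIMARY KEY,
    kind  TEXT  NOT NULL,              -- 'rock', 'book', 'container', ...
    props JSONB NOT NULL DEFAULT '{}'  -- the per-kind mini schema
);

-- A GIN index makes containment queries on the flexible part fast
CREATE INDEX item_props_idx ON item USING GIN (props);

-- Find all books by a given author
SELECT id, props
FROM   item
WHERE  kind = 'book'
AND    props @> '{"author": "Tolkien"}';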
Yes, it is a pain to design database schemas like this. I'm designing a notification system and ran into the same problem. My notification system is, however, less complex than yours: the data it holds is at most IDs and usernames. My current solution is a mix of 1 and 3: I serialize the data that differs between notifications, and use columns for the two usernames (some notifications have two, some only one). I shy away from method 2 because I hate that design, but that's probably just me.
However, if you can afford it, I would suggest thinking outside the realm of RDBMSs: it sounds like a non-relational store (especially a key/value one) may be a better fit for this data, especially if the items differ from one another a lot.
I'm sure this has been asked here a million times before, but in addition to the options you discussed in your question, you can look at the EAV schema, which is very flexible but has its own set of cons.
Another alternative is database systems which are not relational. There are object databases as well as various key/value stores and document databases.
Typically all these things break down to some extent when you need to query against the flexible attributes. This is kind of an intrinsic problem, however: conceptually, what does it really mean to accurately query things that are unstructured?
First of all, do you actually need the concurrency, scalability, and ACID transactions of a real database? Unless you are building an MMO, your game structures will likely fit in memory anyway, so you can search and otherwise manipulate them there directly. In a scenario like this, the "database" is just a store for serialized objects, and you can replace it with the file system.
If you conclude that you do need a database, then the key is figuring out what "atomicity" means from the data-management perspective.
For example, if a game item has a bunch of attributes, none of which is manipulated individually at the database level (even though they may well be at the application level), then the item can be considered "atomic" from the data-management perspective. OTOH, if the item needs to be searched on some of those attributes, you'll need a good way to index them in the database, which typically means they'll have to be separate fields.
Once you have identified the attributes that should be "visible" versus those that should be "invisible" from the database perspective, serialize the latter to BLOBs (or whatever), then forget about them and concentrate on structuring the former.
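In table form, the split might look like this (a sketch with invented columns; BLOB as in MySQL):

CREATE TABLE item (
    id      INT PRIMARY KEY,
    name    VARCHAR(100),   -- "visible": searched on, so a real, indexable column
    weight  INT,            -- "visible" as well
    payload BLOB            -- "invisible": only ever read back whole
);

CREATE INDEX item_name_idx ON item (name);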
That's where the fun starts, and you'll probably need an "all of the above" strategy for reasonable results.
BTW, some databases support "deep" indexes that can go into heterogeneous data structures. For example, take a look at Oracle's XMLIndex, though I doubt you'll use Oracle for a game.
You seem to be trying to solve this for a gaming context, so maybe you could consider a component-based approach.
I have to say that I personally haven't tried this yet, but I've been looking into it for a while and it seems to me something similar could be applied.
The idea would be that all the entities in your game are basically a bag of components. These components can be Position, Energy, or, for your inventory case, Collectable, for example. Then, for this Collectable component you can add custom fields such as category, numItems, etc.
When you're going to render the inventory, you can simply query your entity system for items that have the Collectable component.
How can you save this in a DB? You can define the components independently in their own tables, and then for the entities (each in their own table as well) you would add a "Components" column holding an array of IDs referencing these components. The IDs would effectively act like foreign keys, though I'm aware this is not exactly how you model things in relational databases; you get the idea (a relationally idiomatic sketch follows below).
Then, when you load the entities and their components at runtime, you can set the corresponding flag in each entity's bag of components based on which components were loaded, so you know which components the entity has, and they then become queryable.
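As a sketch of the idea in relational terms (invented names; the array-of-IDs column is replaced here by component rows that reference their entity, which is the more idiomatic relational shape):

CREATE TABLE entity (
    id INT PRIMARY KEY
);

-- One table per component type; an entity "has" the component
-- if a row referencing it exists here
CREATE TABLE collectable (
    entity_id INT PRIMARY KEY REFERENCES entity(id),
    category  VARCHAR(50),
    num_items INT
);

-- "Render the inventory" = everything with a Collectable component
SELECT e.id, c.category, c.num_items
FROM   entity e
JOIN   collectable c ON c.entity_id = e.id;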
Here's an interesting read about component-based entity systems.
We are going to develop a new system over a legacy database (using .NET/C#). There are approximately 500 tables. We have chosen to use an ORM tool for our data access layer. The problem is the naming convention for the entities.
The tables in the database have names like TB_CUST (a table with customer data) or TP_COMP_CARS (company cars). The first letter of the prefix defines the module, and the second letter defines its relations to other tables.
I would like to name the entities more meaningfully: TB_CUST would be just Customer or CustomerEntity. Of course there would be a comment pointing to its table name.
But the DBA and programmer (one and the same person) doesn't want names like this. He says he would have to remember two names, which would be difficult and messy. I have to say he's not really familiar with the principles of OOP.
But with an entity name like TP_COMP_CARS, there would be method names like GetTP_COMP_CARS or SaveTP_COMP_CARS. I think this is unreadable and ugly.
So please tell me your opinion. Who is right, and why?
Thank you in advance
The whole idea of ORM tools is to avoid having to remember database object names.
We usually create a database class with all the table and column names, so no one needs to remember anything, and the ORM should map the database names to normal entities.
Although it is subjective, in my opinion you are right and he is wrong....
Who is going to work mostly with the new code? That person should decide the naming convention IMHO.
Personally of course I would go for your solution because as has already been mentioned, if you use ORM you don't need to hit the DB directly often.
As a compromise, you could use names like TB_CUST where you interact directly with the database, but then use names like Customer for your Data Transfer Objects. Writing good code involves creating an abstraction of any data sources you might be working with. Having GetTB_CUST() throughout your code is a little like having GetTB_CUSTFromThatSQLDatabaseWeHave() dotted around the place.
I personally hate table names like that, but it's a legacy system, and I'm sure the DBA doesn't feel like renaming the tables. Renaming them could still be an option: you would just have to create views representing the old table names so that your legacy system keeps running while you develop your new one. If that is not an option, you can use the ORM to map table names to entity names, or you can abstract your ORM away behind a data access layer and define nice entity names in your domain model, having your DAL do the name conversion.
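For example (a sketch; the rename syntax varies by DBMS):

-- Give the table a readable name for the new system...
ALTER TABLE TB_CUST RENAME TO Customer;

-- ...and a view preserves the old name so the legacy system keeps running:
CREATE VIEW TB_CUST AS
SELECT * FROM Customer;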
The naming conventions used in two different domains simply don't align. Java, for example, has very well-defined rules/conventions for class names and field names, where capitalisation is significant. And in general, your application may one day be ported to a completely different database with a different naming standard; it's not reasonable to demand alignment of names in business logic with names in the database. Consider also a slightly more complex mapping, where one entity may not correspond to one table.
And, really, come on ...
Customer == TB_CUST
That is just not so hard! I'm with you: make the names meaningful in the code and map them in the ORM. The learning curve for the DBA/programmer should not be that painful; my guess is it's one of those things that feels much worse in anticipation than in reality.
If there are 500 tables in the database - you've already got a challenge keeping those names straight. Hopefully, you've got metadata and some graphical models that describe them more meaningfully.
When you create the next 500 ORM objects, you'll have another challenge. Even if you give them meaningful names, it's still too many to really hope that all will be obvious. So now you've got two problems.
If there's no way to link those two sets of 500 names together, then you've got three problems. Think about debugging the performance of queries in the ORM and going to the DBA with names he doesn't recognize. Think about your carefully crafted names, which must then be ignored when you create reports that hit the database directly.
So, I'd try very hard to use the database names in the ORM. But I would tweak a few things:
if a name is too cryptic to understand, I'd work with the DBA to improve it. An easy way to transition to better names is through views. Ideally you get rid of the original confusing name eventually, though.
changing from underscores to camel case shouldn't be considered a change to the name; it's just a change to the separator. So use the form appropriate for your language.
the database prefixes can probably be dropped; they're really just attributes of the table that have been "denormalized" and grafted onto the name. They may be necessary to avoid duplication if they indicate a subsection of the model, but in general they're not that important.
"I have to say his not really familiar with the principles of OOP.
But in case of an entity name like TP_COMP_CARS there should be methods names like Get TP_COMP_CARS or SaveTP_COMP_CARS..I think this is unreadable and ugly.
So please tell me your opinion. Who is right and why."
Which names are given to the things your IT systems manage has absolutely nothing to do with "the principles of OOP".
The same holds for the names given to "standard" getter and setter methods: those are just agreements and conventions, not "principles of OOP".
The issue is a certain kind of "ergonomics" (and thus also the self-documenting value) of the code.
It is true that getTP_COMP_CARS looks ugly (though not, as you claim, "unreadable"). It is also true that if you start adhering to "more modern" naming conventions, someone somewhere will have to maintain a mapping between the synonymous names. (And it is untrue that names such as TP_COMP_CARS are less self-documenting than full natural-language names: usually such names are constructed from a very small set of mnemonic words that are used over and over with the same meaning, making them more than easy enough for anyone to remember.)
There is no right and wrong about this. Names like that were the usual convention in the days before the ones we live in now. At least, those names usually had the benefit of being case-insensitive, as opposed to the braindead (because case-sensitive) naming rules that are imposed upon us by so-called "more modern" systems.
Twenty years from now, people will call the naming conventions we use these days "braindead" too.
Given a collection of files which will have associated metadata, what are the recommended methods for storing this metadata?
Some file formats support storing metadata internally (EXIF, ID3, etc.), but not all file formats do, so what are the more general options?
Some of the metadata would almost certainly be unique (titles/descriptions/etc), whilst some would be repetitive to varying degrees (categories/tags/etc).
It may also be useful to group the metadata, if different types of attribute are required.
Ideally, solutions should cover concepts, rather than specific language implementations.
Storing metadata in a database has some advantages, but the main problem is that the metadata is not directly connected to your data. It is more robust if the metadata stays with the data, as a special file in the directory or something like that.
Some filesystems offer special functionality that can be used for metadata, like NTFS alternate data streams. Unfortunately, these can only be used for metadata storage in special cases, because the streams are easily lost when copying the data to a storage system that does not support them. I believe Linux filesystems have a similar mechanism (extended attributes).
Anyway, the most common solutions are:
separate hidden file(s) (per directory) that hold the metadata
a special hidden directory with metadata (as Subversion, CVS, etc. use)
a database (of various kinds) for all application-specific metadata; in most cases this database can also be used for caching
IMO there is no general-purpose solution. I would choose to store the metadata in a hidden file (for robustness) and use a database for fast access and caching.
One option might be a relational database, structured like this:
FILE
    f_id
    f_location
    f_title
    f_description

ATTRIBUTE
    a_id
    a_label

VALUE
    v_id
    v_label

METADATA
    md_file
    md_attribute
    md_value
This implementation has some unique information (title/description), but is primarily targeted at repetitive groups of data. For some requirements, other, less generic tables may be more useful.
The advantage of this is that relational databases are very common, and obviously very good at handling relationships and storing lots of data.
However, for some uses a database server brings an overhead which might not be desirable.
Also, the database server is distinct from the files - they do not sit together, and require different methods of interaction.
Databases do not (easily) sit under version control - which may be a good or bad thing, depending on your point of view and specific needs.
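For illustration, fetching all files carrying a given tag against the schema above would look something like this (the 'tag' and 'holiday' values are invented; note the EAV-style join chain):

SELECT f.f_location, f.f_title
FROM   FILE      f
JOIN   METADATA  m ON m.md_file = f.f_id
JOIN   ATTRIBUTE a ON a.a_id    = m.md_attribute
JOIN   VALUE     v ON v.v_id    = m.md_value
WHERE  a.a_label = 'tag'
AND    v.v_label = 'holiday';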
I think the "solution" depends greatly upon what you're going to be doing with the metadata.
For example, almost all of the metadata we store (multiple datasets of scientific data) is chopped up and stored in a database. This lets us create datasets that preserve the metadata common between files (as you say, categories and tags) while keeping file-specific structures (title, start/stop time, min/max values, etc.). While we could keep these in hidden files, we do a lot of searching, and we open our interface to outside consumers via web services.
If you're storing metadata that isn't going to be searched on, hidden files or a dedicated .xml file per "real" file isn't a bad route to take. It's readable by basically anything, can be converted to different formats easily, and won't be lost if you decide to change your storage mechanism.
Metadata should help you, not hinder you. I've seen (and been part of) systems where metadata storage became more burdensome than storing the actual data, and turned into a liability. Just keep in mind what you are trying to do with it, and don't overextend yourself with "what ifs".
Plain text has some obvious advantages over anything else. Something like
FileName = 'ferrari.gif'
Title = 'My brand new car'
Tags = 'cars', 'cool'
Related = 'michaelknight.mp3'
Picasa's Picasa.ini files are a good example of this kind of metadata. Also, instead of inventing your own format, XML might be worth considering: there are plenty of readily available DOM processors for dealing with it.
Then again, if the amount of files and relations between them is huge, databases may be better.
I would basically make a metadata DB which held this information:
RESOURCE_TABLE
    RESOURCE_ID
    RESOURCE_TYPE (folder, doctype, web link, other)
    RESOURCE_URL (any URL)

NOTES_TABLE
    NOTE_ID
    RESOURCE_NO
    RESOURCE_NOTE (long text)

TAGS_TABLE
    TAG_ID
    RESOURCE_NO
    TAG_TEXT
Then I would use the note field for textual notes about the file/folder/resource. Decide whether you want 1:1 or 1:N for this.
I would use the tags field to store any number of searchable parameters, like YEAR, PROJECT, and other values that describe and group your content.
Then you could add tables for owners, stakeholders, and other organisational info, etc.
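A sketch of a search across that layout (table and column names from the listing above; the tag value is invented):

-- All resources tagged 'Apollo', with any notes attached
SELECT    r.RESOURCE_URL, n.RESOURCE_NOTE
FROM      RESOURCE_TABLE r
JOIN      TAGS_TABLE     t ON t.RESOURCE_NO = r.RESOURCE_ID
LEFT JOIN NOTES_TABLE    n ON n.RESOURCE_NO = r.RESOURCE_ID
WHERE     t.TAG_TEXT = 'Apollo';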