I've been hearing a lot of talk about schema-less (often distributed) database systems like MongoDB, CouchDB, SimpleDB, etc...
While I can understand they might be valuable for some purposes, in most of my applications I'm trying to persist objects that have a specific number of fields of a specific type, and I just automatically think in the relational model. I'm always thinking in terms of rows with unique integer ids, null/not null fields, SQL datatypes, and select queries to find sets.
While I'm attracted to the distributed nature and easy JSON/RESTful interfaces of these new systems, I don't understand how loosely typed key/value hashes will help me with my development. Why would a loosely typed, schema-less system be good for keeping clean data sets? How can I, for example, find all items with dates between x and y when some of them might not have dates? Is there any concept of a join?
I understand many systems have their own differences and strengths, but I'm wondering at the difference in paradigm. I suppose this is an open-ended question, but perhaps the community's answers and ways they have personally seen the advantages of these systems will help enlighten me and others about when I would want to make use of these (admittedly more hip) systems instead of the traditional RDBMS.
I'll just call out one or two common reasons (I'm sure people will be writing essay answers)
With highly distributed systems, any given data set may be spread across multiple servers. When that happens, the relational constraints which the DB engine can guarantee are greatly reduced. Some of your referential integrity will need to be handled in application code. When doing so, you will quickly discover several pain points:
your logic is spread across multiple layers (app and db)
your logic is spread across multiple languages (SQL and your app language of choice)
The outcome is that the logic is less encapsulated, less portable, and MUCH more expensive to change. Many devs find themselves writing more logic in app code and less in the database. Taken to the extreme, the database schema becomes irrelevant.
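To make that pain point concrete, here is a purely hypothetical sketch (the shard layout, names, and helper are invented for illustration) of what a foreign-key check looks like once it has moved into application code:

```python
# Hypothetical: customers and orders live on different servers, so
# "order.customer_id must exist" can no longer be a FOREIGN KEY the
# engine enforces; the application has to check it itself.
customers_shard = {"c42": {"name": "Alice"}}   # pretend: server A
orders_shard = {}                              # pretend: server B

def create_order(order_id, customer_id, items):
    if customer_id not in customers_shard:     # app-level "FK" check
        raise ValueError("unknown customer")
    orders_shard[order_id] = {"customer_id": customer_id, "items": items}
    # Note the race: the customer could be deleted between the check
    # and the write. The DB engine used to guarantee this; now you do.

create_order("o1", "c42", ["widget"])
```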
Schema management is difficult, especially on systems where downtime is not an option. Reducing schema complexity reduces that difficulty.
ACID doesn't work very well for distributed systems (BASE, CAP, etc.). The SQL language (and the entire relational model, to a certain extent) is optimized for a transactional ACID world. So some SQL language features and best practices are useless, while others are actually harmful. Some developers feel uncomfortable working "against the grain" and prefer to drop SQL entirely in favor of a language designed from the ground up for their requirements.
Cost: most RDBMS systems aren't free. The leaders in scaling (Oracle, Sybase, SQL Server) are all commercial products. When dealing with large ("web scale") systems, database licensing costs can meet or exceed the hardware costs! The costs are high enough to change the normal build/buy considerations drastically towards building a custom solution on top of an OSS offering (all the significant NOSQL offerings are OSS)
The primary concern should be what you need to do with your data. If you have a huge data set and are finding a traditional RDBMS to be a bottleneck, then you may want to experiment with a schemaless or NOSQL solution.
Most environments that I am aware of using NOSQL solutions also use an RDBMS solution in some form or fashion. RDBMS based solutions are the norm where data integrity is extremely important and you need ACID transactions. However if your system is not highly transaction based but you need to scale up or scale out real quick, a NOSQL solution may be desirable.
Schemaless is great for two reasons:
Brain optimising intuitiveness of document storage
Resolves Sparse-Matrix and Entity-Attribute-Value storage problems.
I've used both SQL and No-SQL for production applications in Ruby on Rails. I'm not a database expert and I have to confess to googling ACID and similar terms as they're not familiar to me.
"Ah ha! Another know-nothing trend follower jumping on the latest bandwagon" you may say. But, actually, I'm really pleased with my decision to use MongoDB on our most recent 2 year old app and here's why...
The flip-side of brain-optimising intuitiveness was my experience with the Magento e-commerce system. I don't want to bash it because it served me well at the time but it really hit the processor hard trying to calculate the attributes for each product. The underlying reason was the Entity-Attribute-Value store of product data. Cache or be damned was the solution.
The major advantage to me is the optimisation in the only place that really matters - your own brain. So many technologies are critiqued on their efficiency in memory, processors, hardware and yet having a DB that's extremely intuitive to understand brings its own merits. We've found it quick to add features to our code because the database simply looks a lot like the real world we're modelling. When I've asked e-commerce clients to present me with their product list they will naturally tend to use Excel (think table store). The first columns are easy:
Product Name
Price
Product Type
Then it gets harder: the sheet becomes covered in notes, colour coding and links to other tables (yep.. relationships)
Colour (Only some products)
Size (X Large, Large, Small) - only for products 8, 9 and 10; golf clubs use a different scale
Colour 2. The cat collars have two colour choices.
Wattage
Fixing type (Male, Female)
So it ends in a terrible mess of Excel tables that make no sense to me and not much sense to the people who work with the products day in and day out. We throw our arms in the air and decide to go through the catalogue, and then it hits me: wouldn't it be great if you could store the data as it appears in the catalogue!? Just a collection of records, one per product, each simply listing the attributes of that product. You can then pick out common attributes to index for retrieval at a later date. Of course, that's a document store.
In summary, document stores are great when you have a sparse matrix problem or objects that mutate their attributes over time. Having lived in a No-SQL world for 2 years, I can't think of a real world application that doesn't have those features because the world itself looks like a document store.
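As a rough sketch of that summary (using pymongo and assuming a local MongoDB server; the product names and attributes are invented), each document carries only the attributes its product actually has:

```python
from pymongo import MongoClient

products = MongoClient("mongodb://localhost:27017").shop.products

# Sparse attributes: each record lists only what applies to it,
# just like a line in the printed catalogue.
products.insert_many([
    {"name": "Cat Collar", "price": 4.99, "colours": ["red", "blue"]},
    {"name": "Golf Club",  "price": 89.00, "size": "Stiff Flex"},
    {"name": "Desk Lamp",  "price": 19.50, "wattage": 40, "fixing": "male"},
])

# Common attributes can still be indexed for retrieval later.
products.create_index("price")
cheap = list(products.find({"price": {"$lt": 20}}))
```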
I've only played with MongoDB but one thing that really interested me was how you could nest documents. In MongoDB a document is basically like a record. This is really nice because traditionally, in a RDBMS, if you needed to pull a "Person" record and get the associated address, employer info, etc. you'd frequently have to go to multiple tables, join them up, make multiple database calls. In a NoSQL solution like MongoDB, you can just nest the associated records (documents) and not have to mess with foreign keys, joining, multiple database calls. Everything associated with that one record is pulled.
This is especially handy when dealing with objects. You can in many cases just store an object as a series of nested documents.
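A minimal sketch of what that nesting might look like with pymongo (assuming a local MongoDB server; the collection and field names are invented for illustration):

```python
from pymongo import MongoClient

people = MongoClient("mongodb://localhost:27017").crm.people

# The address and employer live inside the person document, so one
# find_one() replaces the multi-table join an RDBMS would need.
people.insert_one({
    "name": "Jane Doe",
    "address":  {"street": "1 High St", "city": "Leeds"},
    "employer": {"name": "Acme Ltd", "role": "Engineer"},
})

person = people.find_one({"name": "Jane Doe"})
print(person["address"]["city"])  # no join, no second query
```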
NoSQL databases are not schemaless; the schema is embedded in the data. They are properly called semistructured. In some KV data stores, however, the schema may even be embedded in code. The advantage of the semi-structured approach is twofold: flexibility in which columns are part of a row (one row could have 5 columns and another have 5 different columns), and flexibility in the characteristics of the columns (e.g., variable lengths).
Normally the attraction is that of snake oil - most people favouring them have no clue about relational theory and speak SQL at a level that makes professionals puke. No idea what the ACID properties are, why they are important, etc.
Not saying they do not have valid uses.... just saying that mostly the attraction is people not knowing what they should know and drawing poor conclusions. Again, not everyone is like that, but most developers favouring them are not good in their understanding of what a database system is actually responsible for.
Oracle has a good reputation for handling large-scale applications, and it's also flexible enough to extend to cluster environments. Why does everyone want NoSQL?
Because a NoSQL db is much cheaper?
Why not switch to an object-oriented db?
Firstly, not everyone does want NoSQL. Packaged software (eg ERP) is all pretty much mainstream RDBMS stuff. Don't confuse the amount of development effort with the usage.
What has happened is that NoSQL has opened up a whole range of applications that simply didn't suit relational technologies, and so there's been a rush of applications that CAN be developed. Most of the stuff that could be developed on RDBMS platforms already has been and is in more of a maintenance/upgrade phase. Probably with less upgrading than usual because of the whole global financial climate.
So in ten/fifteen years, as those NoSQL apps move into the same level of maturity, the frenzy will have died down and there'll be less excitement.
Oracle also has a NoSQL solution: http://www.oracle.com/technetwork/database/nosqldb/overview/index.html
A SQL solution isn't necessarily what you want regardless of scale.
There are situations where you can't easily predict the schema of your model, or worse still, your data is schemaless. In those situations you want a data model which doesn't limit you but rather allows you the flexibility you need to evolve your data while still maintaining core abilities such as fast indexes.
Another reason is that SQL doesn't always represent the natural way in which you want to look at your data; graph DBs such as Neo4j or GraphDB, in particular, allow developers or users to approach a linked graph model in a more intuitive way.
Of course there is a way to address all these issues in Oracle RDBMS, but it feels more like hacking the DB to fit your needs as opposed to using a DB that fits you. This sounds like a mere perk, but it actually goes a long way toward the ease of development and analysis of your application.
Now if we are talking about scale, Oracle can probably beat column-based DBs such as HBase or Hypertable, but it is important to note that Oracle RDBMS isn't just more expensive, it's way, way more expensive. In today's world even small-time startups have terabytes of data they need to analyze daily. Even small companies can use clusters of 100 machines in the cloud to store their data; for such a company Oracle isn't a viable option, as the annual licensing cost and the hiring of DBAs will prevent startups from using it.
Finally, the last reason why you would start off with NoSQL is speed: bringing up MongoDB and starting development can be done in 5 minutes, and sometimes you want to handle problems as they come up and avoid premature optimization.
If you are willing to let go of consistency, there are no theoretical limits to how much you can scale out some NoSQL solutions.
Some RDBMSs can scale a lot, and Oracle is among the best of them, but no RDBMS lets you trade away consistency, and therefore even the best has a pretty clear theoretical limit to how much it can scale, not to mention the real-world limits.
Some big names on the net just can't rely only on RDBMSs anymore, and many others follow just to be like the big ones. Finally, some solutions really are best fitted to a schema-less structure, but I would guess those don't account for the most part of NoSQL users. The main point is to scale for web 2.0.
This is a very general question you are asking. Are you comparing relational databases to NoSQL databases, or commercial databases to open-source ones? We need to figure this out, as otherwise we are comparing apples to oranges, and you will not get a straight answer.
Here is the breakdown from my perspective.
DB type: If you are comparing relational db vs NOSQL db, you should refer to this link instead.
Cost: If you are comparing from a cost perspective, each has its own cost. Oracle will charge for the license, while NOSQL dbs (using MongoDB as an example) are open source, and you don't have to pay a license. But you will need someone with a good understanding of NOSQL to administer and maintain the site, and such people are a lot harder to find.
Application: What kind of application are you writing? You need to understand the database requirements of your application. If you need to store a lot of unstructured data as key-value entries, you are better off using NOSQL. On the other side of the coin, you would prefer a SQL db if you will join a lot of tables and execute complex SQL.
You mentioned Object Oriented DB, and that is another type of database distinct from both NoSQL and relational DBs. In the end, it depends on your needs. To expand the horizon of choices, some might have heard of hierarchical databases, which mainly live in mainframe environments. :-)
I would like to understand database scalability, so I just listened to a talk about Habits of Highly Scalable Web Applications:
http://techportal.inviqa.com/2010/03/02/habits-of-highly-scalable-web-applications/
In it, the presenter mainly talks about relational database scalability.
I have also read something about MapReduce and column-oriented tables, BigTable, Hypertable, etc., trying to understand which are the most up-to-date methods to scale web application data. But the second group is the one I find hard to place.
Does it serve as a transactional, reliable data store? Or is it just for large-scale access and processing, so that to handle fine-grained operations we will always need to rely on RDBMSs?
Could someone give a comprehensive landscape of these new technologies and how to use them?
Basically it's about using the right tool for the job. Relational databases have been around for decades, which means they are very good at solving the problems that haven't changed in that time - things like keeping track of sales for example. Although they have become the default data store for just about everything, they are not so good at handling the problems that didn't exist twenty years ago - particularly scalability and data without a clearly defined, unchanging schema.
NOSQL is a class of tools designed to solve the problems that are not perfectly suited to relational databases. Scalability is the best known, though it is unlikely to be relevant to most developers. I think the other key use case that we don't see so much of yet is for small projects that don't need to worry about the data storage characteristics at all, and can just use the default - being able to skip database design, ORM and database maintenance is quite attractive.
For e-commerce specifically you're probably better off using SQL, at least in part - you might use NOSQL for product details or a recommendation engine, but you want your sales data in an easily queried SQL table.
I've been hearing things about NoSQL and that it may eventually become the replacement for SQL DB storage methods, due to the fact that DB interaction is often a bottleneck for speed on the web.
So I just have a few questions:
What exactly is it?
How does it work?
Why would it be better than using a SQL Database? And how much better is it?
Is the technology too new to start implementing yet or is it worth taking a look into?
There is no such thing as NoSQL!
NoSQL is a buzzword.
For decades, when people were talking about databases, they meant relational databases. And when people were talking about relational databases, they meant those you control with the Structured Query Language, built on Edgar F. Codd's relational model. Storing data in some other way? Madness! Anything else is just flatfiles.
But in the past few years, people started to question this dogma. People wondered whether tables with rows and columns are really the only way to represent data. People started thinking and coding, and came up with many new concepts for how data could be organized. And they started to create new database systems designed for these new ways of working with data.
The philosophies of all these databases were different. But one thing all of them had in common was that the Structured Query Language was no longer a good fit for using them. So each database replaced SQL with its own query language. And so the term NoSQL was born, as a label for all database technologies which defy the classic relational database model.
So what do NoSQL databases have in common?
Actually, not much.
You often hear phrases like:
NoSQL is scalable!
NoSQL is for BigData!
NoSQL violates ACID!
NoSQL is a glorified key/value store!
Is that true? Well, some of these statements might be true for some databases commonly called NoSQL, but every single one is also false for at least one other. Actually, the only thing NoSQL databases have in common, is that they are databases which do not use SQL. That's it. The only thing that defines them is what sets them apart from each other.
So what sets NoSQL databases apart?
So we made clear that all those databases commonly referred to as NoSQL are too different to evaluate them together. Each of them needs to be evaluated separately to decide if they are a good fit to solve a specific problem. But where do we begin? Thankfully, NoSQL databases can be grouped into certain categories, which are suitable for different use-cases:
Document-oriented
Examples: MongoDB, CouchDB
Strengths: Heterogeneous data, object-oriented workflows, agile development
Their advantage is that they do not require a consistent data structure. They are useful when your requirements, and thus your database layout, change constantly, or when you are dealing with datasets which belong together but still look very different. When you have a lot of tables with two columns called "key" and "value", then these might be worth looking into.
Graph databases
Examples: Neo4j, GiraffeDB.
Strengths: Data Mining
While most NoSQL databases abandon the concept of managing data relations, these databases embrace it even more than those so-called relational databases.
Their focus is on defining data by its relation to other data. When you have a lot of tables with primary keys which are the primary keys of two other tables (and maybe some data describing the relation between them), then these might be something for you.
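For illustration, a hedged sketch of that relation-first style using the official Neo4j Python driver (it assumes a local server at bolt://localhost:7687; the labels, credentials, and data are invented):

```python
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687",
                              auth=("neo4j", "password"))

with driver.session() as session:
    # The relationship itself is a first-class piece of data.
    session.run(
        "CREATE (a:Person {name: $a})-[:KNOWS]->(b:Person {name: $b})",
        a="Alice", b="Bob")

    # "Friends of friends" is one traversal, not a self-join pyramid.
    result = session.run(
        "MATCH (p:Person {name: $a})-[:KNOWS*1..2]->(q) "
        "RETURN DISTINCT q.name AS name", a="Alice")
    for record in result:
        print(record["name"])

driver.close()
```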
Key-Value Stores
Examples: Redis, Cassandra, MemcacheDB
Strengths: Fast lookup of values by known keys
They are very simplistic, but that makes them fast and easy to use. When you have no need for stored procedures, constraints, triggers and all those advanced database features and you just want fast storage and retrieval of your data, then those are for you.
Unfortunately they assume that you know exactly what you are looking for. You need the profile of User157641? No problem, it will only take microseconds. But what about when you want the names of all users who are aged between 16 and 24, have "waffles" as their favorite food, and logged in in the last 24 hours? Tough luck. When you don't have a definite and unique key for a specific result, you can't get it out of your K-V store that easily.
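A small sketch of both sides of that trade-off using redis-py (assuming a local Redis server; key names are invented): lookup by known key is trivial, while the "waffles" query forces you to maintain your own secondary indexes:

```python
import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

# Fast path: you know the key, you get the value.
r.set("user:157641", '{"name": "Alice", "age": 20}')
print(r.get("user:157641"))

# Slow path: "all users aged 20 who like waffles" only works if you
# maintain your own index sets on every write...
r.sadd("idx:age:20", "user:157641")
r.sadd("idx:food:waffles", "user:157641")
print(r.sinter("idx:age:20", "idx:food:waffles"))
```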
Is SQL obsolete?
Some NoSQL proponents claim that their favorite NoSQL database is the new way of doing things, and SQL is a thing of the past.
Are they right?
No, of course they aren't. While there are problems SQL isn't suitable for, it still has its strengths. Lots of data models are simply best represented as a collection of tables which reference each other. Besides, most database programmers were trained for decades to think of data in a relational way, and trying to press this mindset onto a new technology which wasn't made for it rarely ends well.
NoSQL databases aren't a replacement for SQL - they are an alternative.
Most software ecosystems around the different NoSQL databases aren't as mature yet. While there are advances, you still won't find supplemental tools as mature and powerful as those available for popular SQL databases.
Also, there is much more know-how for SQL around. Generations of computer scientists have spent decades of their careers on research into relational databases, and it shows: the literature written about SQL databases and relational data modelling, both practical and theoretical, could fill multiple libraries. How to build a relational database for your data is a topic so well-researched that it's hard to find a corner case without a generally accepted, by-the-book best practice.
Most NoSQL databases, on the other hand, are still in their infancy. We are still figuring out the best way to use them.
What exactly is it?
On one hand, it's a specific system, but the term has also become a generic word for a variety of new data storage backends that do not follow the relational DB model.
How does it work?
Each of the systems labelled with the generic name works differently, but the basic idea is to offer better scalability and performance by using DB models that don't support all the functionality of a generic RDBMS, but still enough functionality to be useful. In a way it's like MySQL, which at one time lacked support for transactions but, exactly because of that, managed to outperform other DB systems. If you could write your app in a way that didn't require transactions, it was great.
Why would it be better than using a SQL Database? And how much better is it?
It would be better when your site needs to scale so massively that the best RDBMS running on the best hardware you can afford and optimized as much as possible simply can't keep up with the load. How much better it is depends on the specific use case (lots of update activity combined with lots of joins is very hard on "traditional" RDBMSs) - could well be a factor of 1000 in extreme cases.
Is the technology too new to start implementing yet or is it worth taking a look into?
Depends mainly on what you're trying to achieve. It's certainly mature enough to use. But few applications really need to scale that massively. For most, a traditional RDBMS is sufficient. However, with internet usage becoming more ubiquitous all the time, it's quite likely that applications that do will become more common (though probably not dominant).
Since someone said that my previous post was off-topic, I'll try to compensate :-) NoSQL is not, and never was, intended to be a replacement for more mainstream SQL databases, but a couple of words are in order to get things in the right perspective.
At the very heart of the NoSQL philosophy lies the consideration that, possibly for commercial and portability reasons, SQL engines tend to disregard the tremendous power of the UNIX operating system and its derivatives.
With a filesystem-based database, you can take immediate advantage of the ever-increasing capabilities and power of the underlying operating system, which have been growing steadily for many years now in accordance with Moore's law. With this approach, many operating-system commands automatically become "database operators" as well (think of "ls", "sort", "find" and the countless other UNIX shell utilities).
With this in mind, and a bit of creativity, you can indeed devise a filesystem-based database that is able to overcome the limitations of many common SQL engines, at least for specific usage patterns, which is the whole point behind NoSQL's philosophy, the way I see it.
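As a toy illustration of that philosophy (an invented sketch, not any particular NoSQL product): a filesystem-backed store where every record is a file, the filename is the key, and shell utilities like find and grep become the query operators:

```python
import json
import pathlib

DB = pathlib.Path("dbdir")
DB.mkdir(exist_ok=True)

def put(key, record):
    # One record per file; the filename is the key.
    (DB / f"{key}.json").write_text(json.dumps(record))

def get(key):
    return json.loads((DB / f"{key}.json").read_text())

def scan(predicate):
    # The shell equivalent would be a find/grep pipeline.
    for path in DB.glob("*.json"):
        record = json.loads(path.read_text())
        if predicate(record):
            yield record

put("order-1", {"total": 42.0, "status": "shipped"})
print(list(scan(lambda r: r["status"] == "shipped")))
```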
I run hundreds of web sites and they all use NoSQL to a greater or lesser extent. In fact, they do not host huge amounts of data, but even if some of them did, I could probably think of a creative use of NoSQL and the filesystem to overcome any bottlenecks - something that would likely be more difficult with traditional SQL "jails". I urge you to google for "unix", "manis" and "shaffer" to understand what I mean.
If I recall correctly, it refers to types of databases that don't necessarily follow the relational form. Document databases come to mind, databases without a specific structure, and which don't use SQL as a specific query language.
It's generally better suited to web applications that rely on performance of the database, and don't need more advanced features of Relation Database Engines. For example, a Key->Value store providing a simple query by id interface might be 10-100x faster than the corresponding SQL server implementation, with a lower developer maintenance cost.
One example is this paper for an OLTP tuple store, which sacrificed transactions for single-threaded processing (no concurrency problem because no concurrency is allowed) and kept all data in memory, achieving 10-100x better performance compared to a similar RDBMS-driven system. Basically, it's moving away from the 'One Size Fits All' view of SQL and database systems.
In practice, NoSQL is a database system which supports fast access to large binary objects (docs, jpgs etc.) using a key-based access strategy. This is a departure from traditional SQL access, which is only good enough for alphanumeric values. Not only the internal storage and access strategy, but also the syntax and the limitations on the display format, restrict traditional SQL. BLOB implementations of traditional relational databases suffer from these restrictions too.
Behind the scenes it is an indirect admission of the failure of the SQL model to support any form of OLTP or support for new data formats. "Support" means not just storage but full access capabilities - programmatic and query-wise, using the standard model.
Relational enthusiasts were quick to modify the definition of NoSQL from Not-SQL to Not-Only-SQL to keep SQL in the picture! This is not good, especially when we see that most Java programs today resort to ORM mapping of the underlying relational model. A new concept must have a clear-cut definition; otherwise it will end up like SOA.
The basis of NoSQL systems lies in the random key-value pair. But this is not new. Traditional database systems like IMS and IDMS did support hashed random keys (without making use of any index), and they still do. In fact, IDMS already has a keyword NONSQL, where they support SQL access to their older network database, which they termed NONSQL.
It's like Jacuzzi: both a brand and a generic name. It's not just a specific technology, but rather a specific type of technology, in this case referring to large-scale (often sparse) "databases" like Google's BigTable or CouchDB.
NoSQL, the actual program, appears to be a relational database implemented in awk using flat files on the backend. Though they profess, "NoSQL essentially has no arbitrary limits, and can work where other products can't. For example there is no limit on data field size, the number of columns, or file size", I don't think it is the large-scale database of the future.
As Joel says, massively scalable databases like BigTable or HBase are much more interesting. GQL is the query language associated with BigTable and App Engine. It's largely SQL tweaked to avoid features Google considers bottlenecks (like joins). However, I haven't heard this referred to as "NoSQL" before.
NoSQL is a database system which doesn't use string-based SQL queries to fetch data.
Instead you build queries using an API the database provides; Amazon DynamoDB is a good example of a NoSQL database.
NoSQL databases are better for large applications where scalability is important.
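A hedged sketch of that API-driven style using boto3 against Amazon DynamoDB (it assumes configured AWS credentials and an existing table named "users" with partition key "user_id", both invented for illustration):

```python
import boto3

# Queries are API calls, not SQL strings.
table = boto3.resource("dynamodb", region_name="us-east-1").Table("users")

table.put_item(Item={"user_id": "157641", "name": "Alice"})
response = table.get_item(Key={"user_id": "157641"})
print(response.get("Item"))
```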
Does NoSQL mean non-relational database?
Yes, NoSQL is different from RDBMS and OLAP. It uses looser consistency models than traditional relational databases.
Consistency models are used in distributed systems like distributed shared memory systems or distributed data store.
How it works internally?
NoSQL database systems are often highly optimized for retrieval and appending operations and often offer little functionality beyond record storage (e.g. key-value stores). The reduced run-time flexibility compared to full SQL systems is compensated by marked gains in scalability and performance for certain data models.
It can work on structured and unstructured data. It uses collections instead of tables.
How do you query such "database"?
Watch SQL vs NoSQL: Battle of the Backends; it explains it all.
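For a concrete taste, here is a hedged pymongo sketch (assuming a local MongoDB server; names are invented) of the kind of query asked about at the top of this page: a date-range filter over documents that may or may not have a date field, where documents lacking the field simply never match:

```python
from datetime import datetime
from pymongo import MongoClient

items = MongoClient("mongodb://localhost:27017").app.items

items.insert_many([
    {"title": "a", "date": datetime(2012, 6, 1)},
    {"title": "b"},  # no date at all; it just won't match below
])

items.create_index("date")  # keeps the range scan fast
found = items.find({
    "date": {"$gte": datetime(2012, 1, 1), "$lte": datetime(2012, 12, 31)}
})
print([doc["title"] for doc in found])  # -> ['a']
```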
When would a database design be described as overnormalized? Is this characterization an absolute one? Or is it dependent on the way it is used in the application? Thanks.
In the general sense, I think a database is overnormalized when you are doing so many JOINs to retrieve data that it causes notable performance penalties and deadlocks, even after you've tuned the heck out of your indexes. Obviously, for huge applications and sites like MySpace or eBay, de-normalization is a scaling requirement.
As a developer for several small businesses, I tell you that in my experience it's always been easier to go from normalized -> denormalized than the other way around, and in fact going the other way around (to avoid duplication of data now that the business requirements have changed a year or so later) is much more difficult.
When I read general statements such as "you should put the address in your customers table instead of a separate address table so you can avoid the join", I shudder, because you just know that a year from now somebody's going to ask you to do something with addresses that you totally didn't foresee, like maintaining an audit trail or storing multiple addresses per customer. If your database allows you to create an indexed view, you can sidestep that issue until you get to the point where your dataset is so large that it can't possibly exist or be served by a single server or set of servers in a 1-write, many-read environment. For most of us, I don't think that scenario happens very often.
When in doubt, I aim for third normal form with some exceptions (for example, having a field contain a comma-separated list of strings because I know I'll never look at the data from the other angle). When I need to consolidate, I'll look at my views or indexes first. Hope this helps.
It's always a question of the application domain. It's usually a question of correctness, but occasionally a question of performance.
There's one case where I can think of a prima facie case of overnormalization: say you have an order and an orderitem, and the orderitem references productID and leaves pricing to product.price. Since that introduces temporal coupling, you've normalized incorrectly, because the overnormalization affects already-shipped orders, unless prices absolutely never change. You can certainly argue that this is simply a modeling error (as in the comments), but I see under-normalization as a modeling error in most cases, too.
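A minimal sqlite3 sketch of the fix (the schema and names are invented for illustration): snapshot the price onto the order item at sale time, so a later price change cannot rewrite orders that have already shipped:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE product   (id INTEGER PRIMARY KEY, name TEXT, price REAL);
    CREATE TABLE orderitem (order_id INTEGER, product_id INTEGER,
                            unit_price REAL);  -- copied at sale time
""")
con.execute("INSERT INTO product VALUES (1, 'Widget', 9.99)")

def add_item(order_id, product_id):
    (price,) = con.execute("SELECT price FROM product WHERE id = ?",
                           (product_id,)).fetchone()
    con.execute("INSERT INTO orderitem VALUES (?, ?, ?)",
                (order_id, product_id, price))

add_item(1001, 1)
con.execute("UPDATE product SET price = 12.99 WHERE id = 1")  # price rise
print(con.execute("SELECT unit_price FROM orderitem").fetchone())  # (9.99,)
```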
The other category is performance related. In principle, I think there are generally better solutions to performance than denormalizing data, such as materialized views, but if your application suffers from the performance consequences of many joins, it may be worth assessing whether denormalizing can help you. I think these cases are often over-emphasized, because people sometimes reach for denormalization before they properly profile their application.
People also often forget about alternatives, like keeping a canonical form of the database and using warehousing or other strategies for frequently-read, but infrequently changed data.
Normalization is absolute. A database follows Normal Forms or it does not. There are a half-dozen normal forms. Mostly, they have names like First through Fifth. Plus there's a Boyce-Codd Normal Form.
Normalization exists for precisely one purpose -- to prevent "update anomalies".
Normalization isn't subjective. It isn't a judgement. Each table and relationship among tables either does or does not follow a normal form.
Consequently, you can't be "over-normalized" or "under-normalized".
Having said that, normalization has a performance cost. Some people elect to denormalize in various ways to improve performance. The most common sensible denormalization is to break 3NF and include derived data.
A common mistake is to break 2NF and have duplicate copies of a functional dependency between a key and non-key value. This requires extra updates or -- worse -- triggers to keep the copies in parallel.
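To make the distinction concrete, a small sqlite3 sketch (names invented): the derived total is a deliberate 3NF break that is manageable as long as every write path keeps it in step, whereas duplicating a key-to-non-key fact (say, the customer name on every order row) is the 2NF mistake that demands triggers:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE orders (id INTEGER PRIMARY KEY, total REAL DEFAULT 0);
    CREATE TABLE lines  (order_id INTEGER, amount REAL);
""")
con.execute("INSERT INTO orders (id) VALUES (1)")

def add_line(order_id, amount):
    con.execute("INSERT INTO lines VALUES (?, ?)", (order_id, amount))
    # The derived copy must be maintained by every write path.
    con.execute("UPDATE orders SET total = total + ? WHERE id = ?",
                (amount, order_id))

add_line(1, 9.99)
add_line(1, 5.00)
print(con.execute("SELECT total FROM orders WHERE id = 1").fetchone())
```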
Denormalization of a transactional database should be a case-by-case situation.
A data warehouse, also, rarely follows any of the transactional normalization rules because it's (essentially) never updated.
"Over-normalization" could mean that a database is too slow because of a large number of joins. This may also mean that the database has outgrown the hardware. Or that the applications haven't been designed to scale.
The most common issue here is that folks try to use a transactional database for reporting while transactions are going on. The locking for transactions interferes with reporting.
"Under-normalization," however, means that there are NF violations and needless processing is being done to handle the replicated data and correct update anomalies.
When the performance cost exceeds the benefit towards the application's intended purpose.
Normalize your OLTP databases, and denormalize your OLAP databases. Each has a mission that dictates its schema. Like normalized transaction databases, data warehouses exist for a reason. A complete system needs both.
A lot of people are talking about performance. I think a key issue is flexibility. In general, the more normalized your database, the more flexible it is.
We currently use an "over-normalized" database because, in our operating environment, client requirements change on a monthly basis. By "over-normalizing" we can adapt our software accordingly, without changing the database structure.
My take on this:
Always normalize as much as you are able to. I usually go crazy on normalization and try to design something that could handle every thinkable future extension. What I end up with is a database design that is extremely flexible... and impossible to implement.
Then the real job starts: De-normalization. Here you solve what you know would be problematic to implement and/or would slow the queries down because of too many joins.
This way you know what you sacrifice to make the design usable.
Edit: Documentation! I forgot to mention that documenting the de-normalization is very important. It is extremely helpful when you take over a project to know the reason behind the choices.
Third Normal Form (3NF) is considered the optimal level of normalization for many a relational database application. This is a state in which, as Bill Kent once summarized, every "non-key field [in every table within a particular relational database management system, or RDBMS] must provide a fact about the key, the whole key, and nothing but the key." 3NF is a term that was introduced by E.F. Codd, inventor of the relational model for database management. Generally, the data that a software application depends on, especially an application used for an Online Transaction Processing (OLTP) system, will fare well in 3NF. This normal form by definition reduces database size by calling for minimum repetition of row/column data, and maximizes query efficiency and ease of application maintenance. 3NF achieves that by requiring that a database's tables (i.e., its schema) be broken down into separate tables related by primary/foreign keys--basically until Kent's rule holds true (well, I've stated it this way for ease of reading, but the actual definition of 3NF is much more detailed than that).
In contrast, overnormalization implies increasing the number of joins required in a query between related tables. This comes as a result of breaking down the database schema into a much more granular level than 3NF. However, though normalization past the 3rd degree can often be considered overnormalization, the negative connotation of the term is sometimes unwarranted. Overnormalization may be desirable in some applications which by design require 4NF (and beyond) due to the complexity and versatility of the application software. An example is a highly customizable and extensible commercial database program sold to end users requiring an open API. The reverse can be desirable as well--that is, denormalization--most notably when designing an Online Analytical Processing (OLAP) database used strictly to summarize data from an OLTP database for querying/reporting, such as a data warehouse. In this case the data must by necessity reside in a highly denormalized format (i.e., 1NF or 2NF).
It's often under these constraints--when there are high demands for efficient querying and reporting--that we find database and application programmers calling a database "overnormalized". But as Redgate's Tony Davis once said--taking into account today's much more advanced and efficient database software and storage systems--"the performance hit from multiple joins in a query is negligible. If your database is slow, it isn't because it is 'over-normalized'!"
So in conclusion, this characterization--overnormalization--isn't an absolute one, and it is dependent on the way it is used in the application. In Kent's words, "The normalization rules are designed to prevent update anomalies and data inconsistencies. . . [but] there is no obligation to fully normalize all records when actual performance requirements are taken into account. . . The normalized design enhances the integrity of the data, by minimizing redundancy and inconsistency, but at some possible performance cost for certain retrieval applications. . . [Thus,] the desirability of normalization has to be assessed, in terms of its performance impact on retrieval applications."
...or hitting limits on the number of joins your RDBMS will do.
If performance is affected by too many joins, creating de-normalized tables for reporting purposes can speed things up. By copying the data into new tables, it may be possible to run reports with no joins at all.
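A minimal sqlite3 sketch of that approach (tables and data invented): build the denormalized reporting table once from the joined sources, then read it with no joins at all:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders    (id INTEGER PRIMARY KEY,
                            customer_id INTEGER, total REAL);
    INSERT INTO customers VALUES (1, 'Acme');
    INSERT INTO orders    VALUES (10, 1, 99.0);

    -- One denormalized copy, rebuilt on a schedule.
    CREATE TABLE report_sales AS
    SELECT o.id AS order_id, c.name AS customer, o.total AS total
    FROM   orders o JOIN customers c ON c.id = o.customer_id;
""")
# Reports now read report_sales directly, with no joins.
print(con.execute("SELECT * FROM report_sales").fetchall())
```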
In my experience, I've never seen a truly normalized database that contains postal addresses, as it's usually acceptable to store the address as a string. Ideally, there would be tables for countries, counties/states, cities, districts and streets. I've not come across anyone who needs to report at street level, so it hasn't been necessary. The addresses have only been used for postal contact, so they are treated as a single entity.