To what extent should a developer learn specifics about database systems? [closed]

Modern database systems come with loads of features, and to learn one database you often have to unlearn concepts you picked up from another. For example, each database implements locking differently, so carrying assumptions from one system to another can be a recipe for failure. There are plenty of other areas where two databases will perform very differently.
So when developing database-driven systems, do programmers need to know the database in detail so that they can code for performance? I don't think it is appropriate to call in the DBA to fix performance later; their job is to maintain the database and help the developer in an emergency, not on a regular basis.
To what extent do you think a developer needs insight into the database?

I think these are the most important things (from most important to least, IMO):
SQL (obviously) - It helps to know at least how to do basic queries, aggregates (sum(), etc.), and inner joins (see the example after this list)
Normalization - DB design skills are a major requirement
Locking Model/MVCC - It's nice to have at least a basic grasp of how your database manages row locking (or uses MVCC to accomplish similar goals with optimistic locking)
ACID compliance, transactions - Please know how these work and interact
Indexing - While I don't think you need to be an expert in tablespaces, placing data on separate drives for optimal performance, and other minutiae, it does help to have a high-level understanding of how index scans work vs. table scans. It also helps to be able to read a query plan and understand why the database might choose one over the other.
Basic Tools - You'll probably find yourself wanting to copy production data to a test environment at some point, so knowing the basics of how to restore/backup your database will be important.
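A minimal sketch tying the SQL and indexing points together (the table and column names are invented, and the EXPLAIN syntax shown is PostgreSQL's):
-- Aggregate over an inner join:
SELECT c.Name, SUM(o.Total) AS TotalSpent
FROM Customer c
INNER JOIN CustomerOrder o ON o.CustomerId = c.CustomerId
GROUP BY c.Name;
-- Ask the engine for its query plan:
EXPLAIN SELECT Name FROM Customer WHERE CustomerId = 42;
-- The plan will show an index scan if CustomerId is indexed,
-- or a sequential (table) scan otherwise.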
Fortunately, there are some great FOSS and free commercial databases out there today that can be used to learn quite a bit about db fundamentals.

I think a developer should have a fairly good grasp of how their database system works, no matter which one it is. When making design and architecture decisions, they need to understand the possible implications when it comes to the database.

Personally, I think you should know how databases work, as well as the relational model and the reasoning behind it, including all forms of normalization (even though I rarely see a need to go beyond third normal form). The core concepts of the relational model do not change from relational database to relational database - the implementation may, but so what?
Developers that don't understand the rationale behind database normalization, indexes, etc. are going to suffer if they ever work on a non-trivial project.
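As a tiny, purely illustrative example of that rationale (the tables are made up), the unnormalised design on top repeats customer details on every order, while the normalised pair below stores each fact once:
-- Unnormalised: customer details repeated on every order row.
CREATE TABLE OrderFlat (
    OrderId       int PRIMARY KEY,
    CustomerName  varchar(100),
    CustomerEmail varchar(100),
    OrderDate     date
);
-- Normalised (roughly 3NF): customer facts live in one place.
CREATE TABLE Customer (
    CustomerId int PRIMARY KEY,
    Name       varchar(100),
    Email      varchar(100)
);
CREATE TABLE CustomerOrder (
    OrderId    int PRIMARY KEY,
    CustomerId int REFERENCES Customer (CustomerId),
    OrderDate  date
);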

I think it really depends on your job. If you are a developer in a large company with dedicated DBAs then maybe you don't need to know much, but if you are in a small company then it may be really helpful knowing more about databases. In small companies you may wear more than one hat.
It cannot hurt to know more in any situation.

It certainly can't hurt to be familiar with relational database theory, and have a good working knowledge of the standard SQL syntax, as well as knowing what stored procedures, triggers, views, and indexes are. Obviously it's not terribly important to learn the database-specific extensions to SQL (T-SQL, PL/SQL, etc) until you start working with that database.
I think it's important to have a basic understanding of databases when developing database applications, just like it's important to have an understanding of the hardware your software runs on. You don't have to be an expert, but you shouldn't be totally ignorant of anything your software interacts with.
That said, you probably shouldn't need to write much SQL as an application developer. Most of the interaction with the database should be done through stored procedures developed by the DBA; I'm not a big fan of including SQL code in your application code. If your queries are in stored procedures, the DBA can change the implementation of the stored procedure, or even the database schema, and as long as the result is the same it doesn't require any changes to your application code.
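A hedged sketch of that pattern in T-SQL (the table, columns, and procedure name are just illustrative):
CREATE PROCEDURE GetCustomerOrders
    @CustomerId int
AS
BEGIN
    SELECT o.OrderId, o.OrderDate, o.Total
    FROM dbo.Orders o
    WHERE o.CustomerId = @CustomerId;
END
The application only ever calls GetCustomerOrders; if the Orders table is later split or renamed, only the procedure body has to change.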

If you are uncertain about how best to access the database, you should use tried and tested solutions like the application blocks from Microsoft - http://msdn.microsoft.com/en-us/library/cc309504.aspx. Examining how that code is implemented can also be instructive.

The basics of SQL queries are a must; with those you can develop a simple system. But when you are going to implement complex systems, you should also know normalization, procedures, functions, etc.


Best practices for creating a data model [closed]

For a current project I'm creating a data model. Are there any sources where I can find "best practices" for a good data model? Good means flexible, efficient, with good performance and style. Some example questions would be "how to name columns", "what data should be normalized", or "which attributes should be split out into a table of their own". The source should be a book :-)
Personally, I think you should read a book on performance tuning before beginning to model a database. The right design can make a world of difference. If you are not an expert in performance tuning, you aren't qualified to design a database.
These books are database-specific; here is one for SQL Server.
http://www.amazon.com/Server-Performance-Tuning-Distilled-Experts/dp/1430219025/ref=sr_1_1?s=books&ie=UTF8&qid=1313603282&sr=1-1
Another book that you should read before starting to design is about antipatterns. Always good to know what you should avoid doing.
http://www.amazon.com/SQL-Antipatterns-Programming-Pragmatic-Programmers/dp/1934356557/ref=sr_1_1?s=books&ie=UTF8&qid=1313603622&sr=1-1
Do not get stuck in the trap of designing for flexibility. People use that as a way to get out of doing the work of designing correctly, and flexible databases almost always perform badly. If more than 5% of your database design depends on flexibility, you haven't modeled correctly, in my opinion. All the worst COTS products I've had to work with were designed for flexibility first.
Any decent database book will discuss normalization. You can also find that information easily on the web. Be sure to actually create FK/PK relationships.
As far as naming columns goes, pick a standard and stick with it consistently. Consistency is more important than the actual standard. Don't name columns ID (see the SQL Antipatterns book). Use the same name and datatype if a column is going to appear in several different tables. What you are going for is to not have to use functions to do joins because of datatype mismatches.
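An illustrative sketch of those points (the tables and columns are assumptions, not from the original post) - the key column keeps the same name and datatype in both tables, so the join needs no conversion functions, and the PK/FK relationship is declared explicitly:
CREATE TABLE Customer (
    CustomerId  int          NOT NULL PRIMARY KEY,
    Name        varchar(100) NOT NULL
);
CREATE TABLE CustomerOrder (
    OrderId     int          NOT NULL PRIMARY KEY,
    CustomerId  int          NOT NULL REFERENCES Customer (CustomerId),
    OrderDate   date         NOT NULL
);
SELECT c.Name, o.OrderId, o.OrderDate
FROM Customer c
INNER JOIN CustomerOrder o ON o.CustomerId = c.CustomerId;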
Always remember that databases can (and will) be changed outside the application. Anything that is needed for data integrity must be in the database not the application code. The data will be there long after the application has been replaced.
The most important things for database design:
Thorough definition of the data needed (including correct datatypes) and the relationships between pieces of data (including correct normalization)
Data integrity
Performance
Security
Consistency (of datatypes, naming standards, etc.)
The best book I've read on the design of database systems was "An Introduction to Database Systems". Joe Celko's SQL for Smarties books are also worth reading.
Assuming you're building an application and not just a database, and assuming you're using an Object Oriented language, Applying UML and Patterns by Craig Larman has a good discussion on mapping databases to objects.
In terms of defining "good", in my experience "maintainable" is probably top of the list. Maintainability in database design means many things, such as sticking to conventions - I often recommend http://justinsomnia.org/2003/04/essential-database-naming-conventions-and-style/. Normalization is another obvious maintainability strategy. I often recommend being generous with column types - it's hard to change an application if you find out that postal codes in different countries are longer than in the US. I often recommend using views to abstract complex data relations away for less experienced developers.
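A small sketch of the view idea (the view and the underlying tables are hypothetical):
CREATE VIEW CustomerOrderSummary AS
SELECT c.CustomerId, c.Name, COUNT(o.OrderId) AS OrderCount
FROM Customer c
LEFT JOIN CustomerOrder o ON o.CustomerId = c.CustomerId
GROUP BY c.CustomerId, c.Name;
-- Less experienced developers can query the view without knowing the join:
SELECT Name FROM CustomerOrderSummary WHERE OrderCount > 10;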
A key thing with maintainability is the ability to test and deploy. It's worth reading up about Continuous Database Integration (http://www.codeproject.com/KB/architecture/Database_CI.aspx) - whilst not strictly associated with the design of the database schema, it's important context.
As for performance - I believe you should design for maintainability first, and only design for performance if you know you have a problem. Sometimes, you know in advance that performance will be a major problem - designing a database for Facebook (or Stack Exchange), designing a database with huge amounts of data (terabytes and up), or huge numbers of users. Most systems don't fall into that camp - so I recommend regular performance tests, with representative data, to find if you have a problem, and only tune when you can prove you have to. Many performance optimizations are at the expense of maintainability - denormalization, for instance.
Oh, and in general, avoid triggers and stored procedures if you can. That's just my opinion, though...
Even though it is not a book, I recommend reading "Query Evaluation Techniques for Large Databases". It gives a background on query processing, which largely influences your schema design, especially for data-intensive (e.g., analytical) workloads. It is less hands-on, but I believe every database designer should read it at least once :-).

Which database if learning from scratch in 2010? [closed]

If someone knew little about databases and wanted to learn about them from scratch, which database would you recommend learning with and why?
MySQL seems ubiquitous, but are there others that are more modern that have learned lessons from the past, or others that are simply nicer or more logical to work with?
Universal compatibility/libraries is not a big concern, unless it is something truly obscure. Mac (Unix) compatibility is a must.
If you just want to learn the SQL language, and not database administration, I would recommend working with SQLite. If you're on a Mac, it should already be installed. It is a much simpler system than most RDBMSes; there is no server to set up and no client to connect to the server. There are no directories of cryptic files, or anything of the sort. To get started, you can just type:
sqlite3 mydatabase.db
And start working with it. It's so much lighter weight and easier to set up and use than the other database systems that I think it's a good choice for a beginner.
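From there, a first session might look something like this (the table is just an illustration):
CREATE TABLE person (id INTEGER PRIMARY KEY, name TEXT, age INTEGER);
INSERT INTO person (name, age) VALUES ('Alice', 30);
INSERT INTO person (name, age) VALUES ('Bob', 25);
SELECT name FROM person WHERE age > 26;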
Now, SQLite's SQL support is fairly small and lightweight. If you need to get into really complex queries and data mining, I would recommend PostgreSQL. It has a fairly advanced query optimizer and a pretty long list of SQL features.
And if you want to learn a database as something to use for back-end storage for web programming or something of the sort, MySQL is what I'd choose. It's ubiquitous, supported by almost any web host, and it's pretty fast for very simple queries and updates, which is generally what you need for a web system. It has some real gotchas to avoid when setting it up; you have to choose between several different storage engines, and it can take a lot of work to convince it to actually work with Unicode data. But it's good to learn mostly for its ubiquity.
From what I've seen (at least on the web), MySQL and PostgreSQL are the most ubiquitous free database systems. If you're considering learning one of them, check this comparison out.
You may also want to consider learning SQLite, a "self-contained, embeddable, zero-configuration SQL database engine." It's really easy to get up and going, it is stored in a single file, and as its description says, it has no complicated configuration. SQLite has proved enormously popular as a persistent data store for local apps on the desktop/iPhone. If you're going down this route on a Mac/iPhone, you may also want to check out Core Data, an abstraction layer Apple developed on top of SQLite (though it can work with pretty much any DB) to simplify working with a database. As a bonus, Core Data includes a nice GUI for modeling relationships and entities. You can check out this tutorial for more information.
If you really, truly, want to "learn from scratch", then theory is the first thing to learn. And that means: not products, not any of them. Not DB2, not MySQL, not Oracle, not any of them.
Hugh Darwen has a freely available e-book entitled "An Introduction to Relational Database Theory". The material is quite "accessible" and quite unlike most other theory textbooks. It's also the accompanying textbook for his university course on database technology.
Chris Date has several books, of which "An Introduction to Database Systems" is the most comprehensive; it is also the standard textbook in the field, though maybe a little too abstract for some.
If you think that all you need is "just to know a product" and that you can do equally well "without all that theory", then in that case, please disregard this response, because the wording of your question is dishonest.
The sad thing about databases is that each and every one works a bit differently. I would most likely pick MySQL first and play with it a bit, then get PostgreSQL and do the same.
If you need to use databases in a corporate environment, then I would also aim to test Oracle and SQL Server, which both have express versions that you can install for free.
http://www.microsoft.com/express/sql/download/
http://www.oracle.com/technology/software/products/database/index.html
At the start, all databases are more or less confusing, but I would pick MySQL first because it can do most of the basic functionality and has a lot of help available.
I'd go with MySQL. It's easy to set up, easy to mess around with via the mysql client, and it's well documented. If you're just starting out, you probably won't need most of the features offered by other databases, like stored procedures and the like.
First of all, MySQL is both ubiquitous and modern.
ANSI SQL is more or less the same in all RDBMSs, so you can learn any of them and you'll be fine.
Once you've mastered ANSI SQL, all you've got left are the vendor-specific extensions, which won't be portable to other systems - and so are best avoided unless they simplify your task enough to justify it.
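Even something as basic as limiting a result set differs between products; a small illustration (the table name is made up):
-- Standard SQL (SQL:2008), supported by recent versions of most engines:
SELECT name FROM employee ORDER BY name FETCH FIRST 10 ROWS ONLY;
-- MySQL / PostgreSQL / SQLite:
SELECT name FROM employee ORDER BY name LIMIT 10;
-- SQL Server (T-SQL):
SELECT TOP 10 name FROM employee ORDER BY name;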
MySQL, PostgreSQL, SQLite - pick one. PostgreSQL is more like Oracle, and in my opinion a bit more mature. It's had stored procedures, triggers, and referential integrity longer than MySQL has. I'll admit that I have both installed, but I use MySQL more often because it's quick and easy.
But do be aware that non-SQL alternatives are out there and growing in importance. BigTable and object databases like db4o are worth knowing about; "NoSQL" is out there.
If you are just getting your feet wet, MySQL is a great one to start with. It's an easy install on any platform, with great community support and lots of free tools to work with (SQLyog is a favorite of mine).
I agree that theory is very important. Depending on how you learn best, digging in and tinkering may be the thing to do before you try to absorb 40 years of thought on relational systems.
Codd and Date are legends in the field and can help you understand the broader points of relational theory, but are hard to absorb before you have context for the topic.
If you are looking for more pragmatic/immediate guidance, I'd suggest a book like "Databases for Mere Mortals" and anything written by Joe Celko.
Once you get comfortable with the basics, there are lots of other platforms to explore as well. As mentioned above, SQLite and Postgres are two other great choices for Mac OS.
If you want to learn SQL, the best way is to choose a database that implements more of the SQL standard, so I would recommend Firebird or PostgreSQL.
I might be "sidetracking" a bit with this answer, but I think we're in the same situation!
Check out the "The Manga Guide to Databases"! I haven't read it myself yet, but it's on the way in the mail as we speak! I've heard good things about it from friends and colleges, and it's got some good reviews as well. Albeit a bit "controversial," it's supposed to be a fun and surprisingly in-depth introduction to fundamental techniques and principles!
Alex wrote: "Reading a textbook without incrementally testing your knowledge on an actual database is not going to produce good results for the majority of people."
My book and my university course both use Dave Voorhis's Rel for that very purpose.
Hugh

Why did object oriented databases fail? [closed]

Why did object oriented databases fail?
I find it astonishing that:
foo bar = new foo();
bar.saveToDatabase();
Lost to:
foo bar = new foo();
/* write complicated code to extract stuff from foo */
/* write complicated code to write stuff to database */
Related questions:
Are object oriented databases still in use?
Object Oriented vs Relational Databases
Why have object oriented databases not been successful (yet)?
Probably because of their coupling with specific programming languages.
First, I don't believe they have "failed" entirely. There are still some around, and they're still used by a couple of companies, as far as I know.
Anyway, it's probably because a lot of the data we want to store in a database is relational in nature.
The problem is that while yes, OO databases are easier to integrate into OO programming languages, relational databases make it easier to define queries and, well, relations between the data stored. Which is often the complicated part.
I have been using db4o (an object oriented database) lately on my personal pet projects, just because it is so quick to set up and get going. No need to fuss with the nitty-gritty details.
That aside, as I see it, the main reasons why they haven't become popular are:
Reporting is more difficult on object oriented databases. Related to this, it is also easier to manually look into the actual data in a relational database.
Microsoft and Oracle base so much of their business on relational databases.
Lots of businesses already have relational databases in place.
The IT departments often have relational database expertise.
And, as Jan Aagaard has pointed out, lately it is because the problem has been solved in a different way: programmers get an object-oriented feel even though they are programming against a relational database.
There are countless numbers of existing applications out there storing their data in relational databases. This data is the lifeblood of those companies. They have collectively invested huge amounts in storing, maintaining and reporting on this data. The cost and risk of moving this priceless information into a fundamentally different environment is extremely high.
Now consider that ORM tools can map modern application data structures onto traditional relational models, and you remove pretty much any incentive to migrate to an OODBMS. They provide a low-risk alternative to a very costly, high-risk migration.
Because, as much as ODBMS advertisements were laden with derogatory language about ORM systems, it wasn't that hard to make ORMs do the job, and without all the various hits taken in switching to a pure ODBMS.
What actually happened is that your first code sample won, it just happens to be on a RDBMS persistence layer.
I think it is because the problem was solved differently. You might be using a relational database behind the scenes when you are coding in Ruby on Rails or LINQ to SQL, but it feels like you are working with objects.
Very subjective, but a few reasons come to mind:
Performance has not been as good as relational databases (or at least that's the perception)
Again with performance - relational databases allow you to do things like denormalizing data to further improve performance.
Legacy support for all the non-OO apps that need to access the data.
I think a lot of your answer lies in the "Why we abandoned Object Databases" answer of "Object Oriented vs Relational Databases".
As far as your example goes, it doesn't have to be that way. Linq to SQL is actually a quite nice basic layer over a DBMS, and Linq to Entities (v2 -- v1 sucked) will be pretty cool too. (N)Hibernate has been solving the problem you're having for years now using RDBMSes.
So I guess my answer to you is that O/R mappers are getting to the point where they solve your problem nicely, and you don't need an ODBMS to get what you need.
They will succeed some day. They are the future.
Looking back at software technologies through history, the trend is sacrificing performance to reduce complexity (Assembly => C => C++ => .NET). An application that takes 30 minutes to code now would once have taken a month.
ORMs are the right answer to the wrong question. Currently they are the choice, since they make life easier in the absence of a better solution, but they cannot handle the level of complexity they were aimed at. "Problems cannot be solved by the same level of thinking that created them." - A.E.
As others mentioned, relational databases are heavily used and relied upon, and replacing them carries a lot of risk. Look at the interval between SQL Server versions and the major changes between those versions and other Microsoft products (a conservative approach, which is necessary here). I'll also add the following items:
The current approach still works. You may argue it will work forever (we can still write assembly), but to me it stops making sense once the average complexity of projects, and the time it takes to develop them on relational databases, starts ringing the bell.
Major companies have not gotten seriously involved. When the market signals, they will.
The problem is not well defined yet. Unfortunately, it is the current failures that are helping to define it.
It needs improvements in other sciences (QC, AI) rather than just computing. Storing and querying multidimensional data on a flat infrastructure, without enough smartness for self-organization, are the top obstacles at the theoretical level.
Why not?
I guess they were a solution to a problem nobody was having, or not having enough to pay for it.
Further, OOP and set-based programming are not always very compatible.
Personally, when I started reading about OO databases, I couldn't help but think "Boy, I hope I never have to work on one of those, update 1 million rows out of a 6 million row table and then make sure all appropriate records in other tables get updated as well"

Best way to design database tables with InterBase?

I am about to start redoing a company database in a proper fashion. Our current database is a mess and has little to no documentation. I was wondering what people recommend using when designing an InterBase database? Is there some sort of good visual schema designer that will generate the SQL? Is it better to do it all by hand?
Basically, what are the steps people usually take when designing and documenting a database? If it matters, I intend to use Hibernate as an ORM for the database. (Specific tips for InterBase would be appreciated as well.)
thanks!
Usually, I use a text editor. Occasionally, I use Database Workbench. Last I heard, Embarcadero was going to add InterBase support to some of their database modeling tools, but I don't know if that has shipped yet.
If this is a brand new database app that has recently been created and is not being relied on in the business, then I say full steam ahead with your re-write / new database. I suspect however that you are dealing with a database that has been around for a few years and is heavily used.
If I am right about the database being a few years old, I strongly recommend against starting from scratch. Almost any production database that has been around a few years will be "messy". This is usually because the real world requirements for programs usually demand the solutions to be somewhat messy. This will be true of your brand new database (should you go this route) a few years from now as well.
Here are some reasons I would not recreate a production database from scratch:
The live database contains years worth of transactions and customer data that is very valuable. It will be very difficult to transfer this data into a completely different database structure. Believe me, even if the company tells you now they will not need to access this old data, they will.
Many business rules have probably been built into the database structure, in the form of defaults, triggers, stored procedures, and even the data types of the columns (see the small sketch after this list). Without examining these very carefully and documenting them, you are likely to leave them out of your new database and spend much time debugging and adding them back in when people start using the system and discover the rules are not being applied properly
You are liable to make mistakes in your new database design, or realise later that the structure needs to change to accommodate a new feature. If you have been making changes to your current database, and learning from that, future changes become easier and more intuitive.
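As a purely illustrative sketch of the kind of rules that hide in a schema (generic SQL; the table, columns, and rules are invented, not taken from the original post):
CREATE TABLE Invoice (
    InvoiceId  int           NOT NULL PRIMARY KEY,
    Status     varchar(10)   DEFAULT 'OPEN' NOT NULL
               CHECK (Status IN ('OPEN', 'PAID', 'VOID')),
    Amount     numeric(10,2) NOT NULL CHECK (Amount >= 0),
    CreatedOn  date          DEFAULT CURRENT_DATE NOT NULL
);
-- Each default and check here is a business rule that a from-scratch
-- rewrite would have to rediscover and re-implement.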
Here is the approach I recommend:
Understand and document the current database, which will give you a really good understanding of the information flows in your business.
When you see what appears to be bad or messy design, look at it carefully. You may be right, and see potential for change, or you might find a trade off was made for performance or other reasons, and you can learn from this.
Make incremental improvements to the database structure, being sure to update documentation, alter the programs that rely on those areas (or work with your programmer if that isn't you).
I know this seems like a very long way around, but take it from someone who has been maintaining and creating databases for 12 years now - your current database is probably messy because the real-world requirements are messy.

How much of an application's "smarts" should reside in the database? [closed]

I've noticed a trend lately that people are moving more and more processing out of databases and into applications. Some people are taking this to what seems to me to be ridiculous extremes.
I've seen application designs that not only banned all use of stored procedures, but also banned any kind of constraints enforced at the database (this would include primary key, foreign key, unique, and check constraints). I have even seen applications that required the use of only one data type stored in the database, namely varchar(2000). DateTime and number types were not allowed. Transactions and concurrency were also handled outside the database.
Has anyone seen these kind of applications implemented successfully? Both of the implementations I've dealt with that were implemented this way had all kinds of data integrity and concurrency problems. Can anyone explain this trend to move stuff (logic, processing, constraints) out of the database? What is the motivation behind it? Is it something I'm imagining?
Firstly, I really hope there is no trend towards databases without PKs and FKs and sensible datatypes. That would really be a tragedy.
But there is definitely a large core of developers who prefer putting logic in their apps than in stored procedures. I agree with Riho on the main reason for this: usually, DBAs manage databases, meaning that a developer has to go through a bunch of administrative overhead -- getting approvals from the DBA -- in order to create and update stored procs. Programmers by nature like to have control over their world, and to do things "their way."
There are also a couple of valid technical reasons:
Procedural extensions to SQL (e.g. T-SQL) used for developing stored procs have traditionally lacked user-defined datatypes, debuggability, portability, and interoperability with external systems -- all qualities helpful for developing reliable large-scale software. (And the clumsy syntax doesn't help.)
Software version control (e.g. svn) works well for managing even very large codebases, but managing DB schema changes is a harder problem and less well supported. Every serious shop uses version control for their application codebase, but many still don't have any rigorous system for managing their databases; hence stored procs can easily fall into an unversioned "black hole" that makes coders rightly nervous.
My personal view is that the closer the core business logic is to the data, the better, especially if more than one agent accesses the DB (or may do in the future). It's an unfortunate artefact of history that T-SQL and its ilk were weak languages, leading to the rise of the idea that "data and logic should be separated." My ideal world is one in which every business rule is encapsulated in a constraint enforced by the database, and all inconsistencies fail fast.
I like to keep logic out of the database. I tend to avoid stored procedures and triggers. I do, though, always use proper data types, keys, indices and constraints. The way I see it is that the database is a database and the application is the application. The database should keep your data stored properly and efficiently whereas the application should own the logic. Perhaps I have never been in a situation where a stored procedure or trigger was needed, and thus never been inclined to use them to solve a problem. But giving logic a home in the database seems "messy" to me; I would rather control everything from the application itself.
The trend results from the fact that the software technology industry is populated and driven largely by humans, and thus subject to trends and irrational behavior. To understand what's going on today requires a bit of perspective in the history of databases, and their parallel development with programming languages.
To be brief in this answer that will likely get downvoted: SQL is the IE6 of the database language world. It breaks many of the rules of the relational model - in other words, it's a little bit like a calculator that performs multiplication incorrectly and doesn't have a minus operator. SQL is not complete enough to be a real solution. It was never developed beyond the prototype stage and was never meant to be used in industrial settings. But then it was naively adopted by Oracle, which turned out to be a "killer app"; SQL became the industry standard instead of its technically superior competitors, and the rest is history. SQL's syntax is based around a set of command-line tabular data processing tools and COBOL. Full of bugs, inconsistencies, and a mishmash of proprietary versions and features with no grounding in math or logic, it creates a situation where it really is unclear what goes where.
I think the trend you must be talking about is recent proliferation of ORMs: misguided and ill thought out attempts to patch over the obvious deficiencies of SQL. Database triggers and procedures are another misfeature trying to patch over SQL's problems.
If history had played out in a logical and orderly way, the answer to your question would be simple: Just follow the rules of the relational model and everything will work itself out. Unfortunately, the rules of the relational model don't fit cleanly into the current crop of SQL based DBMS's, so some application level fiddling, or triggers, or whatever other stupid patch is unfortunately necessary, and it ends up being a matter of subjective opinion, rather than reasoned argument, which stupid hack you use.
So the real answer is to just follow the relational model as closely as you can, and then fudge it the rest of the way. Put the logic in the application if you're the only one using the db and you need to keep all your source code in a version repository. If multiple applications are likely to use the database, make the DB as bulletproof and self-sufficient as it can be - the main goal here is to ensure that the data remains consistent.
Ultimately the database and how you connect to it is your "persistence API" -- how much is in the database and how much is in the application is application-specific. But the important aspect is that the API boundary is responsible for producing or consuming correct data.
Personally I prefer a thin access layer in the application and sprocs/PKs/FKs in the database to enforce transactional correctness and to enable API versioning. This also allows other applications to modify the database without upsetting the data model.
And if I see another moron using SELECT * FROM blah I'm going to go nuts with an Uzi... :-)
"The database should keep your data stored properly and efficiently whereas the application should own the logic" - Nelson LaQeut in another answer.
This seems to be the crux of the issue: that all "logic" belongs to the application and not to the database. But what is meant by "logic"? There are various kinds of "logic", some of which belong in an application and some, I would say, better placed in the database.
I would think most developers would agree (surely?) that basic data integrity such as primary and foreign keys belongs in the database. There is less agreement on more sophisticated data integrity logic - even the humble but useful check constraint is woefully underused in general.
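For instance, a hedged sketch of the kind of rule a check constraint can enforce regardless of which application writes the data (the table and columns are hypothetical):
ALTER TABLE Employee
    ADD CONSTRAINT chk_salary_positive CHECK (Salary > 0);
ALTER TABLE Employee
    ADD CONSTRAINT chk_hire_before_termination
    CHECK (TerminationDate IS NULL OR HireDate <= TerminationDate);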
The application camp sees the database as "merely" a place to store the data that "belongs" to their application. The database camp (which is where I sit) sees the application as "merely" one (perhaps currently the only) user of the data that "belongs" to the database - or rather, that belongs to the business and is managed for the business by the database (DBMS = database management system).
If all your data logic is tied up in your application, what happens when the application needs to be rewritten in the latest trendy paradigm (or do you think J2EE for example is the last there will ever be)? As Tom Kyte often says, it's all about the data.
The database is an integral part of an application, but everyone interprets that differently. It's definitely a wise move to isolate them, but that shouldn't mean that you circumvent what they do in your programming. Correct data types and primary key references are important parts of good database design, on top of which a good application can be built.
Although I personally believe the database should have enough smarts to defend itself, some people who don't understand that databases aren't dumb services think - and not incorrectly, mind you - that data and logic should be separated. Now in many cases the separation of data and logic is a powerful tool; however, most databases already provide us with solid implementations of atomicity, redundancy, processing, checking, etc., and many times that's where it belongs. But since the quality of these services and their APIs differs among vendors, many application programmers have felt it's worth trying to implement this sort of stuff in the application layer, to avoid tying themselves to a specific database layer.
I can't say that I've seen a "trend" to create poor applications with terrible database designs. Programming is just like any other discipline in that there will be people who won't learn the tools or just want to cut corners. I've even talked to a person that just didn't "trust" databases. The applications that you described are just as you said, ridiculous nightmares. Don't follow those "trends".
I still prefer to use stored procedures and functions in SQL Server. It actually adds more flexibility to the application, and it can have a performance benefit as well. Generally I don't think it is a good idea to put everything in the application.
I think that those "developers" who created databases without indexes or with single VARCHAR(2000) column are just art majors who are making their first attempt into entering the high-priced IT world.
No-one, who has even little-bit of IT education, makes this kind of database structures.
I can understand the reasons for keeping logic out of a well-formed database. Usually it is time-consuming to make changes (you have to convince the database admins to make them, with all the red tape that comes with it). If the business logic is in your program, then it's up to you alone.
Use constraints in the database, but for any sophisticated logic I would place that in a data access layer or use one of the standard Object Relational Mapping (ORM) tools such as Hibernate/NHibernate.
There is a general belief that this will affect performance, based on the view that stored procedures are pre-compiled while 'raw' queries have to be compiled on every call. However, I work mostly in SQL Server 2005/2008, which is very efficient at handling 'raw' parameterised queries, caching the compiled query plan for future calls to the database. This means that under SQL Server there is essentially no performance difference between stored procedures and parameterised SQL queries.
The only downside to losing stored procedures is if you are very granular with your database security permissions and wish to enforce security at the database login level.
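As a rough illustration, a parameterised query like the following (whether issued from client code or, equivalently, through sp_executesql; the table and parameter are hypothetical) lets SQL Server reuse the cached plan across calls:
EXEC sp_executesql
    N'SELECT OrderId, OrderDate FROM dbo.Orders WHERE CustomerId = @CustomerId',
    N'@CustomerId int',
    @CustomerId = 42;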
I have a simple philosophy.
If it's needed to keep the database secure and in a consistent state, make sure to do it in the database.
I do try to keep a lot of other stuff there too; in my world it's easier to update a client's database than it is to update their application...
Essentially I try to treat the database as a pseudo object. A bunch of methods I can call, etc, but I don't want the app to care about the detail of the internal data storage.
In my experience, putting any application logic in the database always results in a WTF. It doesn't matter how smart the database programmer, how advanced the database, it always ends up being a mistake. The reverse question is "how often should my C# code manage relational data using its own flat-file structure and query language", to which the answer is (almost) always never.
I think the database should be used for data storage, which is what it's good at.
