Data independence in a relational database

Can anybody explain how data independence is ensured in a relational database? What guarantees that nothing will change for the user if the database structure changes?
For example, I have relation R (and have created an application which uses this relation) and the database admin decides to make a decomposition of R into R1 and R2. How is application inalterability ensured for the end user?

I asked myself exactly the same question during my Database class.
According to Codd's 12 rules, there are two kinds of data independence:
Physical Data Independence requires that changes at the physical level (like data structures) have no impact on the applications that consume the database. For example, let's say you decide to stop using a Hash Index on your table and decide to use a B-Tree Index instead: your application that executes queries against this table doesn't have to change at all.
Logical Data Independence states that changes at the logical level (tables, columns, rows) will have no impact on the applications that access the database. As you already noticed, this feature is harder to implement than Physical Data Independence, but there are still cases where it works. For example, if you add tables, columns or rows to your current schema, the already-working queries aren't affected at all.
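A small illustration of the physical case (the table and index names here are hypothetical, just to make the idea concrete): the application keeps issuing the same query while the DBA changes the access path underneath it.

-- The application's query never changes:
SELECT OrderId, OrderDate
FROM Orders
WHERE CustomerId = 42;

-- Meanwhile the DBA replaces one index with another; no application change is needed:
DROP INDEX IX_Orders_CustomerId ON Orders;
CREATE NONCLUSTERED INDEX IX_Orders_CustomerId ON Orders (CustomerId);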

Your question is not phrased very clearly. I don't see the relationship between "data independence" and "application inalterability".
A proper relational structure decomposes data into entities and relationships. The idea is that when a value changes, it only changes in one place. This is the reasoning behind the various "normal forms" of data.
Most user applications do not want to see data in a normalized form. They want to see data in a denormalized form, often with lots of fields gathered together on one line. Similarly, an update might involve several fields in different entities, but to a user, it is just one thing.
A relational database can maintain the structure of the data and allow you to combine data for different viewpoints. It has nothing to do with your second point. Application independence (I think this is a better word than "inalterability") depends on how the application is designed. A well-designed application has a well-designed application programming interface (also known as an API).
It seems that a lot of database developers think that the physical data structure is good enough as an API. However, this is often a bad design decision. Often, a better choice is to have all database operations performed through stored procedures, views, and user-defined functions. In other words, don't directly update a table. Create a stored procedure called something like usp_table_update that takes fields and updates the table.
With such a structure, you can modify the underlying database structure and maintain user applications at the same time.
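A minimal sketch of that idea (the procedure, table and column names are invented for illustration); the application calls the procedure and never touches the table directly:

CREATE PROCEDURE usp_customer_update
    @CustomerId INT,
    @Email      VARCHAR(200)
AS
BEGIN
    -- The procedure is the API; the table behind it can later be split,
    -- renamed or restructured without changing the calling applications.
    UPDATE Customers
    SET Email = @Email
    WHERE CustomerId = @CustomerId;
END;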

What guarantees that nothing will change for the user if the database structure changes?
Well, database structures can change for many reasons. On a high level, I see two possibilities:
Performance / internal database reasons
Business rules / the world outside the application changed
#1: in this case, the DBA has decided to change some structure for performance or other internal reasons. An extra layer, for example using stored procedures, views etc., can help "hide" the change from the application/user. A good data layer on the application side can also be helpful.
#2: if the outside world changes, or your business rules change, NOTHING you can do at the database level can keep that away from the user. For example, a company that has always used only ONE currency in the database suddenly goes international: in that case the database has to be adapted to support multiple currencies, and that will require serious alteration of the database and changes for the user.

For example, I have relation R (and created an application which uses this relation) and the database admin decides to make a decomposition of R into R1 and R2. How is application inalterability ensured for the end user?
The admin should create a view which would represent R1 and R2 as the original R.
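For example, a minimal sketch assuming R(A, B, C) was decomposed into R1(A, B) and R2(A, C) on the shared key A:

CREATE VIEW R AS
SELECT R1.A, R1.B, R2.C
FROM R1
JOIN R2 ON R2.A = R1.A;

Existing queries against R keep working unchanged; depending on the DBMS, the view may also be made updatable (for example with INSTEAD OF triggers) so that writes continue to work as well.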

Related

How to handle different organizations with a single DB?

Background
I'm building an online information system which users can access through any computer. I don't want to replicate the DB and code for every university or organization.
I just want a user to hit a domain like www.example.com, sign in, and use it.
A second user will also hit the same domain www.example.com, sign in, and use it, but the data they see is different.
Scenario
Suppose one university has 200 employees, a second university has 150, and so on.
Question
Do I need to have a separate employee table for each university, or is it OK to have a single table with a column that has the university ID?
I assume the second is best, but suppose I have 20 universities or organizations and a total of thousands of employees.
What is the best approach?
Does the same apply to all the tables? This is just to give you an example.
Thanks
The approach will depend upon the data, usage, and client requirements/restrictions.
Use an integrated model, as suggested by duffymo. This may be appropriate if each organization is part of a larger whole (e.g. all colleges are part of a state college board) and security concerns about cross-query access are minimal². This approach has a minimal amount of separation between each organization, as the same schema¹ and relations are "openly" shared. It leads to a very simple model initially, but it can become very complicated (with compound FKs and correct usage of such) if you need relations for organization-specific values, because that adds another dimension of data.
Implement multi-tenancy. This can be achieved with implicit filters on the relations (perhaps hidden behind views and stored procedures), different schemas, or other database-specific support. Depending upon implementation, this may or may not share schema or relations even though all data may reside in the same database. With implicit isolation, some complicated keys or relationships can be hidden/eliminated. Multi-tenancy isolation also generally makes it harder/impossible to cross-query.
Silo the databases entirely. Each customer or "organization" has a separate database. This implies separate relations and schema groups. I have found this approach to be relatively simple with automated tooling, but it does require managing multiple databases. Direct cross-querying is impossible, although "linked databases" can be used if there is a need.
Even though it's not "a single DB": in our case we had the restrictions that 1) we were never allowed to share/expose data between organizations, and 2) each organization wanted its own local database. Thus, our product ended up using the silo approach. Make sure that the approach chosen meets customer requirements.
None of these approaches will have any issue with "thousands", "hundreds of thousands", or even "millions" of records as long as the indices and queries are correctly planned. However, switching from one to another can violate many assumed constraints, so the decision should be made early on.
¹ In this response I am using "schema" to refer to the security grouping of database objects (e.g. tables, views) and not the database model itself. The actual database model used can be common/shared, as we do even when using separate databases.
² An integrated approach is not necessarily insecure - but it doesn't inherently have some of the built-in isolation of other designs.
I would normalize it to have UNIVERSITY and EMPLOYEE tables, with a one-to-many relationship between them.
You'll have to take care to make sure that only people associated with a given university can see their data. Role based access will be important.
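A rough sketch of that design (all names are hypothetical); the foreign key gives the one-to-many relationship, and every application query filters on the signed-in user's university:

CREATE TABLE University (
    UniversityId INT PRIMARY KEY,
    Name         VARCHAR(200) NOT NULL
);

CREATE TABLE Employee (
    EmployeeId   INT PRIMARY KEY,
    UniversityId INT NOT NULL REFERENCES University (UniversityId),
    Name         VARCHAR(200) NOT NULL
);

-- @CurrentUniversityId would be supplied by the application after sign-in:
SELECT EmployeeId, Name
FROM Employee
WHERE UniversityId = @CurrentUniversityId;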
This is called a multi-tenant architecture. You should read this:
http://msdn.microsoft.com/en-us/library/aa479086.aspx
I would go with Tenant Per Schema, which means copying the structure across different schemas. However, since you should keep all your SQL DDL in source control, this is very easy to script.
It's easy to screw up and "leak" information between tenants if doing it all in the same table.
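To make "tenant per schema" concrete, a minimal sketch (schema and table names are invented for illustration); the same scripted DDL is deployed once per tenant schema:

CREATE SCHEMA UniversityA;
GO
CREATE SCHEMA UniversityB;
GO
-- Identical structure, deployed into each tenant's schema from the same scripted DDL:
CREATE TABLE UniversityA.Employee (EmployeeId INT PRIMARY KEY, Name VARCHAR(200) NOT NULL);
CREATE TABLE UniversityB.Employee (EmployeeId INT PRIMARY KEY, Name VARCHAR(200) NOT NULL);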

Should business rules be enforced in both the application tier and the database tier, or just one of the two?

I have been enforcing business rules in both my application tier (models) and my database tier (stored procedures that raise errors).
I've been duplicating my validations in both places for a few reasons:
If the conditions change between when they are checked in the application code and when they are checked in the database, the business rule checks in the database will save the day. The database also allows me to lock various records in a simpler manner than my application code can, so it seems natural to do it there.
If we have to do some batch data insertions/updates directly against the database, routing all these operations through my stored procedures/functions which do the business rule validations means there's no chance of me putting in bad data, even though I lack the protections that I would get doing single-row input through the application.
While enforcing these things ONLY in the database would have the same effect on the actual data, it seems improper to just throw data at the database without first making a good effort to validate that it conforms to constraints and business rules.
What's the right balance?
You need to enforce at the data tier to ensure data integrity. That's your last line of defense, and that's the DB's job: to help enforce its world view of the data.
That said, throwing junk data against the DB for validation is a coarse technique. Typically the errors are designed to be human readable rather than machine readable, so it's inefficient for the program to process the error from the DB and make heads or tails of it.
Stored Procedures are a different matter. Back in the day, Stored Procedures were The Way to handle business rules on the data tiers, etc.
But today, with modern application server environments, the app server has become, in general, the better place to put this logic. App servers offer multiple ways to access and expose the data (the web, web services, remote protocols, APIs, etc). Also, if your rules are CPU heavy (arguably most aren't), it's easier to scale app servers than DB servers.
The large array of features within the app servers give them a flexibility beyond what the DB servers can do, and thus much of what was once pushed back in to the DBs is being pulled out with the DB servers being relegated to "dumb persistence".
That said, there are certainly performance advantages using Stored Procs and such, but now that's a tuning thing where the question becomes "is it worth losing the app server capability for the gain we get by putting it in to the DB server".
And by app server, I'm not simply talking Java, but .NET and even PHP etc.
If the rule must be enforced at all times no matter where the data came from or how it was updated, the database is where it needs to be. Remember, databases are affected by direct querying to make changes that affect many records or to do something the application would not normally do. These are things like fixing a group of records when a customer is bought out by another customer and they want to change all the historical data, applying new tax rates to orders not yet processed, or fixing some bad data inputs. Databases are also sometimes affected by other applications which do not use your data layer. They may also be affected by imports run through ETL programs which likewise cannot use your data layer. So if the rule must in all cases be followed, it must be in the database.
If the rule is only for special cases concerning how this particular input page works, then it needs to be in the application. So if a sales manager has only specific things he can do from his user interface, these things can be specified in the application.
Some things it is helpful to do in both places. For instance, it is silly to allow a user to put a non-date in an input box that will relate to a date field. The datatype in the database should still be a datetime datatype, but it is best to check some of this before you send it to the database.
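As a minimal sketch of rules that hold no matter how the data arrives (the names are hypothetical), the column type and a check constraint sit in the database while the application validates the same things up front:

CREATE TABLE Orders (
    OrderId   INT PRIMARY KEY,
    OrderDate DATETIME NOT NULL,   -- the column type itself rejects non-dates, whatever the source
    Quantity  INT NOT NULL,
    CONSTRAINT CK_Orders_Quantity CHECK (Quantity > 0)  -- enforced for the app, ad-hoc SQL and ETL alike
);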
Your business logic can sit in either location, but should not be in both. The logic should NOT be duplicated because it's easy to make a mistake trying to keep both in sync. If you put it in the model you'll want all data access to go through your models, including batch updates.
There will be trade-offs to putting it in the database vs the application models (here are a few off the top of my head):
Databases can be harder to maintain and update than applications
It's easier to distribute load if it's in the application tier
Multiple, disparate dbs may require splitting business rules (which may not be possible)

A few database design questions relating to a user content site

I'm designing a user content website (kind of similar to Yelp but for a different market and with photo sharing) and have a few database questions:
Does each user get their own set of tables, or are we storing multiple users' data in common tables? Since this is also a social network, when the user base grows, databases are usually partitioned off for scalability and different sets of users are served separately, so what is the best approach? I guess some data like user accounts can be in common tables, but for wall posts, photos etc. each user will get their own table? If so, then if we have 10 million users, that means 10 million x whatever number of tables per user? This is currently being designed in MySQL.
How do the user tables know what to create each time a user joins the site? I am assuming there may be a system table template from which it is pulling in the fields?
In addition to the above question, if tomorrow we modify tables or add/remove features, how do the changes roll down to all the live user accounts/tables? I know from a page point of view we have the master template, but for the database, how will the user tables be updated? Is that something we do manually, or will each table keep checking, say every 24 hrs, with the system tables for updates to its structure?
If the above is all true, that means we are maintaining one master set of tables with system default values, and each user gets the same values copied to their tables? Take a field like the maximum failed login attempts before the system locks the account: say we have a system default of 5 login attempts within 30 minutes, but I want to allow users to specify their own number to customize their own security. Does that mean they can override the system default in their own table?
Thanks.
Users should not get their own set of tables. It will most likely not perform as well as one table (properly indexed), and schema changes will have to be deployed to all user tables.
You could have default values specified on the table for things that are optional.
With difficulty. With one set of tables it will be a lot easier, and probably faster.
That sort of data should be stored in a User Preferences table that stores all preferences for all users. Again, don't duplicate the schema for all users.
Generally, the idea of creating separate tables for each entity instance (in this case each user) is not a good idea. If each table is separate, querying may be cumbersome.
If your table is large you should optimize the table with indexes. If it gets very large, you also may want to look into partitioning tables.
This allows you to see the table as 1 object, though it is logically split up - the DBMS handles most of the work and presents you with 1 object. This way you SELECT, INSERT, UPDATE, ALTER etc as normal, and the DB figures out which partition the SQL refers to and performs the command.
Not splitting up the tables by user, and instead using indexes and partitions, deals with scalability while maintaining performance. If you don't split up the tables manually, this also makes points 2, 3, and 4 moot.
Here's a link to partitioning tables (SQL Server-specific):
http://databases.about.com/od/sqlserver/a/partitioning.htm
It doesn't make any kind of sense to me to create a set of tables for each user. If you have a common set of tables for all users then I think that avoids all the issues you are asking about.
It sounds like you need to locate a primer on relational database design basics. Regardless of the type of application you are designing, you should start there. Learn how joins work, indices, primary and foreign keys, and so on. Learn about basic database normalization.
It's not customary to create new tables on-the-fly in an application; it's usually unnecessary in a properly designed schema. Usually schema changes are done at deployment time. The only time "users" get their own tables is an artifact of a provisioning decision, wherein each "user" is effectively a tenant in a walled-off garden; this only makes sense if each "user" (more likely, a company or organization) never needs access to anything that other users in the system have stored.
There are mechanisms for dealing with loosely structured types of information in databases, but if you find yourself reaching for this often (the most common method is called Entity-Attribute-Value), your problem is either not quite correctly modeled, or you may not actually need a relational database, in which case you might be better off with a document-oriented database like CouchDB/MongoDB.
Adding, based on your updated comments/notes:
Your concerns about the number of records in a particular table are most likely premature. Get something working first. Most modern DBMSes, including newer versions of MySql, support mechanisms beyond indices and clustered indices that can help deal with large numbers of records. To wit, in MS Sql Server you can create a partition function on fields of a table; MySql 5.1+ has a few similar partitioning options based on hash functions, ranges, or other mechanisms. Follow well-established conventions for database design, modeling your domain as sensibly as possible, then adjust when you run into problems. First adjust using the tools available within your choice of database, then consider more drastic measures only when you can prove they are needed. There are other kinds of denormalization that are more likely to make sense before you would even want to consider something as unidiomatic to database systems as a "table per user" model; even if I were to look at that route, I'd probably consider something like materialized views first.
I agree with the comments above that say that a table per user is a bad idea. Also, while it's a good idea to have strategies in mind now for how you can cope when things get really big, I'd concentrate on getting things right for a small number of users first - if no-one wants to / is able to use your service, then unfortunately you won't be faced with the problem of lots of users.
A common approach among very large sites is database sharding. The summary is: you have N instances of your database in parallel (on separate machines), and each holds 1/N of the total data. There's some shared way of knowing which instance holds a given bit of data. To access some data you have 2 steps, rather than the 1 you might expect:
Work out which shard holds the data
Go to that shard for the data
There are problems with this, such as: you set up e.g. 8 shards and they all fill up, so you want to spread the data over e.g. 20 shards, which means migrating data between shards.

Database per application VS One big database for all applications [closed]

I'm designing a few applications that will share 2 or 3 database tables, and all of the other tables will be independent of each app. The shared tables contain mostly user information, and there might be cases where other tables need to be shared, but that's my instinct speaking.
I'm leaning toward the one-database-for-all-applications solution because I want to have referential integrity, and I won't have to keep the same information up to date in each of the databases, but I'll probably end up with a database of 100+ tables where only groups of ten tables will have related information.
The database per application approach helps me keep everything more organized, but I don't know a way to keep the related tables in all databases up to date.
So, the basic question is: which of both approaches do you recommend?
Thanks,
Jorge Vargas.
Edit 1:
When I talk about not being able to have referential integrity, it's because there's no way to have foreign keys in tables when those tables are in different databases, and at least one of the tables per application will need a foreign key to one of the shared tables.
Edit 2:
Links to related questions:
SQL design around lack of cross-database foreign key references
Keeping referential integrity across multiple databases
How to salvage referential integrity with mutiple databases
Only the second one has an accepted answer. Still haven't decided what to do.
Answer:
I've decided to go with a database per application with cross-database references to a shared database, adding views to each database mimicking the tables in the shared database, and using NHibernate as my ORM. For the membership system I'll be using the ASP.NET one.
I'll also use triggers and logical deletes to try and keep to a minimum the number of ID's I'll have flying around livin' la vida loca without a parent. The development effort needed to keep databases synced is too much and the payoff is too little (as you all have pointed out). So, I'd rather fight my way through orphaned records.
Since using an ORM and Views was first suggested by svinto, he gets the correct answer.
Thanks to all for helping me out with this tough decision.
Neither way looks ideal
I think you should consider not making references in the database layer for cross-application relations, and making them in the application layer instead. That would allow you to split it into one database per app.
I'm working on one app with 100+ tables. I have them in one database, separated by prefixes - each table has a prefix for the module it belongs to. Then I have built a layer on top of the database functions to use these custom groups. I'm also building a data administrator, which takes advantage of these table groups and makes editing data very easy.
It depends, and your options are a bit different depending on the database and frameworks you're using. I'd recommend using some sort of ORM, and that way you don't need to bother that much. Anyway, you could probably put each app in its own schema in the database and then either reference the shared tables by schemaname.tablename or create views in each application schema that are just a SELECT * FROM schemaname.tablename and then code against that view.
There are no hard and fast rules to choose one over the other.
Multiple databases provide modularity. As far as syncing across multiple databases is concerned, one can use the concept of linked servers and views thereof, and gain the advantages of an integrated database (unified access) as well.
Also, keeping multiple databases can allow better management of security, data, backup & restore, replication, scaling out, etc.!
My 2cents.
That does not sound like "a lot of applications" at all, but like "one application system with different executables". Naturally they can share one database. Make smart use of schemas to isolate the different functional areas within one database.
One database for all applications, in my opinion. Data would be stored once, with no repetition.
With the other approach you would end up replicating data, and in my opinion, when you start replicating it will bring its own headaches and data would go out of sync too.
The most appropriate approach from a scalability and maintenance point of view would be to make the "shared/common" subset of tables self-sufficient and put it in a "commons" database; for all the others, have one DB per application or per logical scope (determined by the business logic), and maintain this structure always.
This will ease the planning and execution of commissioning/decommissioning/relocation/maintenance procedures for your software (you will know exactly which two affected DBs (commons + app-specific) are involved if you know which app you are going to touch, and vice versa).
At our business, we went with a separate database per application, with cross-database references for the small amount of shared information and an occasional linked server. This has worked pretty well with development, staging, build and production environments.
For users, our entire user base is on Windows. We use Active Directory to manage the users, with application references to groups, so that the apps don't have to manage users, which is nice. We did not centralize the group management; that is, each application has tables for groups and security, which is not so nice but works.
I would recommend, that if your applications are really different, to have a database per application. Looking back, the central shared database for users sounds workable as well.
You can use triggers for cross database referential integrity:
Create a linked server to the server that holds the database that you want to reference. Then use 4-part naming to reference the table in the remote database that holds the reference data. Then put this in the insert and update triggers on the table.
EXAMPLE (assumes single-row inserts and updates):
DECLARE @ref (datatype appropriate to your field)
SELECT @ref = refField FROM inserted
IF NOT EXISTS (SELECT *
               FROM referenceserver.refDB.dbo.refTable
               WHERE refField = @ref)
BEGIN
    RAISERROR(...)   -- supply an appropriate message and severity
    ROLLBACK TRAN
END
To do multi row inserts and updates you can join the tables on the reference field but it can be very slow.
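For what it's worth, a multi-row variant of the same check might look like the sketch below (same hypothetical linked-server and table names as above, and the error message is only illustrative); it joins inserted to the remote table instead of assuming a single row:

IF EXISTS (SELECT *
           FROM inserted i
           LEFT JOIN referenceserver.refDB.dbo.refTable r
                  ON r.refField = i.refField
           WHERE r.refField IS NULL)
BEGIN
    RAISERROR('Referenced value not found in refTable.', 16, 1)  -- illustrative message
    ROLLBACK TRAN
END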
I think the answer to this question depends entirely on your non-functional requirements. If you are designing an application that will one day need to be deployed across hundreds of nodes, then you need to design your database so that, if need be, it can be horizontally scaled. If on the other hand this application is to be used by a handful of users and may have a short shelf life, then your approach will be different. I recently listened to a podcast about how eBay's architecture is set up, http://www.se-radio.net/podcast/2008-09/episode-109-ebay039s-architecture-principles-randy-shoup, and they have a database per application stream and use sharding to split tables across physical nodes. Their non-functional requirements are that the system is available 24/7, is fast, can support thousands of users, and does not lose any important data. eBay makes millions of pounds and so can support the effort that this takes to develop and maintain.
Anyway, this does not answer your question :) My personal opinion would be to make sure your non-functional requirements have been documented and signed off by someone. That way you can decide on the best architecture. I would be tempted to have each application use its own database and a central database for shared data. And I would try to minimise the dependencies between them, which I'm sure is not easy or you would have done it :), but I would also try to steer clear of having to produce some sort of middleware software to keep tables in sync, as this could create headaches for you.
At the end of the day you need to get your system up and running, and the guys with the pointy hair won't give a monkey's chuff about how cool your design is.
We went for splitting the database down, and having one common database for all the shared tables. Due to them all being on the same SQL Server instance, it didn't affect the cost of running queries across multiple databases.
The key to replication for us was that the whole server was on a Virtual Machine (VM), so to create Dev/Test environments, IT Support would just create a copy of that image and restore additional copies when required.

What's the benefit of using a lot of complex stored procedures

For a typical 3-tiered application, I have seen that in many cases they use a lot of complex stored procedures in the database. I cannot quite see the benefit of this approach. In my personal understanding, this approach has the following disadvantages:
Transactions become coarse.
Business logic goes into database.
Lots of computation is done in the database server, rather than in the application server. Meanwhile, the database still needs to do its original work: maintain data. The database server may become a bottleneck.
I can guess there may be 2 benefits of it:
Change the business logic without recompiling. But SPs are much harder to maintain and test than Java/C# code.
Reduce the number of DB connections. However, in the common case, the bottleneck of the database is hard disk I/O rather than network I/O.
Could anyone please tell me the benefits of using a lot of stored procedures rather than letting the work be done in business logic layer?
Basically, the benefit is #2 of your problem list - if you do a lot of processing in your database backend, then it's handled there and doesn't depend on the application accessing the database.
Sure - if your application does all the right things in its business logic layer, things will be fine. But as soon as a second and a third application need to connect to your database, suddenly they too have to make sure to respect all the business rules etc. - or they might not.
Putting your business rules and business logic in the database ensures that no matter how an app, a script, a manager with Excel accesses your database, your business rules will be enforced and your data integrity will be protected.
That's the main reason to have stored procs instead of code-based BLL.
Also, by using Views for read and Stored Procs for update/insert, the DBA can remove any direct permissions on the underlying tables. Your users no longer need to have any rights on the tables, and thus the data in your tables is better protected from inadvertent or malicious changes.
Using a stored proc approach also gives you the ability to monitor and audit database access through the stored procs - no one will be able to claim they didn't alter that data - you can easily prove it.
So all in all: the more business-critical your data, the more protection layers you want to build around it. That's what using stored procs is for - and they don't need to be complex. Most of those stored procs can be generated from the table structure using code generation, so it's not a big typing effort, either.
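As a rough illustration of that lock-down (the role, view and procedure names are hypothetical), the DBA denies direct table access and grants only the views and procedures:

DENY SELECT, INSERT, UPDATE, DELETE ON dbo.Customers TO AppUserRole;
GRANT SELECT ON dbo.vw_Customers TO AppUserRole;
GRANT EXECUTE ON dbo.usp_Customer_Update TO AppUserRole;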
Don't fear the DB.
Let's also not confuse business logic with data logic which has its rightful place at the DB.
Good systems designers will encompass flexible business logic through data logic, i.e. abstract business rule definitions which can be driven by the (non)existence of data rows or by their attributes.
Just FYI, the most successful and scalable "enterprise/commercial" software implementations with which I have worked put all projection queries into views and all data management either into DB procedures or triggers on staged tables.
The network between the app server and the SQL server is very often the bottleneck.
Stored procedures are needed when you need to do complex queries.
For example, you want to collect some data about an employee by surname. Imagine that the data in the DB looks like some kind of tree - you have 3 records about this employee in table A, 10 records in table B for each record in table A, and 100 records in table C for each record in table B. And you want to get only 5 specific records from table C about that employee. Without stored procedures you will get a lot of query traffic between the app server and the SQL server, and a lot of code in the app server. With a stored procedure which accepts the employee surname, fetches those 5 records and returns them to the app server, you 1) decrease traffic by hundreds of times, and 2) greatly simplify the app server code.
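A sketch of such a procedure (the table, column and ordering rules are invented purely for illustration); one round trip replaces the chatty A-then-B-then-C querying from the app server:

CREATE PROCEDURE usp_GetEmployeeDetails
    @Surname VARCHAR(100)
AS
BEGIN
    SELECT TOP (5) c.*
    FROM TableA a
    JOIN TableB b ON b.AId = a.AId
    JOIN TableC c ON c.BId = b.BId
    WHERE a.Surname = @Surname
    ORDER BY c.CId;   -- whatever rule actually selects the "special 5" rows
END;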
The life time of our data exceeds that of our applications. Also data gets shared between applications. So many applications will insert data into the database, many applications will retrieve data from it. The database is responsible for the completeness, integrity and correctness of the data. Therefore it needs to have the authority to enforce the business rules relating to the data.
Taking your specific points:
Transactions are Units Of Work. I fail to see why implementing transactions in stored procedures should change their granularity.
Business logic which applies to the data belongs with the data: that maximises cohesion.
It is hard to write good SQL and to learn to think in sets. Therefore it may appear that the database is the bottleneck. In fact, if we are undertaking lots of work which relates to the data, the database is probably the most efficient place to do it.
As for maintenance: if we are familiar with PL/SQL, T-SQL, etc maintenance is easier than it might appear from the outside. But I concede that tool support for things like refactoring lags behind that of other languages.
You listed one of the main ones: putting business logic in the DB often gives the impression of making it easier to maintain.
Generally, complex SP logic in the DB allows for cheaper implementation of the actual application code, which may be beneficial if it's a transitional application (say, being ported from legacy code), if it's code which needs to be implemented in several languages (for instance to market on different platforms or devices), or because the problem is simpler to solve in the DB.
One other reason for this is often there is a general "best practice" to encapsulate all access to the db in sps for security or performance reasons. Depending on your platform and what you are doing with it this may or may not be marginally true.
I don't think there are any. You are very correct that moving the BL to the database is bad, but not for everything. Try taking a look at Domain Driven Design. This is the antidote to massive numbers of SPROCs. I think you should be using your database as somewhere to store your business objects, nothing more.
However, SPROCs can be much more efficient for certain simple operations. For instance, you might want to increase the salary of every employee in your database by a fixed percentage. This is quicker to do via a SPROC than getting all the employees from the db, updating them and then saving them back.
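A minimal sketch of that (the table, column and procedure names are hypothetical): one set-based statement instead of loading, modifying and saving every employee row from the application.

CREATE PROCEDURE usp_RaiseAllSalaries
    @Percent DECIMAL(5, 2)
AS
BEGIN
    UPDATE Employee
    SET Salary = Salary * (1 + @Percent / 100.0);
END;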
I worked on a project where everything was literally done at the database level. We wrote a lot of stored procedures and did a lot of business validation/logic in the database. Most of the time it became a big overhead for us to debug.
The advantages, as I felt them, may be:
Take advantage of full DB features.
Database-intensive activities like lots of insertions/updates can be better done at the DB level. Call an SP and let it do all the work instead of hitting the DB several times.
New DB servers can accommodate complex operations so they no longer see this as a bottleneck. Oh yeah, we used Oracle.
Looking at it now, I think a few things could have been done better at the application level and less at the DB level.
It depends almost entirely on the context.
Doing work on the server rather than on the clients is generally a bad idea as it makes your server less scalable. However, you have to balance this against the expected workload (if you know you will only have 100 users in a closed environment, you may not need a scalable server) and network traffic costs (if you have to read a lot of data to apply calculations/processes to, then it can be cheaper/faster overall to run those calculations on the server and only send the results over the net).
Also, if you have custom client applications (as opposed to web browsers etc) it makes it very easy to push updates out to your clients, because you don't need to recompile and deploy the client code, you simply upgrade the database stored procedures.
Of course, using stored procedures rather than executing dynamically compiled SQL statements can be more efficient (they're precompiled, and the code doesn't need to be uploaded to the server) and aids encapsulation to give the database better integrity/security. But by the sound of it, you're talking about masses of business logic, not simple efficiency and security measures.
As with most things, a sensible compromise/balance is needed. Stored Procedures should be used enough to enhance efficiency and security, but you don't want your server to become unscalable.
"there are following disadvantages on this approach:
...
Business logic goes into database."
Insofar as by "business logic" you mean "enforcement of business rules", the DBMS is EXACTLY where "business logic" belongs.

Resources