How to handle retrieving data from a different table based on environment? - database

I am intentionally keeping this question agnostic to any specific language as I am looking for a solution in the realm of design.
I am working on a program that selects data from an external table using a dblink. Depending on what environment I am working in, the dblink changes. For example, if I am in the production environment, the dblink will also be for production. When in a lower environment the dblink will be for development.
To accommodate this, we concatenate a SQL query together, inserting the appropriate table name and dblink, which are determined by checking which environment we are currently in. See the pseudocode below:
If ENV = "PRD"
    dblink = "table#production";
Else
    dblink = "table#development";
SQL = "SELECT * FROM " + dblink + " WHERE ...";
I just feel there may be a better way of doing this whether inside the program or through database setup. Any information or resources on this would be appreciated.
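For reference, the synonym-based setup that some of the related answers below describe might look roughly like this in Oracle-style SQL (the object names are hypothetical): only the synonym definition differs per environment, so the query text never changes.
-- In the development environment:
CREATE OR REPLACE SYNONYM app_table FOR some_table@development;
-- In the production environment:
CREATE OR REPLACE SYNONYM app_table FOR some_table@production;
-- The application query is then identical everywhere:
SELECT * FROM app_table WHERE ...;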

Related

Duplicate SQL Schema in SQL Server

I have a requirement that a user can have multiple environments in which to experiment and test how good their modifications are. After the users are satisfied with the modifications they've made to the data in a working environment, these modifications can be (partially or completely) copied to another environment. These environments can be created empty or as copies of other environments. Right now we are using SQL Azure, and our current (not yet implemented) approach is to create each environment as a different SQL schema in the same database using the statement
CREATE SCHEMA
So far this has worked really well for us in POCs. What I don't like about this approach is that creating a new schema involves executing several scripts to create the tables and the stored procedures in the new schema, so as we create or update the default schema objects, we also need to update the scripts that create the schema. Also, once the schema is created, we need to bulk copy the data from the original schema using another script, and considering the size of the client's data this process is sometimes not as fast as I would like. Maintaining the SQL code that creates environments is also not great for the team.
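For illustration, the kind of scripts involved might look like this (schema, table, and column names here are hypothetical):
-- Create one sandbox environment as its own schema
CREATE SCHEMA Sandbox42;
GO
-- Re-create every table and stored procedure in the new schema...
CREATE TABLE Sandbox42.Orders (OrderId int PRIMARY KEY, Amount decimal(10,2));
GO
-- ...and bulk copy the data from the default schema
INSERT INTO Sandbox42.Orders (OrderId, Amount)
SELECT OrderId, Amount FROM dbo.Orders;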
So my question is: is there any way to duplicate an entire dbo schema under a different name using T-SQL statements? I know this can be done manually using SQL Server Management Studio and the Generate Scripts option, but this must happen automatically because the users can create a new environment at any time. I already checked the documentation for
ALTER SCHEMA TargetSchema
TRANSFER SourceSchema.TableName;
but this just changes the schema an object belongs to; it does not create an actual copy of the object.
EDIT:
I am not trying to create different databases for dev, QA, and production; I already have them. What I want to achieve is a web app with multiple environments, where each environment is a sandbox for the final user to make experiments in. Think of it as creating a draft before making the data available to the general public: when the users are satisfied with their modifications, they can move the data to the public environment, and once it is moved, it is available for others to see.
You can use the CREATE DATABASE ... AS COPY OF Transact-SQL statement to create copies of your production database that can be used as QA, testing, and development databases.
CREATE DATABASE db_copy
AS COPY OF ozabzw7545.db_original (SERVICE_OBJECTIVE = 'P2');
Here ozabzw7545 is the name of Azure SQL Database server.
The following is the full syntax + additional information that is specific to Azure SQL database: CREATE DATABASE (Azure SQL Database)
Additional Information for Copy an Azure SQL database
You can also use PowerShell:
New-AzureRmSqlDatabaseCopy -ResourceGroupName "myResourceGroup" `
-ServerName $sourceserver `
-DatabaseName "MySampleDatabase" `
-CopyResourceGroupName "myResourceGroup" `
-CopyServerName $targetserver `
-CopyDatabaseName "CopyOfMySampleDatabase"

SQL Server: How to add a linked server to the same instance without performance impact

In my company, we have several environments with MS SQL database servers (SQL 2008 R2, SQL 2014). For the sake of simplicity, let us consider just a TEST environment and a PROD environment, with two SQL servers in each. Let the servers be called srTest1, srTest2, srProd1, srProd2, each running a default MS SQL Server instance. We work with multiple databases, say DataDb, ReportDb, DWHDb.
We want to keep the same source code in T-SQL for both TEST and PROD, but the problem is the architecture or distribution of the above mentioned databases in each environment:
TEST:
srTest1 - DataDb
srTest2 - DWHDb, ReportDb
PROD:
srProd1 - DataDb, ReportDb
srProd2 - DWHDb
Now, say, in ReportDb, we write stored procedures with many SELECTs referencing tables and other objects in DataDb and DWHDb. In order to have source code as universal as possible, we decided to create linked servers for each database on each db server in each environment and name them with respect to the database they're created for. Therefore, there'll be these linked servers:
lnkDataDb, lnkReportDb and lnkDWHDb on srTest1,
lnkDataDb, lnkReportDb and lnkDWHDb on srTest2,
lnkDataDb, lnkReportDb and lnkDWHDb on srProd1,
lnkDataDb, lnkReportDb and lnkDWHDb on srProd2.
And we'll adjust the source in the stored procs accordingly. For instance:
Instead of
SELECT * FROM DataDb.dbo.Contact
We'll write
SELECT * FROM lnkDataDb.DataDb.dbo.Contact
The example above is reasonable for a situation where the database from which you execute the query (ReportDb) lies on a different server than the one with the referenced table (DataDb), which is the case in the TEST environment, but not so in PROD. It is performance I'm concerned about here: SQL Server will treat that SELECT as a "remote query" regardless of whether it actually references a local object or not.
Now, it comes the most important part:
If you check these 3 queries for their actual execution plans, you'll see an interesting thing:
(1) SELECT * FROM DataDb.dbo.Contact
(2) SELECT * FROM srProd1.DataDb.dbo.Contact
(3) SELECT * FROM lnkDataDb.DataDb.dbo.Contact
The first two (query #1 and #2) have the same execution plan (the fastest possible) even if you use the four-part name manner of referencing the table Contact in #2.
The last query has a different plan (remote query, thus slower).
The question is:
Can you somehow create a linked server to self (the same sql server instance, the default instance actually) as an "alias" to the name of the host (srProd1) in order for the SQL server to be forced to understand it as local and not issue "remote execution" plans?
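For context, the kind of self-pointing definition I have in mind would be something like the following (run on srProd1; whether the optimizer then treats queries through it as local is exactly the question):
EXEC sp_addlinkedserver
@server = N'lnkDataDb',
@srvproduct = N'',
@provider = N'SQLNCLI',
@datasrc = N'srProd1'; -- the local host itself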
Thanks a lot for any hints
Pavel
Recently I found a workaround which seems to solve this kind of issue more efficiently and more elegantly than the solution with self-pointing linked servers.
If you work (making reports, for example) with multiple databases on multiple SQL servers and the physical distribution of the databases on the servers is a challenge since it may differ from one environment to another (e.g. TEST vs PROD), I suggest this:
Use three-part db object names whenever possible. If the objects are local, then the execution plans are also local, and thus efficient.
Example:
SELECT * FROM DataDb.dbo.Contact
If you happen to run the above query from a different SQL Server instance (residing on a different physical machine, for example, though not necessarily; the other SQL Server instance could even be installed on the same machine), in short, if you're about to use a four-part name:
SELECT * FROM lnkDataDb.DataDb.dbo.Contact
Then you can circumvent that using the following trick:
Let's assume lnkDataDb points to srTest2 and you're executing your queries from srTest1. Now, you'll create a "fake" database DataDb on your local server (srTest1). This fake DataDb shall contain no real db objects (no tables, no views, no stored procedures, no UDFs etc.). There shall only be synonyms defined in it. (And there also shall be the same schemas in it as those in the real DataDb on srTest2). These synonyms shall be named exactly the same way as their real db-object counterparts in DataDb on srTest2. Example:
-- To be executed on srTest1.
EXEC sp_addlinkedserver
@server = N'lnkDataDb',
@srvproduct = N'',
@provider = N'SQLNCLI',
@datasrc = N'srTest2'
;
GO
CREATE DATABASE [DataDb];
GO
USE [DataDb];
GO
CREATE SYNONYM dbo.Contact FOR lnkDataDb.DataDb.dbo.Contact;
GO
Now, if you want to SELECT rows from the table dbo.Contact residing in the database DataDb on srTest2 and you're executing your query from srTest1, you'll use a simple three-part table name:
SELECT * FROM DataDb.dbo.Contact
Of course, on srTest1, this is not a table, that's just a synonym referencing the same-named table on srTest2. However, that's the trick, you use the same query syntax as if you were executing it on srTest2 where the real db object resides.
There are disadvantages to this approach:
On the local server, there must not already be a database with the same name as the remote one, because you're about to create a "fake" database with that name to mirror the remote db objects.
You're creating one database that is almost empty, thus adding to the clutter of databases residing on your local SQL server. This might provoke reluctance from your database admin if they prefer having as few databases as possible.
If you're developing your T-SQL scripts in SQL Server Management Studio, for example, using synonyms cuts you off from the convenience of the IntelliSense feature.
The advantages outweigh the above-mentioned disadvantages, though:
Your scripts work in any environment (DEV, TEST, PROD) without the need to change any part of the source code.
If the other database you're querying data from resides on the same SQL Server instance as your script, you use the three-part name convention and SQL Server evaluates the query with a local execution plan, which is what you want. (This is what the original question of this post was looking to solve.)
If the other database you're querying data from resides on another SQL Server instance, you still use the "local syntax manner" of writing the query (with the synonym), which only at runtime resolves to a remote execution plan. That is also fine, because the db object actually is remote.
To summarize
The query executes as local if the referenced object is local, the query executes as remote if the referenced object is remote, but the T-SQL script is always the same. You don't have to change a letter in it.


Restore production database to multiple test databases SQL server

What is the best way to copy a production database to multiple test databases, while maintaining the integrity of fully qualified names?
Currently to refresh a test environment we restore the test database from the production database. Then, we script all of the stored procedures/views/etc. and do a find/replace on all of the database references to point at the test objects. After we have all of the references correct, we alter them.
For example, after the database is copied from production, we'll have a stored procedure like so:
alter procedure dbo.SomeProcedure
as
select SomeColumn
from DB.dbo.SomeTable
join Validation.dbo.AnotherTable on SomId = AnoId
For the test database, it needs to be:
alter procedure dbo.SomeProcedure
as
select SomeColumn
from DBQA1.dbo.SomeTable
join ValidationQA1.dbo.AnotherTable on SomId = AnoId
Each test database has views/stored procedures/functions that can reference up to 30 different other test databases, so the "find/replace" process is very time consuming and is prone to a lot of errors.
What is the best way to restore these test environments?
We are using SQL Server 2008R2.
Assuming that the different environments are on different SQL Servers (or at least on different instances), I would recommend that you keep the database names exactly the same in all environments. Use permissions (e.g. integrated security) to ensure that only the correct environment systems and users access the appropriate environment databases.
However, if you do need to keep different database names for different environments (e.g. all environments on the same SQL instance), you could look at using sqlcmd with the -v switch to parameterize the database name.
Your change scripts would then need to be rewritten like so:
alter procedure dbo.SomeProcedure
as
select SomeColumn
from [$(InternetSecurity)].dbo.SomeTable
join [$(Validation)].dbo.AnotherTable on SomId = AnoId
And then you could write batch files to pass the correct parameter values to sqlcmd.
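For example, a sqlcmd-mode script header (or a batch invocation) along these lines could supply the names; the values shown are hypothetical:
:setvar InternetSecurity DBQA1
:setvar Validation ValidationQA1
-- or, from a batch file:
-- sqlcmd -S MyServer -i refresh_procs.sql -v InternetSecurity="DBQA1" Validation="ValidationQA1"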
Alternatively, you could use a .dbproj project in Visual Studio to setup multiple configurations to provide different values for each environment, and generate scripts / publish from Visual Studio.
Also, AFAIK SQL synonyms aren't really going to help here. You would need to replace the three-part table names in all procs and functions with synonyms, which could confuse the issue as it doesn't make it clear whether a table is local or external.
As far as I know, there is no simpler way than replacing the database names in the scripts here.

Is it better to use multiple databases with one schema each, or one database with multiple schemas? [closed]

After this comment to one of my questions, I'm wondering whether it is better to use one database with X schemas or X databases with one schema each.
I'm developing a web application where, when people register, I currently create a database for each of them (no, it's not a social network: everyone must have access to their own data and never see the data of other users). That's the approach I used for the previous version of my application (which is still running on MySQL): through the Plesk API, for every registration, I:
Create a database user with limited privileges;
Create a database that can be accessed just by the previously created user and the superuser (for maintenance);
Populate the database.
Now, I'll need to do the same with PostgreSQL (the project is getting mature and MySQL doesn't fulfil all the needs). I need all the database/schema backups to be independent: pg_dump works perfectly in both ways, and the same goes for the users, who can be configured to access just one schema or one database.
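If the schema-per-user route were chosen, the three provisioning steps above might translate roughly like this (role, schema, and table names are hypothetical):
-- 1. Create a user (role) with limited privileges
CREATE ROLE customer_foo LOGIN PASSWORD 'secret';
-- 2. Create a schema that only this role (and superusers) can use
CREATE SCHEMA customer_foo AUTHORIZATION customer_foo;
ALTER ROLE customer_foo SET search_path = customer_foo;
-- 3. Populate the schema
CREATE TABLE customer_foo.settings (key text PRIMARY KEY, value text);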
So, assuming you are more experienced PostgreSQL users than me, what do you think is the best solution for my situation, and why? Will there be performance differences using $x databases instead of $x schemas? And which solution will be easier to maintain in the future (reliability)? All of my databases/schemas will always have the same structure!
For the backup issue (using pg_dump), it may be better to use one database and many schemas, dumping all the schemas at once: recovery is quite simple, loading the main dump into a development machine and then dumping and restoring just the schema needed. There is one additional step, but dumping all the schemas together seems faster than dumping them one by one.
UPDATE 2012
Well, the application structure and design changed a lot during those last two years. I'm still using the "one db with many schemas" approach, but I have one database for each version of my application:
Db myapp_01
\_ my_customer_foo_schema
\_ my_customer_bar_schema
Db myapp_02
\_ my_customer_foo_schema
\_ my_customer_bar_schema
For backups, I'm dumping each database regularly and then moving the backups to the development server. I'm also using PITR/WAL backups but, as I said before, it's not likely I'll have to restore the whole database at once, so they will probably be dropped this year (in my situation it is not the best approach).
The one-db-many-schemas approach has worked very well for me so far, even though the application structure has totally changed. I almost forgot: I said all of my databases/schemas would always have the same structure; now, every schema has its own structure that changes dynamically in reaction to users' data flow.
A PostgreSQL "schema" is roughly the same as a MySQL "database". Having many databases on a PostgreSQL installation can get problematic; having many schemas will work with no trouble. So you definitely want to go with one database and multiple schemas within that database.
Definitely, I'll go for the one-db-many-schemas approach. This allows me to dump the whole database but restore just one schema very easily, in several ways:
Dump the db (all the schemas), load the dump into a new db, dump just the schema I need, and restore it back into the main db.
Dump the schemas separately, one by one (but I think the machine will suffer more this way, and I'm expecting around 500 schemas!)
Otherwise, googling around I've seen that there is no automatic procedure to duplicate a schema (using one as a template), but many suggest this approach:
Create a template schema
When you need to duplicate it, rename it with the new name
Dump it
Rename it back
Restore the dump
The magic is done.
I've written a few lines of Python to do that; I hope they can help someone (written in two seconds, don't use it in production):
import os
import sys
import pg
# Take the new schema name from the first command-line argument (sys.argv[0] is the script name)
newSchema = sys.argv[1]
# Temporary folder for the dumps
dumpFile = '/test/dumps/' + str(newSchema) + '.sql'
# Settings
db_name = 'db_name'
db_user = 'db_user'
db_pass = 'db_pass'
schema_as_template = 'schema_name'
# Connection
pgConnect = pg.connect(dbname=db_name, host='localhost', user=db_user, passwd=db_pass)
# Rename schema with the new name
pgConnect.query("ALTER SCHEMA " + schema_as_template + " RENAME TO " + str(newSchema))
# Dump it
command = 'export PGPASSWORD="' + db_pass + '" && pg_dump -U ' + db_user + ' -n ' + str(newSchema) + ' ' + db_name + ' > ' + dumpFile
os.system(command)
# Rename back with its default name
pgConnect.query("ALTER SCHEMA " + str(newSchema) + " RENAME TO " + schema_as_template)
# Restore the previous dump to create the new schema
restore = 'export PGPASSWORD="' + db_pass + '" && psql -U ' + db_user + ' -d ' + db_name + ' < ' + dumpFile
os.system(restore)
# Want to delete the dump file?
os.remove(dumpFile)
# Close connection
pgConnect.close()
I would recommend against the accepted answer: use multiple databases instead of multiple schemas, for this set of reasons:
If you are running microservices, you want to enforce the inability to join between your "schemas", so the data is not entangled and developers won't end up joining another microservice's schema and wondering why, when the other team makes a change, their stuff no longer works.
You can later migrate a database to a separate machine with ease, if your load requires it.
If you need a high-availability and/or replication setup, it's better to have separate databases that are completely independent of each other. You cannot replicate only one schema, as opposed to the whole database.
I would say, go with multiple databases AND multiple schemas :)
Schemas in PostgreSQL are a lot like packages in Oracle, in case you are familiar with those. Databases are meant to differentiate between entire sets of data, while schemas are more like data entities.
For instance, you could have one database for an entire application with the schemas "UserManagement", "LongTermStorage" and so on. "UserManagement" would then contain the "User" table, as well as all stored procedures, triggers, sequences, etc. that are needed for the user management.
Databases are entire programs, schemas are components.
In a PostgreSQL context I recommend using one db with multiple schemas, as you can (for example) UNION ALL across schemas, but not across databases. For that reason, a database is really completely insulated from another database, while schemas are not insulated from other schemas within the same database.
If you, for some reason, have to consolidate data across schemas in the future, it will be easy to do this over multiple schemas. With multiple databases you would need multiple db connections and would have to collect and merge the data from each database "manually" in application logic.
Multiple databases have advantages in some cases, but for the most part I think the one-database-multiple-schemas approach is more useful.
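For instance, a cross-schema consolidation query inside one database might look like this (the table and column names are hypothetical); the same query across two databases would instead need dblink/postgres_fdw or application-side merging:
SELECT 'foo' AS customer, id, created_at FROM my_customer_foo_schema.orders
UNION ALL
SELECT 'bar' AS customer, id, created_at FROM my_customer_bar_schema.orders;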
A number of schemas should be more lightweight than a number of databases, although I cannot find a reference which confirms this.
But if you really want to keep things very separate (instead of refactoring the web application so that a "customer" column is added to your tables), you may still want to use separate databases: I assert that you can more easily make restores of a particular customer's database this way -- without disturbing the other customers.
Working with a single database with multiple schemas is good practice in PostgreSQL because:
No data is shared across databases in PostgreSQL;
Any given connection to the server can access only the data in a single database, the one specified in the connection request.
Using multiple schemas lets you:
Allow many users to use one database without interfering with each other;
Organize database objects into logical groups to make them more manageable;
Put third-party applications into separate schemas so they cannot collide with the names of other objects.
It depends on how the availability and connectivity of your system is designed, and on what data is stored in these databases. If the data is tightly linked, it can be kept on a single DB instance; but if it is only partially linked and the system can keep running partially when one part is down, it should be on different instances.
Detailed explanation:
1) When you use one DB instance with multiple databases on it, you run into the issue that if the connection goes down (due to a system crash or the MySQL server being down), all databases on that instance are down too, so all of your applications are impacted.
2) When you use a separate DB instance for each database, then if any one database system is down, your other applications are not affected; only the application that depends on the down DB is impacted.
Also, in both cases I think you should use a replication mechanism so that load balancing can be done on the replica databases.

Resources