How to avoid downtime when swapping out databases?

We have a staging server and a production server. We do all the processing on the staging server and, on a regular schedule, move the database to the production server, so the database on the production server gets replaced with the latest copy of the database from the staging server.
DB_Name: `employee`
Step 1: Copy the staging database `employee_staging` over to the production server.
Step 2: Rename `employee` to `employee_tmp`.
Step 3: Rename `employee_staging` to `employee`.
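For illustration, here is a minimal sketch of that swap in T-SQL, assuming SQL Server (the question doesn't name the DBMS) and the database names above. Note that there is still a brief window between the two renames during which `employee` doesn't exist.

```sql
-- Minimal sketch of the rename swap, assuming SQL Server.
-- A short window remains between the two renames where 'employee' is missing.
ALTER DATABASE employee SET SINGLE_USER WITH ROLLBACK IMMEDIATE;  -- drop active connections
ALTER DATABASE employee MODIFY NAME = employee_tmp;               -- step 2
ALTER DATABASE employee_staging MODIFY NAME = employee;           -- step 3
ALTER DATABASE employee_tmp SET MULTI_USER;                       -- keep the old copy around for rollback
```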
We see that users accessing the service sometimes get an exception if they hit it while the database swap is in progress.
Are there any ways we can avoid this slight downtime when database swapping happens?
Any help on how to address this would be really great!
Thanks.

Related

Synchronize data from the production Azure SQL database with the staging database allowing changes to be made in the second

I have two databases in Azure SQL Database that are almost identical in structure (unless one of the two is modified). I need to synchronize the data from the production database to the staging database, while still being able to make changes in staging without harming production, unless I need to do a production restore (that's another topic).
If there is no solution for that, I at least want to be able to make staging equal to production when my developers need it. Is there a way to overwrite one database with a backup of another without having to create a new database (since that would mean changing the server name in the app)?
The process I currently employ across my environments is:
1. Start with a baseline where STAGING and PROD are exactly the same. You can achieve this by restoring the latest backup(s) from your PROD environment to your STAGING environment.
2. As changes occur in DEV and need to be released to STAGING, create the appropriate release scripts and apply them to STAGING.
3. When you need to refresh the data in STAGING, restore the latest backup(s) from PROD to STAGING again (a hedged restore sketch follows this answer). Optional: run any post-refresh STAGING-specific scripts that are needed, e.g. if you obfuscate any of the data, or change some of the data to signify that it's the STAGING environment.
4. Run any release scripts in STAGING that haven't yet been released to PROD.

Over time, the release scripts that were used in STAGING should be used to release to PROD, which is what will help keep the two environments in sync. Repeat steps 2, 3, and 4 at whatever frequency suits your goals.
I have the above in a SQL Job so it's essentially a one-click process, but my database is SQL Server on-prem. I also use a schema comparison tool called SQL Examiner to automate generating the release scripts I need, but I'm not sure whether it's applicable to Azure SQL Database.
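For the refresh in step 3, a minimal T-SQL sketch of restoring the latest PROD backup over STAGING might look like this on SQL Server on-prem; the database, logical file, and path names are placeholders. (Azure SQL Database doesn't support native RESTORE FROM DISK, so there you would use a database copy or a BACPAC import instead.)

```sql
-- Hypothetical refresh of STAGING from the latest PROD full backup.
-- Database names, logical file names, and paths are placeholders.
RESTORE DATABASE StagingDB
FROM DISK = N'\\backupshare\ProdDB_full.bak'
WITH REPLACE,                                        -- overwrite the existing STAGING database
     MOVE N'ProdDB_Data' TO N'D:\Data\StagingDB.mdf',
     MOVE N'ProdDB_Log'  TO N'L:\Logs\StagingDB_log.ldf',
     RECOVERY;
```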

How to update staging SQL database to replicate the live database when on the same server

I have two databases, staging and production, on the same Windows server. I am using a CMS and want the environments to be identical. I have taken backups of both databases using SSMS. What would be the best approach to update the staging database so that it is the same as the production database?
I have tried restoring the staging database from the production backup, but it leaves the staging database in single-user mode. Usually when I have done this the databases have been on separate servers. Could someone advise on the best approach?
For example, do I take the staging database offline and then complete the restore with the production backup?
What degree of latency is acceptable? Do you need the staging DB to be identical to production in near-realtime, within the hour, or is a once-nightly update sufficient? Depending on your needs, the best approach could be transactional replication, log shipping, backup/restore, etc.
I did the following steps and hope this helps others:
Took a backup of the production database
Went to IIS and stopped the staging site (otherwise you will not be able to take the staging database offline)
Took the staging database offline
Restored the staging database from the live backup (restoring it under the staging database's name so as not to cause confusion)
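A hedged T-SQL sketch of the offline-and-restore steps, with placeholder database, file, and path names; because both databases live on the same server, the MOVE clauses matter so the restore doesn't try to overwrite the production database's files.

```sql
-- Take the staging database offline (after stopping the IIS site).
ALTER DATABASE StagingCMS SET OFFLINE WITH ROLLBACK IMMEDIATE;

-- Restore the production backup over the staging database.
-- MOVE keeps the restored copy on the staging database's own files (same server).
RESTORE DATABASE StagingCMS
FROM DISK = N'D:\Backups\ProductionCMS_full.bak'
WITH REPLACE,
     MOVE N'ProductionCMS'     TO N'D:\Data\StagingCMS.mdf',
     MOVE N'ProductionCMS_log' TO N'D:\Data\StagingCMS_log.ldf',
     RECOVERY;   -- a successful restore brings the database back online
```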

Modify table row while export/import table to another database SQL Server

I have an issue with my production database, so I want to reproduce the issue on my development database. The DBMS I use is SQL Server 2016 (SP1).
To reproduce the error, I'm copying all the data to the development database using the Export wizard in SQL Server Management Studio.
The production database is live and users are still using it, so there will be inserts, updates, or even deletes happening while I'm exporting the data.
What will happen to the rows modified (inserted, updated, or even deleted) while I'm exporting the data? Will they be exported to my development database? And why, i.e. how does SQL Server handle something like this?
What is a good way to move a production database to a development database?
And the extreme one: what will happen if the table columns are modified while the export is in progress?
EDIT:
I need to mention that the DBMS version on production is higher than on development, so I can't use backup/restore to move the database.
What is a good way to move a production database to a development database?
You should back up your database on the production server and restore it on the dev server. This will not block user activity on prod.
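If you do go the backup/restore route, a COPY_ONLY full backup is a common way to take that copy without disturbing the regular backup chain, and an online backup doesn't block user activity; a minimal sketch with placeholder names (though, as the EDIT above notes, a backup taken on a newer version can't be restored to an older one):

```sql
-- Hypothetical COPY_ONLY full backup on the production server (name/path are placeholders).
-- COPY_ONLY leaves the existing backup chain untouched; the backup runs online.
BACKUP DATABASE ProdDB
TO DISK = N'D:\Backups\ProdDB_copyonly.bak'
WITH COPY_ONLY, COMPRESSION, CHECKSUM;
```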
What will happen to the rows modified (inserted, updated, or even deleted) while I'm exporting the data?
If your insert/update is concurrent but the reading process has already started on the table, your changes will be blocked. Vice versa, if any DML has already started on the same rows, the reading process will wait until the modification is committed or rolled back.
And the extreme one: what will happen if the table columns are modified while the export is in progress?
While you are reading, a Sch-S (schema stability) lock is held on the table, so no column modification can be done until this lock is released.
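If you want to observe this, a hedged check against the sys.dm_tran_locks DMV (the session id below is a placeholder for the exporting session) should show the Sch-S lock while the read is running:

```sql
-- Hypothetical check: locks held by the exporting session.
-- Expect an OBJECT-level row with request_mode = 'Sch-S' while the read is in flight.
SELECT resource_type, request_mode, request_status
FROM sys.dm_tran_locks
WHERE request_session_id = 53;   -- placeholder: session id of the export
```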

Database mirroring for report database

I have a problem: our production database is under heavy load.
So we have decided to set up a second SQL Server running a copy of the production database that doesn't need to be 100% up to date with the production database.
After searching, I have found that asynchronous mirroring of the production database might do the trick.
The mirror database would be read-only, used only for reports and the like.
So I managed to set it up, but found out that I can't read any data from the mirror database because it is in recovery mode.
Now I would like to know whether my problem is solvable with mirroring, or whether we should use an alternative?
Thanks to Blim, we have decided on transactional replication. It works great on our development database (so it should in production too).
Step by step article: http://www.sql-server-performance.com/2010/transactional-replication-2008-r2/
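For reference, a minimal sketch of what the publisher-side setup for transactional replication might look like in T-SQL, assuming a distributor is already configured; the database, publication, article, and subscriber names are placeholders, and the snapshot/log reader agent jobs are omitted for brevity:

```sql
USE ProdDB;
-- Enable the database for publishing (assumes the distributor is already configured).
EXEC sp_replicationdboption
     @dbname = N'ProdDB', @optname = N'publish', @value = N'true';

-- Create a continuous transactional publication.
EXEC sp_addpublication
     @publication = N'ProdDB_Reports',
     @repl_freq = N'continuous',
     @status = N'active';

-- Add a table to the publication (placeholder article).
EXEC sp_addarticle
     @publication = N'ProdDB_Reports',
     @article = N'Orders',
     @source_object = N'Orders';

-- Push the publication to the reporting server.
EXEC sp_addsubscription
     @publication = N'ProdDB_Reports',
     @subscriber = N'REPORTSRV',
     @destination_db = N'ProdDB_Reports',
     @subscription_type = N'push';
```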

Copying Large Amounts of Data to Replicated Database

I have a local SQL Server database from which I copy large amounts of data into a remote SQL Server database. The local version is 2008 and the remote version is 2012.
The remote DB has transactional replication set up to one local DB and another remote DB. This all works perfectly.
I have created an SSIS package that empties the destination tables (in the remote DB) and then uses a Data Flow task to add the data from the source. For flexibility, I have each table in its own Sequence Container (this allows me to run one or many tables at a time). The data flow settings are set to Keep Identity.
Currently, prior to running the SSIS package, I drop the replication settings and then run the package. Once the package completes, I then re-create the replication settings and reinitialise the subscribers.
I do it this way (deleting the replication and then re-creating) for fear of overloading the server with replication commands. Although most tables are between 10s and 1000s of rows, a couple of them are in excess of 35 million.
Is there a recommended way of emptying and re-loading the data of a large replicated database?
I don't want to replicate my local DB to the remote DB as that would not always be appropriate, and doing a backup and restore of the local DB would also not work due to the nature of the more complex permissions, etc. on the remote DB.
It's not the end of the world to drop and re-create the replication settings each time as I have it all scripted. I'm just sure that there must be a recommended way of managing this...
Don't do it. Empty/reload is bad. Try to update the tables via MERGE instead; that way you avoid the delete-and-reinsert, which would otherwise result in two replicated operations per row. Load the new data into staging tables on the other server (not replicated), then merge them into the replicated tables. If a lot of the data is unchanged, this will seriously reduce the replication load.
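A minimal sketch of that merge, with hypothetical table and column names; the staging table is assumed to be loaded by SSIS and excluded from replication, so only the rows that actually changed generate replication commands:

```sql
-- Merge freshly loaded staging data into the replicated table.
-- Only changed rows produce UPDATE/INSERT/DELETE commands for replication to pick up.
MERGE dbo.BigTable AS target
USING staging.BigTable AS source
    ON target.Id = source.Id
WHEN MATCHED AND (target.Col1 <> source.Col1 OR target.Col2 <> source.Col2) THEN
    UPDATE SET Col1 = source.Col1, Col2 = source.Col2
WHEN NOT MATCHED BY TARGET THEN
    INSERT (Id, Col1, Col2) VALUES (source.Id, source.Col1, source.Col2)
WHEN NOT MATCHED BY SOURCE THEN
    DELETE;
-- Note: if Id is an IDENTITY column, SET IDENTITY_INSERT is needed for the INSERT branch.
```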
