Mirror Live Database Sql Server 2008 R2 Enterprise [closed] - sql-server

Closed. This question is off-topic. It is not currently accepting answers.
Closed 9 years ago.
Quick Question: Is it possible to mirror a database without downtime?
Long Question: I have a production database used by quite a few clients. The previous management did not implement any kind of redundancy or high-availability strategy (no clustering, not even shared storage!), and as the business grows this is becoming a huge liability. As an emergency measure I'm considering mirroring the database. The main problem is that I cannot take the database down: that would cause legal/financial problems under existing SLAs. So, can I mirror a database without taking it down?
Extra info:
The SQL Server version is 2008 R2 Enterprise.
The instance consists of one database only (it's a multi-tenant database).
The database infrastructure consists of one physical standalone server running Windows Server 2008 R2. It's not a cluster or a VM, and there's no SAN behind it; all data lives on its single 2 TB disk.
The database size (.mdf) is about 170 GB.
There are about 100 transactions per second.
There are no hours when usage goes down; the business runs 24/7.
Yes, this totally looks like that environment a developer would create on their machine...

This is all in Books Online. Hopefully you can try it in development, or on your desktop, first. There are quite a few things to consider, so you should give it a thorough read.
Yes, you can mirror without taking the database down. You restore a backup to the mirror using the NORECOVERY option, then set up the mirroring session, and transactions start to move from the principal to the mirror. You'll also need to set up anything else needed on that remote server: logins, jobs, etc.
Transfer logins using this method, so you get matching SIDs between servers.
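The steps above can be sketched in T-SQL. This is a minimal outline, not a complete script: the database name, server names, and backup path are placeholders, and it assumes mirroring endpoints have already been created on both instances (CREATE ENDPOINT ... FOR DATABASE_MIRRORING).

```sql
-- On the principal: take a full backup and a log backup.
BACKUP DATABASE [AppDb] TO DISK = N'\\backupshare\AppDb.bak';
BACKUP LOG [AppDb] TO DISK = N'\\backupshare\AppDb.trn';

-- On the mirror: restore both WITH NORECOVERY so the database stays
-- in a restoring state, ready to receive mirrored transactions.
RESTORE DATABASE [AppDb] FROM DISK = N'\\backupshare\AppDb.bak'
    WITH NORECOVERY;
RESTORE LOG [AppDb] FROM DISK = N'\\backupshare\AppDb.trn'
    WITH NORECOVERY;

-- Point each side at its partner (mirror first, then principal).
-- On the mirror:
ALTER DATABASE [AppDb] SET PARTNER = N'TCP://principal.contoso.com:5022';
-- On the principal:
ALTER DATABASE [AppDb] SET PARTNER = N'TCP://mirror.contoso.com:5022';
```

None of this takes the principal offline; the only production impact is the backup I/O and the log-stream traffic once the session starts.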

Related

Using SQL Service broker vs Queues with web service [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 5 years ago.
We have a legacy system with a central database (SQL Server) and small clients (kiosks with a local DB, SQL Express), written as a WPF application. The data sync between the clients and the central DB is done with C# and ad-hoc ADO.NET SQL statements, and it takes a huge toll on performance. We currently have 400 clients, and the number will be increasing. Each client sends 100,000 records per day to the central database.
We are planning to re-write this sync part using SQL Service Broker.
One of the main issues is that the schema between the client and central DB is different. The tables were not normalized, and worst of all, most of the columns use nvarchar datatypes to store datetime and integer data.
I am concerned about using Service Broker because most of the business logic would have to be written in stored procedures.
I would like some input on whether this technology would be the best fit, or whether we should consider building a REST-based service using a message queue instead.
TL;DR: I would recommend against using Service Broker in this kind of environment.
Detailed answer
While Service Broker is indeed a very lightweight and reliable communication mechanism, it was designed with a different goal in mind. Namely, it works best in a static topology, where administrators set everything up once and the entire system then runs for years with little or no change.
Judging by what I understood from your explanation, your network of connected hosts is much more dynamic, with hosts coming and going on a daily basis. This will put a high maintenance burden on your support staff, because in order to establish communication between two Service Broker endpoints belonging to different SQL Server instances you need (among many other things) to generate at least one pair of certificates and exchange their public keys between the participating instances, after which they have to be deployed in both the master database and the user databases on both sides.
This certificate exchange and deployment must happen before any Service Broker messaging is possible, so you will need another communication channel between the servers for the exchange itself. Normally this is done manually by DBAs, given the security risks associated with a potential loss of transport-level keys. In your environment, however, there is a good chance people simply won't be able to keep up, not to mention the potential for human error, which will be high given the amount of repetitive manual work.
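To give a sense of the per-host overhead, here is roughly what the transport-security setup looks like on each participating instance. All names and the port number are hypothetical, and the same dance (plus the out-of-band key exchange) has to be repeated for every kiosk:

```sql
-- On each SQL Server instance, in master:
USE master;
CREATE MASTER KEY ENCRYPTION BY PASSWORD = N'<strong password>';

CREATE CERTIFICATE KioskTransportCert
    WITH SUBJECT = N'Service Broker transport - kiosk 042';

-- The Service Broker endpoint authenticates with that certificate.
CREATE ENDPOINT BrokerEndpoint
    STATE = STARTED
    AS TCP (LISTENER_PORT = 4022)
    FOR SERVICE_BROKER (
        AUTHENTICATION = CERTIFICATE KioskTransportCert,
        ENCRYPTION = REQUIRED
    );

-- Export the public key, ship it to the central server out of band,
-- and import it there (and vice versa) before any message can flow.
BACKUP CERTIFICATE KioskTransportCert
    TO FILE = N'C:\certs\kiosk042.cer';
```

Multiply this by 400 kiosks, plus certificate rotation, and the operational cost becomes clear.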
In short, I would recommend looking for something that is easier to deploy and maintain. Change Tracking might be a good start; as for transport, you have a full smorgasbord of choices, from WCF to Web API to whatever else has appeared in the last few years.
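For comparison, enabling Change Tracking has a far smaller footprint. A minimal sketch, assuming a hypothetical KioskDb database and a dbo.SalesRecords table with primary key RecordId:

```sql
-- Enable change tracking at the database and table level.
ALTER DATABASE KioskDb
    SET CHANGE_TRACKING = ON
    (CHANGE_RETENTION = 3 DAYS, AUTO_CLEANUP = ON);

ALTER TABLE dbo.SalesRecords ENABLE CHANGE_TRACKING;

-- The sync client then pulls only rows changed since the version it
-- saw last, instead of diffing whole tables.
DECLARE @last_sync_version bigint = 0;  -- persisted by the client between runs
SELECT ct.RecordId, ct.SYS_CHANGE_OPERATION, s.*
FROM CHANGETABLE(CHANGES dbo.SalesRecords, @last_sync_version) AS ct
LEFT JOIN dbo.SalesRecords AS s ON s.RecordId = ct.RecordId;

SELECT CHANGE_TRACKING_CURRENT_VERSION();  -- store this for the next round
```

The transport (WCF, Web API, a queue) then only has to move the changed rows, which fits the 100,000-records-per-day-per-client workload much better than ad-hoc statements.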

Questions about Active Geo-Replication (and how it interacts with local, within-datacenter, redundancy) [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 7 years ago.
I just read the Active Geo-Replication for Azure SQL Database article on active geo-replication and have some questions. I posted them on the article's page but haven't gotten a reply yet.
Could really use help!
“Every forced termination results in the irreversible loss of the replication relationship between the primary database and the associated online secondary database.” What does irreversible mean here? Does it mean that in order to re-establish a replication relationship between the primary database and an associated online secondary database after a forced termination, we’d need to start over with seeding another online secondary database?
“Local data redundancy and operational recovery are standard features for Azure SQL Database. Each database possesses one primary and two local replica databases that reside in the same datacenter, providing high availability within that datacenter. This means that the Active Geo-Replication databases also have redundant replicas. Both the primary and online secondary databases have two secondary replicas.” So, imagine we have active geo-replication up and running and the primary database is lost for some reason. Will Azure SQL automatically promote one of the remaining two local replicas (the ones that exist within the same datacenter as part of the standard local-redundancy feature) in place of the lost primary, with no impact to existing connections? If so, I would take that to mean that a forced termination of the replication relationship between the primary and the online secondary (via geo-replication) would only be necessary if all 3 copies of the database in the local datacenter were lost. Is that right?
Using an active geo-replication configuration, can online secondary databases be in the same region as the primary? Say we wanted active geo-replication within the same region for a time. Is that doable? I realize that, from a regional-disaster perspective, having online secondaries in the same region would defeat the purpose, but it would still be good to know.
Forced termination requires you to restart the process of creating active geo-replication.
You will be able to force termination even if the primary database is unavailable. SQL DB maintains local HA: if one replica goes down, it switches to a local secondary and builds a replacement replica.
For Premium databases you can set up active geo-replication within the same region. This also helps you scale out reads.
If you force terminate the continuous copy link then a new link must be established to enable Geo-Replication on the new primary database. This would create a new database and start over with the seeding of another online secondary database.
Azure SQL Database has built-in high availability. Geo-Replication (and forced failover) only needs to be used in the event of a disaster causing the entire data center with the primary database (and its replicas) to be unavailable.
Active Geo-Replication can be used to scale out read workloads. Active Geo-Replication can be configured in the same region, but the replica must be on a different server than the primary. See "design pattern 2".
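For reference, the continuous-copy link itself can be managed with T-SQL from the master database; the server and database names below are placeholders:

```sql
-- On the primary logical server: create a readable online secondary.
-- The target may be in the same region, but must be a different
-- logical server than the primary.
ALTER DATABASE [TenantDb]
    ADD SECONDARY ON SERVER [secondary-server]
    WITH (ALLOW_CONNECTIONS = ALL);

-- On the secondary logical server: forced termination/failover in a
-- disaster. As discussed above, this irreversibly breaks the link, and
-- a new secondary must be seeded afterwards to re-enable geo-replication.
ALTER DATABASE [TenantDb] FORCE_FAILOVER_ALLOW_DATA_LOSS;
```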

Database geo-replication with low latency [closed]

Closed. This question is off-topic. It is not currently accepting answers.
Closed 9 years ago.
I have a website which I am currently hosting on a single server in Europe. To improve the latency for non-European users I would like to add local servers in the US and Asia.
Keeping static files in sync is no problem. We add new content only once a day, so a simple rsync cron job will do fine to keep files updated.
What I am totally stuck on is how to handle the database side. I would prefer a single master database that holds all user information, so that if a local server ever goes offline we still have the user data on the main server (regardless of backups).
So far we are considering 2 options:
A database with Geo Replication support
A database that supports geo-replication out of the box. It should be very easy to set up and should have very low latency for DB writes (i.e., without having to wait for a 'write succeeded' confirmation from the master server).
Programmatic approach with a master and a local database
A user only visits from one region at a time, so we could cook something up that connects to both the master and the local database. At first login, all user information would be pulled from the master database and cached in the local database. All data the user generates from then on could be stored in the local database and synced back to the master database in the background. It could work, but it seems overly complex and hard to fix if something goes out of sync.
A little more background information on the database
our database does a lot of reads and few writes
database performance is not an issue at all, so we are only looking to improve the user experience (lower latency)
a user does not generate much data (10 KB in general, 200 KB at maximum)
we are not a bank or a stock exchange; if some user data is synced back to the master server a minute or even a few minutes later, it's not a big problem
Our questions
is there a name that describes this specific problem? (so I can Google it better)
is there a database that does geo-replication out of the box without a latency penalty? (Couchbase perhaps?)
would the programmatic approach be doable, or would it be a world of pain?
I would be very thankful for any insights, or perhaps a link to an article that covers something like this. I'm sure there are more small scale websites out there which have run into this problem.

Copying a subscribed database to the publisher server [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
Closed 8 years ago.
I have two servers.
I have installed a merge replication in the publisher (Server A) and I have added two subscriptions with the same database name.
- One on the publisher itself, and the second one on the subscriber (Server B). So far so good; the replication is working well.
I wanted to delete the subscribed database on the publisher (Server A) and replace it with a copy of the subscribed database from Server B.
I thought the publisher would continue synchronizing with the newly attached database, but unfortunately it didn't work as expected: it started applying a snapshot instead.
Is there anything to modify on the copied database to make the publisher recognize it as the deleted one and continue synchronizing?
All the metadata that configures a database as a publisher is stored in the database itself, so deleting the database deletes the publication as well. Moving a database that was formerly a subscription and making it a publisher requires initializing the database and configuring it as a publisher the same way you would when starting from scratch.
There are a few tricks that could mimic what you are attempting, however. Namely: back up your subscription, then delete all the data from your publisher and sync. Don't delete the publication, just the data in the database. The sync will merge the deletes to the subscriber as well, but that's why you have a backup.
At that point you'd restore the subscription backup you took back onto the subscriber, ensuring that 'retain replication info' is set. Once restored, sync again. The result should be that the original data from the subscription backup is the only data merged back to the publisher, which was empty prior to the restore.
I have not personally tried this in practice, but in theory it should work. Read here for more on the ins and outs of backing up and restoring merge replication schemes:
http://msdn.microsoft.com/en-us/library/ms152497.aspx
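The critical piece is the 'retain replication info' flag on the restore, which in T-SQL is the KEEP_REPLICATION option. A sketch, with database and path names as placeholders:

```sql
-- On the subscriber: back up the subscription database first.
BACKUP DATABASE [SubscriberDb] TO DISK = N'C:\backup\SubscriberDb.bak';

-- ...delete the publisher's data and synchronize, as described above...

-- Restore the backup, keeping the replication metadata intact so the
-- database is still recognized as a subscription (equivalent to
-- 'Preserve the replication settings' in the SSMS restore dialog).
RESTORE DATABASE [SubscriberDb]
    FROM DISK = N'C:\backup\SubscriberDb.bak'
    WITH REPLACE, KEEP_REPLICATION;
```

Without KEEP_REPLICATION, the restore strips the subscription metadata and the next sync reinitializes with a snapshot, which is exactly the behavior the question describes.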

What's the disadvantage of using Microsoft SQL Server Jobs for Backup? [closed]

Closed. This question is off-topic. It is not currently accepting answers.
Closed 11 years ago.
I happen to find a lot of automated SQL backup solutions on the net. Two in particular are:
- SQL Backup and FTP (http://sqlbackupandftp.com/)
- SQL Maintenance Solutions (http://ola.hallengren.com/)
My question: why do people use these solutions when SQL Server already includes the SQL Server Agent, which can schedule backup jobs for them?
Another popular one you didn't mention is Red Gate's SQL Backup Pro - Here's their sales pitch, and I'm assuming the products you mention will have similar pitches.
Why SQL Backup Pro?
- Save time and space: compress SQL Server backups by up to 95% for faster, smaller backups
- Strengthen SQL Server backup and restore activities: use network resilience for backups, restores and log shipping
- Protect your data: use up to 256-bit AES encryption to secure your data against unauthorized access
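For contrast, a plain SQL Server Agent job typically just runs native BACKUP commands like the following (database name and paths are placeholders). Native backups do support compression (available even in Standard edition from 2008 R2 onward), but they have no built-in FTP upload or network-resilience retries, and native backup encryption only arrived in SQL Server 2014:

```sql
-- A typical Agent-scheduled full backup with compression and
-- page checksums; INIT overwrites any existing backup in the file.
BACKUP DATABASE [MyDb]
    TO DISK = N'D:\Backups\MyDb_full.bak'
    WITH COMPRESSION, CHECKSUM, INIT;

-- Log backups scheduled on a shorter interval.
BACKUP LOG [MyDb]
    TO DISK = N'D:\Backups\MyDb_log.trn'
    WITH COMPRESSION, CHECKSUM;
```

The third-party tools mostly add convenience around these same commands: retention cleanup, offsite copies, alerting, and (pre-2014) encryption.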
