Multipurposing a failover server? - sql-server

I'm not a DBA so this may be a stupid question, but I'll ask it anyway. We're upgrading our SQL Servers from 2000 to 2005, and we will probably use either database replication or database mirroring. Our DBA would like to "multipurpose" the standby server, meaning that he'd like to increase our capabilities and capacity by running other database applications on the standby server since "it's just going to be sitting there anyway" (his words, not mine). Is this a good idea? Right now, our main application server uses only one instance that contains 50+ databases. As I understand it, what we're doing now and what our DBA is proposing for a failover server is a bad idea because all of these databases share memory, CPUs, and working areas. If one application starts behaving badly, the other databases could be affected.
Any thoughts?

It's really a business question that needs to be answered: is a slow app better than no app if you can't afford the expense of extra hardware?
Standby and mirrored databases can be used for reporting. Using the multipurposed server as the failover target can work if you have enough headroom (i.e. both workloads will run comfortably on one server).

Will you depend on these extra applications? Where do they run in the failover case?

You really need to understand your failure modes.
If you look at it as basic resource math, it often doesn't make sense unless the resources still running in your failure scenarios can handle the entire expected load. Sometimes that's the case, but not always. If it isn't, you may need yet another server to absorb the actual load (much like RAID: if your load needs a minimum of 5 servers and you have a farm of 6, you can only tolerate one failure, and you need one more standby for every additional server you want to be able to lose). Sometimes a farm can run degraded, but sometimes it just pukes and dies.
And in out-of-normal operation you often get cascading incidents, where one legitimate incident causes a chain of others - e.g. your backup tape drive is busy restoring a server from backup (to a test environment, even - there are no real "failures"), so your SQL Server or Exchange server (or both) misses its backup and its log fills up.

Database mirroring would not be the way to go here in my opinion, as it provides redundancy at the database level only. So you would need to configure database mirroring for up to 50 databases based on the information you provided. The chances are that if one database were to fail, all 50 would probably follow, as failures typically occur at the hardware level rather than in a specific database.
It sounds to me like you should be using SQL Server Clustering technology. You could create an Active/Active cluster to support your requirements.
What is an Active/Active Cluster?
An Active/Active SQL Server cluster means that SQL Server is running on both nodes of a two-way cluster. Each copy of SQL Server acts independently, and users see two different SQL Servers. If one of the SQL Servers in the cluster should fail, then the failed instance of SQL Server will fail over to the remaining server. This means that both instances of SQL Server will then be running on one physical server, instead of two.
Applying this to your scenario
You could then split the databases between two instances of SQL server, one active instance on each node. Should one node fail, the other node will pick up the slack and vice versa.
Further Reading
An introduction to SQL Server Clustering
I suspect that you will find the following MSDN thread useful reading also

"it's just going to be sitting there anyway"
It will be sitting there applying transactions...
Take note of John Sansom's recommendation. Keep in mind that an Active/Active cluster requires two SQL Server licenses, while a failover cluster/mirror only needs one.
Setting up mirroring for a large number of databases could turn into a big pain. You need any jobs/maintenance to move over as well - which can be achieved with alerts on WMI failover events. There's probably more to think about that could complicate things.
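For example, here's a hedged sketch of one of those WMI-based alerts (the job name is hypothetical; you'd point it at whatever job enables your maintenance jobs on the new principal, and the namespace shown assumes a default instance):

```sql
-- Minimal sketch: run an Agent job when a mirroring session fails over automatically.
-- State = 8 is the DATABASE_MIRRORING_STATE_CHANGE code for an automatic failover.
EXEC msdb.dbo.sp_add_alert
    @name          = N'Mirroring automatic failover',
    @wmi_namespace = N'\\.\root\Microsoft\SqlServer\ServerEvents\MSSQLSERVER',
    @wmi_query     = N'SELECT * FROM DATABASE_MIRRORING_STATE_CHANGE WHERE State = 8',
    @job_name      = N'Enable mirror-side jobs';   -- hypothetical job you create yourself
```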

Related

Sociable SQL Server instance replication - Best practice

I would like to know the best practices for using SQL Server replication on a SQL Server instance that may host other application databases that also use replication. That is, our product needs to play well with other users of the instance.
The product currently uses SQL Server replication to create a copy database used for reporting. It is always the sole user of the SQL Server instance. But we now need to document and test (regulatory requirements) how the product can share the instance.
I'm making the assumption here that we still need replication as we do not see another way to isolate reporting load from the application's database.
Has anybody done this successfully?
If we are using instance level replication:
Is there a way we can stop/start/modify replication for our application without affecting others?
Do settings differ greatly? That is, is it realistic to share instance-level replication settings across applications?
Non-instance-level replication just looks hard; do I have the wrong view here?
Our customers use SQL Server 2008 R2 or SQL Server 2012.
At an instance level, replication configures only one distributor. That is, regardless of how many databases you have configured for replication on an instance, they'll all share one distributor. You do have the option to make that distributor local (i.e. on the same instance) or remote. So, if you find that distribution is taking up considerable resources (or anticipate that that's going to be the case), configure remote distribution.
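If you do decide on remote distribution, the publisher side is small; a sketch, assuming the remote distributor (here called DISTSQL01, a placeholder) has already been set up as a distributor with its distribution database:

```sql
-- Run on the publishing instance to point it at an already-configured remote distributor.
EXEC sp_adddistributor
    @distributor = N'DISTSQL01',                    -- placeholder remote distributor instance
    @password    = N'distributor_admin_password';   -- must match the remote distributor_admin password
```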
Whatever drive holds your databases' log files will need sufficient throughput headroom to handle the log reader agent. If you're concerned that your database's activity will impact other databases, isolate it.
As for other concerns, replication is a lot like your line of business application. That is, it needs to read data (from the publisher and distributor depending on which phase of replication you're talking about) and write data (from the distributor and subscriber again depending on which phase of replication you're talking about). Provision resources accordingly and you should be just fine.

DBA's say no to SQL Server DTC?

I am trying to get our DBAs to enable DTC on a cluster of SQL Server 2005. Unfortunately they keep refusing. Their argument is that they would need to set up a dedicated host for DTC (could take months!) as it is not a matter of ticking a few boxes. Is this true? How intrusive is DTC in a shared environment such as a SQL farm? Do I have an argument against this?
Thanks
I had to tone down the original response your 'DBA' team deserves!
In response to your questions:
Dedicated server - Not at all. Everywhere I've worked with clusters, the DTC service is installed when the cluster is commissioned. Typically it sits in its own resource group or within the cluster group. If in its own group, it usually sits on whichever server is hosting the cluster group.
Intrusive? - Absolutely not. It should be installed when the cluster is created, as per MS best practice.
Do you have an argument? - You most certainly do. The links below should cover the why and how for getting it installed:
MSDTC and SQL on a Cluster
Clustered SQL Server do's, dont's and basic warnings
DTC needs to be enabled and running on both sides of the connection. In my organization, it took some research to figure out which four boxes to check and then some hand-holding to get those boxes checked on all db servers, all app servers and most laptops. There's still a couple of hold-out developer laptops... but they're ok as long as they don't write. :)
You should have some driving scenario (such as an atomic multiple database write) to hit the DBA's over the head with. Give them some time to guess at alternatives... then let them know that DTC is the only hammer for this kind of nail.
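A driving scenario usually looks something like this sketch: a single atomic transaction spanning a local database and a linked server, which simply errors out unless MSDTC is enabled on both machines (all server, database, and table names below are made up):

```sql
BEGIN DISTRIBUTED TRANSACTION;

-- Local write
UPDATE dbo.Orders
SET    Status = 'Shipped'
WHERE  OrderId = 1001;

-- Remote write through a linked server (four-part name)
UPDATE REMOTESRV.Warehouse.dbo.Inventory
SET    Quantity = Quantity - 1
WHERE  ProductId = 42;

COMMIT TRANSACTION;
```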
I'm unsure of the implications of DTC on a SQL farm. I imagine the whole farm could get involved in the transaction if it involves enough data... which can't be a good thing.

Simplest solution for high availability of SQL server 2008?

I have a bunch of SQL Servers on which I periodically perform maintenance (Windows Update patches etc.). Now I want the databases online 24/7 and need to implement one of the high-availability solutions for SQL Server.
The solution needs to be cheap and simple to use.
I have no problems tweaking the connection strings for the clients of the database, so currently I'm looking into database mirroring with manual failovers when taking down a partner instance for patching etc.
Is this the best thing to do, or are there other options which don't involve setting up a failover cluster?
The servers are virtualized with a fully redundant storage solution.
Any tips are appreciated, thanks in advance!
Mirroring with a witness server would probably be the cheapest solution (you can skip the witness if you plan to fail over manually).
Failover clustering requires shared disks (a SAN) as well as cluster-capable Windows licenses (very expensive).
I'm not sure about replication, or how it differs from mirroring, but the research I did led me to conclude that mirroring was the one for me. I don't mind some downtime when doing upgrades, though; I just keep mirrored instances of the database in case of severe hardware failure.
It might be that replication is for a complete SQL Server instance, whilst mirroring is done per database. In my case, I have 2 production servers that both mirror their databases to a third, backup server for disaster recovery. I don't think that would have been possible with replication.
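For reference, "per database" means the pairing is established one database at a time; a rough sketch, assuming endpoints already exist and the mirror copy was restored WITH NORECOVERY (server names, database name, and port are placeholders):

```sql
-- On the mirror instance first:
ALTER DATABASE SalesDb SET PARTNER = 'TCP://sqlprod1.example.local:5022';

-- Then on the principal instance:
ALTER DATABASE SalesDb SET PARTNER = 'TCP://sqlprod2.example.local:5022';

-- Repeat for every database you want mirrored.
```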
The four high availability solutions I'm aware of are:
Failover cluster
Log shipping
Mirroring
Replication
Log shipping is probably not 24/7, so that leaves three. Serverfault is definitely a better place to ask about their relative merits.
For auto failover I would choose mirroring. You can build a second database connection string into your app, and whenever the preferred server isn't available it will default to the backup - therefore giving your app 24/7. This has its downsides though: once 'flipped' to the mirror, you either have to accept that this is the way it is until another maintenance job requires the mirror to shift back again, or you have to manually swap the mirror over.
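For what it's worth, the manual swap itself is a one-liner run on the current principal while the session is synchronized (the database name is a placeholder):

```sql
-- Planned, no-data-loss role swap between principal and mirror.
ALTER DATABASE MyAppDb SET PARTNER FAILOVER;
```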
In order for this to be truly 24/7 you will need to enable automatic rather than manual failover, which means you will need a witness server to make the decision... There are lots of factors to include in the choice - are you working with servers on different sites, clustering, multiple web/app servers...?
As previous answers have suggested, https://serverfault.com/search?q=sql+mirroring will have people who have made just this choice, ready to help you in much more detail
A big benefit of mirroring is that, provided the mirror server has no other activity, it is license-free; the live server's license transfers over if the mirror takes over. Full details are on the SQL Server licensing pages at microsoft.com.

Database mirroring/Replication, SQL Server 2005

I have two database servers running SQL Server 2005 Enterprise, and I want to make one of them a mirror database server.
What I need is to maintain an exact copy of the primary server's databases on the mirror server, so that when the primary server goes down, we can switch the database IP in the application to use the mirror server.
I have examined "mirror" feature on SQL Server 2005, and based on this article:
http://aspalliance.com/1388_Database_Mirroring_in_Microsoft_SQL_Server_2005.all
The mirror database cannot be accessed directly; however snapshots of the mirror database can be taken for read only purposes. (Prerequisites no. 4)
So how can it be useful when I can't access it while the primary server is down?
I've been thinking about creating a regular backup on the primary server and restoring it on the mirror server on an hourly basis, but that's quite inefficient (slow), especially if I want an exact copy (since hundreds of records are added every minute).
Any other suggestion?
EDIT:
Maybe what I mean was a replication thing, not a mirror (thanks JP for commenting)
They are referring to the fact that you can't perform queries on the mirrored copy, but you can get around that limitation by creating a snapshot of the mirrored database. This is often done to create a read-only database copy for reporting uses. You would have full access to the mirror if the primary were to fail, but it will not fail over automatically.
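Creating such a reporting snapshot on the mirror is a single statement; a sketch, where the snapshot name, logical data file name, and path are all placeholders for your own:

```sql
-- Run on the mirror server; the snapshot is a read-only, point-in-time view of the mirror.
CREATE DATABASE MyAppDb_Reporting
ON ( NAME = MyAppDb_Data,                            -- logical file name of the mirrored database
     FILENAME = 'D:\Snapshots\MyAppDb_Reporting.ss' )
AS SNAPSHOT OF MyAppDb;
```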
Log shipping is another option, which allows you to query (read-only) the standby database without having to create a snapshot.
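The readable standby comes from restoring each shipped log WITH STANDBY rather than WITH NORECOVERY; a minimal hand-rolled sketch (file paths and names are hypothetical):

```sql
-- On the standby server: apply a shipped log backup but keep the database readable between restores.
RESTORE LOG MyAppDb
FROM DISK = N'\\fileshare\logship\MyAppDb_20240101_1200.trn'
WITH STANDBY = N'D:\LogShip\MyAppDb_undo.tuf';
```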
If I understand your question correctly, you shouldn't have to do that. There are several role-switching methods you can use to have your mirror take over as primary. You don't change the IP address at the application level; the cluster itself has a virtual IP address that allows access to the data at any given time (given a reasonable amount of time for the switchover to the mirror after a primary failure). The mirror stays in sync by itself. :) There are good articles here and here on clustering.
Edit: Okay, based on the comments, check out the various options for replication.
Your confusion is common - there's a lot of ways to do disaster recovery planning with SQL Server. I've recorded a 10-minute video tutorial of SQL Server disaster recovery options including log shipping, mirroring, replication and more. If you like that one, we've got a longer one at Quest called Disaster Recovery Techniques but that one requires registration.
Instead of investigating a specific technology here, what you might want to do is tell us what your needs are, and then we can help you find out what option is right for you. The videos will give you an idea of what kinds of information you need to know before selecting a particular solution.
When using only two SQL Servers, you need to do the failover manually. The 'backup' database will be usable after you do two things:
Disable mirroring on it
Restore the database WITH RECOVERY (but without specifying a backup file; this will bring the database online).
Therefore mirroring in this manner does make sense; however, it is hard to maintain:
Moving back from the backup database to the primary is a 'pain', as you have to set up the complete mirroring again using a backup of the redundant server. This is needed to bring the primary back up to speed.
My recommendation would be to get a third SQL Server into the picture to act as a witness. The witness will monitor the status of the mirrored databases. As a bonus, you will get automatic failover and avoid the failover (and post-failover) issues above.
If I remember correctly, the witness server can run SQL Server Express, so there is no need for the Enterprise edition on all three servers - just the two where the actual mirroring will take place.
Let me know if you need Transact SQL for the commands to fail-over and 'anti-fail-over' in a two server scenario, and I can dig them up.
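For reference, in the two-server scenario those commands look roughly like this sketch (the database name is a placeholder; the forced option can lose transactions that never reached the mirror):

```sql
-- Option 1: forced failover, run on the mirror while the principal is unreachable.
ALTER DATABASE MyAppDb SET PARTNER FORCE_SERVICE_ALLOW_DATA_LOSS;

-- Option 2: break mirroring entirely and bring the mirror copy online.
ALTER DATABASE MyAppDb SET PARTNER OFF;
RESTORE DATABASE MyAppDb WITH RECOVERY;
```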

How do the servers for Fogbugz handle load balancing?

I remember hearing Joel say he has 2 different locations where the servers are located; each location has 2 front-end servers and 1 back-end server.
If one of the hosting facilities goes down, how can he switch over to the other one? (Or is it just going to be a DNS change that will take 24-72 hours to propagate?)
How can a single SQL Server instance have so many databases on it? FB has a completely separate database per account. I can't see a single SQL Server instance having more than say 200-250 databases on it! And I'm sure they have more customers than that.
They talked about this in one of the Stack Overflow podcasts, but I can't find it in the transcripts.
1) Each of the two centers handles approximately 1/2 of the users. Fairly often (hourly, I think Joel said) they ship transaction logs to the other site. If site A goes down, they bring up the db backups on site B, and do the DNS switchover. It won't be instantaneous or automated, nor do they want it to be, because they'll be coming up with slightly stale data, and want to avoid that if it's at all possible to bring the broken site back up.
I'm not sure how they handle the DNS situation, but you can set the TTL on DNS records to mere seconds to limit caching, and have failover occur very quickly.
2) Why not? I'm not sure of the hard limit of databases per instance, but there's also nothing keeping you from running multiple instances of SQL Server on your box. I would imagine you're more limited by hardware than software. (You can also run Fogbugz with a MySQL database backend).
Got a few questions in here so I'll break these out:
If one of the hosting facilities goes down, how can he switch over to the other one?
There's several ways to do this, including database mirroring (new in SQL Server 2005), log shipping, and replication. I've recorded a podcast on SQL Server high availability and disaster recovery options at SQLServerPedia.
(Or is it just going to be a DNS change that will take 24-72 hours to propagate?)
Like the other post mentioned, you can set your DNS time-to-live numbers very low, but the cooler method uses database mirroring. With mirroring, you can set both the primary and secondary server names in your connection string, and your application will automatically try the second server when the first one doesn't respond.
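With ADO.NET's SqlClient that looks something like the following connection string (server and database names are placeholders):

```
Data Source=SQLNODE1;Failover Partner=SQLNODE2;Initial Catalog=FogBugz;Integrated Security=True;
```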
How can a single SQL Server instance have so many databases on it? FB has a completely separate database per account. I can't see a single SQL Server instance having more than say 200-250 databases on it! And I'm sure they have more customers than that.
The largest SQL Server I've worked with had over a thousand databases, and I've talked to a couple of other DBAs who have worked on systems with more than 2,000 databases on a server. It certainly makes management much more challenging, that's for sure.
