My current project requires immediate replication between 5 databases. Servers are physically distributed across the globe. Currently we use Redis with one master server installed on a separate 6th server. All database writes on any of the 5 servers are performed against this 6th master server, and the other 5 are slaves of it. This approach has a lot of flaws and I'm looking for a solution to replace it. Any suggestions please?
Related
I have 2 servers. I need to copy some columns from 4 different tables from server 1 into the corresponding (empty) tables in server 2.
So basically, it's about replicating data from one table to another. How is this done best (and easiest)? Also, how do I make sure that the copied/replicated data is updated at the same frequency as the source (which runs completely fine and automatically)?
I want to avoid using Linked Server.
How is this done best (and easiest)?
For a one-time replication, consider the SQL Server Import and Export Wizard. This approach can also be scheduled by saving the final package and scheduling it with SQL Server Agent.
Example: Simple way to import data into SQL Server
For continuous, low-latency data synchronization - SQL Server Transactional Replication.
Further read: Tutorial: Configure replication between two fully connected servers (transactional)
It's worth mentioning that transactional replication is not the easiest topic; however, it fits the requirement quite well.
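To give a feel for what setting up transactional replication involves, here is a minimal T-SQL sketch run on the publisher, assuming the Distributor has already been configured. All names (`SalesDB`, `SalesPub`, `Orders`) are hypothetical placeholders:

```sql
-- Hypothetical names throughout; run on the publisher instance
-- after the Distributor has been configured.
USE SalesDB;
GO
-- Enable the database for transactional publishing
EXEC sp_replicationdboption
    @dbname  = N'SalesDB',
    @optname = N'publish',
    @value   = N'true';
-- Create the publication; the Log Reader Agent ships committed
-- changes continuously for low latency
EXEC sp_addpublication
    @publication = N'SalesPub',
    @repl_freq   = N'continuous',
    @status      = N'active';
-- Add one table (an "article") to the publication
EXEC sp_addarticle
    @publication   = N'SalesPub',
    @article       = N'Orders',
    @source_object = N'Orders';
```

A subscription on the second server then pulls (or is pushed) these changes, which covers the "updated at the same frequency as the source" requirement without a linked server.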
I have a database on an SQL Server instance hosted on Azure Windows VM. There are two things I need to achieve.
Create a real-time duplicate of the database on another server. That is, I need my database to make a copy of itself and then copy all of its data to the duplicate at regular intervals. Let's say, 2 hours.
If my original database fails due to some reason, I need it to redirect all read/write requests to the duplicate database.
Any elaborate answer or links to any articles you deem helpful are welcome. Thank you!
You can have a high availability solution for your SQL Server databases in Azure using AlwaysOn Availability Groups or database mirroring.
Basically, you need 3 nodes for true HA. The third one can be a simple file server that will work as the witness to complete the quorum for your failover cluster. Primary and Secondary will be synchronized and in case of a failure, secondary will take over. You can also configure read requests to be split among instances.
If HA is not really that important for your use case, disaster recovery will be a cheaper solution. Check the article below for more info.
High Availability and Disaster Recovery for SQL Server in Azure Virtual Machines
https://azure.microsoft.com/en-us/documentation/articles/virtual-machines-sql-server-high-availability-and-disaster-recovery-solutions/
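For a sense of what the AlwaysOn option looks like, here is a hedged T-SQL sketch of creating an availability group with automatic failover and a readable secondary. It assumes Windows Server Failover Clustering is in place and HADR is enabled on both instances; the names (`AppAG`, `AppDB`, `SQLVM1`, `SQLVM2`, the endpoint URLs) are all hypothetical:

```sql
-- Hypothetical two-replica availability group with automatic failover.
-- Assumes WSFC is configured and both instances have HADR enabled.
CREATE AVAILABILITY GROUP AppAG
FOR DATABASE AppDB
REPLICA ON
    N'SQLVM1' WITH (
        ENDPOINT_URL      = N'TCP://sqlvm1.contoso.com:5022',
        AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
        FAILOVER_MODE     = AUTOMATIC),
    N'SQLVM2' WITH (
        ENDPOINT_URL      = N'TCP://sqlvm2.contoso.com:5022',
        AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
        FAILOVER_MODE     = AUTOMATIC,
        -- allow read requests to be served by the secondary
        SECONDARY_ROLE (ALLOW_CONNECTIONS = READ_ONLY));
```

The availability group listener then gives applications a single connection point, so read/write requests are redirected automatically when the secondary takes over.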
I have been tasked with getting a copy of our SQL Server 2005/2008 databases in the field on-line internally and update them daily. Connectivity with each site is regulated, so on-line access is not an option. Field databases are Workgroup licensed. Main server is Enterprise with some obscene number of processors and RAM. The purpose of the copies is two-fold: (1) on-line backup and (2) source for ETL to the data warehouse.
There are about 300 databases, identical schema for the most part, located throughout the US, Canada and Mexico. Current DB size is between 5 GB and over 1 TB. Activity varies, but is about 1,500,000 new rows daily on each server, mostly in 2 tables. About 50 tables total in each. Connection quality and bandwidth with each site varies, but the main site has enough bandwidth to do many sites in parallel.
I'm thinking SSIS, but am not sure how to approach this task other than table-by-table. Can anyone offer any guidance?
Honestly, I would recommend using SQL replication. We do this quite a bit, and it will even work over dialup. It basically minimizes traffic needed as only changes are transferred.
There are several topologies. We only use merge (two way), but transactional might be OK for your needs (one way).
Our environments are a single central DB, replicating (using filtered replication articles) to various site databases. The central DB is the publisher. It is robust, once in place, but is a nuisance for schema upgrades.
However, given your databases aren't homogeneous, it might be easier to set it up where the remote site is the publisher, and the central SQL instance has a per site database that is a subscriber to the site publisher. The articles wouldn't even need to be filtered. And then you can process the individual site data centrally.
Note that the site databases would need the replication components installed (they are generally optional in the installer). To be set up as publishers they'd also need local configuration (distribution configured on each one). Being Workgroup edition, it can act as a publisher; SQL Server Express can't.
It sounds complicated, but it is really just procedural, and an inbuilt mechanism for doing this sort of thing.
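As a rough illustration of the "remote site as publisher" layout, here is a hedged merge-publication sketch run at one site, assuming distribution has already been configured locally on that site's instance. The names (`FieldDB`, `Site042Pub`, `Readings`) are hypothetical:

```sql
-- Hypothetical: run at one remote site, which acts as its own
-- publisher/distributor for database FieldDB.
USE FieldDB;
GO
-- Enable the database for merge publishing
EXEC sp_replicationdboption
    @dbname  = N'FieldDB',
    @optname = N'merge publish',
    @value   = N'true';
-- Create the merge publication (unfiltered, per the setup above)
EXEC sp_addmergepublication
    @publication = N'Site042Pub',
    @sync_mode   = N'native';
-- Add a high-volume table as an article
EXEC sp_addmergearticle
    @publication   = N'Site042Pub',
    @article       = N'Readings',
    @source_object = N'Readings';
```

The central instance would then hold one per-site subscriber database, and since only changes are transferred, the daily ~1.5M rows per site is the traffic ceiling rather than the full table sizes.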
I'm not a DBA so this may be a stupid question, but I'll ask it anyway. We're upgrading our SQL Servers from 2000 to 2005 and we will probably use either database replication or database mirroring. Our DBA would like to "multipurpose" the standby server, meaning that he'd like to increase our capabilities and capacity by running other database applications on the standby server since "it's just going to be sitting there anyway" (his words, not mine). Is this such a good idea? Right now, our main application server uses only one instance that contains 50+ databases. As I understand it, what we're doing now and what our DBA is proposing for a failover server is a bad idea because all of these databases are sharing memory, CPUs, and working areas. If one application starts behaving badly, the other DBs could be affected.
Any thoughts?
It's really a business question that needs to be answered: is a slow app better than no app if you can't afford the expense of extra hardware?
Standby and mirrored DBs can be used for reporting. Using one as the failover DB can work if you have enough headroom (i.e. both databases will comfortably run on the one server).
Will you depend on these extra applications? Where do they run in the failover case?
You really need to understand your failure modes.
If you look at it as basic resource math, it often doesn't make sense unless the resources still running in the failure scenario can handle the entire expected load. Sometimes this is the case, but not always. To handle the actual load you may need yet another server to step in (like RAID: perhaps your load needs a minimum of 5 servers and you have a farm of 6; then you need one standby server for every server that could fail beyond the first). Sometimes a farm can run degraded, but sometimes it just pukes and dies.
And in out-of-normal operation you often get cascading accidents, where one legitimate incident causes a cascade of issues - e.g. your backup tape is busy restoring a server from a backup (to a test environment, even - there are no real "failures"), and now your SQL Server or Exchange server (or both) is not backed up and your log gets full.
Database Mirroring would not be the way to go here, in my opinion, as it provides redundancy at the database level only. So you would need to configure database mirroring for up to 50 databases based on the information you provided. The chances are that if one DB were to fail, all 50 would probably follow, as failures typically occur at the hardware level rather than in a specific database.
It sounds to me like you should be using SQL Server Clustering technology. You could create an Active/Active cluster to support your requirements.
What is an Active/Active Cluster?
An Active/Active SQL Server cluster means that SQL Server is running on both nodes of a two-way cluster. Each copy of SQL Server acts independently, and users see two different SQL Servers. If one of the SQL Servers in the cluster should fail, then the failed instance of SQL Server will fail over to the remaining server. This means that both instances of SQL Server will then be running on one physical server, instead of two.
Applying this to your scenario
You could then split the databases between two instances of SQL server, one active instance on each node. Should one node fail, the other node will pick up the slack and vice versa.
Further Reading
An introduction to SQL Server Clustering
I suspect that you will find the following MSDN thread useful reading also
"it's just going to be sitting there anyway"
It will be sitting there applying transactions...
Take note of John Sansom's recommendation. Keep in mind that an Active/Active cluster requires two SQL Server licenses, while a failover cluster/mirror only needs one.
Setting up mirroring for a large number of DBs could turn into a big pain. You need any jobs/maintenance to move over as well, which can be achieved with alerts on WMI failover events. There's probably more to think about that could complicate things.
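For scale, this is what mirroring one database looks like; with 50+ databases you would have to script something like this in a loop, which is part of the pain mentioned above. The server names and database name are hypothetical, and mirroring endpoints (port 5022 here) must already exist on each instance:

```sql
-- Hypothetical sketch for ONE database. Endpoints must already exist
-- on all instances; the mirror copy is restored WITH NORECOVERY first.

-- 1) On the MIRROR server (after restoring AppDB WITH NORECOVERY):
ALTER DATABASE AppDB
    SET PARTNER = N'TCP://principal.contoso.com:5022';

-- 2) Then on the PRINCIPAL server:
ALTER DATABASE AppDB
    SET PARTNER = N'TCP://mirror.contoso.com:5022';

-- 3) Optional witness, required for automatic failover:
ALTER DATABASE AppDB
    SET WITNESS = N'TCP://witness.contoso.com:5022';
```

Repeat for every database - and note that jobs, logins, and maintenance plans live outside the databases, so none of this moves them over.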
We are planning to implement a SQL Server 2005 cluster in the next few months. I wanted to know what steps/precautions need to be taken as a database developer when trying to achieve this. Do we need to change any ADO.NET code (in the front end), stored procs, etc.? Are there any best practices to be followed?
The reason I am asking this question is: for ASP.NET load balancing, you have to ensure your code for sessions/application/cache complies with a load-balanced environment (so in case you are using InProc sessions, you have to rewrite that code so that it works in a load-balanced environment). Now that is at the web server level; I just wanted to know the right things to do when trying to scale out at the database server level.
I am sorry if this question is stupid. Please excuse my limited knowledge on this subject :-)
You don't have to make any front-end changes to implement a SQL Server cluster; you simply connect to a SQL Server instance as normal.
SQL Server failover clustering is not load balancing however. It is used to add redundancy should any hardware fail on your primary node. Your other (secondary) node is doing nothing until your primary fails, in which case the failover happens automatically and your database is serving connections again after a 10-20 second delay.
Another issue is the cache on the secondary node is empty, so you may see some performance impact after failover. You can implement a "warm" cache on your mirror server using SQL Server database mirroring, but there is no way to do something similar with clustering.
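For the mirroring alternative mentioned here, clients can name the mirror in the ADO.NET connection string (the `Failover Partner=` keyword), and you can check the current mirroring state server-side. A small sketch, using the built-in `sys.database_mirroring` catalog view (no hypothetical objects needed):

```sql
-- List mirrored databases with their current role, state, and
-- safety level (FULL = synchronous, OFF = asynchronous)
SELECT DB_NAME(database_id)          AS database_name,
       mirroring_role_desc,          -- PRINCIPAL or MIRROR
       mirroring_state_desc,         -- e.g. SYNCHRONIZED
       mirroring_safety_level_desc
FROM sys.database_mirroring
WHERE mirroring_guid IS NOT NULL;
```

With clustering, by contrast, there is nothing per-database to query: the whole instance moves between nodes, cold cache and all.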
Database clustering is different to load balancing. It's high availability, not "scaling out"
Basically:
2 servers (or nodes) with shared disks (that can only be owned by one node at any time)
one is "active" running a virtual windows server and SQL Server instance
one is monitoring the other ("passive")
you connect to the virtual windows server.
If node 1 goes off line, node 2 takes over. Or it can be failed over manually.
This means: services are shut down on node 1, node 2 takes control of the disks and services and starts up. Any connections will be broken, and no state or session is transferred.
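You can observe this setup from T-SQL with built-in server properties and, on SQL Server 2005 and later, a cluster DMV - useful after a failover to see which physical node is now serving connections:

```sql
-- Which physical node currently owns the clustered instance,
-- and whether this instance is clustered at all
SELECT SERVERPROPERTY('ComputerNamePhysicalNetBIOS') AS active_node,
       SERVERPROPERTY('IsClustered')                 AS is_clustered;

-- All nodes in the failover cluster (SQL Server 2005+)
SELECT NodeName
FROM sys.dm_os_cluster_nodes;
```

Since connections are broken on failover and no session state is transferred, applications should be prepared to retry their connection and re-run any in-flight transaction.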