I'm facing the following challenge:
I have a bunch of databases in different geographical locations where the network may fail a lot (I'm using a cellular network). I need to keep all the databases synchronized, but it does not need to happen in real time. I'm using Java, but I have the freedom to choose any free database.
How can I achieve this?
It's a problem with a quite established corpus of research (of which people are apparently unaware). I suggest not reinventing a poor, defective wheel unless absolutely necessary (for example, if your requirements are so unusual that they allow a trivial solution).
Some keywords: replication, mobile DBMSs, distributed disconnected DBMSs.
Also these research papers are relevant (as an example of this research field):
Distributed disconnected databases,
The dangers of replication and a solution,
Improving Data Consistency in Mobile Computing Using Isolation-Only Transactions,
Dealing with Server Corruption in Weakly Consistent, Replicated Data Systems,
Rumor: Mobile Data Access Through Optimistic Peer-to-Peer Replication,
The Case for Non-transparent Replication: Examples from Bayou,
Bayou: replicated database services for world-wide applications,
Managing update conflicts in Bayou, a weakly connected replicated storage system,
Two-level client caching and disconnected operation of notebook computers in distributed systems,
Replicated document management in a group communication system,
... and so on.
I am not aware of any databases that will give you this functionality out of the box; there is a lot of complexity here due to the need for eventual consistency and conflict resolution (e.g., what happens if the network gets split into two halves, you update something to the value 123 while I update it on the other half to 321, and then the networks reconnect?)
You may have to roll your own.
For some ideas on how to do this, check out the design of Yahoo's PNUTS system: http://research.yahoo.com/node/2304 and Amazon's Dynamo: http://www.allthingsdistributed.com/2007/10/amazons_dynamo.html
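Since a roll-your-own design will run into exactly the split-brain update described above, a version vector (the mechanism Dynamo popularized) is the usual way to detect such conflicts. Below is a minimal, hypothetical Java sketch of the idea; the class and method names are mine, not from any particular library.

```java
import java.util.HashMap;
import java.util.Map;

/**
 * Minimal version-vector sketch: each replica increments its own counter on
 * every local write; two versions conflict when neither vector dominates the
 * other (e.g. the 123-vs-321 update made on both sides of a network split).
 */
public class VersionVector {
    private final Map<String, Long> counters = new HashMap<>();

    /** Record a local write made by the given replica. */
    public void increment(String replicaId) {
        counters.merge(replicaId, 1L, Long::sum);
    }

    /** True if this vector has seen everything the other vector has seen. */
    public boolean dominates(VersionVector other) {
        for (Map.Entry<String, Long> e : other.counters.entrySet()) {
            if (counters.getOrDefault(e.getKey(), 0L) < e.getValue()) {
                return false;
            }
        }
        return true;
    }

    /** Concurrent (conflicting) if neither side dominates the other. */
    public boolean conflictsWith(VersionVector other) {
        return !this.dominates(other) && !other.dominates(this);
    }
}
```

On reconnection, a replica holding both versions keeps the one whose vector dominates; only when `conflictsWith` returns true does it need to fall back to application-level merging (the Dynamo approach) or a rule such as last-writer-wins.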
Check out SymmetricDS. SymmetricDS is web-enabled, database independent, data synchronization/replication software. It uses web and database technologies to replicate tables between relational databases in near real time. The software was designed to scale for a large number of databases, work across low-bandwidth connections, and withstand periods of network outage.
I don't know your requirements or your apps, but this isn't a quick-answer type of question. I'm very interested to see what others have to say. However, I have a suggestion that may or may not work for you, depending on your requirements and situation. In particular, this will not help if your users need to use the app even when the network is unavailable (offline access).
Keeping a bunch of small databases synchronized is a fairly complex task to do correctly. Is there any possibility of just having one centralized database, and either having the client applications connect directly to it or (my preferred solution) write some web services to handle accessing/updating data rather than having a bunch of client databases?
I realize this limits offline access, but there are various caching strategies you can use. (Which of course, leads you back to your original question.)
I've heard about two kinds of database architectures.
master-master
master-slave
Isn't master-master more suitable for today's web? It's like Git: every unit has the whole set of data, and if one goes down, it doesn't matter much.
Master-slave reminds me of SVN (which I don't like), where you have one central unit that handles things.
Questions:
What are the pros and cons of each?
If you want to have a local database in your mobile phone like iPhone, which one is more appropriate?
Is the choice of one of these a critical factor to consider thoroughly?
While researching the various database architectures myself, I compiled a good bit of information that might be relevant to someone else researching this in the future. I came across:
Master-Slave Replication
Master-Master Replication
MySQL Cluster
I have decided to settle on MySQL Cluster for my use case. However, please see below for the various pros and cons that I have compiled.
1. Master-Slave Replication
Pros
Analytic applications can read from the slave(s) without impacting the master
Backups of the entire database have relatively little impact on the master
Slaves can be taken offline and synced back to the master without any downtime
Cons
In the event of a failure, a slave has to be promoted to master to take its place; there is no automatic failover
Downtime and possibly loss of data when a master fails
All writes have to be made to the master in a master-slave design (see the read/write routing sketch after this list)
Each additional slave adds some load to the master, since the binary log has to be read and the data copied to each slave
Application might have to be restarted
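To make the "all writes go to the master" point concrete, here is a minimal, hypothetical JDBC routing sketch; host names and credentials are placeholders.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

/**
 * Toy router for a master-slave setup: writes always go to the master,
 * reads may go to a replica. Host names and credentials are placeholders.
 */
public class ReplicationAwareRouter {
    private static final String MASTER_URL = "jdbc:mysql://master-host:3306/appdb";
    private static final String SLAVE_URL  = "jdbc:mysql://slave-host:3306/appdb";

    /** All INSERT/UPDATE/DELETE statements must use this connection. */
    public Connection writeConnection() throws SQLException {
        return DriverManager.getConnection(MASTER_URL, "app", "secret");
    }

    /**
     * Reads that can tolerate replication lag (e.g. the reporting queries
     * mentioned in the pros above) may be served by a slave.
     */
    public Connection readConnection() throws SQLException {
        return DriverManager.getConnection(SLAVE_URL, "app", "secret");
    }
}
```

MySQL Connector/J also offers a replication-aware URL scheme (`jdbc:mysql:replication://master,slave/db`) that routes statements according to `Connection.setReadOnly(...)`, which may spare you from hand-rolling this.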
2. Master-Master Replication
Pros
Applications can read from both masters
Distributes write load across both master nodes
Simple, automatic and quick failover
Cons
Loosely consistent
Not as simple as master-slave to configure and deploy
3. MySQL Cluster
The new kid in town, based on the MySQL Cluster design. MySQL Cluster was developed with high availability and scalability in mind and is the ideal solution for environments that require no downtime, high availability and horizontal scalability.
See MySQL Cluster 101 for more information
Pros
(High Availability) No single point of failure
Very high throughput
99.99% uptime
Auto-Sharding
Real-Time Responsiveness
On-Line Operations (Schema changes etc)
Distributed writes
Cons
See known limitations
You can visit my blog for a full breakdown, including architecture diagrams, that goes into further detail about the three architectures mentioned.
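For completeness, here is a minimal sketch of how a Java client might connect to a cluster's SQL nodes, assuming MySQL Connector/J's load-balancing URL scheme; host names, credentials and the table are placeholders.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class ClusterConnectionExample {
    public static void main(String[] args) throws SQLException {
        // Connector/J can spread connections across several SQL nodes and
        // fail over between them; host names here are placeholders.
        String url = "jdbc:mysql:loadbalance://sqlnode1:3306,sqlnode2:3306/appdb";

        try (Connection conn = DriverManager.getConnection(url, "app", "secret");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT COUNT(*) FROM orders")) {
            while (rs.next()) {
                System.out.println("orders: " + rs.getLong(1));
            }
        }
    }
}
```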
We're trading off availability, consistency and complexity. To address the last question first: does this matter? Yes, very much! The choices concerning how your data is to be managed are absolutely fundamental, and there is no "Best Practice" that dodges the decisions. You need to understand your particular requirements.
There's a fundamental tension:
One copy: consistency is easy, but if it happens to be down, everybody is dead in the water, and remote users may pay horrid communication costs. Bring portable devices, which may need to operate disconnected, into the picture and one copy won't cut it.
Master-slave: consistency is not too difficult because each piece of data has exactly one owning master. But what do you do if you can't see that master? Some kind of postponed work is needed.
Master-master: if you can make it work, it seems to offer everything: no single point of failure, and everyone can work all the time. The trouble is that it is very hard to preserve absolute consistency. See the Wikipedia article for more.
Wikipedia seems to have a nice summary of the advantages and disadvantages
Advantages
If one master fails, other masters will continue to update the database.
Masters can be located in several physical sites, i.e. distributed across the network.
Disadvantages
Most multi-master replication systems are only loosely consistent, i.e. lazy and asynchronous, violating ACID properties.
Eager replication systems are complex and introduce some communication latency.
Issues such as conflict resolution can become intractable as the number of nodes involved rises and the required latency decreases.
I'm not sure if my question was at all clear or not, so let me just dive in and give you the long version:
Someone said recently, when discussing high-volume web applications, that "disk is the new tape". Website administrators use huge clusters of memcache servers to make disk I/O-free round trips between the client and the server. In order to accomplish this, application developers are having to treat RDBMS's like generic data stores, tossing out valuable features like foreign key constraints, check constraints, and cascading UPDATES and DELETES.
But what if you could put your memcache cluster on the other side of your db interface, outside the realm of the application software (PHP, Ruby, Python...), and inside the RDBMS? Think of it as a large, distributed, in-memory database cache. To make it clear, I'm talking about having the type of memcache cluster that can store pretty much an entire database in memory, guaranteeing 100% cache hit rate when reading from the database, regardless of the parameters of your SELECT statement. Then, when writing application code, you can not only forget about memory caching, you can start dealing with the data store as if it were a Relational Database Management System again, and use normalization and JOINS the way RDBMS's traditionally encourage.
The performance gains, while maintaining/restoring the data integrity benefits of RDBMS's, might be worth looking into.
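To make the idea concrete at the application boundary, here is a minimal, hypothetical read-through cache sketch; it sits in the application tier rather than inside the RDBMS as proposed, and it ignores invalidation entirely.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

/**
 * Minimal read-through cache sketch: a lookup goes to memory first and falls
 * back to a database loader on a miss. This lives in the application tier,
 * not inside the database engine, and invalidation is deliberately ignored.
 */
public class ReadThroughCache<K, V> {
    private final Map<K, V> store = new ConcurrentHashMap<>();
    private final Function<K, V> dbLoader; // e.g. a JDBC lookup by primary key

    public ReadThroughCache(Function<K, V> dbLoader) {
        this.dbLoader = dbLoader;
    }

    public V get(K key) {
        return store.computeIfAbsent(key, dbLoader);
    }

    /** Writes must hit the database first, then refresh the cached copy. */
    public void put(K key, V value) {
        store.put(key, value);
    }
}
```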
Does anyone know of a project where someone has done this already, perhaps as an open source project wherein they modify an open source RDBMS such as PostgreSQL or MySQL? I haven't found anything yet, and I have no idea how these programs are structured or whether it would even be possible to implement such a storage engine.
You should review the CAP theorem:
http://www.julianbrowne.com/article/viewer/brewers-cap-theorem
Keep in mind that if one of those memcache servers rebooted, you would be a bit miffed. Even if you used all memcache servers, you would still be limited by network speed, and then network congestion could become an issue.
There are other devices, such as Fusion-io cards and other pure-RAM hard disks (e.g. Hypersystem's RAM disks), that also address this issue. I wish my company could afford a dozen of these...
-daniel
I am just getting started breaking a .NET application and its SQL Server database into two systems - an intranet and a public website.
The various database tables will need to be synchronised between the two databases in different ways, for example:
Moving from web to intranet, with the intranet data becoming read-only
Moving from intranet to web, with the web data becoming read-only
Tables that need to be synchronised and are read/write on both the intranet and web databases.
Some of the synchronisation needs to occur relatively quickly with minimal lag, possibly with some type of transaction locking to ensure repeatable reads etc. Other times it doesn't matter if there is a delay between synchronisation.
I am not quite sure where to start with all this, as there seems to be many different ways of achieving this. Which technologies and strategies should I be looking at?
Any tips?
A system like that looks like the components are fairly tightly coupled. An upgrade across several systems all at once can turn into quite the nightmare.
It looks like this is less of a replication problem and more a problem of how to maintain a constant connection to a remote database without much I/O lag. While it can be done, it probably isn't going to work out very well in terms of scalability and the ability to troubleshoot problems.
You might look at using some message queueing and asynchronous data processing from the remote site to the intranet. You'll probably have to adjust some expectations on the business side so that they don't assume everything is accessible in real time all the time.
Of course, it's hard to give specifics without more details. It might be a good idea to look into the principles of SOA and messaging systems for what you're trying to do.
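The question is on the .NET/SQL Server stack, but as a minimal, hypothetical sketch of the queued, asynchronous hand-off suggested above (plain JMS, with ActiveMQ standing in for whatever broker you choose; broker URL, queue name and payload are placeholders):

```java
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.jms.TextMessage;

import org.apache.activemq.ActiveMQConnectionFactory;

/**
 * The public website enqueues a change instead of writing straight into the
 * intranet database; a consumer on the intranet side applies it whenever
 * connectivity and load allow. Broker URL, queue name and payload are
 * placeholders.
 */
public class ChangePublisher {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ActiveMQConnectionFactory("tcp://broker-host:61616");
        Connection connection = factory.createConnection();
        try {
            connection.start();
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            Queue queue = session.createQueue("intranet.sync.orders");
            MessageProducer producer = session.createProducer(queue);

            TextMessage message = session.createTextMessage(
                    "{\"orderId\": 42, \"status\": \"SUBMITTED\"}");
            producer.send(message);
        } finally {
            connection.close();
        }
    }
}
```

A consumer on the intranet side drains the queue and applies the changes at its own pace, which is what decouples the web site from the intranet database's availability.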
Out of the box you have SQL Server Replication. Sounds like a pair of filtered transactional replication publications can do the job. Transactional replication has a low overhead on the publisher and can ensure transactional consistency of the published changes.
Nathan raises some very valid points about the need for a more loosely coupled solution. Service Broker can fit that shoe quite well with its loosely coupled, asynchronous nature, and it provides a headache-free upgrade path, since SSB is compatible between SQL Server versions and editions. But this freedom comes at the cost of leaving the heavy lifting of actually detecting changes and applying them to the tables to you, as application code, which is not a trivial feat.
There is frequently the need to synchronize data from master tables in one database to clone tables in other databases, often on other servers. For example, consider the case where a backend system manages inventory data and that inventory data ultimately must be pushed to one or more databases that are part of a web site application.
The source data in the backend system is heavily normalized, with dozens of tables and foreign key constraints. It is a well-designed OLTP RDBMS system. Many of the tables in question contain millions of rows. The need is to push this data out to the other databases regularly. As frequently as feasible; latency can be tolerated. Above all, maximum uptime of both the backend and remote databases is imperative.
I am using SQL Server and am familiar with change tracking, rowversion, triggers, and so on. I know that Microsoft pushes replication, SyncFx, and SSIS heavily for these scenarios. However, there is quite a difference between vendor whitepapers and overviews recommending technologies and the actual implementation, deployment, and maintenance of the solution. In the SQL Server world, replication is often viewed as the turnkey solution, but I am trying to explore alternate solutions. (There is some fear that replication is difficult to administer, makes it hard to change schema, and in the event that a re-initialize is ever required there would be large downtime for critical systems.)
There are lots of gotchas. Due to the complex foreign key relationships among large numbers of tables, determining what order to perform captures or to apply updates in is not trivial. Due to unique indexes, two rows might be interlocked in such a way that a row-at-a-time update will not even work (you need to perform intermediate updates to each row before the final update). These are not necessarily show-stoppers, as unique indexes can often be changed to regular indexes and foreign keys can be disabled (though disabling the foreign keys is extremely undesirable). Often you will hear, "just" use SQL 2008 change tracking and SSIS or SyncFx. These kinds of answers really do not do justice to the practical difficulties. (And of course, clients really have a hard time wrapping their heads around how copying data could be so difficult, making a difficult situation all the worse!)
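On the specific problem of ordering the apply step around foreign keys, a topological sort of the FK graph (Kahn's algorithm) is the standard tool; here is a minimal, hypothetical Java sketch, with made-up table names in the usage note below.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

/**
 * Orders tables so that every referenced (parent) table is applied before the
 * tables that reference it. Input: for each table, the list of tables its
 * foreign keys point at. Throws if the FK graph contains a cycle, which is
 * exactly the case where constraints have to be deferred or disabled.
 */
public class FkLoadOrder {
    public static List<String> loadOrder(Map<String, List<String>> fkParents) {
        Map<String, Integer> pendingParents = new HashMap<>();
        Map<String, List<String>> children = new HashMap<>();

        for (Map.Entry<String, List<String>> e : fkParents.entrySet()) {
            pendingParents.putIfAbsent(e.getKey(), 0);
            for (String parent : e.getValue()) {
                pendingParents.putIfAbsent(parent, 0);
                pendingParents.merge(e.getKey(), 1, Integer::sum);
                children.computeIfAbsent(parent, k -> new ArrayList<>()).add(e.getKey());
            }
        }

        Deque<String> ready = new ArrayDeque<>();
        pendingParents.forEach((table, count) -> { if (count == 0) ready.add(table); });

        List<String> order = new ArrayList<>();
        while (!ready.isEmpty()) {
            String table = ready.remove();
            order.add(table);
            for (String child : children.getOrDefault(table, List.of())) {
                if (pendingParents.merge(child, -1, Integer::sum) == 0) {
                    ready.add(child);
                }
            }
        }
        if (order.size() != pendingParents.size()) {
            throw new IllegalStateException("Cyclic foreign keys; defer or disable constraints");
        }
        return order;
    }
}
```

For example, `loadOrder(Map.of("OrderLine", List.of("Order", "Product"), "Order", List.of("Customer")))` puts Customer and Product before Order, and Order before OrderLine.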
This issue is ultimately very generic: perform one-way synchronization of many heavily related database tables with lots of rows. Almost everyone involved in databases has to deal with this kind of issue. Whitepapers are common, practical expertise hard to find. We know this can be a difficult issue, but the job must get done. Let's hear about what has worked for you (and what to avoid). Tell your experience with Microsoft products or products from other vendors. But if you personally have not battle-tested the solution with large numbers of heavily-related tables and rows, please refrain from answering. Let's keep this practical -- not theoretical.
There is no silver bullet. For ease of use and turnkey deployment, nothing can beat replication. It is the only solution that covers conflict detection and resolution in depth, has support for pushing schema changes, and comes with a comprehensive set of tools for setting it up and monitoring it. It was the MS poster child of data synchronization for many years before this agenda was taken over by the .NET crowd. Replication has two underlying problems, in my opinion:
The technology used to push changes is primitive, slow and unreliable. It requires file shares to initialize the replicas and it depends on T-SQL to actually replicate data, resulting in all sorts of scalability problems: the replication threads use server worker threads, and the fact that they interact with arbitrary tables and application queries leads to blocking and deadlocks. The biggest deployments I've heard of are around 400-500 sites, and they are done by superhuman MVPs and top-dollar consultants. This stops in their tracks many projects that start at 1500 sites (way beyond the largest deployed replication projects). I'm curious to hear if I'm wrong and you know of a SQL Server replication solution deployed with more than 500 sites.
The replication metaphor is too data-centric. It does not take into account the requirements of distributed applications: the need for versioned and formalized contracts, the autonomy of data "fiefdoms", and loose coupling from an availability and security point of view. As a result, replication-based solutions solve the immediate need to "make data available over there", but fail to solve the true problem of "my app needs to talk to your app".
At the other end of the spectrum you'll find solutions that truly address the problem of application communication, like services based on queued messaging. But these are either painfully slow or riddled with problems rooted in the separation of the communication mechanism (web services and/or MSMQ) from the data storage (DTC transactions between the communication layer and the database, no common high-availability story, no common recoverability story, etc.). Solutions that are blazingly fast and fully integrated with the DB exist in the MS stack, but nobody knows how to use them. Somewhere in between these and replication you'll find various intermediate solutions, like OCS/Sync Framework and SSIS-based custom solutions. None will offer the ease of setup and monitoring of replication, but they might scale and perform better.
I was involved with several projects that required "data synchronization" on a very large scale (1200+ sites, 1600+ sites), and my solution was to turn the problem into an "application communication" problem. Once the mindset is changed and the data flow is no longer seen as "record with key X of table Y" but instead as "a message communicating the purchase of item X by customer Y", the solution becomes easier to understand and apply. You no longer think in terms of "insert records in order X-Y-Z so FK relations don't break" but instead in terms of "process the purchase described by message XYZ".
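As a minimal, hypothetical illustration of that mindset (table names, columns and the message shape are all made up), the receiving site applies a business-level message in one local transaction instead of replaying rows in FK order:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

/**
 * Applies a "customer Y purchased item X" message as one local transaction.
 * The receiving site owns the insert order and keys; nothing about the
 * sender's row layout leaks across. Table and column names are placeholders.
 */
public class PurchaseMessageHandler {

    /** Hypothetical message shape: what happened, not which rows changed. */
    public record PurchaseMessage(String messageId, String customerId,
                                  String itemId, int quantity) {}

    public void apply(Connection conn, PurchaseMessage msg) throws SQLException {
        conn.setAutoCommit(false);
        try (PreparedStatement dedupe = conn.prepareStatement(
                 "INSERT INTO processed_messages (message_id) VALUES (?)");
             PreparedStatement insert = conn.prepareStatement(
                 "INSERT INTO purchases (customer_id, item_id, quantity) VALUES (?, ?, ?)")) {

            // Recording the message id first makes redelivery idempotent:
            // a duplicate violates the key on message_id and rolls everything back.
            dedupe.setString(1, msg.messageId());
            dedupe.executeUpdate();

            insert.setString(1, msg.customerId());
            insert.setString(2, msg.itemId());
            insert.setInt(3, msg.quantity());
            insert.executeUpdate();

            conn.commit();
        } catch (SQLException e) {
            conn.rollback();
            throw e;
        }
    }
}
```

Recording the message id inside the same transaction (assuming `processed_messages.message_id` is the primary key) is what keeps redelivery of the same message harmless.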
In my view, replication and its derivatives (i.e. data tracking and datagram shipping) are solutions anchored in 1980s technologies and the 1980s view of data and applications. Obsolete dinosaurs (and by no means turning into birds).
I know this does not even begin to address all your (very legitimate) concerns, but writing out everything I have to say/rant/ramble on this topic would fill volumes of paperback...
I'm currently evaluating how best to share data between offices at different geographical locations.
My current preference is to use SQL Server Merge Replication, with a main database and a handful of subscribers.
The system will also need to allow a few work sites to work disconnected (no or little connectivity on construction sites).
The amount of data is not going to be large, we're talking about sharing data from a custom ERP system between a manufacturing plant, a handful of regional offices and work sites.
The Sync Framework also looks good and seems to have good support in SQL Server 2008.
What other proven systems out there should I investigate that can meet these needs?
For those with experience sharing data in a similar environment, do you have any particular recommendations and tips?
How difficult has it been for you to deal with data conflicts?
Definitely stick with SQL Server replication rather than deciding to go down the path of "build your own replication framework." I've seen some applications become horrible messes that way.
I've had environments that were set up for snapshot replication in a disconnected model, but the remote sites were read-only. They worked quite well with minimal issues.
I'd also be interested in hearing people's experiences with the sync framework.
You may want to look at what Microsoft calls Smart Clients, an architecture Microsoft talks about for applications that may have intermittent network connectivity.
I have already discussed my own experience of SQL Server 2005 with #cycnus. My answer is not a real one, just a few arguments to initiate a subject I am very interested in.
Our choice for "not always connected" sites is to implement web-based merge replication. Data exchanges happen to be even quicker than through VPNs (we also have a combination of LAN merge replications). I will easily get a speed of 30 to 40 rows per second over the web (512 down/128 up, shared), while I'll get 5 rows per second through the LAN (overseas, 256 up/down, dedicated). Don't ask me why...
Tips are numerous: subscriptions should be of the client type (data basically circulating from the subscriber to the publisher before being distributed). Primary keys should always be GUIDs, for many reasons exposed here, but also for replication issues: we are then sure that any newly created record will be able to find its way up to the publisher, as its PK will be unique. Moreover, I recently had a non-convergence issue with one of my replications (bad experience, exposed here), where I was very happy not to be using natural keys, as the problem occurred on the potential "natural key" column.
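As a tiny, hypothetical illustration of the "PK is always a GUID" rule: the client generates the key itself with `java.util.UUID`, so a row created at a disconnected subscriber can never collide at the publisher. Connection details and the table are placeholders; in SQL Server merge replication the column would normally be a `uniqueidentifier` with `ROWGUIDCOL`, stored here as text only to keep the sketch short.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.util.UUID;

public class GuidKeyInsert {
    public static void main(String[] args) throws Exception {
        // Generate the key on the client so a row created while disconnected
        // can never collide with rows created at other sites.
        String id = UUID.randomUUID().toString();

        try (Connection conn = DriverManager.getConnection(
                 "jdbc:sqlserver://site-server;databaseName=erp", "app", "secret");
             PreparedStatement ps = conn.prepareStatement(
                 "INSERT INTO work_orders (id, description) VALUES (?, ?)")) {
            ps.setString(1, id);
            ps.setString(2, "New work order created at a disconnected site");
            ps.executeUpdate();
        }
    }
}
```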
Data conflicts should then be basically limited to work-organisation problems, where (usually for bad reasons) the same data is modified by different users in different places at the same time. With our "PK is a GUID" rule, we do not have conflicts outside these specific situations.
One should always have the possibility of modifying the database structure, even while replications are running. It is possible to keep adding fields, indexes and constraints while merge replication processes are running. I also found a workaround for adding tables without reinitialising the replication process (exposed here; I still don't understand why I was downvoted on that answer!)