A quorum is the minimum number of servers, typically a majority (e.g. 2 of 3, or 3 of 5), that must agree on an operation before it can proceed.
Versioning attaches a counter to each record that is incremented on every update.
In a database, will the latest version always give me the latest and correct record?
When and why should I use a quorum in a distributed system?
A quorum is required in a distributed environment where you are running a cluster of machines and any one of them can accept a write/modify request and update the data. In such scenarios, a quorum is used to elect the leader that will accept writes, or to determine which node can accept write/modify requests for a given range of keys.
Let's consider a scenario where you have three master servers accepting writes. If you want to update the data, can we just match the version on one of the masters and assume it is safe to update?
No, because at the same moment some other write request to another master could make the same assumption, and you would end up with different states of the data on different machines.
In this scenario, you need a quorum to elect the leader that will accept writes for a given range of data, and then you can use versioning (optimistic locking) to ensure the data is consistent across all machines and updates are serialized.
Versioning, however, is helpful when you have one master accepting writes and multiple users may try to update the same data; using versioning there gives you optimistic locking. This is generally helpful when the chance of conflict is low.
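For example, here is a minimal JDBC sketch of version-based optimistic locking; the records table and its columns are invented for the example:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

// Hypothetical sketch of optimistic locking with a version column.
// Assumed table: records(id BIGINT PRIMARY KEY, data TEXT, version BIGINT).
public class OptimisticUpdate {

    /**
     * Attempts to update a record only if its version is unchanged.
     * Returns true on success, false if someone else updated it first.
     */
    public static boolean update(Connection conn, long id, String newData,
                                 long expectedVersion) throws SQLException {
        String sql = "UPDATE records SET data = ?, version = version + 1 "
                   + "WHERE id = ? AND version = ?";
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setString(1, newData);
            ps.setLong(2, id);
            ps.setLong(3, expectedVersion);
            // Zero rows updated means the version check failed: another
            // writer got there first, so the caller must re-read the
            // record and retry (or report a conflict to the user).
            return ps.executeUpdate() == 1;
        }
    }
}
```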
Hypothetically speaking, I plan to build a distributed system with Cassandra as the database. The system will run on multiple servers, say servers A, B, C, D, E, etc. Each server will have a Cassandra instance, and all servers will form a cluster.
In my hypothetical distributed system, X of the total servers should process user requests, e.g. 3 of the servers A, B, C, D, E should process requests from user uA. Each application should update its Cassandra instance with an exact copy of the data. E.g. if user uA sends a message to user uB, each application should update its database with an exact copy of the message sent and to whom, and, as expected, Cassandra should take over from that point to ensure all nodes are up to date.
How do I configure Cassandra to make sure it first checks that all copies inserted into the database are exactly the same before updating all the other nodes?
Psst: kindly keep explanations as simple as possible. I'm new to Cassandra, crossing over from MySQL. Thank you in advance.
Every time a change happens in Cassandra, it is communicated to all relevant nodes (nodes that have a replica of the data). But sometimes that doesn't happen, either because a node is down or too busy, the network fails, etc.
What you are asking is how to get consistency out of Cassandra, or in other terms, how to make a change and guarantee that the next read has the most up to date information.
In Cassandra you choose the consistency in each query you make, therefore you can have consistent data if you want to. There are multiple consistency options but normally you would only use:
ONE - Only one node has to get or accept the change. This means fast reads/writes but low consistency (if you write to A, someone can read from B before it has been updated).
QUORUM - A majority of the replicas (more than half) must get or accept the change. This means slightly slower reads and writes, but you get FULL consistency IF you use it in BOTH reads and writes. That's because if more than half of the replicas have your data after you insert/update/delete, then when you read from more than half of them, at least one node in the read set will have the most recent value, and that is the one that gets delivered. Formally, reads are consistent whenever the read quorum plus the write quorum exceeds the replication factor (R + W > RF). (With 3 replicas A, B, C: if you write to A and B, a QUORUM read must contact two of the three, so it always includes at least one of A or B and therefore always sees the most up-to-date value.)
Cassandra knows what is the most up to date information because every change has a timestamp and the most recent wins.
You also have other options such as ALL, which is NOT RECOMMENDED because it requires all nodes to be up and available. If a node is unavailable, your system is down.
Cassandra Documentation (Consistency)
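For example, with the DataStax Java driver (the 3.x API here; the keyspace, table, and queries are invented for the example), you set the consistency level per statement:

```java
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.ConsistencyLevel;
import com.datastax.driver.core.ResultSet;
import com.datastax.driver.core.Session;
import com.datastax.driver.core.SimpleStatement;

public class QuorumExample {
    public static void main(String[] args) {
        // Connect to one contact point; the driver discovers the rest.
        try (Cluster cluster = Cluster.builder()
                .addContactPoint("127.0.0.1").build();
             Session session = cluster.connect("my_keyspace")) {

            // Write at QUORUM: a majority of replicas must acknowledge.
            SimpleStatement write = new SimpleStatement(
                "INSERT INTO messages (sender, recipient, body) VALUES (?, ?, ?)",
                "uA", "uB", "hello");
            write.setConsistencyLevel(ConsistencyLevel.QUORUM);
            session.execute(write);

            // Read at QUORUM as well: combined with the QUORUM write, the
            // read set always overlaps at least one up-to-date replica.
            SimpleStatement read = new SimpleStatement(
                "SELECT body FROM messages WHERE sender = ?", "uA");
            read.setConsistencyLevel(ConsistencyLevel.QUORUM);
            ResultSet rs = session.execute(read);
            rs.forEach(row -> System.out.println(row.getString("body")));
        }
    }
}
```

The point is that consistency is chosen per query, so you pay the QUORUM cost only on the statements that actually need it.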
I recently came across a case that makes me wonder if I'm a newbie or if something trivial has escaped me.
Suppose I have a piece of software, run by many users, that uses a table. When a user logs in to the app, a series of records from the table appears, and he just has to add or correct some information and save it. Now, if the software is run by many people, how can I guarantee that he is the only one working with that particular record? I mean, how can I know the record is not selected and being worked on by 2 or more users at the same time? And please, I wouldn't like the answer to be "SELECT FOR UPDATE...",
because from what I've read it has too negative an impact on the database. Thanks to all of you. Keep up the good work.
This is something that is not solved primarily by the database. The database manages isolation and locking of concurrent transactions. But by the time the records have been sent to the client, you have usually (and hopefully) closed the transaction, and you start a new one when the data comes back.
So you have to take care of it yourself.
There are different approaches, the ones that come into my mind are:
optimistic locking strategies (first wins)
pessimistic locking strategies
last wins
Optimistic locking: when storing, you check whether the record has been changed in the meantime. Usually this is done with a version counter or timestamp. Some ORMs and frameworks can help a little with implementing this.
Pessimistic locking: build a mechanism that records that someone has started to edit something and does not allow anyone else to edit the same record. Especially in web projects, it needs a timeout after which the lock is released anyway.
Last wins: the second person storing the record simply overwrites the first person's changes.
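For the optimistic variant, here is a minimal sketch of how an ORM can help, assuming JPA; the entity and its fields are invented for the example:

```java
import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.Version;

// Hypothetical JPA entity: the @Version field makes the provider turn
// every update into "UPDATE ... WHERE id = ? AND version = ?". If another
// transaction changed the row in the meantime, no row matches and the
// provider throws an OptimisticLockException instead of silently
// overwriting, which is exactly the "first wins" behavior above.
@Entity
public class Record {
    @Id
    private Long id;

    private String data;

    @Version
    private long version; // managed by the JPA provider, never set by hand

    // getters/setters omitted for brevity
}
```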
... makes me wonder if I'm a newbie ...
That's what always happens when we discover that very common stuff is still not solved by the tools and frameworks we use, and we have to solve it over and over again.
Now, if the software is run by many people, how can I guarantee that he
is the only one working with that particular record?
Ah...
And please, I wouldn't like the answer to be "SELECT FOR UPDATE...",
because from what I've read it has too negative an impact on the database.
Who cares? I mean, it is the only way (keeping a lock on a row) to guarantee you are the only one who can change it. Yes, this limits throughput, but then this is WHAT YOU WANT.
It is called programming: choosing the right tools for the job. In this case the impact is required because of the requirements.
The alternative, which is not a guarantee at the database level but at the application-server level, is an in-memory or in-database locking mechanism (like a table indicating which objects belong to which user).
But if you need to guarantee at the db level that one record is only used by one person, then you MUST keep a lock around and deal with the impact.
But seriously, most programs avoid this. They deal with it either with optimistic locking (the second user submitting changes gets an error) or other programmer-level decisions, BECAUSE the cost of such guarantees is ridiculously high.
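For reference, the approach being debated boils down to something like this hedged JDBC sketch; the table and columns are made up:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

// Minimal sketch of the pessimistic approach: the row stays locked from
// the SELECT ... FOR UPDATE until commit, so nobody else can change it
// in between. Table and column names are hypothetical.
public class PessimisticUpdate {
    public static void edit(Connection conn, long id) throws SQLException {
        conn.setAutoCommit(false);
        try (PreparedStatement select = conn.prepareStatement(
                "SELECT data FROM records WHERE id = ? FOR UPDATE")) {
            select.setLong(1, id);
            try (ResultSet rs = select.executeQuery()) {
                if (rs.next()) {
                    String data = rs.getString("data");
                    // ... apply the user's changes to `data` here ...
                    try (PreparedStatement update = conn.prepareStatement(
                            "UPDATE records SET data = ? WHERE id = ?")) {
                        update.setString(1, data + " (edited)");
                        update.setLong(2, id);
                        update.executeUpdate();
                    }
                }
            }
            conn.commit(); // releases the row lock
        } catch (SQLException e) {
            conn.rollback();
            throw e;
        }
    }
}
```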
Oracle is different from SQL Server.
In Oracle, when you update a record or data set, the old information is still available to readers, because until you commit, the pre-update image of the data is kept alongside your uncommitted change.
Therefore, anyone reading the same record will still see the old result.
If the access to this record is a write access, though, it will block on the lock until commit; only then can the second session write the same record.
If two sessions each end up waiting on a lock held by the other, a deadlock pops up.
SQL Server, by default, cannot read a record that has been locked for writing, so depending on which query you're running, you might lock an entire table.
First, you need to separate queries from inserts/updates, for example by offloading reporting queries to a data-warehouse database. That alone can solve slow updates that cause locks.
The next step is to identify what is causing the locks and work out each case separately.
Rebuilding indexes during working hours can cause very nasty locks; push those jobs to after hours.
Currently we have 2 servers with a load balancer in front of them. We want to be able to turn one machine off and back on later, without users noticing it.
Our application also uses Solr, and now I want to install and configure Solr on both servers. The question is: how do I configure master-master replication?
After my initial research I found out that it's not possible :(
But what are my options here? I want both indices to stay in sync, and when a document is committed on one server it should also go to the other.
Thanks for your help!
I'm not certain of your specific use case (why turn one server off and on?), but there is no specific "master-master" replication in Solr. Solr does, however, support distributed indexing and querying via SolrCloud. From the documentation for SolrCloud:
Replication ensures redundancy for your data, and enables you to send an update request to any node in the shard. If that node is a replica, it will forward the request to the leader, which then forwards it to all existing replicas, using versioning to make sure every replica has the most up-to-date version. This architecture enables you to be certain that your data can be recovered in the event of a disaster, even if you are using Near Real Time searching.
It's a bit complex, so I'd suggest you spend some time going through the documentation, as it's not quite as simple as setting up a couple of masters and load balancing between them. It is a big step up from the previous master/slave replication that Solr used, so even if it's not a perfect fit, it will be a lot closer to what you need.
https://cwiki.apache.org/confluence/display/solr/SolrCloud
https://cwiki.apache.org/confluence/display/solr/Getting+Started+with+SolrCloud
You can just create a simple master - slave replication as described here:
https://cwiki.apache.org/confluence/display/solr/Index+Replication
Be sure to send your inserts, deletes, and updates directly to the master; selects can go through the load balancer.
The other alternative is to add a third server as the master with the existing two as slaves, and put the load balancer in front of the two slaves.
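To make the routing concrete, here is a hedged SolrJ sketch of that split; the host names, core name, and document fields are all invented:

```java
import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.common.SolrInputDocument;

// Sketch of the split described above: all index changes go straight to
// the master, while queries go through the load balancer in front of the
// slaves. The URLs are made up.
public class SolrRouting {
    public static void main(String[] args) throws Exception {
        SolrClient master = new HttpSolrClient.Builder(
                "http://solr-master:8983/solr/mycore").build();
        SolrClient balanced = new HttpSolrClient.Builder(
                "http://solr-lb:8983/solr/mycore").build();

        // Writes: master only, so the slaves can replicate from it.
        SolrInputDocument doc = new SolrInputDocument();
        doc.addField("id", "doc-1");
        doc.addField("title", "hello");
        master.add(doc);
        master.commit();

        // Reads: any slave behind the load balancer will do.
        System.out.println(
                balanced.query(new SolrQuery("title:hello")).getResults());

        master.close();
        balanced.close();
    }
}
```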
I have a program in C that monitors traffic and records the URLs visited by the user. Currently, I maintain this in a hash table: the key is the source IP address and the value is a data structure with a linked list of URLs. I currently maintain 50k to 100k records in the hash table. When the user logs out, the record can be deleted.
The program runs independently on an Active-Standby pair. I want to replicate this database to another machine in case my primary machine crashes (the two systems act as client and server) and continue recording things associated with the user.
The hard way is to write code for sending this information to the peer, and code on the peer system to receive and store it. The issue is that it would add lots of code (and bugs!). For the data replication and data store, here are a few prerequisites:
I want data-record replication between these machines. I am NOT looking at adding another machine/cluster unless required.
Prefer a library, so that queries are fast. If not, another process on the same machine that I can reach over IPC.
Add, update and delete operations should be supported.
An in-memory database is a must.
Support multiple such databases with different keys.
Something that has publish/subscribe.
Resync capability if the backup dies and comes back again.
The interface should be in C.
Possible options I looked at were ZooKeeper, Redis, memcached, SQLite, and Berkeley DB.
ZooKeeper - Needs an odd number of systems for tie-breaking. Not suitable for a 1-to-1 pair.
Redis - Looks to fit my requirements, with hiredis for the C interface. It is a separate process, though.
Memcached - I don't have any caching requirements.
SQLite - Embedded database with a C interface.
Berkeley DB - Embedded database for better scale.
So Redis, SQLite, and Berkeley DB look like my options going forward. I'd appreciate any thoughts on which of these I should research further for my requirements, or whether there are other DBs I should look at. I apologize if my question is very generic; if it does not belong here, please point me to the right forum.
I'm designing a distributed system with a certain flow of data in it. I'd like to guarantee that at least N nodes have almost-current data at any given time.
I do not need complete consistency, only eventual consistency (i.e. for any time instant, the current snapshot of the data should eventually appear on at least N nodes; it is tricky to define the term "current" here, but still). Nodes may fail and come back up at any moment, and there is no single "central" node.
O overflowers! Point me to some good papers describing replication schemes. So far I've found one, Consistency Management in Optimistic Replication Algorithms, and a broader, more recent article by the same author: Optimistic Replication.
A lot of the trick to this is finding your exact requirements, and yours still sound pretty vague. Do you just need to support operations like this?
Update key K to value V.
Look up a somewhat-recent value of key K.
You mentioned you need eventual consistency. So if you do a single update, it will eventually replicate everywhere. If you do two nearly-simultaneous updates, do you care which one wins? If one replica reports that an update was successfully completed, do you care if the value could be lost if that replica were to temporarily crash shortly afterward? Or if that replica were permanently destroyed?
How precise should somewhat-recent be? If there's a netsplit or something, a lookup might return a very stale result or just fail. Do you care which?
Do you ever need to support fancier operations like...
Get the absolute latest value of key K?
Update the value of key K to value V' provided the latest value is currently V?
Do you have rigid reliability, latency, and/or bandwidth requirements? How far apart are your replicas, and how good is the network between them? This affects whether you can have cross-replica communication on every update, or even on every lookup; and even whether you can or should fail operations over to a remote replica when the local one seems to be down.
Depending on your answers here, I've worked with a couple different schemes that might meet your requirements. There are several possible variations on them.
The simplest thing is to just have the application always talk to the local replica. Replicas timestamp values (using NTP-synced clocks) and only talk to each other for asynchronous replication. Highest timestamp wins in replication. Of course, if applications on two different replicas each do a read/modify/write near simultaneously, one of the modifications can easily be lost. (In fact, without a conditional update scheme, the same is even true for near-simultaneous changes on the same replica.) If a replica permanently fails, recent-ish updates can be lost. This is more or less what Bigtable's built-in replication does. In the paper you linked, it'd be the "Optimistic - Multimaster" branch but not caring too much about losing some updates makes it simpler than they suggest.
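Here is a toy, in-memory sketch of that timestamped last-write-wins scheme; all names are invented, and a real system would also need persistence, anti-entropy resync, and clock sanity checks:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Toy sketch of the scheme described above: each value carries the
// (NTP-synced) wall-clock timestamp of its write, replicas exchange
// entries asynchronously, and the highest timestamp wins. It also
// illustrates why near-simultaneous read/modify/write cycles can
// silently lose one of the updates.
public class LwwReplica {
    record Timestamped(String value, long timestampMillis) {}

    private final Map<String, Timestamped> store = new ConcurrentHashMap<>();

    // Local write: stamp with the local clock.
    public void put(String key, String value) {
        store.put(key, new Timestamped(value, System.currentTimeMillis()));
    }

    // Local read: may be stale if replication hasn't caught up yet.
    public String get(String key) {
        Timestamped t = store.get(key);
        return t == null ? null : t.value();
    }

    // Called when a peer replica pushes an entry: keep the newer one.
    public void mergeFromPeer(String key, Timestamped incoming) {
        store.merge(key, incoming,
                (mine, theirs) ->
                        mine.timestampMillis() >= theirs.timestampMillis()
                                ? mine : theirs);
    }
}
```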
Some databases use the Paxos algorithm (see for example "Data Management for Internet-Scale Single-Sign-On" here) to make fancier things possible. Each replica can know how far behind it might be, so you can say "give me a value that's no more than 1 minute old" or "give me the absolute latest value". An update isn't considered complete until a quorum of replicas have accepted it, so "give me the absolute latest value" will definitely return that value until another update happens. You can use the conditional update operation I mentioned to prevent simultaneous writers from trampling each other. This doesn't fit neatly into either the optimistic or the pessimistic category as defined by that author, because updates are replicated synchronously to a quorum, yet replicas which didn't vote in the latest Paxos round may still be able to answer some queries. The scheme can be very complicated, though...
Not RDBMS-agnostic, but SQL Server 2008 (2005 onwards) supports Peer-to-Peer Replication.