I'm currently reading through Seven Databases in Seven Weeks and I've come across this statement.
HBase also makes strong consistency guarantees, making it easier to
transition from relational databases for some use cases. Finally,
HBase guarantees atomicity at the row level, which means that you can
have strong consistency guarantees at a crucial level of HBase’s data
model.
I'm having some trouble understanding it.
My shallow understanding is that Apache HBase is a distributed database, so it's like a Master-Slave sort of thing?
So, when you do a write, you first do it on the Master and then the Master copies over the writes to the slaves. The consistency guarantee is that all the slaves have the same values for their records? So, a high consistency guarantee means that they will all have the same values, whereas a low consistency guarantee means that the master may have written changes to some of the slaves, but not all (so if you're reading values from one of the slaves, you might get different results depending on which slave you read from)?
Is this correct so far?
So, with HBase... "guaranteeing atomicity at the row level" means a transaction will only be completed when the master has written to all the slaves? And that provides the high consistency?
Am I headed on the right track? If not, I'd really appreciate some clarification on what that paragraph means.
Thank you very much!
If by 'master' you mean region/shard/partition master, then you are on the right track. Every row key is associated with exactly one Region (HBase terminology for shard), and every region is replicated across multiple servers/disks/racks/whatever. For every row key, there is only one primary Region server (or 'master') that the client talks to.
So, with HBase... "guaranteeing atomicity at the row level" means a transaction will only be completed when the master has written to all the slaves? And that provides the high consistency?
No, consistency and atomicity are two different things.
HBase provides atomicity at the row level, which means that when you write to a row, the write operation either completes fully or changes nothing - there is no in-between (partial update). This is not the case when you write to multiple rows in one command: some rows might change and some might not, but no individual row will be partially updated.
Consistency (in this context) means that updates must first be acknowledged by the remote replicas before the client gets an OK. This is done primarily via an HDFS-based transaction log file; you can read up on the HBase WAL (write-ahead log) for more details.
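To make that concrete, here is a minimal sketch with the HBase Java client (the table, column family, and values are made up for illustration): a single Put that touches several columns of one row is applied atomically, while Puts to different rows are independent of each other.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class RowAtomicityExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection connection = ConnectionFactory.createConnection(conf);
             Table table = connection.getTable(TableName.valueOf("users"))) {

            // One Put touching several columns of the SAME row key:
            // HBase applies it atomically - readers see either all of these
            // cells updated or none of them.
            Put put = new Put(Bytes.toBytes("user#42"));
            put.addColumn(Bytes.toBytes("info"), Bytes.toBytes("name"), Bytes.toBytes("Alice"));
            put.addColumn(Bytes.toBytes("info"), Bytes.toBytes("email"), Bytes.toBytes("alice@example.com"));
            table.put(put);

            // Two Puts to DIFFERENT rows are independent operations: one may
            // succeed while the other fails, but neither row is ever left
            // partially updated.
        }
    }
}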
My understanding: a read replica database exists to allow read volumes to scale.
So far, so good, lots of copies to read from - ok, that makes sense, share the volume of reads between a bunch of copies.
However, the things I'm reading seem to imply "tada! magic fast copies!". How are the copies faster, as surely they must also be burdened by the same amount of writing as the main db in order that they remain in sync?
How are the copies faster, as surely they must also be burdened by the same amount of writing as the main db in order that they remain in sync?
Good question.
First, the writes to the replicas may be more efficient than the writes to the primary if the replicas are maintained by replaying the Write-Ahead Logs into the secondaries (sometimes called a "physical replica"), instead of replaying the queries into the secondaries (sometimes called a "logical replica"). A physical replica doesn't need to do any query processing to stay in sync, and may not need to read the target database blocks/pages into memory in order to apply the changes, leaving more of the memory and CPU free to process read requests.
Even a logical replica might be able to apply changes more cheaply, as a query on the primary of the form
update t set status = 'a' where status = 'b'
might get replicated as a series of
update t set status = 'a' where id = ?
saving the replica from having to identify which rows to update.
Second, the secondaries allow the read workload to scale across more physical resources. So total read workload is spread across more servers.
My company is using Cassandra (2.1.8), running on debian servers.
Without setting a consistency on my queries/preparedStatements, I would sometimes get "null" back from a "SELECT * FROM table_name WHERE key = ?". After adding:
statementGet.setConsistencyLevel(ConsistencyLevel.ONE);
I get an answer every time. But from the article below:
Read/Write Strategy For Consistency Level
If you don't care about consistency, then this discussion is moot;
read/write with whatever Consistency Level and let Cassandra's
asynchronous replication and anti-entropy features do their work. That
said, though your goal is minimizing network traffic/cost, if your
workload is mostly reads then the added cost of doing writes at CL
QUORUM or ALL may not actually be that much.
it is said that my consistency levels do not matter if I don't care about consistency. And I don't. It doesn't matter much to me if the data is hours or days old. But I do care about getting data every time.
What I need to know is: do I need to put a consistency level on my writes as well, to ensure that they are actually written, and that my read will always retrieve a value if I have previously written anything?
And what might that consistency level be, given the above criteria?
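For reference, here is how I would set a consistency level on a write with the same driver; the keyspace, table, and column names are made up, and QUORUM is only one example of a possible level:

import com.datastax.driver.core.BoundStatement;
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.ConsistencyLevel;
import com.datastax.driver.core.PreparedStatement;
import com.datastax.driver.core.Session;

public class WriteConsistencyExample {
    public static void main(String[] args) {
        Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
        try {
            Session session = cluster.connect("my_keyspace");

            PreparedStatement insert =
                session.prepare("INSERT INTO table_name (key, value) VALUES (?, ?)");

            // QUORUM here is only an example: the write is acknowledged by a
            // majority of replicas before the driver reports success.
            BoundStatement bound = insert.bind("some-key", "some-value");
            bound.setConsistencyLevel(ConsistencyLevel.QUORUM);
            session.execute(bound);
        } finally {
            cluster.close();
        }
    }
}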
Definition for Consistency:
In database systems, a consistent transaction is one that starts with a database in a consistent state and ends with the database in a consistent state. Any data written to the database must be valid according to all defined rules, including but not limited to constraints, cascades, triggers, and any combination thereof.
Definition for Read Consistency:
Oracle always enforces statement-level read consistency. This guarantees that all the data returned by a single query comes from a single point in time—the time that the query began. Therefore, a query never sees dirty data or any of the changes made by transactions that commit during query execution.
I've got puzzled, it seems that Read Consistency is kind of Isolation but not Consistency. Is that true?
That's right, but the concepts are related and the verbiage may be mixed up a bit.
"Consistency" (as in ACID) means that when you update the database, you cannot put it into an inconsistent state (the database will enforce that all constraints are met).
"Read Consistency" is one of the "Transaction Isolation" levels that describe how good concurrent transactions are isolated from each-other (i.e. in how far they can treat the database as if only they work on it).
Okay, let me rephrase that. The four transaction isolation levels describe what happens when you run multiple queries in the same transaction. So that is a potentially much stronger consistency guarantee.
The "Read Consistency" you mentioned is about a single query returning results that are "internally consistent": They represent data that existed at one point in time (when the query started). You won't get weird results for long-running queries.
I think this is a bit stronger of a guarantee than READ COMMITTED (you don't see anything that is not yet committed -- no dirty reads -- but you also don't see anything that has been committed after your query started).
But it does not imply that you get the same results when you run the same query in the same transaction five minutes later: you might see data that has been updated (and committed!) in the meantime. If you don't want that, you need REPEATABLE READ.
As far as I know, READ UNCOMMITTED is not available in Oracle. That is probably what they mean by "Oracle always enforces statement-level read consistency".
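To make the difference concrete, here is a minimal JDBC sketch (the connection URL, credentials, and table name are placeholders); note that Oracle itself only offers READ COMMITTED and SERIALIZABLE, so on Oracle you would request SERIALIZABLE rather than REPEATABLE READ:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class IsolationExample {
    public static void main(String[] args) throws Exception {
        // URL, credentials, and table name are placeholders for illustration.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:yourdb://localhost/test", "user", "password")) {

            conn.setAutoCommit(false);
            // Request REPEATABLE READ so that the same query, run twice inside
            // this transaction, sees the same rows. (Oracle only supports
            // READ COMMITTED and SERIALIZABLE, so there you would request
            // Connection.TRANSACTION_SERIALIZABLE instead.)
            conn.setTransactionIsolation(Connection.TRANSACTION_REPEATABLE_READ);

            try (Statement stmt = conn.createStatement();
                 ResultSet rs = stmt.executeQuery("SELECT COUNT(*) FROM orders")) {
                rs.next();
                System.out.println("count = " + rs.getLong(1));
                // Re-running the same query later in this transaction returns
                // the same count, even if other sessions commit changes in the
                // meantime.
            }
            conn.commit();
        }
    }
}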
The BASE acronym is used to describe the properties of certain databases, usually NoSQL databases. It's often referred to as the opposite of ACID.
There are only a few articles that touch upon the details of BASE, whereas ACID has plenty of articles that elaborate on each of the atomicity, consistency, isolation and durability properties. Wikipedia only devotes a few lines to the term.
This leaves me with some questions about the definition:
Basically Available, Soft state, Eventual consistency
I have interpreted these properties as follows, using this article and my imagination:
Basically available could refer to the perceived availability of the data. If a single node fails, part of the data won't be available, but the entire data layer stays operational.
Is this interpretation correct, or does it refer to something else?
Update: deducing from Mau's answer, could it mean the entire data layer is always accepting new data, i.e. there are no locking scenarios that prevent data from being inserted immediately?
Soft state: All I could find was the concept of data needing a periodic refresh. Without a refresh, the data will expire or be deleted.
Automatic deletion of data in a database seems strange to me.
Expired or stale data makes more sense. But this concept would apply to any type of redundant data storage, not just NoSQL. Does it describe something else then?
Eventual consistency means that updates will eventually ripple through to all servers, given enough time.
This property is clear to me.
Can someone explain these properties in detail?
Or is it just a far-fetched and meaningless acronym that refers to the concepts of acids and bases as found in chemistry?
The BASE acronym was defined by Eric Brewer, who is also known for formulating the CAP theorem.
The CAP theorem states that a distributed computer system cannot guarantee all of the following three properties at the same time:
Consistency
Availability
Partition tolerance
A BASE system gives up on consistency.
Basically available indicates that the system does guarantee availability, in terms of the CAP theorem.
Soft state indicates that the state of the system may change over time, even without input. This is because of the eventual consistency model.
Eventual consistency indicates that the system will become consistent over time, given that the system doesn't receive input during that time.
Brewer does admit that the acronym is contrived:
I came up with [the BASE] acronym with my students in their office earlier that year. I agree it is contrived a bit, but so is "ACID" -- much more than people realize, so we figured it was good enough.
It has to do with BASE: the BASE jumper kind is always Basically Available (to new relationships), in a Soft state (none of his relationships last very long) and Eventually consistent (one day he will get married).
Basic Availability: The database appears to work most of the time.
Soft State: Stores don’t have to be write-consistent or mutually consistent all the time.
Eventual consistency: The data will become consistent at some later point, no matter how many changes were performed in the meantime.
ACID and BASE are consistency models for RDBMS and NoSQL respectively. ACID transactions are far more pessimistic, i.e. they are more worried about data safety. In the NoSQL database world, ACID transactions are less fashionable, as some databases have loosened the requirements for immediate consistency, data freshness and accuracy in order to gain other benefits, like scalability and resiliency.
BASE stands for -
Basic Availability - The database appears to work most of the time.
Soft-state - Stores don't have to be write-consistent, nor do different replicas have to be mutually consistent all the time.
Eventual consistency - Stores exhibit consistency at some later point (e.g., lazily at read time).
Therefore BASE relaxes consistency to allow the system to process requests even in an inconsistent state.
Example: No one would mind if their tweet were inconsistent within their social network for a short period of time. It is more important to get an immediate response than to have a consistent state of users' information.
To add to the other answers, I think the acronyms were derived to show a scale between the two terms, distinguishing how reliable transactions or requests are in an RDBMS versus Big Data systems.
From this article, acid vs base:
In Chemistry, pH measures the relative basicity and acidity of an
aqueous (solvent in water) solution. The pH scale extends from 0
(highly acidic substances such as battery acid) to 14 (highly alkaline
substances like lye); pure water at 77° F (25° C) has a pH of 7 and is
neutral.
Data engineers have cleverly borrowed acid vs base from chemists and
created acronyms that while not exact in their meanings, are still apt
representations of what is happening within a given database system
when discussing the reliability of transaction processing.
One other point: since I have been working with Big Data using Elasticsearch, it would help if I explained how it is structured. An instance of Elasticsearch is a node, and a group of nodes forms a cluster.
To me, from a practical standpoint, BA (Basically Available), in this context, has the idea of multiple master nodes handling the Elasticsearch cluster and its operations.
If you have 3 master nodes and the currently directing master node goes down, the system stays up, albeit in a less efficient state, and another master node takes its place as the main directing master node. If two master nodes go down, the system still stays up and the last master node takes over.
It could just be because ACID is one set of properties that substances show (in chemistry) and BASE is a complementary set of them. So it could be just to show the contrast between the two that the acronym was made up, and then 'Basically Available, Soft State, Eventual Consistency' was decided on as its full form.
One of the classical reasons we have a database deadlock is when two transactions are inserting and updating tables in a different order.
For example, transaction A inserts in Table A then Table B.
And transaction B inserts in Table B followed by A.
Such a scenario is always at risk of a database deadlock (assuming you are not using serializable isolation level).
My questions are:
What kind of patterns do you follow in your design to make sure that all transactions insert and update in the same order?
A book I was reading had a suggestion that you can sort the statements by the name of the table. Have you done something like this, or something different, that would enforce that all inserts and updates are in the same order?
What about deleting records? Delete needs to start from child tables and updates and inserts need to start from parent tables. How do you ensure that this would not run into a deadlock?
- All transactions insert/update in the same order (a sketch follows this list).
- Deletes: identify records to be deleted outside a transaction, then attempt the deletion in the smallest possible transaction, e.g. looking up by the primary key or similar identified during the lookup stage.
- Small transactions generally.
- Indexing and other performance tuning, both to speed transactions and to promote index lookups over table scans.
- Avoid 'hot tables', e.g. one table with incrementing counters for other tables' primary keys. Any other 'switchboard' type configuration is risky.
- Especially if not using Oracle, learn the locking behaviour of the target RDBMS in detail (optimistic/pessimistic, isolation levels, etc.). Ensure you do not allow row locks to escalate to table locks, as some RDBMSes will.
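As a rough sketch of the first point (the table names, columns, and the helper itself are made up): route every transaction that touches both tables through one piece of code that always writes them in the same fixed order.

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class OrderedWritesExample {
    // Every transaction that touches both tables calls this one helper, which
    // always writes table_a before table_b, so two concurrent transactions can
    // never hold the locks in opposite order. Table and column names are made up.
    static void insertBoth(Connection conn, long id, String a, String b) throws SQLException {
        conn.setAutoCommit(false);
        try (PreparedStatement insertA =
                 conn.prepareStatement("INSERT INTO table_a (id, value) VALUES (?, ?)");
             PreparedStatement insertB =
                 conn.prepareStatement("INSERT INTO table_b (id, value) VALUES (?, ?)")) {

            insertA.setLong(1, id);
            insertA.setString(2, a);
            insertA.executeUpdate();

            insertB.setLong(1, id);
            insertB.setString(2, b);
            insertB.executeUpdate();

            conn.commit();
        } catch (SQLException e) {
            conn.rollback();
            throw e;
        }
    }
}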
Deadlocks are no biggie. Just be prepared to retry your transactions on failure.
And keep them short. Short transactions consisting of queries that touch very few records (via the magic of indexing) are ideal to minimize deadlocks - fewer rows are locked, and for a shorter period of time.
You need to know that modern database engines don't lock tables; they lock rows, so deadlocks are a bit less likely.
You can also avoid locking by using MVCC and the CONSISTENT READ transaction isolation level: instead of locking, some threads will just see stale data.
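A minimal retry loop might look like the sketch below; whether a given SQLException really is a deadlock is vendor specific (SQLSTATE 40001 is a common code for deadlock/serialization failures), so treat that check as an assumption to adapt to your RDBMS.

import java.sql.Connection;
import java.sql.SQLException;

public class DeadlockRetryExample {
    interface TxBody {
        void run(Connection conn) throws SQLException;
    }

    // Runs the transaction body and retries it a few times when the database
    // reports a deadlock/serialization failure. SQLSTATE "40001" is a common
    // code for this, but the exact code is vendor specific - check your RDBMS.
    static void runWithRetry(Connection conn, TxBody body, int maxAttempts) throws SQLException {
        for (int attempt = 1; ; attempt++) {
            try {
                conn.setAutoCommit(false);
                body.run(conn);
                conn.commit();
                return;
            } catch (SQLException e) {
                conn.rollback();
                boolean looksLikeDeadlock = "40001".equals(e.getSQLState());
                if (!looksLikeDeadlock || attempt >= maxAttempts) {
                    throw e;
                }
                // Otherwise loop around and retry the whole transaction.
            }
        }
    }
}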
Carefully design your database processes to eliminate as much as possible transactions that involve multiple tables. When I've had database design control there has never been a case of deadlock for which I could not design out the condition that caused it. That's not to say they don't exist and perhaps abound in situations outside my limited experience; but I've had no shortage of opportunities to improve designs causing these kinds of problems. One obvious strategy is to start with a chronological write-only table for insertion of new complete atomic transactions with no interdependencies, and apply their effects in an orderly asynchronous process.
Always use the database default isolation levels and locking settings unless you are absolutely sure what risks they incur, and have proven it by testing. Redesign your process if at all possible first. Then, impose the least increase in protection required to eliminate the risk (and test to prove it.) Don't increase restrictiveness "just in case" - this often leads to unintended consequences, sometimes of the type you intended to avoid.
To repeat the point from another direction, most of what you will read on this and other sites advocating the alteration of database settings to deal with transaction risks and locking problems is misleading and/or false, as demonstrated by how they conflict with each other so regularly. Sadly, especially for SQL Server, I have found no source of documentation that isn't hopelessly confusing and inadequate.
I have found that one of the best investments I ever made in avoiding deadlocks was to use an Object Relational Mapper that could order database updates. The exact order is not important, as long as every transaction writes in the same order (and deletes in exactly the reverse order).
The reason that this avoids most deadlocks out of the box is that your operations are always table A first, then table B, then table C (which perhaps depends on table B).
You can achieve a similar result as long as you exercise care in your stored procedures or data layer's access code. The only problem is that it requires great care to do it by hand, whereas an ORM with a Unit of Work concept can automate most cases.
UPDATE: A delete should run forward to verify that everything is the version you expect (you still need record version numbers or timestamps) and then delete backwards once everything verifies. As this should all happen in one transaction, the possibility of something changing out from under you shouldn't exist. The only reason for the ORM doing it backwards is to obey the key requirements, but if you do your check forward, you will have all the locks you need already in hand.
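A toy sketch of that idea (not any particular ORM's API, just an illustration): buffer the pending writes in a unit of work, sort inserts/updates by table name before flushing, and emit deletes in exactly the reverse order.

import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class UnitOfWorkSketch {
    // Not a real ORM API - just an illustration of ordering buffered writes by
    // table name so every flush touches tables in the same order, and deletes
    // run in exactly the reverse order.
    static final class PendingWrite {
        final String table;
        final String sql;
        PendingWrite(String table, String sql) { this.table = table; this.sql = sql; }
    }

    private final List<PendingWrite> upserts = new ArrayList<>();
    private final List<PendingWrite> deletes = new ArrayList<>();

    void registerUpsert(String table, String sql) { upserts.add(new PendingWrite(table, sql)); }
    void registerDelete(String table, String sql) { deletes.add(new PendingWrite(table, sql)); }

    // Returns the buffered SQL statements in the order they should be executed.
    List<String> flushOrder() {
        List<String> statements = new ArrayList<>();
        upserts.sort(Comparator.comparing((PendingWrite w) -> w.table));            // fixed table order
        deletes.sort(Comparator.comparing((PendingWrite w) -> w.table).reversed()); // reverse table order
        upserts.forEach(w -> statements.add(w.sql));
        deletes.forEach(w -> statements.add(w.sql));
        return statements;
    }
}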
I analyze all database actions to determine, for each one, whether it needs to be in a multiple-statement transaction, and then, for each such case, what minimum isolation level is required to prevent deadlocks... As you said, serializable will certainly do so...
Generally, only a very few database actions require a multiple statement transaction in the first place, and of those, only a few require serializable isolation to eliminate deadlocks.
For those that do, set the isolation level for that transaction before you begin, and reset it to whatever your default is after it commits.
Your example would only be a problem if the database locked the ENTIRE table. If your database is doing that...run :)