Relationship between Read Consistency in Oracle and Consistency in database ACID

Definition for Consistency:
In database systems, a consistent transaction is one that starts with a database in a consistent state and ends with the database in a consistent state. Any data written to the database must be valid according to all defined rules, including but not limited to constraints, cascades, triggers, and any combination thereof.
Definition for Read Consistency:
Oracle always enforces statement-level read consistency. This guarantees that all the data returned by a single query comes from a single point in time—the time that the query began. Therefore, a query never sees dirty data or any of the changes made by transactions that commit during query execution.
I'm puzzled: it seems that Read Consistency is a kind of Isolation rather than Consistency. Is that true?

That's right, but the concepts are related, and the verbiage may be mixed up a bit.
"Consistency" (as in ACID) means that when you update the database, you cannot put it into an inconsistent state (the database will enforce that all constraints are met).
"Read Consistency" is one of the "Transaction Isolation" levels that describe how good concurrent transactions are isolated from each-other (i.e. in how far they can treat the database as if only they work on it).
Okay, let me rephrase that. The four transaction isolation levels describe what happens when you run multiple queries in the same transaction, so that is a potentially much stronger form of consistency.
The "Read Consistency" you mentioned is about a single query returning results that are "internally consistent": They represent data that existed at one point in time (when the query started). You won't get weird results for long-running queries.
I think this is a bit stronger of a guarantee than READ COMMITTED (you don't see anything that is not yet committed -- no dirty reads -- but you also don't see anything that has been committed after your query started).
But it does not imply that you get the same results when you run the same query in the same transaction five minutes later: you might see data that has been updated (and committed!) in the meantime. If you don't want that, you need REPEATABLE READ.
As far as I know, READ UNCOMMITTED is not available in Oracle. That is probably what they mean by "Oracle always enforces statement-level read consistency".
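To make that concrete, here is a minimal JDBC sketch, assuming an Oracle database (the connection string, credentials, and the emp table are placeholders, not from the question). Oracle does not offer REPEATABLE READ; its SERIALIZABLE level is what extends the read-consistent snapshot from a single statement to the whole transaction:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.sql.Statement;

    public class IsolationDemo {
        public static void main(String[] args) throws SQLException {
            // Placeholder connection string and credentials.
            try (Connection conn = DriverManager.getConnection(
                    "jdbc:oracle:thin:@//localhost:1521/XEPDB1", "scott", "tiger")) {
                conn.setAutoCommit(false);
                // Oracle's default READ COMMITTED already gives statement-level
                // read consistency: each query sees the data as of its own start.
                // SERIALIZABLE extends that snapshot to the whole transaction, so
                // rerunning the same query later returns the same rows.
                conn.setTransactionIsolation(Connection.TRANSACTION_SERIALIZABLE);
                try (Statement stmt = conn.createStatement();
                     ResultSet rs = stmt.executeQuery("SELECT COUNT(*) FROM emp")) {
                    rs.next();
                    System.out.println("count = " + rs.getLong(1));
                }
                conn.commit();
            }
        }
    }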

Related

Thread-Safety and the Foreign Key Constraint

Background
Given:
a set of threads
each thread has its own data source
objects in each data source reference objects in other data sources
there is a possibility for duplicate objects across various data sources
the threads are writing to a database with an engine that enforces the foreign key constraint
each type of object gets its own table and references the other objects through a foreign key
each thread has its own connection to the database
Proposed Solution
A register class which tracks the IDs of the objects that have been written. The interface of the register class has the following public methods (represented in Java):
public interface Register
{
    // Implementations must make these methods thread-safe, e.g. by declaring
    // them synchronized; Java does not allow the synchronized modifier on
    // interface methods themselves.
    boolean requestObjectLock(int id);
    boolean saveFinalized(int id);
    boolean checkSaved(int id);
}
The method requestObjectLock checks to see whether the object has been locked by another thread yet, and returns false if it has. Otherwise, it locks that ID and returns true. It is then the responsibility of the calling thread to call saveFinalized when the object has been successfully written to the database, and the responsibility of all other threads to check whether it has been written already with checkSaved before writing an object that references it. In other words, there are three states an object can be in: unregistered, locked (registered but unwritten), and saved (registered and written). A minimal sketch of one possible implementation follows.
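Here is that sketch, assuming method-level synchronization is acceptable for the access pattern described above (the class name and the internal state enum are mine, not the question's):

    import java.util.HashMap;
    import java.util.Map;

    public class SynchronizedRegister implements Register
    {
        private enum State { LOCKED, SAVED }
        private final Map<Integer, State> states = new HashMap<>();

        // Returns false if the id is already locked or saved by another thread;
        // otherwise records the lock and returns true.
        public synchronized boolean requestObjectLock(int id) {
            if (states.containsKey(id)) {
                return false;
            }
            states.put(id, State.LOCKED);
            return true;
        }

        // Marks a locked id as written to the database; returns false if the
        // id was never locked.
        public synchronized boolean saveFinalized(int id) {
            if (states.get(id) != State.LOCKED) {
                return false;
            }
            states.put(id, State.SAVED);
            return true;
        }

        // True once the object has been written to the database.
        public synchronized boolean checkSaved(int id) {
            return states.get(id) == State.SAVED;
        }
    }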
Reasoning
As far as I know there is no way to guarantee that one SQL query will finish before another when called by different threads. Thus, if an object were only registered or unregistered, it seems possible that a thread could check to see whether an object was written, start writing an object that referenced it, and have its query complete (and fail) before the query that actually wrote the referenced object did.
Questions
Is it possible to guarantee the sequence of execution of queries being executed by different threads? And therefore, is this solution overengineered? Is there a simpler solution? On the other hand, is it safe?
The terms you need to research on the database side are "transaction isolation level" and "concurrency control". DBMS platform support varies. Some platforms implement a subset of the isolation levels defined in the SQL standards. (The SQL standards allow this. They're written in terms of what behavior isn't allowed.) And different platforms approach concurrency control in different ways.
Wikipedia, although not authoritative, has a good introduction to isolation levels, and also a good introduction to concurrency control.
As far as I know there is no way to guarantee that one SQL query will finish before another when called by different threads.
That's kind of true. It's also kind of not true. In SQL standards, transaction isolation levels aren't concerned with who finishes first. They're concerned with behavior that's not allowed.
dirty read: Transaction A can read data written by concurrent, uncommitted transaction B.
nonrepeatable read: Transaction A reads data twice. A concurrent transaction, B, commits between the two reads. The data transaction A read first is different from the data it read second, because of transaction B. (Some people describe transaction A as seeing "same rows, different column values".)
phantom read: Transaction A reads data twice. A concurrent transaction, B, commits between the two reads. Transaction A's two reads return two different sets of rows, because transaction B has affected the evaluation of transaction A's WHERE clause. (Some people describe transaction A as seeing "same column values, different rows".)
You control transaction behavior in SQL using SET TRANSACTION. So SET TRANSACTION SERIALIZABLE means dirty reads, nonrepeatable reads, and phantom reads are impossible. SET TRANSACTION REPEATABLE READ allows phantom reads, but dirty reads and nonrepeatable reads are impossible.
You'll have to check your platform's documentation to find out what it supports. PostgreSQL, for example, supports all four isolation levels syntactically. But internally it only has two levels: read committed and serializable. That means you can SET TRANSACTION READ UNCOMMITTED, but you'll get "read committed" behavior.
Important for you: The effect of a serializable isolation level is to guarantee that transactions appear to have been issued one at a time by a single client. But that's not quite the same thing as saying that if transaction A starts before transaction B, it will commit before transaction B. If they don't affect each other, the dbms is allowed to commit transaction B first without violating the serializable isolation level semantics.
When I have questions myself about how these work, I test them by opening two command-line clients connected to the same database.
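That two-client experiment can also be scripted. Here is a rough JDBC sketch, assuming a PostgreSQL database with an accounts table (URL, credentials, and schema are placeholders): connection A reads twice around a committed write from connection B, which reveals whether the chosen level permits a nonrepeatable read:

    import java.math.BigDecimal;
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.sql.Statement;

    public class TwoClientIsolationTest {
        public static void main(String[] args) throws SQLException {
            String url = "jdbc:postgresql://localhost/testdb"; // placeholder
            try (Connection a = DriverManager.getConnection(url, "test", "test");
                 Connection b = DriverManager.getConnection(url, "test", "test")) {
                a.setAutoCommit(false);
                a.setTransactionIsolation(Connection.TRANSACTION_REPEATABLE_READ);

                printBalance(a, "first read");  // transaction A, read 1

                try (Statement s = b.createStatement()) { // "client" B
                    // b is in auto-commit mode, so this commits immediately.
                    s.executeUpdate("UPDATE accounts SET balance = balance + 100 WHERE id = 1");
                }

                // Under REPEATABLE READ this prints the same value as read 1;
                // under READ COMMITTED it would show B's committed update.
                printBalance(a, "second read");
                a.commit();
            }
        }

        private static void printBalance(Connection c, String label) throws SQLException {
            try (Statement s = c.createStatement();
                 ResultSet rs = s.executeQuery("SELECT balance FROM accounts WHERE id = 1")) {
                rs.next();
                BigDecimal balance = rs.getBigDecimal(1);
                System.out.println(label + ": " + balance);
            }
        }
    }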

Can a database support "Atomicity" but not "Consistency" or vice-versa?

I am reading about ACID properties of a database. Atomicity and Consistency seem to be very closely related. I am wondering if there are any scenarios where we need to just support Atomicity but not Consistency or vice-versa. An example would really help!
They are somewhat related but there's a subtle difference.
Atomicity means that your transaction either happens or doesn't happen.
Consistency means that things like referential integrity are enforced.
Let's say you start a transaction to add two rows (a credit and a debit which form a single bank transaction). The atomicity of this has nothing to do with the consistency of the database. All it means is that either both rows or neither row will be added.
On the consistency front, let's say you have a foreign key constraint from orders to products. If you try to add an order that refers to a non-existent product, that's when consistency kicks in to prevent you from doing it.
Both are about maintaining the database in a workable state, hence their similarity. The former example will ensure the bank doesn't lose money (or steal it from you), the latter will ensure your application doesn't get surprised by orders for products you know nothing about.
Atomicity:
In an atomic transaction, a series of database operations either all occur, or nothing occurs. A guarantee of atomicity prevents updates to the database occurring only partially, which can cause greater problems than rejecting the whole series outright.
Consistency:
In database systems, a consistent transaction is one that does not violate any integrity constraints during its execution. If a transaction leaves the database in an illegal state, it is aborted and an error is reported.
A database that supports atomicity but not consistency would allow transactions that leave the database in an inconsistent state (that is, violate referential or other integrity checks), provided the transaction completes successfully. For instance, you could add a string to an int column provided that the transaction performing this completed successfully.
Conversely, a database that supports consistency but not atomicity would allow partial transactions to complete, so long as the effects of that transaction didn't break any integrity checks (e.g. foreign keys must match an existing identity).
For instance, you could try adding a new row that included string and int values, and even if the insertion failed halfway through, losing half the data, the row would be allowed provided that none of the lost data was for required columns and no data was inserted into an incorrectly typed column.
Having said that, consistency relies on atomicity for the reversal of inconsistent transactions.
There is indeed a strong relation between Atomicity and Consistency, but they are not the same:
A DBMS can (theoretically) support Consistency and not Atomicity: for example, consider a transaction that consists of SQL operations O1, O2, and O3. Now, assume that after O1 and O2 the DB is already in a consistent state. Then the DBMS can stop the transaction after O1 and O2, without O3, and still preserve consistency. Clearly, such a DBMS does not support atomicity (as O3 was not executed while O1 and O2 were).
A DBMS can (theoretically) support Atomicity and not Consistency: this can occur in a multi-user scenario, where atomicity only ensures that all actions of a transaction will be performed (or none of them), but it does not guarantee that the actions of one transaction, executed concurrently with another transaction, will not leave the database in an inconsistent state.
However, what I do believe (but have not proven formally) is that if your DBMS guarantees both Atomicity and Isolation, then it must also guarantee Consistency.
I was also getting confused when reading about atomicity and consistency. Let's say there is a scenario to do a batch insert of 1000 records into the account table.
Atomicity of the batch means that either all 1000 records are inserted, or none of them are if there is an error.
Consistency of the batch would be violated if, at the account-record level, we had logic that let an insert succeed even though the data type didn't match, or even though the related record in the foreign-key table had been deleted after the account record was successfully written.
Hopefully this example clears the confusion.
I have a different understanding of consistency in the ACID context:
Within a transaction, if a given item of data is retrieved and retrieved again later in the same transaction, no changes are seen. That is, the transaction is given a consistent state of the database throughout the transaction. The only updates that can change data visible to the transaction are updates done by the transaction itself.
In my mind, this is tantamount to serializability.

How do ACID and database transactions work?

What is the relationship between ACID and database transaction?
Does ACID give database transaction or is it the same thing?
Could someone shed some light on this topic?
ACID is a set of properties that you would like to apply when modifying a database.
Atomicity
Consistency
Isolation
Durability
A transaction is a set of related changes which is used to achieve some of the ACID properties. Transactions are tools to achieve the ACID properties.
Atomicity means that you can guarantee that all of a transaction happens, or none of it does; you can do complex operations as one single unit, all or nothing, and a crash, power failure, error, or anything else won't allow you to be in a state in which only some of the related changes have happened.
Consistency means that you guarantee that your data will be consistent; none of the constraints you have on related data will ever be violated.
Isolation means that one transaction cannot read data from another transaction that is not yet completed. If two transactions are executing concurrently, each one will see the world as if they were executing sequentially, and if one needs to read data that is written by another, it will have to wait until the other is finished.
Durability means that once a transaction is complete, it is guaranteed that all of the changes have been recorded to a durable medium (such as a hard disk), and the fact that the transaction has been completed is likewise recorded.
So, transactions are a mechanism for guaranteeing these properties; they are a way of grouping related actions together such that as a whole, a group of operations can be atomic, produce consistent results, be isolated from other operations, and be durably recorded.
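As a concrete sketch of that grouping, here is the classic transfer written with JDBC (the accounts table and column names are made up for illustration): the debit and credit either both commit or both roll back:

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.SQLException;

    public class TransferExample {
        public static void transfer(Connection conn, int fromId, int toId, long cents)
                throws SQLException {
            conn.setAutoCommit(false); // group both updates into one transaction
            try (PreparedStatement debit = conn.prepareStatement(
                         "UPDATE accounts SET balance = balance - ? WHERE id = ?");
                 PreparedStatement credit = conn.prepareStatement(
                         "UPDATE accounts SET balance = balance + ? WHERE id = ?")) {
                debit.setLong(1, cents);
                debit.setInt(2, fromId);
                debit.executeUpdate();

                credit.setLong(1, cents);
                credit.setInt(2, toId);
                credit.executeUpdate();

                conn.commit();   // both changes become visible together (atomicity)
            } catch (SQLException e) {
                conn.rollback(); // on any failure, neither change takes effect
                throw e;
            }
        }
    }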
ACID are desirable properties of any transaction processing engine.
A DBMS is (if it is any good) a particular kind of transaction processing engine that exposes, usually to a very large extent but not quite entirely, those properties.
But other engines exist that can also expose those properties. The kind of software that used to be called "TP monitors" being a case in point (nowadays' equivalent mostly being web servers).
Such TP monitors can access resources other than a DBMS (e.g. a printer), and still guarantee ACID toward their users. As an example of what ACID might mean when a printer is involved in a transaction:
Atomicity: an entire document gets printed or nothing at all
Consistency: at end-of-transaction, the paper feed is positioned at top-of-page
Isolation: no two documents get mixed up while printing
Durability: the printer can guarantee that it was not "printing" with empty cartridges.
What is the relationship between ACID and database transaction?
In a relational database, every SQL statement must execute in the scope of a transaction.
Without defining the transaction boundaries explicitly, the database is going to use an implicit transaction which wraps around every individual statement.
The implicit transaction begins before the statement is executed and ends (with a commit or rollback) after the statement is executed.
The implicit transaction mode is commonly known as auto-commit.
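A short JDBC sketch of that contrast (the audit_log table is made up for illustration): with auto-commit on, every statement is its own implicit transaction; with it off, statements group until an explicit commit:

    import java.sql.Connection;
    import java.sql.SQLException;
    import java.sql.Statement;

    public class AutoCommitDemo {
        static void demo(Connection conn) throws SQLException {
            try (Statement stmt = conn.createStatement()) {
                // Implicit transaction: begins before the INSERT and commits
                // right after it, with no further action from the caller.
                conn.setAutoCommit(true);
                stmt.executeUpdate("INSERT INTO audit_log(msg) VALUES ('one')");

                // Explicit transaction: both INSERTs commit (or roll back) together.
                conn.setAutoCommit(false);
                stmt.executeUpdate("INSERT INTO audit_log(msg) VALUES ('two')");
                stmt.executeUpdate("INSERT INTO audit_log(msg) VALUES ('three')");
                conn.commit();
            }
        }
    }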
A transaction is a collection of read/write operations succeeding only if all contained operations succeed.
Inherently a transaction is characterized by four properties (commonly referred as ACID):
Atomicity
Consistency
Isolation
Durability
Does ACID give database transaction or is it the same thing?
For a relational database system, this is true because the SQL Standard specifies that a transaction should provide the ACID guarantees:
Atomicity
Atomicity takes individual operations and turns them into an all-or-nothing unit of work, succeeding if and only if all contained operations succeed.
A transaction might encapsulate a state change (unless it is a read-only one). A transaction must always leave the system in a consistent state, no matter how many concurrent transactions are interleaved at any given time.
Consistency
Consistency means that constraints are enforced for every committed transaction. That implies that all keys, data types, checks, and triggers are successful and no constraint violation is triggered.
Isolation
Transactions require concurrency control mechanisms, and they guarantee correctness even when being interleaved. Isolation brings us the benefit of hiding uncommitted state changes from the outside world, as failing transactions shouldn’t ever corrupt the state of the system. Isolation is achieved through concurrency control using pessimistic or optimistic locking mechanisms.
Durability
A successful transaction must permanently change the state of a system, and before ending it, the state changes are recorded in a persisted transaction log. If our system is suddenly affected by a system crash or a power outage, then all committed transactions that have not yet been applied may be replayed from the log.
I slightly modified the printer example to make it easier to explain:
A document with two pages of content was sent to the printer.
Transaction - the document sent to the printer
atomicity - the printer prints two pages of a document or none
consistency - the printer prints half a page and the page gets stuck; the printer restarts itself and prints both pages with all the content
isolation - while many printouts are in progress, the printer prints the right content of each document
durability - while printing, there was a power cut; the printer again prints the documents without any errors
Hope this helps someone get the hang of the concept of ACID.
The ACID properties are a very old and important concept of database theory. I know that you can find lots of posts on this topic, but I would still like to share an answer, because this is a very important topic for any RDBMS.
A database system handles lots of different types of transactions, and every transaction has certain characteristics. These characteristics are known as the ACID properties.
The ACID properties guarantee that all database transactions accomplish all their tasks.
Atomicity: either commit all or nothing.
Consistency: make consistent records in terms of validating all the rules and constraints of the transaction.
Isolation: make sure that two transactions are unaware of each other.
Durability: committed data is stored permanently.
To quote Wikipedia:
ACID (atomicity, consistency, isolation, durability) is a set of properties that guarantee database transactions are processed reliably.
A DBMS that supports transactions will strive to support all of these properties - any commercial DBMS (as well as several open-source DBMSs) provide full ACID 'support' - although it's often possible (for example, with varying isolation levels in MSSQL) to lessen the ACIDness - thus losing the guarantee of fully transactional behaviour.
ACID Properties in Databases:
Atomicity : Transactions are all or nothing
Consistency: Only valid data is saved (database from one state that is consistent to another state that is also consistent.)
Isolation: Transactions do not affect each other (multiple transactions can run at the same time in the system; executing multiple transactions in parallel must have the same results as running them sequentially.)
Durability: Written data will not be lost (even if the database crashes immediately or in the event of a power loss.)
[Gray] introduced the ACD properties for a transaction in 1981. In 1983, [Haerder] added the Isolation property. In my opinion, the ACD properties would have been a more useful set of properties to discuss. One interpretation of Atomicity (that the transaction should be atomic as seen from any client at any time) would actually imply the isolation property. The "isolation" property is useful when the transaction is not isolated, that is, when the isolation property is relaxed. In ANSI SQL speak: if the isolation level is weaker than SERIALIZABLE. But when the isolation level is SERIALIZABLE, the isolation property is not really of interest.
I have written more about this in a blog post: "ACID Does Not Make Sense".
http://blog.franslundberg.com/2013/12/acid-does-not-make-sense.html
[Gray] The Transaction Concept, Jim Gray, 1981.
http://research.microsoft.com/en-us/um/people/gray/papers/theTransactionConcept.pdf
[Haerder] Principles of Transaction-Oriented Database Recovery, Haerder and Reuter, 1983.
http://www.stanford.edu/class/cs340v/papers/recovery.pdf
A transaction can be defined as a collection of tasks that is treated as a minimum processing unit. A minimum processing unit cannot be divided further.
Every transaction must exhibit four properties, commonly known as the ACID properties; i.e., ACID is the group of properties of any transaction:
Atomicity
Consistency
Isolation
Durability

Why use a READ UNCOMMITTED isolation level?

In plain English, what are the disadvantages and advantages of using
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
in a query for .NET applications and reporting services applications?
This isolation level allows dirty reads. One transaction may see uncommitted changes made by some other transaction.
To maintain the highest level of isolation, a DBMS usually acquires locks on data, which may result in a loss of concurrency and a high locking overhead. This isolation level relaxes this property.
You may want to check out the Wikipedia article on READ UNCOMMITTED for a few examples and further reading.
You may also be interested in checking out Jeff Atwood's blog article on how he and his team tackled a deadlock issue in the early days of Stack Overflow. According to Jeff:
But is nolock dangerous? Could you end up reading invalid data with read uncommitted on? Yes, in theory. You'll find no shortage of database architecture astronauts who start dropping ACID science on you and all but pull the building fire alarm when you tell them you want to try nolock. It's true: the theory is scary. But here's what I think: "In theory there is no difference between theory and practice. In practice there is."
I would never recommend using nolock as a general "good for what ails you" snake oil fix for any database deadlocking problems you may have. You should try to diagnose the source of the problem first.
But in practice adding nolock to queries that you absolutely know are simple, straightforward read-only affairs never seems to lead to problems... As long as you know what you're doing.
One alternative to the READ UNCOMMITTED level that you may want to consider is the READ COMMITTED SNAPSHOT. Quoting Jeff again:
Snapshots rely on an entirely new data change tracking method ... more than just a slight logical change, it requires the server to handle the data physically differently. Once this new data change tracking method is enabled, it creates a copy, or snapshot of every data change. By reading these snapshots rather than live data at times of contention, Shared Locks are no longer needed on reads, and overall database performance may increase.
My favorite use case for read uncommitted is to debug something that is happening inside a transaction.
Start your software under a debugger; while you are stepping through the lines of code, it opens a transaction and modifies your database. While the code is stopped, you can open a query analyzer, set the read uncommitted isolation level, and run queries to see what is going on.
You can also use it to see whether long-running procedures are stuck or are correctly updating your database, using a query with COUNT(*).
It is great if your company loves to make overly complex stored procedures.
This can be useful to see the progress of long insert queries, make any rough estimates (like COUNT(*) or rough SUM(*)) etc.
In other words, the results the dirty read queries return are fine as long as you treat them as estimates and don't make any critical decisions based upon them.
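A sketch of that monitoring pattern with JDBC, assuming SQL Server (the connection string and the big_import table are placeholders): a second connection polls a dirty COUNT(*) so it never blocks on the loader's locks, treating the numbers strictly as estimates:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.sql.Statement;

    public class LoadProgressMonitor {
        public static void main(String[] args) throws SQLException, InterruptedException {
            try (Connection conn = DriverManager.getConnection(
                    "jdbc:sqlserver://localhost;databaseName=Staging", "monitor", "secret")) {
                // Dirty reads: take no shared locks, honor no exclusive ones.
                conn.setTransactionIsolation(Connection.TRANSACTION_READ_UNCOMMITTED);
                while (true) {
                    try (Statement stmt = conn.createStatement();
                         ResultSet rs = stmt.executeQuery("SELECT COUNT(*) FROM big_import")) {
                        rs.next();
                        System.out.println("rows so far (rough estimate): " + rs.getLong(1));
                    }
                    Thread.sleep(5_000); // poll every five seconds
                }
            }
        }
    }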
The advantage is that it can be faster in some situations. The disadvantage is the result can be wrong (data which hasn't been committed yet could be returned) and there is no guarantee that the result is repeatable.
If you care about accuracy, don't use this.
More information is on MSDN:
Implements dirty read, or isolation level 0 locking, which means that no shared locks are issued and no exclusive locks are honored. When this option is set, it is possible to read uncommitted or dirty data; values in the data can be changed and rows can appear or disappear in the data set before the end of the transaction. This option has the same effect as setting NOLOCK on all tables in all SELECT statements in a transaction. This is the least restrictive of the four isolation levels.
When is it ok to use READ UNCOMMITTED?
Rule of thumb
Good: Big aggregate reports showing constantly changing totals.
Risky: Nearly everything else.
The good news is that the majority of read-only reports fall in that Good category.
More detail...
Ok to use it:
Nearly all user-facing aggregate reports for current, non-static data e.g. Year to date sales.
It risks a margin of error (maybe < 0.1%) which is much lower than other uncertainty factors such as input error or just the randomness of when exactly data gets recorded minute to minute.
That covers probably the majority of what a Business Intelligence department would do in, say, SSRS. The exception, of course, is anything with $ signs in front of it. Many people account for money with much more zeal than is applied to the related core metrics required to service the customer and generate that money. (I blame accountants.)
When risky
Any report that goes down to the detail level. If that detail is required it usually implies that every row will be relevant to a decision. In fact, if you can't pull a small subset without blocking it might be for the good reason that it's being currently edited.
Historical data. It rarely makes a practical difference, but whereas users understand that constantly changing data can't be perfect, they don't feel the same about static data. Dirty reads won't hurt here, but double reads occasionally can. Seeing as you shouldn't have blocks on static data anyway, why risk it?
Nearly anything that feeds an application which also has write capabilities.
When even the OK scenario is not OK.
Are any applications or update processes making use of big single transactions? Ones which remove then re-insert a lot of records you're reporting on? In that case you really can't use NOLOCK on those tables for anything.
Use READ_UNCOMMITTED in situations where the source is highly unlikely to change:
When reading historical data, e.g. deployment logs from two days ago.
When reading metadata, e.g. in a metadata-based application.
Don't use READ_UNCOMMITTED when you know the source may change during the fetch operation.
Regarding reporting, we use it on all of our reporting queries to prevent a query from bogging down databases. We can do that because we're pulling historical data, not up-to-the-microsecond data.
This will give you dirty reads, and show you transactions that are not committed yet. That is the most obvious answer. I don't think it's a good idea to use this just to speed up your reads. There are other ways of doing that if you use a good database design.
It's also interesting to note what's not happening. READ UNCOMMITTED not only ignores other transactions' locks; it also doesn't take any locks of its own.
Consider that you are generating a large report, or migrating data out of your database using a large and possibly complex SELECT statement. This will cause a shared lock that may be escalated to a shared table lock for the duration of your transaction. Other transactions may read from the table, but updates are impossible. This may be a bad idea if it's a production database, since production may stop completely.
If you are using READ UNCOMMITTED you will not set a shared lock on the table. You may or may not get the results from some new transactions, depending on where in the table the data was inserted and how far your SELECT transaction has read. You may also get the same data twice if, for example, a page split occurs (the data will be copied to another location in the data file).
So, if it's very important to you that data can be inserted while doing your SELECT, READ UNCOMMITTED may make sense. You have to consider that your report may contain some errors, but if it's based on millions of rows and only a few of them are updated while selecting the result, this may be "good enough". Your transaction may also fail altogether, since the uniqueness of a row may not be guaranteed.
A better way altogether may be to use SNAPSHOT ISOLATION, but your applications may need some adjustments to use it. One example is if your application takes an exclusive lock on a row to prevent others from reading it and goes into edit mode in the UI. SNAPSHOT ISOLATION also comes with a considerable performance penalty (especially on disk), but you may be able to overcome that by throwing hardware at the problem. :)
You may also consider restoring a backup of the database to use for reporting or loading data into a data warehouse.
It can be used for a simple table, for example an insert-only audit table, where there are no updates to existing rows and no foreign keys to other tables. The insert is a simple insert, which has little or no chance of rollback.
I always use READ UNCOMMITTED now. It's fast, with the fewest issues. When using other isolation levels you will almost always come across some blocking issues.
As long as you use auto-increment fields and pay a little more attention to inserts, you're fine, and you can say goodbye to blocking issues.
You can make errors with READ UNCOMMITTED, but to be honest, it is very easy to make sure your inserts are foolproof. Inserts/updates which use the results from a SELECT are the only thing you need to watch out for. (Use READ COMMITTED there, or ensure that dirty reads aren't going to cause a problem.)
So go with dirty reads (especially for big reports); your software will run smoother...

Database deadlocks

One of the classical reasons we have a database deadlock is when two transactions are inserting and updating tables in a different order.
For example, transaction A inserts in Table A then Table B.
And transaction B inserts in Table B followed by A.
Such a scenario is always at risk of a database deadlock (assuming you are not using serializable isolation level).
My questions are:
What kind of patterns do you follow in your design to make sure that all transactions insert and update in the same order?
A book I was reading suggested that you can sort the statements by the name of the table. Have you done something like this, or something different, that would enforce that all inserts and updates happen in the same order?
What about deleting records? Delete needs to start from child tables and updates and inserts need to start from parent tables. How do you ensure that this would not run into a deadlock?
All transactions are inserting/updating in the same order.
Deletes: identify records to be deleted outside a transaction, and then attempt the deletion in the smallest possible transaction, e.g. looking up by the primary key or similar identified during the lookup stage.
Small transactions generally.
Indexing and other performance tuning, both to speed transactions and to promote index lookups over table scans.
Avoid 'hot tables', e.g. one table with incrementing counters for other tables' primary keys. Any other 'switchboard' type configuration is risky.
Especially if not using Oracle, learn the locking behaviour of the target RDBMS in detail (optimistic/pessimistic, isolation levels, etc.).
Ensure you do not allow row locks to escalate to table locks, as some RDBMSes will.
Deadlocks are no biggie. Just be prepared to retry your transactions on failure.
And keep them short. Short transactions consisting of queries that touch very few records (via the magic of indexing) are ideal to minimize deadlocks - fewer rows are locked, and for a shorter period of time.
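A sketch of that retry advice in JDBC (the SQLSTATE check is illustrative: "40001" is the standard serialization-failure class, and drivers vary in what they report for deadlock victims):

    import java.sql.Connection;
    import java.sql.SQLException;

    public class DeadlockRetry {
        interface TxBody {
            void run(Connection conn) throws SQLException;
        }

        static void runWithRetry(Connection conn, TxBody body, int maxAttempts)
                throws SQLException {
            for (int attempt = 1; ; attempt++) {
                try {
                    conn.setAutoCommit(false);
                    body.run(conn);
                    conn.commit();
                    return; // success
                } catch (SQLException e) {
                    conn.rollback();
                    // "40001" is the standard SQLSTATE for serialization
                    // failures; check your driver's mapping for deadlocks.
                    boolean retryable = "40001".equals(e.getSQLState());
                    if (!retryable || attempt >= maxAttempts) {
                        throw e; // not a deadlock, or out of retries
                    }
                    // Otherwise loop and rerun the whole transaction from scratch.
                }
            }
        }
    }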
You need to know that modern database engines don't lock tables; they lock rows; so deadlocks are a bit less likely.
You can also avoid locking by using MVCC and the CONSISTENT READ transaction isolation level: instead of locking, some threads will just see stale data.
Carefully design your database processes to eliminate, as far as possible, transactions that involve multiple tables. When I've had database design control there has never been a case of deadlock for which I could not design out the condition that caused it. That's not to say they don't exist and perhaps abound in situations outside my limited experience; but I've had no shortage of opportunities to improve designs causing these kinds of problems. One obvious strategy is to start with a chronological write-only table for insertion of new complete atomic transactions with no interdependencies, and apply their effects in an orderly asynchronous process.
Always use the database default isolation levels and locking settings unless you are absolutely sure what risks they incur, and have proven it by testing. Redesign your process if at all possible first. Then, impose the least increase in protection required to eliminate the risk (and test to prove it.) Don't increase restrictiveness "just in case" - this often leads to unintended consequences, sometimes of the type you intended to avoid.
To repeat the point from another direction, most of what you will read on this and other sites advocating the alteration of database settings to deal with transaction risks and locking problems is misleading and/or false, as demonstrated by how they conflict with each other so regularly. Sadly, especially for SQL Server, I have found no source of documentation that isn't hopelessly confusing and inadequate.
I have found that one of the best investments I ever made in avoiding deadlocks was to use an Object Relational Mapper that could order database updates. The exact order is not important, as long as every transaction writes in the same order (and deletes in exactly the reverse order).
The reason that this avoids most deadlocks out of the box is that your operations are always table A first, then table B, then table C (which perhaps depends on table B).
You can achieve a similar result as long as you exercise care in your stored procedures or data layer's access code. The only problem is that it requires great care to do it by hand, whereas an ORM with a Unit of Work concept can automate most cases.
UPDATE: A delete should run forward to verify that everything is the version you expect (you still need record version numbers or timestamps) and then delete backwards once everything verifies. As this should all happen in one transaction, the possibility of something changing out from under you shouldn't exist. The only reason for the ORM doing it backwards is to obey the key requirements, but if you do your check forward, you will have all the locks you need already in hand.
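A minimal sketch of that ordering idea, assuming a hypothetical unit-of-work entry (PendingWrite is not a real ORM class): sort pending writes by table name so every transaction touches tables in the same order, and run deletes in exactly the reverse order:

    import java.util.Comparator;
    import java.util.List;

    public class OrderedFlush {
        // Hypothetical unit-of-work entry: the table a write touches, plus
        // the action that actually executes the SQL.
        record PendingWrite(String tableName, Runnable action) {}

        static void flush(List<PendingWrite> upserts, List<PendingWrite> deletes) {
            // Inserts/updates always proceed in table-name order: A, B, C...
            upserts.stream()
                    .sorted(Comparator.comparing(PendingWrite::tableName))
                    .forEach(w -> w.action().run());

            // Deletes run in exactly the reverse of the write order, as
            // the answer above describes.
            deletes.stream()
                    .sorted(Comparator.comparing(PendingWrite::tableName).reversed())
                    .forEach(w -> w.action().run());
        }
    }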
I analyze all database actions to determine, for each one, whether it needs to be in a multiple-statement transaction, and then for each such case, what the minimum isolation level is required to prevent deadlocks... As you said, serializable will certainly do so...
Generally, only a very few database actions require a multiple statement transaction in the first place, and of those, only a few require serializable isolation to eliminate deadlocks.
For those that do, set the isolation level for that transaction before you begin, and reset it to whatever your default is after it commits.
Your example would only be a problem if the database locked the ENTIRE table. If your database is doing that...run :)
