Mnesia transactions - database

Accidentally, some code that I'm writing is slowly turning into a DB system of its own, with incremental indexing and freeform "documents" (the CouchDB kind) which can have arbitrary properties... anyway... I decided to keep evolving it, mainly for educational purposes, and also to customize it really tightly for my own needs and keep it lean, since I'm not trying to make it useful for anyone but myself (how generous :) )...
Anyway, I was wondering if anyone has opinions/more info on how Mnesia implements transactions "under the hood".
Ulf W., I always appreciate your posts on the net so maybe you have some deeper info about this?

Mnesia uses a two-phase commit protocol to manage distributed transactions.
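For intuition, here is a minimal Python sketch of the two-phase-commit decision rule; this is the textbook protocol, not Mnesia's actual implementation (a real system also logs each step durably so participants can recover if a node crashes mid-protocol):

def two_phase_commit(participants, txn_id):
    # Phase 1 (prepare/vote): every participant must promise it can commit.
    votes = [p.prepare(txn_id) for p in participants]  # each returns True or False
    # Phase 2 (commit/abort): a unanimous "yes" commits everywhere; anything else aborts.
    if all(votes):
        for p in participants:
            p.commit(txn_id)
        return "committed"
    for p in participants:
        p.abort(txn_id)
    return "aborted"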

so I've been thinking a bit more about everything...
Transaction locks could be kind of hacked on by having a "Lock" element in each tuple that represents a row of a table. That element would contain the Pid of the process that's holding the lock and executing the current transaction (and that was spawned by the transaction manager); or the Pid could be stored someplace else for efficiency reasons, the point is just that there's a Pid per row. If another transaction wanted to write to or read from a locked row, the transaction manager would simply not execute it and would leave it in the queue for later attempts (the next time it tail-recurses). I'd have to think more about how checkpoints would work as well... but overall I'm starting to understand how things are structured, at least conceptually. It's going to be ugly ;)) and probably orders of magnitude slower than what Mnesia pulls off, but at least I'll learn plenty...
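For illustration, a very rough Python sketch of that "Pid per row" locking idea (not how Mnesia actually does it): the transaction manager keeps a lock table mapping row keys to the worker holding them, and transactions that touch a locked row go back on the queue for the next pass. Deadlock handling, read vs. write locks, and checkpoints are all ignored.

from collections import deque

locks = {}         # row_key -> id of the worker ("Pid") currently holding the row
pending = deque()  # transactions waiting to be retried on the next pass

def try_run(worker_id, row_keys, fun):
    if any(key in locks for key in row_keys):
        pending.append((worker_id, row_keys, fun))  # a row is locked: retry later
        return False
    for key in row_keys:
        locks[key] = worker_id                      # take all row locks up front
    try:
        fun()                                       # run the transaction body
    finally:
        for key in row_keys:
            locks.pop(key, None)                    # release on commit or abort
    return True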
About distributed transactions... I'm guessing that the transaction fun is sent over the wire to another node by converting it into a binary first and then reconstructing it on the other end. Now, a question about that: since the fun is a closure, say I'm using a variable in the fun that's bound outside the fun, with, say, a list of 10 elements, and then the closure is passed in as a transaction to be executed on another node (transparently by the transaction manager). I'm assuming that, for the closure semantics to hold, the list with 10 elements would be sent along as well, as part of the lexical environment that the closure "closes" over... am I missing something here? Just thinking about how one would implement distributed transactions...
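A small Python analogy of the point being asked about (not Erlang, just to illustrate what "the lexical environment goes along for the ride" means): the code of a closure is not enough on its own, the values it closes over are part of what has to be shipped.

def make_txn(rows_to_write):
    # rows_to_write is bound outside the inner fun; the closure captures it
    def txn(db):
        for row in rows_to_write:
            db.write(row)
    return txn

txn = make_txn(list(range(10)))

# The captured environment is exactly what would have to travel over the wire
# along with the code of the fun for it to mean the same thing on another node.
print(txn.__code__.co_freevars)                    # ('rows_to_write',)
print([c.cell_contents for c in txn.__closure__])  # [[0, 1, 2, ..., 9]]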
thanks

As Mnesia is open source, you can have a look at the code itself. Similarly with CouchDB.

Related

Using SQLAlchemy sessions and transactions

While learning SQLAlchemy I came across two ways of dealing with SQLAlchemy's sessions.
One was creating the session once globally while initializing my database like:
DBSession = scoped_session(sessionmaker(extension=ZopeTransactionExtension()))
and import this DBSession instance into all my requests (all the insert/update operations that follow).
When I do this, my DB operations have the following structure:
import transaction

with transaction.manager:
    for each_item in huge_file_of_million_rows:
        DBSession.add(each_item)
        # more create, read, update and delete operations
I do not commit, flush, or roll back anywhere, assuming my Zope transaction manager takes care of it for me (it commits at the end of the transaction or rolls back if it fails).
The second way, and the one most frequently mentioned on the web, was:
create a DBSession factory once, like
DBSession = sessionmaker(bind=engine)
and then create a session instance from it per transaction:
session = DBSession()
for row in huge_file_of_million_rows:
    for item in row:
        try:
            session.add(item)
            # more create, read, update and delete operations
            session.flush()
            session.commit()
        except Exception:
            session.rollback()
session.close()
I do not understand which is BETTER (in terms of memory usage, performance, and overall health) and why.
In the first method, I accumulate all the objects in the session and the commit happens at the end. For a bulky insert operation, does adding objects to the session keep them in memory (RAM) or somewhere else? Where do they get stored, and how much memory is consumed?
Both ways tend to be very slow when I have about a million inserts and updates. Trying SQLAlchemy Core also takes the same time to execute. 100K rows of select, insert, and update take about 25-30 minutes. Is there any way to reduce this?
Please point me in the right direction. Thanks in advance.
Here you have a very generic answer, with the warning that I don't know much about Zope; just some simple database heuristics. Hope it helps.
How to use SQLAlchemy sessions:
First, take a look at their own explanation here
As they say:
The calls to instantiate Session would then be placed at the point in the application where database conversations begin.
I'm not sure I understand what you mean by method 1; just in case, a warning: you should not have just one session for the whole application. You instantiate Session when a database conversation begins, and you surely have several points in the application where different conversations begin. (I'm not sure from your text whether you have different users.)
One commit at the end of a huge number of operations is not a good idea
Indeed it will consume memory, probably in the Session object of your Python program, and certainly in the database transaction. How much space? That's difficult to say with the information you provide; it will depend on the queries and on the database...
You could easily estimate it with a profiler. Take into account that if you run out of resources everything will go slower (or halt).
One commit per record is also not a good idea when processing a bulk file
It means you are asking the database to persist changes every time, for every row. Certainly too much. Try an intermediate number: commit every few hundred rows. But then it gets more complicated: one commit at the end of the file assures you that the file is either fully processed or not at all, while intermediate commits force you to take into account, when something fails, that your file is half processed and you will have to reposition.
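For instance, a minimal sketch of the "commit every n rows" pattern with a plain sessionmaker session; huge_file_of_million_rows and make_item stand in for your own data source and mapping code:

from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker

engine = create_engine("sqlite:///example.db")  # any SQLAlchemy URL works here
Session = sessionmaker(bind=engine)

BATCH_SIZE = 500
session = Session()
try:
    # huge_file_of_million_rows and make_item are placeholders for your own code
    for i, row in enumerate(huge_file_of_million_rows, start=1):
        session.add(make_item(row))
        if i % BATCH_SIZE == 0:
            session.commit()  # persists and frees the unit of work accumulated so far
    session.commit()          # final partial batch
except Exception:
    session.rollback()        # note: batches already committed stay committed
    raise
finally:
    session.close()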
As for the times you mention, it is very difficult to judge with only the information you provide, plus whatever your database and machine are. Anyway, the order of magnitude of your numbers (a select+insert+update every ~15 ms, probably plus commit) sounds pretty high, but more or less in the expected range (again, it depends on queries + database + machine)... If you frequently have to insert so many records you could consider other database solutions; it will depend on your scenario, and probably on dialects, and may not be provided by an ORM like SQLAlchemy.

How Indices Cope with MVCC?

Greetings Overflowers,
To my understanding (and I hope I'm not right), changes to indices cannot be MVCCed.
I'm wondering if this is also true for big records, since copies can be costly.
Since records are (usually) accessed via indices, how can MVCC be effective?
Do indices, for example, keep track of the different versions of MVCCed records?
Any recent good reading on this subject? Really appreciated!
Regards
The index itself can contain entries for multiple row versions, which are pruned before being returned; so in this case the index alone can't be used to get the records (this is how Postgres does MVCC). InnoDB/Oracle keep only one version of data/index, and using the undo segment they rebuild the older versions for older transactions.
You won't have too many copies when the DB is in normal use, as old copies are periodically garbage collected (in Postgres), and Oracle/InnoDB have undo segments which are reused when a transaction is aborted or committed. If you have too many long-running transactions you will obviously have problems.
Indexes are there to speed up access, to find the record faster without touching all of them. The index need not be accurate on the first pass; you may need to look at the tuple to see whether it's valid in one particular transaction or not (as in Postgres). In Oracle or InnoDB even the index is versioned, so you can get data from the indexes themselves.
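As a toy illustration of that "check the tuple" step, here is a heavily simplified Postgres-style visibility test in Python (real snapshots also track in-progress transactions, which is ignored here):

def visible(tuple_version, snapshot_xid):
    # xmin = transaction that created this row version,
    # xmax = transaction that deleted/replaced it (None if still live)
    xmin, xmax = tuple_version["xmin"], tuple_version["xmax"]
    created_before_snapshot = xmin <= snapshot_xid
    not_yet_deleted = xmax is None or xmax > snapshot_xid
    return created_before_snapshot and not_yet_deleted

# An index entry may point at a version that fails this check for the current
# snapshot, which is why the heap tuple is consulted before the row is returned.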
Read this to get a detailed idea of the two ways of implementing MVCC (Postgres vs. Oracle/InnoDB).
InnoDB MVCC and the comments there are useful too.
PS: I'm not an expert in MySQL/Oracle/Postgres internals, still learning how things work.

Synchronizing Database Access in a Distributed App

A common bit of programming logic I find myself implementing often is something like the following pseudo-code:
Let X = some value
Let Database = some external Database handle
if !Database.contains(X):
    SomeCalculation()
    Database.insert(X)
However, in a multi-threaded program we have a race condition here. Thread A might check if X is in Database, find that it's not, and then proceed to call SomeCalculation(). Meanwhile, Thread B will also check if X is in Database, find that it's not, and insert a duplicate entry.
So of course, this needs to be synchronized like:
Let X = some value
Let Database = some external Database handle
LockMutex()
if !Database.contains(X):
    SomeCalculation()
    Database.insert(X)
UnlockMutex()
This is fine, except what if the application is a distributed app, running across multiple computers, all of which communicate with the same back-end database machine? In this case, a Mutex is useless, because it only synchronizes a single instance of the app with other local threads. To make this work, we'd need some kind of "global" distributed synchronization technique. (Assume that simply disallowing duplicates in Database is not a feasible strategy.)
In general, what are some practical solutions to this problem?
I realize this question is very generic, but I don't want to make this a language-specific question because this is an issue that comes up across multiple languages and multiple Database technologies.
I intentionally avoided specifying whether I'm talking about an RDBMS or SQL Database, versus something like a NoSQL Database, because again - I'm looking for generalized answers based on industry practices. For example, is this situation something that Atomic Stored Procedures might solve? Or Atomic Transactions? Or is this something that requires something like a "Distributed Mutex"? Or more generally, is this problem generally addressed by the Database system, or is it something the Application itself should handle?
If it turns out this question is impossible to answer at all without further information, please tell me so I can modify it.
One sure way to guard against data stomping is to lock the data row. Many databases allow you to do that via transactions; some don't support transactions at all.
However, this is overkill for most cases, where contention is low in general. You might want to read up on Isolation levels to get more background on the topic.
A better general approach is often optimistic concurrency. The idea behind it is that each data row includes a signature; a timestamp works fine, but the signature need not be time-oriented. It could be a hash value, for example. This is a general concurrency-management approach and is not limited to relational stores.
The app that changes data first reads the row, and then performs whatever calculations it requires, and then at some point, writes the updated data back to the data store. Via Optimistic concurrency, the app writes the update with the stipulation (expressed in SQL if it is a SQL database) that the data row must be updated only if the signature has not changed in the interim. And, each time a data row is updated, the signature must be updated as well.
The result is that updates don't get stomped on. But for a more rigorous explanation of the concurrency issues, refer to that article on DB Isolation levels.
All distributed updaters must follow the OCC convention (or something stronger, like transactional locking) in order for this to work.
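For illustration, a minimal optimistic-concurrency update in Python with sqlite3, using an integer version column as the signature (the table and column names are made up):

import sqlite3

def update_balance(conn, account_id, new_balance, expected_version):
    cur = conn.execute(
        "UPDATE accounts SET balance = ?, version = version + 1 "
        "WHERE id = ? AND version = ?",
        (new_balance, account_id, expected_version),
    )
    conn.commit()
    # rowcount == 0 means another writer changed the row since we read it:
    # re-read the row, redo the calculation, and try again.
    return cur.rowcount == 1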
You can obviously move the "synch" part to the DB layer itself, using an exclusive lock on a specific resource.
This is a bit extreme; in most cases, attempting the insert and handling the exception when you discover that someone has already inserted the row would be more adequate, I think.
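A quick sketch of that in Python with sqlite3, assuming a unique constraint on the item_key column (SomeCalculation is the question's placeholder, and the table/column names are illustrative):

import sqlite3

def insert_if_absent(conn, x):
    try:
        result = SomeCalculation()  # the question's placeholder computation
        conn.execute("INSERT INTO results (item_key, value) VALUES (?, ?)", (x, result))
        conn.commit()
        return True                    # we won the race and did the work
    except sqlite3.IntegrityError:     # the unique constraint was already taken
        conn.rollback()
        return False                   # someone else inserted X first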
Well, since you ask a general question, I will try to provide another option. It's not very orthodox, but it may be useful: you could "define" a machine or a process responsible for handling a given X. For example:
Let X = some value
Let Database = some external Database handle
xResponsible = Definer.defineResponsibleFor(X)
if xResponsible == me:
    if !Database.contains(X):
        SomeCalculation()
        Database.insert(X)
The trick here is to make defineResponsibleFor always return the same value independent of who is calling. So, if you have a fair distributed range of X and a fair Definer, all machines will have work to do, and you can use a simple thread mutex to avoid race conditions. Of course, now you have to take care of fault tolerance (if a machine or process goes out of business, your Definer must know about it and not assign any job to it). But you should do this anyway... :)
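A minimal sketch of such a Definer based on stable hashing, so every machine computes the same owner for a given X without coordinating (the node list is assumed to be static and known everywhere, and the fault-tolerance part is left out):

import hashlib

NODES = ["node-a", "node-b", "node-c"]  # illustrative; must be identical on every machine

def define_responsible_for(x):
    digest = hashlib.sha1(repr(x).encode("utf-8")).hexdigest()
    return NODES[int(digest, 16) % len(NODES)]

# Each machine only handles the Xs it owns, e.g.:
#   if define_responsible_for(x) == MY_NODE_NAME:
#       ...check-and-insert as usual, guarded by a local mutex...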

app engine data pipelines talk - for fan-in materialized view, why are work indexes necessary?

I'm trying to understand the data pipelines talk presented at google i/o:
http://www.youtube.com/watch?v=zSDC_TU7rtc
I don't see why fan-in work indexes are necessary if I'm just going to batch through input-sequence markers.
Can't the optimistically-enqueued task grab all unapplied markers, churn through as many of them as possible (repeatedly fetching a batch of, say, 10, then transactionally updating the materialized-view entity), and re-enqueue itself if it times out before working through all the markers?
Do the work indexes have something to do with the efficiency of querying for all unapplied markers? I.e., is it better to query for "markers with work_index = " than for "markers with applied = False"? If so, why is that?
For reference, the question+answer which led me to the data pipelines talk is here:
app engine datastore: model for progressively updated terrain height map
A few things:
My approach assumes multiple workers (see ShardedForkJoinQueue here: http://code.google.com/p/pubsubhubbub/source/browse/trunk/hub/fork_join_queue.py), where the inbound rate of tasks exceeds the amount of work a single thread can do. With that in mind, how would you use a simple "applied = False" to split work across N threads? Probably by assigning another field on your model to a worker's shard_number at random; then your query would be on "shard_number=N AND applied=False" (requiring another composite index). Okay, that should work.
But then how do you know how many worker shards/threads you need? With the approach above you need to statically configure them so your shard_number parameter is between 1 and N. You can only have one thread querying for each shard_number at a time or else you have contention. I want the system to figure out the shard/thread count at runtime. My approach batches work together into reasonably sized chunks (like the 10 items) and then enqueues a continuation task to take care of the rest. Using query cursors I know that each continuation will not overlap the last thread's, so there's no contention. This gives me a dynamic number of threads working in parallel on the same shard's work items.
Now say your queue backs up. How do you ensure the oldest work items are processed first? Put another way: How do you prevent starvation? You could assign another field on your model to the time of insertion-- call it add_time. Now your query would be "shard_number=N AND applied=False ORDER BY add_time DESC". This works fine for low throughput queues.
What if your work-item write rate goes up a ton? You're going to be writing many, many rows with roughly the same add_time. This results in a Bigtable row prefix for your entities of something like "shard_number=1|applied=False|add_time=2010-06-24T9:15:22". That means every work-item insert is hitting the same Bigtable tablet server, the server that currently owns the lexical head of the descending index. So fundamentally you're limited to the throughput of a single machine for each work shard's Datastore writes.
With my approach, your only Bigtable index row is prefixed by the hash of the incrementing work sequence number. This work_index value is scattered across the lexical rowspace of Bigtable each time the sequence number is incremented. Thus, each sequential work item enqueue will likely go to a different tablet server (given enough data), spreading the load of my queue beyond a single machine. With this approach the write-rate should effectively be bound only by the number of physical Bigtable machines in a cluster.
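Roughly, the scattering trick looks like the following; this is only an illustration of why consecutive sequence numbers stop being lexical neighbours, not the exact format used in the talk's code:

import hashlib

def work_index_for(sequence_number):
    # Prefixing with a hash of the number scatters otherwise-adjacent keys
    # across Bigtable's lexically ordered rowspace.
    prefix = hashlib.md5(str(sequence_number).encode("utf-8")).hexdigest()[:8]
    return "%s-%010d" % (prefix, sequence_number)

# Two adjacent sequence numbers now map to lexically distant index rows:
print(work_index_for(1001))
print(work_index_for(1002))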
One disadvantage of this approach is that it requires an extra write: you have to flip the flag on the original marker entity when you've completed the update, which is something Brett's original approach doesn't require.
You still need some sort of work index, too, or you encounter the race conditions Brett talked about, where the task that should apply an update runs before the update transaction has committed. In your system, the update would still get applied - but it could be an arbitrary amount of time before the next update runs and applies it.
Still, I'm not the expert on this (yet ;). I've forwarded your question to Brett, and I'll let you know what he says - I'm curious as to his answer, too!

Django data creation and commits

I'm not sure I 100% understand what the database does. If I just have some misconception, please point it out.
Let's say I have a function that wants to create 100 new entries in a database that already has 100,000 entries.
It seems a lot faster when those 100 entries get created and the commit is made after the last entry is created.
Now, if those 100 entries get created by different users, is there an easy way to commit only after 100 entries are created?
Edit:
Should I maybe write some sort of buffer?
Databases are optimized for set-based operations, so yes, it would be faster to insert 100 records as a set than one at a time. However, when you are talking about users entering records one at a time, you would not want to group them together under any circumstances that I can think of. Why?
First, if there was one bad record, the others would fail. This would make for 99 cranky users out of 100 (actually 100, but one would not really have reason to be cranky because he did the bad data entry to begin with).
Second, users would not see the records immediately after entering them. It is also true that they would not be able to do anything further with those records, such as entering data into related tables, until they are committed. Having a delay like this would make users cranky. If users are entering data from customers over the phone, they will be especially cranky about the wait (I worked at a call center with a horribly slow commercial product, and believe me, I know how upset the users used to get!).
Third, users would have gone on to something else and would not realize that their data was rejected for bad information; not a good thing at all.
How long are you going to wait to get your set number of records? Five seconds, ten minutes?
What happens if for some reason the network connection is lost during that time? Wouldn't the users lose the data they entered?
You might be able to hack something like that together, but you really shouldn't, because it wrecks your data integrity, which is the whole point of using transactions.
In your proposed solution, a problem with any insert in the batch would cause all the other (possibly totally valid) inserts from completely different users to fail. Also, users wouldn't be able to see the data they just tried to insert because the system was waiting to do the insert until the batch was full.
P.S. Here's a quick intro to transaction processing.
I think you do have a misconception. It sounds like you're looking at the database as something that is only for some sort of "long-term" memory. This is a bad concept; the database is the only memory your application has. Even when this isn't true, it's best to pretend that it is.
To go a little deeper, your application has:
scoped memory: variables that you define within view functions, for example. These all get destroyed when flow leaves the function.
globals: variables that are defined in the outermost part of your code. It is really important not to use these for any sort of state except perhaps configuration constants, and in particular you should not rely on them for any sort of dynamic behavior. Otherwise you will have to battle concurrency and forked processes (depending on the server gateway) that aren't aware of each other. Just don't do it.
a caching scheme, if you choose to implement one. This is entirely optional in django, and there are many ways to do it. However, one typically uses some scheme to ensure that even if the cache crashes, the database reflects the current state of the data accurately.
your local filesystem. From a design point of view, most ways of taking advantage of this will either resemble a caching system (above) or be clumsy and fragile. From a performance point of view, it might be about as slow as a database.
your database.
So you see that there's not much place for you to put your data besides the database.
