MPI: how to send and receive an SQLite database

I have a big SQLite database to process, so I would like to use MPI for parallelization to speed things up. What I want to do is send the database from the root to every slave, and send the modified databases back to the root after each slave adds some tables to it. I wanted to use MPI_Type_create_struct to create a datatype to hold the database, but the database is too complicated. Is there any other way to handle this situation? Thank you in advance!

I recently dealt with a similar problem - I have a large MPI application that uses SQLite as a configuration store. Handling multi-process writes is a challenge with an embedded SQL database. My experience with this involves a massively parallel application (running up to 65,535 ranks) with a shared filesystem.
Based on the FAQ from SQLite and some experience with database engines, there are a few ways to approach this problem. I am making the assumption that you are operating with a shared distributed file system, and multiple separate computers (a standard HPC cluster setup).
Since SQLite only blocks when multiple processes write to the database (not when they read), reads will most likely not be an issue. Each process can run multiple SELECT statements at the same time without trouble.
The challenge will be in the writing. Disk I/O is several orders of magnitude slower than computation, so generally this will be the bottleneck. Having said that, network communication may also be a significant slowdown, so how you approach the problem really depends on where the weakest link of your running environment will be.
If you have a fast network and slow disk speed, or if you want to implement this in the most straightforward way possible, your best bet is to have a single MPI rank in charge of writing to the database. Your compute processes would independently run SELECT commands until computation is complete, then send the new data to the MPI database process. The database control process would then write the new data to disk. I would not try to send the structure of the database across the network; rather, I would send the data that should be written, along with (possibly) a flag that identifies what table/insert query the data should be written with. This technique is similar in spirit to how an RDBMS works - while RDBMS servers do support concurrent writes, there is a "central" process in control of the ordering of write operations.
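A minimal sketch of that single-writer layout, assuming mpi4py and Python's built-in sqlite3 module (the file name, table name, and message shape are made up for illustration):

```python
# A minimal single-writer sketch: rank 0 owns the database file and performs
# every INSERT; all other ranks compute and send their rows to rank 0.
import sqlite3
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

DONE = "DONE"  # sentinel a worker sends when it has nothing left to write


def compute_chunk(worker_rank):
    # Placeholder for the real computation; yields one value per result row.
    for i in range(10):
        yield float(worker_rank * 100 + i)


if rank == 0:
    conn = sqlite3.connect("results.db")
    conn.execute("CREATE TABLE IF NOT EXISTS results (worker INTEGER, value REAL)")
    finished = 0
    while finished < size - 1:
        msg = comm.recv(source=MPI.ANY_SOURCE, tag=0)
        if msg == DONE:
            finished += 1
        else:
            table, row = msg  # workers say which table/insert the data belongs to
            conn.execute(f"INSERT INTO {table} VALUES (?, ?)", row)
    conn.commit()
    conn.close()
else:
    # Workers are free to open the database read-only for their own SELECTs.
    for value in compute_chunk(rank):
        comm.send(("results", (rank, value)), dest=0, tag=0)
    comm.send(DONE, dest=0, tag=0)
```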
One thing to note is that if a process writes to the SQLite database, the file is locked for all processes that are trying to read or write to it. You will need to either handle the SQLITE_BUSY return code in your worker processes, register a callback to handle it, change the busy behavior, or use an alternative technique. In my application, I found that loading the database as an in-memory database (https://www.sqlite.org/inmemorydb.html) for the readers provided a good workaround. Readers access the in-memory database, but send results to the controlling process for writes. The downside is that you will have multiple copies of the database in memory.
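For the busy handling and the in-memory workaround, the standard sqlite3 module covers both cases; a minimal sketch, assuming the same results.db file as above (the 5-second timeout is just an example value):

```python
import sqlite3

# Option A: wait (up to 5 s here) instead of failing immediately with SQLITE_BUSY.
conn = sqlite3.connect("results.db", timeout=5.0)

# Option B: give each reader its own in-memory copy of the database, so its
# SELECTs never touch the locked file (at the cost of one copy of the DB in RAM).
disk = sqlite3.connect("results.db")
mem = sqlite3.connect(":memory:")
disk.backup(mem)   # online backup API, available in Python 3.7+
disk.close()

row_count = mem.execute("SELECT count(*) FROM results").fetchone()[0]
```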
Another option that might be less network intensive is to do the reads concurrently and have each worker process write out to its own file. You could write out to separate SQLite database files, or even export something like CSV (depending on the complexity of the data). When the writes are complete, you would then have a single process merge the individual files into a single result database file - see How can I merge many SQLite databases?. This method has its own issues, but depending on where your bottlenecks are and how the system as a whole is laid out, this technique may work.
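The merge step for the separate-files approach can be done with ATTACH; a rough sketch, assuming each worker wrote a worker_&lt;rank&gt;.db file with the same schema (file and table names are made up):

```python
import sqlite3

N_WORKERS = 4  # number of per-worker database files; adjust to your job size

merged = sqlite3.connect("merged.db")
merged.execute("CREATE TABLE IF NOT EXISTS results (worker INTEGER, value REAL)")

for i in range(1, N_WORKERS + 1):
    merged.execute("ATTACH DATABASE ? AS part", (f"worker_{i}.db",))
    merged.execute("INSERT INTO results SELECT * FROM part.results")
    merged.commit()  # must commit before DETACH (DETACH fails inside a transaction)
    merged.execute("DETACH DATABASE part")
merged.close()
```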
Finally, you might consider reading from the SQLite database and saving the data to a file format designed for parallel access, such as HDF5 (or use MPI-IO directly). Once the computation is done, it would be pretty straightforward to write a script that creates a new SQLite database from this foreign file format.
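If you go the HDF5 route, dumping a table is short with h5py (an assumption here, as is NumPy; table and dataset names are illustrative):

```python
import sqlite3

import h5py
import numpy as np

conn = sqlite3.connect("results.db")
data = np.array(conn.execute("SELECT worker, value FROM results").fetchall())
conn.close()

with h5py.File("results.h5", "w") as f:
    f.create_dataset("results", data=data)  # one dataset per exported table
```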

Related

SQL to text archiving using .net and parallel processing

I have a table that has millions of rows of logging data. I want to move the data to text files, with each day's worth of data going into its own text file. I'm in a .NET environment. What is an efficient way to achieve this?
I want to use parallel processing because we have beefy servers with many cores. Some choices I can think of are:
Have parallel data readers. Each reader queries a portion of the data. How do I manage the total number of connections with this approach? Also, if I went this route, I would have to avoid disrupting normal usage for the users. The other problem I can see with this approach is managing my own threads and setting an upper limit, whereas Parallel.ForEach would be much simpler.
Producer-consumer pattern: one thread reads the data and queues it in memory. Multiple writers consume the data from memory and write it out to text files (see the sketch after this question).
I'm open to PetaPoco/NPoco. Ideally I want to use Parallel.ForEach without complicating the threading code too much.
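A rough sketch of that producer-consumer option, written in Python just to show the shape (in .NET the same layout maps onto a BlockingCollection plus a few writer tasks; the row source and file naming here are placeholders):

```python
# One reader streams rows out of the log table and routes each row, by day,
# to one of several writer threads, so no two threads append to the same file.
import queue
import threading

NUM_WRITERS = 4
queues = [queue.Queue(maxsize=5000) for _ in range(NUM_WRITERS)]
STOP = object()  # sentinel that tells a writer to finish up


def producer(fetch_rows):
    # fetch_rows is a placeholder callable yielding (timestamp, line) from the DB.
    for ts, line in fetch_rows():
        day = ts.strftime("%Y-%m-%d")
        queues[hash(day) % NUM_WRITERS].put((day, line))
    for q in queues:
        q.put(STOP)


def writer(q):
    handles = {}
    while True:
        item = q.get()
        if item is STOP:
            break
        day, line = item
        if day not in handles:
            handles[day] = open(f"log_{day}.txt", "a", encoding="utf-8")
        handles[day].write(line + "\n")
    for h in handles.values():
        h.close()


threads = [threading.Thread(target=writer, args=(q,)) for q in queues]
for t in threads:
    t.start()
# Start threading.Thread(target=producer, args=(my_fetch_rows,)) once the
# DB reader (my_fetch_rows) is implemented.
```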
Parallel processing helps when there is a lot of computing involved. Here, however, you mainly have I/O involved. Hard disks can only write to one file at a time, so multithreading will not bring the hoped-for speed gain. On the contrary, it could reduce speed, since the hard disk may be forced to seek back and forth when writing to the different files.

How to achieve row-level locking on an in-memory SQLite db?

I'm running SQLite v3.7.17 from my program in in-memory mode and using shared cache (as specified in Shared Cache And In-Memory Databases). My program is multi-threaded and all these threads access the same in-memory database.
Is there any way I can configure or use my SQLite database such that, when two threads run update queries on the same table (but different rows), one doesn't wait for the other? That is, how can I achieve row-level locking on this in-memory DB?
This should theoretically be possible as my SQLite data is not in a file (therefore filesystem writes do not apply).
It's not the filesystem that determines whether SQLite can lock rows. It's SQLite's architecture.
Even using write-ahead logging, you can only have one writer at a time.
Writers merely append new content to the end of the WAL file. Because writers do nothing that would interfere with the actions of readers, writers and readers can run at the same time. However, since there is only one WAL file, there can only be one writer at a time.
SQLite3 has a kind of table locking now, but not row locking.
SQLite does not support a row-lock feature. However, I've just seen SQLumDash, which is based on SQLite and does have a row-lock feature.
Please check it out at:
https://github.com/sqlumdash/sqlumdash/
As far as I know, it's developed by Toshiba.
Is this in a larger transactional scenario? Because if the situation is as simple as you describe, then there is no advantage to row-level locking over table locking.
An in-memory DB isn't subject to I/O latency; it is CPU bound and the CPU can process the two writes sequentially faster than it could process them concurrently because the latter has all the same memory operations plus thread-swapping and row-locking overhead. Sure, in a multi-CPU system one could, theoretically, write to different rows simultaneously, but the necessary logic to support row-locking would actually take longer than the (trivial) operation of writing the record into memory.
In an in-memory DB of any size, table locks on individual tables can be retained for efficiency, while multiple CPUs can be employed simultaneously on multiple independent queries against multiple independent tables.
SQLite3 doesn't support row-level locks. If you can guarantee each item is unique (or it doesn't need to be unique), then you can work with multiple files.
Just create multiple SQLite3 databases (files) and open them. If you create 50 databases, you can then work on 50 rows at the same time.
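A minimal sketch of that multiple-file idea, sharding writes by key so each writer only locks its own file (file names, shard count, and schema are all invented):

```python
import sqlite3
import zlib

N_SHARDS = 50


def shard_conn(key):
    """Open the shard file responsible for this key (stable across processes)."""
    shard = zlib.crc32(key.encode()) % N_SHARDS
    conn = sqlite3.connect(f"shard_{shard}.db")
    conn.execute("CREATE TABLE IF NOT EXISTS items (key TEXT PRIMARY KEY, value TEXT)")
    return conn


def put(key, value):
    conn = shard_conn(key)
    conn.execute("INSERT OR REPLACE INTO items VALUES (?, ?)", (key, value))
    conn.commit()
    conn.close()
```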

What are good algorithms to keep consistency across multiple files in a network?

What are good algorithms to keep consistency in multiple files?
This is a school project. I have to implement in C, some replication across a network.
I have 2 servers,
Server A1
Server A2
Both servers have their own file called "data.txt"
If I write something to one of them, I need the other to be updated.
I also have another scenario, with 3 Servers.
Server B1
Server B2
Server B3
I need these to do pretty much the same thing.
While this would be fairly simple to implement, if one or two of the servers went down, they would have to update themselves when coming back up.
I'm sure there are algorithms that solve this efficiently. I know what I want, I just don't know exactly what I'm looking for!
Can someone point me to the right direction please?
Thank you!
The fundamental issue here is known as the 'CAP theorem', which defines three properties that a distributed system can have:
Consistency: Reading data from the system always returns the most up-to-date data.
Availability: Every request receives a response indicating success or failure (it doesn't just keep waiting until things recover)
Partition tolerance: The system can operate when its servers are unable to communicate with each other (a server being down is one special case of this)
The CAP theorem states that you can only have two of these. If your system is consistent and partition tolerant, then it loses the availability condition - you might have to wait for a partition to heal before you get a response. If you have consistency and availability, you'll have downtime when there's a partition, or enough servers are down. If you have availability and partition tolerance, you might read stale data, or have to deal with conflicting writes.
Note that this applies separately to reads and writes - you can have an Available and Partition-Tolerant system for reads, but a Consistent and Partition-Tolerant system for writes. This is basically a master-slave system; in a partition, writes might fail (if they're on the wrong side of a partition), but reads will work (although they might return stale data).
So if you want to be Available and Partition Tolerant for reads, one easy option is to just designate one host as the only one that can do writes, and sync from it (eg, using rsync from a cron script or something - in your C project, you'd just copy the file over using some simple network code periodically, and do an extra copy just after modifying it).
If you need partition tolerance for writes, though, it's more complex. You can have two servers that can't talk to each other both doing writes, and later have to figure out what data wins. This basically means you'll need to compare the two versions when syncing and decide what wins. This can just be as simple as 'let the highest timestamp win', or you can use vector clocks as in Dynamo to implement a more complex policy - which is appropriate here depends on your application.
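For the school project, a toy "highest timestamp wins" merge might look like this (sketched in Python for brevity; the tab-separated key/timestamp/value record format is an invention, and a real C implementation would follow the same shape):

```python
# Each server keeps its file as lines of "key<TAB>timestamp<TAB>value".
# To reconcile two replicas, keep the entry with the newest timestamp per key.

def load(path):
    records = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            key, ts, value = line.rstrip("\n").split("\t")
            records[key] = (float(ts), value)
    return records


def merge(a, b):
    merged = dict(a)
    for key, (ts, value) in b.items():
        if key not in merged or ts > merged[key][0]:
            merged[key] = (ts, value)
    return merged


def save(path, records):
    with open(path, "w", encoding="utf-8") as f:
        for key, (ts, value) in sorted(records.items()):
            f.write(f"{key}\t{ts}\t{value}\n")
```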
Check out rsync and how Dropbox works.
With every write to server A, fork a process to write the same content to server B, so that all writes to server A are replicated to server B.
If you have multiple servers, have the forked process write to all of the backup servers.

How to split DB2 load files by node on ETL server?

I'm building a DB2 "Infosphere" data warehouse and am expecting to have 8-16 nodes or partitions.
Since I'll be loading from 130-300 million rows a day, and my load process is also my recovery process - I want the loads to be as fast as possible. I'm not surprised to find this tip in the IBM "infocenter" documentation:
"Better performance can be expected if the database partitions participating in the distribution process are different from the loading database partitions, since there is less contention for CPU cycles."
I'd prefer not to dedicate an expensive DB2 node just to splitting load files by hash key - since my ETL servers are so cheap (we use Python, not a licensed commercial product). Plus, since I rely on archived loads for recovery, I may have to convert them if we add nodes to the database, and I'd like that also done on an ETL server. Note - I believe DataStage also performs this task on the ETL server rather than through DB2.
Can anyone suggest how our python ETL process can efficiently use the same hashing algorithm and mapping tables that DB2 will use? And other tips?
Thanks
First of all:
You do not need to pre-split the data inside your ETL process. The LOAD utility will handle splitting the data for you. Your python process can either write the data to load to a flat file or write directly to a pipe (that the LOAD utility reads from). In almost every case, it is easier to let the database handle partitioning the data for you.
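For example, the Python side can stream delimited rows into a named pipe and let the LOAD utility read the other end (a sketch only; the pipe path and delimiter are assumptions, and a matching LOAD command reading from that pipe has to be started on the DB2 side):

```python
# Stream delimited rows into a named pipe; the DB2 LOAD utility reads the other end.
import os

PIPE = "/tmp/etl_pipe"  # placeholder path

if not os.path.exists(PIPE):
    os.mkfifo(PIPE)


def feed(rows):
    # rows is a placeholder iterable of tuples produced by the ETL transform.
    with open(PIPE, "w", encoding="utf-8") as pipe:  # blocks until LOAD opens the pipe
        for row in rows:
            pipe.write(",".join(str(col) for col in row) + "\n")
```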
The InfoCenter comment about the splitters taking up CPU cycles is probably not something you need to worry about. This generally applies only in extreme situations, where there are many more database partitions (i.e., when you need to have multiple processes splitting the data) and when CPU utilization on the database nodes is very high.
From a LOAD perspective, the amount of time you'll save by having pre-split data is negligible. The limiting factor when loading data is writing the data out to disk – not partitioning it. If reloading data is your primary method of recovery, then I wouldn't worry too much about this.
If all of this does not convince you and you really want to go down the path of having your ETL process split the data, DB2 does provide an API (in C) that applications can call to handle this: db2GetDistMap() and db2GetRowPartNum(). You may be able to write a native python module to handle this.
These are most useful in cases where an application is using SQL to INSERT rows into the table (as opposed to using the LOAD utility), and spawns multiple threads to write data to each partition independently (i.e., each thread is doing the transformation and loading in parallel). If you can't parallelize the transformation portion, then don't bother with this.
Obviously, there are a lot of variables, so YMMV.

performance of web app with high number of inserts

What is the best IO strategy for a high traffic web app that logs user behaviour on a website and where ALL of the traffic will result in an IO write? Would it be to write to a file and overnight do batch inserts to the database? Or to simply do an INSERT (or INSERT DELAYED) per request? I understand that to consider this problem properly much more detail about the architecture would be needed, but a nudge in the right direction would be much appreciated.
By writing to the DB, you allow the RDBMS to decide when disk IO should happen - if you have enough RAM, for instance, it may be effectively caching all those inserts in memory, writing them to disk when there's a lighter load, or on some other scheduling mechanism.
Writing directly to the filesystem is going to be more bandwidth-limited than writing to a DB that then writes to disk, precisely because the DB can - theoretically - write in more efficient sizes, contiguously, and at "convenient" times.
I've done this on a recent app. Inserts are generally pretty cheap (especially if you put them into an unindexed hopper table). I think that you have a couple of options.
As above, write the data to a hopper table; if whatever application framework you use supports batched inserts, then use them - it will speed things up. Then every x requests, do a merge (via a stored-procedure call) into a master table, where you can normalize off data that has low entropy. For example, if you are storing the HTTP method of the request (GET/POST/etc.), this can only ever be one of a few values and is better stored as an int, giving improved I/O and query performance. Your master tables can also be indexed as you normally would.
If this isn't good enough, then you can stream the requests to files on the local file system, and then have an out-of-band process (i.e. separate from the web server) pick these files up and BCP them into the database. This will be at the expense of more moving parts and, potentially, a greater delay between receiving requests and them finding their way into the database.
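A rough sketch of the hopper-table idea, using Python and SQLite syntax purely for illustration (table names, the merge statement, and the HTTP-method lookup are all invented):

```python
import sqlite3

conn = sqlite3.connect("weblog.db")
conn.executescript("""
    CREATE TABLE IF NOT EXISTS hopper (ts REAL, method INTEGER, path TEXT);  -- unindexed
    CREATE TABLE IF NOT EXISTS requests (ts REAL, method INTEGER, path TEXT);
    CREATE INDEX IF NOT EXISTS idx_requests_ts ON requests (ts);
""")

HTTP_METHODS = {"GET": 1, "POST": 2, "PUT": 3, "DELETE": 4}  # low-entropy column as int


def log_batch(rows):
    """rows: iterable of (timestamp, method_string, path) collected per request."""
    conn.executemany(
        "INSERT INTO hopper VALUES (?, ?, ?)",
        [(ts, HTTP_METHODS.get(m, 0), path) for ts, m, path in rows],
    )
    conn.commit()


def merge_hopper():
    """Run every N requests (or on a timer): move rows into the indexed master table."""
    with conn:
        conn.execute("INSERT INTO requests SELECT * FROM hopper")
        conn.execute("DELETE FROM hopper")
```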
Hope this helps, Ace
When working with an RDBMS, the most important thing is optimizing write operations to disk. Something somewhere has to flush() to persistent storage (disk drives) to complete each transaction, which is VERY expensive and time consuming. Minimizing the number of transactions and maximizing the number of sequentially written pages is key to performance.
If you are doing inserts, sending them in bulk within a single transaction will lead to more efficient write behavior on disk by reducing the number of flush operations.
My recommendation is to queue the messages and periodically - say every 15 seconds or so - start a transaction, send all queued inserts, and commit the transaction.
If your database supports sending multiple log entries in a single request/command, doing so can have a noticeable effect on performance when there is some network latency between the application and the RDBMS, by reducing the number of round trips.
Some systems support bulk operations (BCP), providing a very efficient method for bulk loading data that can be faster than the use of "insert" queries.
Sparing use of indexes and selection of sequential primary keys help.
Making sure multiple instances either coordinate write operations or write to separate tables can improve throughput in some instances by reducing concurrency management overhead in the database.
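A minimal sketch of that queue-and-flush idea, using Python and its built-in sqlite3 purely as a stand-in for whatever driver/RDBMS you actually use (names and the 15-second interval are illustrative):

```python
import queue
import sqlite3
import threading
import time

pending = queue.Queue()


def log_event(ts, payload):
    """Called from request handlers: enqueue only, never touch the DB directly."""
    pending.put((ts, payload))


def flusher(interval=15.0):
    conn = sqlite3.connect("events.db")
    conn.execute("CREATE TABLE IF NOT EXISTS events (ts REAL, payload TEXT)")
    while True:
        time.sleep(interval)
        batch = []
        while not pending.empty():
            batch.append(pending.get())
        if batch:
            with conn:  # one transaction per flush -> far fewer fsyncs
                conn.executemany("INSERT INTO events VALUES (?, ?)", batch)


threading.Thread(target=flusher, daemon=True).start()
```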
Write to a file and then load it later. It's safer to be coupled to the filesystem than to a database, and the database is more likely to fail than your filesystem.
The only problem with using the filesystem to back writes is how you extend the log.
A poorly implemented logger will have to load the entire file just to append a line to the end of it. I witnessed one such case where someone logged to a file in reverse order, so that the most recent entries came first; that required loading the entire file into memory, writing the one new line out to a new file, and then writing the original file contents after it.
This log eventually exceeded PHP's memory limit and, as such, bottlenecked the entire project.
If you do it properly, however, filesystem writes go straight into the system cache and are only flushed to disk every 10 or more seconds (depending on FS/OS settings), which makes the performance hit negligible - not much more than writing to memory.
Oh yes, and whatever system you use, you'll need to think about concurrent log appending. If you use a database, a high insert load can cause deadlock conditions, and with files, you need to make sure that two concurrent writes don't clobber each other.
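On the file side, the usual trick for concurrent appenders is to open with O_APPEND and emit one whole line per write call; on most local filesystems small appends like this land atomically at the end of the file (a sketch under that assumption, with an invented JSON line format):

```python
import json
import os
import time


def append_log(path, record):
    # One write() call per line; O_APPEND makes each write land at the current end.
    line = json.dumps({"ts": time.time(), **record}) + "\n"
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_APPEND, 0o644)
    try:
        os.write(fd, line.encode("utf-8"))
    finally:
        os.close(fd)
```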
The insertions will generally impact the (read/update) performance of the table. Perhaps you can do the writes to another table (or database) and have a batch job that processes this data. The advantage of the database approach is that you can query/report on the data, and all of it lives logically in a relational database, which may be easier to work with. Depending on how the data is logged to a text file, you could open up more possibilities for corruption.
My instinct would be to only use the database, avoiding direct filesystem IO at all costs. If you need to produce some filesystem artifact, then I'd use a nightly cron job (or something like it) to read DB records and write to the filesystem.
ALSO: Only use "INSERT DELAYED" in cases where you don't mind losing a few records in the event of a server crash or restart, because some records almost certainly WILL be lost.
There's an easier way to answer this. Profile the performance of the two solutions.
Create one page that performs the DB insert, another that writes to a file, and another that does neither. Otherwise, the pages should be identical. Hit each page with a load tester (JMeter for example) and see what the performance impact is.
If you don't like the performance numbers, you can easily tweak each page to try and optimize performance a bit or try new solutions... everything from using MSMQ backed by MSSQL to delayed inserts to shared logs to individual files with a DB background worker.
That will give you a solid basis to make this decision rather than depending on speculation from others. It may turn out that none of the proposed solutions are viable or that all of them are viable...
Hello from left field, but no one asked (and you didn't specify): how important is it that you never, ever lose data?
If speed is the problem, leave it all in memory, and dump to the database in batches.
Do you log more than what would be available in the web server logs? It can be quite a lot - see the Apache 2.0 log information, for example.
If not, then you can use the good old technique of buffering followed by batch writing. You can buffer in different places: in memory on your server, then batch insert into the DB or batch write to a file every X requests and/or every X seconds.
If you use MySQL, there are several options/techniques to load a lot of data efficiently: LOAD DATA INFILE, INSERT DELAYED and so on.
Lots of details on insertion speeds.
Some other tips include:
splitting data into different tables per period of time (e.g. per day or per week)
using multiple db connections
using multiple db servers
have good hardware (SSD/multicore)
Depending on the scale and resources available, it is possible to go different ways. So if you give more details, I can give more specific advice.
If you do not need to wait for a response such as a generated ID, you may want to adopt an asynchronous strategy using either a message queue or a thread manager.
