How to efficiently store hundreds of thousands of documents? - database

I'm working on a system that will need to store a lot of documents (PDFs, Word files, etc.). I'm using Solr/Lucene to search for relevant information extracted from those documents, but I also need a place to store the original files so that they can be opened/downloaded by the users.
I was thinking about several possibilities:
file system - probably not that good an idea for storing 1M documents
SQL database - but I won't need most of its relational features, as I only need to store the binary document and its id, so this might not be the fastest solution
NoSQL database - I don't have any experience with them, so I'm not sure if they are any good either; there are also many of them, so I don't know which one to pick
The storage I'm looking for should be:
fast
scalable
open-source (not crucial but nice to have)
Can you recommend what the best way of storing those files would be, in your opinion?

A filesystem -- as the name suggests -- is designed and optimised to store large numbers of files in an efficient and scalable way.

You can follow Facebook's example, as it stores a lot of files (15 billion photos):
They initially started with NFS shares served by commercial storage appliances.
Then they moved to their own HTTP file server implementation, called Haystack.
Here is a Facebook note if you want to learn more: http://www.facebook.com/note.php?note_id=76191543919
Regarding the NFS share: keep in mind that NFS shares usually limit the number of files in one folder for performance reasons. (This could be a bit counterintuitive if you assume that all recent file systems use B-trees to store their structure.) So if you are using commercial NFS shares (like NetApp), you will likely need to keep files in multiple folders.
You can do that if you have any kind of id for your files. Just divide its ASCII representation into groups of a few characters and make a folder for each group.
For example, we use integers for ids, so the file with id 1234567891 is stored as storage/0012/3456/7891.
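A minimal sketch of that scheme in Python (the padding width and group size here are assumptions inferred from the example above):

    import os

    def path_for_id(file_id, root="storage", width=12, group=4):
        # Zero-pad the integer id, then split it into fixed-size groups
        # so that no single folder accumulates too many entries.
        digits = str(file_id).zfill(width)
        parts = [digits[i:i + group] for i in range(0, width, group)]
        return os.path.join(root, *parts)

    # id 1234567891 -> storage/0012/3456/7891
    print(path_for_id(1234567891))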
Hope that helps.

In my opinion...
I would store files compressed onto disk (file system) and use a database to keep track of them.
and possibly use SQLite if this is its only job.
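A rough sketch of that idea, assuming gzip for the compression and a single made-up tracking table:

    import gzip
    import os
    import shutil
    import sqlite3

    db = sqlite3.connect("documents.db")
    db.execute("""CREATE TABLE IF NOT EXISTS documents (
                      id INTEGER PRIMARY KEY,
                      original_name TEXT,
                      stored_path TEXT)""")

    def store_document(src_path, storage_dir="docstore"):
        # Record the document, then compress it onto disk and save the path.
        os.makedirs(storage_dir, exist_ok=True)
        cur = db.execute("INSERT INTO documents (original_name, stored_path) VALUES (?, '')",
                         (os.path.basename(src_path),))
        doc_id = cur.lastrowid
        dest = os.path.join(storage_dir, "%d.gz" % doc_id)
        with open(src_path, "rb") as fin, gzip.open(dest, "wb") as fout:
            shutil.copyfileobj(fin, fout)
        db.execute("UPDATE documents SET stored_path = ? WHERE id = ?", (dest, doc_id))
        db.commit()
        return doc_id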

File system: looking at the big picture, the DBMS ends up using the file system anyway, and the file system is dedicated to keeping files, so it already has the relevant optimizations (as LukeH mentioned).

Related

Store a large amount of data

My question is: what is the best way to store a lot of files on a server? I did some searching, and what I know so far is that it's a bad idea to store all files in a single directory. I also know that some filesystems have a subdirectory limit, so it is not a good idea to create a new directory for every file. I also read about an approach that hashes the file and builds the storage path from that string, but I think if I do this I will end up with a lot of subdirectories, which is maybe not a perfect solution.
There are tons of storage options available for storing a lot of data. The right solution often depends on your specific needs, so if you are looking for a cheap and effective solution I would suggest using RAID (Redundant Array of Independent Disks).
1) RAID is a way of storing your data across multiple disks so that if something happens to one hard drive, none of your data will be lost. You can actually build your own server that uses RAID to back up your data files, e.g. RAID-5 with onboard controllers (or using a proper dedicated controller).
2) unRAID is no longer confined to the capabilities of a single OS. It lets you partition system resources, enabling you to store and protect data as well as run applications in isolated environments.
3) If you want to store a large number of files, one approach is to not let any file grow beyond 3-5 MB; the moment it does, create a new file with the next revision number. That way you can keep the chain going. The moment a folder's size crosses 1 GB, create a new folder with the next revision number, and make sure the disk is NTFS-partitioned and has enough space for your requirements.
Hope it helps.

How are huge files stored in a database?

I was just wondering how exactly huge files are stored in databases. Most BLOBs are limited to 1 GB as far as I know, but if you take YouTube for example, they have many full-HD videos with over an hour of running time (I think that's a bit larger than 1 GB).
Are they using some kind of special database, is there another datatype I've never heard of, or are they just using a simple method like splitting the files?
If they use, let's say, a method where they split and rearrange the bits and bytes, how can the end user watch a video without noticing?
It's just a question out of pure curiosity but I would be happy if you could answer it.
It is not really the best idea to store files in a database. YouTube and other websites are web applications that store files in file systems. Databases are then only needed to store the information required to locate the files on the file system before serving them to users.
They could be stored on disk, with a DB holding only the paths. I'm not sure what you're asking.
Why do you want to store it as a BLOB? You can just store it as a file (FLV or whatever) and stream it from there.

How to store data crawled from a website

I want to crawl a website and store the content on my computer for later analysis. However, my OS's file system has a limit on the number of subdirectories, meaning storing the original folder structure is not going to work.
Suggestions?
Map the URL to some filename so it can be stored flatly? Or just shove it into a database like SQLite to avoid file system limitations?
It all depends on the actual amount of text and/or web pages you intend to crawl. A generic solution is probably to:
use an RDBMS (SQL server of sorts) to store the meta-data associated with the pages.
Such info would be stored in a simple table (maybe with a very few support/related tables) containing fields such as Url, FileName (where you'll be saving it), Offset in File where stored (the idea is to keep several pages in the same file), date of crawl, size, and a few other fields.
use a flat file storage for the text proper.
The file name and path matter little (i.e. the path may be shallow and the name cryptic/automatically generated). This name/path is stored in the meta-data. Several crawled pages are stored in the same flat file, to reduce the overhead of having the OS manage too many files. The text itself may be compressed (ZIP etc.) on a per-page basis (there's little extra compression gain to be had by compressing bigger chunks), allowing per-page handling (no need to decompress all the text before it!). The decision to use compression depends on various factors; the compression/decompression overhead is typically relatively minimal, CPU-wise, and offers a nice saving on HD space and, generally, on disk I/O.
The advantage of this approach is that the DBMS remains small, but is available for SQL-driven queries (of an ad-hoc or programmed nature) to search on various criteria. There is typically little gain (and a lot of headache) associated with storing many/big files within the SQL server itself. Furthermore as each page gets processed / analyzed, additional meta-data (such as say title, language, most repeated 5 words, whatever) can be added to the database.
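A rough sketch of that layout in Python, with made-up table and file names, and zlib standing in for whatever per-page compression you prefer:

    import os
    import sqlite3
    import time
    import zlib

    db = sqlite3.connect("crawl_meta.db")
    db.execute("""CREATE TABLE IF NOT EXISTS pages (
                      url TEXT PRIMARY KEY,
                      file_name TEXT,
                      offset INTEGER,
                      length INTEGER,
                      crawled_at REAL)""")

    def store_page(url, html, data_file="pages_0001.dat"):
        # Compress the page on its own, append it to a shared flat file,
        # and record the offset/length in the metadata table.
        blob = zlib.compress(html.encode("utf-8"))
        with open(data_file, "ab") as f:
            f.seek(0, os.SEEK_END)
            offset = f.tell()
            f.write(blob)
        db.execute("INSERT OR REPLACE INTO pages VALUES (?, ?, ?, ?, ?)",
                   (url, data_file, offset, len(blob), time.time()))
        db.commit()

    def load_page(url):
        # Look up where the page lives, then read back just that slice.
        row = db.execute("SELECT file_name, offset, length FROM pages WHERE url = ?",
                         (url,)).fetchone()
        if row is None:
            return None
        file_name, offset, length = row
        with open(file_name, "rb") as f:
            f.seek(offset)
            return zlib.decompress(f.read(length)).decode("utf-8")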
Having it in a database will help you search through the content and page metadata. You can also try in-memory databases or memcached-like storage to speed it up.
Depending on the processing power of the PC which will do the data mining, you could add the scraped data to a compressed archive like 7zip, zip, or a tarball. You'll be able to keep the directory structure intact and may end up saving a great deal of disk space - if that happens to be a concern.
On the other hand, an RDBMS like SQLite will balloon out really fast but won't mind ridiculously long directory hierarchies.

What is the best way to associate a file with a piece of data?

I have an application that creates records in a table (rocket science, I know). Users want to associate files (.doc, .xls, .pdf, etc...) to a single record in the table.
Should I store the contents of the file(s) in the database? Wouldn't this bloat the database?
Should I store the file(s) on a file server, and store the path(s) in the database?
What is the best way to do this?
I think you've accurately captured the two most popular approaches to solving this problem. There are pros and cons to each:
Store the Files in the DB
Most RDBMSs have support for storing BLOBs (binary file data: .doc, .xls, etc.) in a DB, so you're not breaking new ground here.
Pros
Simplifies backup of the data: back up the DB and you have all the files.
The linkage between the metadata (the other columns ABOUT the files) and the file itself is solid and built into the DB, so it's a one-stop shop for getting data about your files.
Cons
Backups can quickly blossom into a HUGE nightmare as you're storing all of that binary data with your database. You could alleviate some of the headaches by keeping the files in a separate DB.
Without the DB or an interface to the DB, there's no easy way to get to the file content to modify or update it.
In general, it's harder to code and coordinate the upload and storage of data to a DB than to the filesystem.
Store the Files on the FileSystem
This approach is pretty simple: you store the files themselves in the filesystem, and your database stores a reference to the file's location (as well as all of the metadata about the file). One helpful hint here is to standardize your naming scheme for the files on disk (don't use the file name that the user gives you; create one of your own and store theirs in the DB).
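As a sketch of that hint (the table name and columns below are made up, not from the original answer): generate your own on-disk name, and keep the user's original name only in the database.

    import os
    import shutil
    import sqlite3
    import uuid

    db = sqlite3.connect("attachments.db")
    db.execute("""CREATE TABLE IF NOT EXISTS attachments (
                      id INTEGER PRIMARY KEY,
                      record_id INTEGER,     -- the row this file is attached to
                      original_name TEXT,    -- the name the user uploaded
                      stored_path TEXT       -- where the file actually lives
                  )""")

    def attach_file(record_id, uploaded_path, storage_root="files"):
        # Never trust the user-supplied name for the on-disk copy:
        # generate our own and record the original name only in the DB.
        os.makedirs(storage_root, exist_ok=True)
        stored_path = os.path.join(storage_root, uuid.uuid4().hex)
        shutil.copyfile(uploaded_path, stored_path)
        db.execute("INSERT INTO attachments (record_id, original_name, stored_path) "
                   "VALUES (?, ?, ?)",
                   (record_id, os.path.basename(uploaded_path), stored_path))
        db.commit()
        return stored_path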
Pros
Keeps your file data cleanly separated from the database.
Easy to maintain the files themselves: if you need to change out a file or update it, you do so in the file system itself. You can just as easily do it from the application as well, via a new upload.
Cons
If you're not careful, your database about the files can get out of sync with the files themselves.
Security can be an issue (again if you're careless) depending on where you store the files and whether or not that filesystem is available to the public (via the web I'm assuming here).
At the end of the day, we chose to go the filesystem route. It was easier to implement quickly, easy on the backup, and pretty secure once we locked down any holes and streamed the file out (instead of just serving it directly from the filesystem). It's been operational in pretty much the same format for about 6 years in two different government applications.
J
How well you can store binaries, or BLOBs, in a database will be highly dependent on the DBMS you are using.
If you store binaries on the file system, you need to consider what happens in the case of file name collision, where you try and store two different files with the same name - and if this is a valid operation or not. So, along with the reference to where the file lives on the file system, you may also need to store the original file name.
Also, if you are storing a large amount of files, be aware of possible performance hits of storing all your files in one folder. (You didn't specify your operating system, but you might want to look at this question for NTFS, or this reference for ext3.)
We had a system that had to store several thousand files on a file system where we were concerned about the number of files in any one folder (it may have been FAT32, I think).
Our system would take a new file to be added, and generate an MD5 checksum for it (in hex). It would take the first two characters and make that the first folder, the next two characters and make that the second folder as a sub-folder of the first folder, and then the next two as the third folder as a sub-folder of the second folder.
That way, we ended up with a three-level set of folders, and the files were reasonably well scattered so no one folder filled up too much.
If we still had a file name collision after that, then we would just add "_n" to the file name (before the extension), where n was just an incrementing number until we got a name that didn't exist (and even then, I think we did atomic file creation, just to be sure).
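Roughly, that scheme could look like the sketch below (the collision loop here is not atomic; as noted above, the real system used atomic file creation to be safe):

    import hashlib
    import os
    import shutil

    def store_file(src_path, root="filestore"):
        # Hash the file contents and use hex pairs for three folder levels,
        # so files scatter evenly instead of piling up in one directory.
        with open(src_path, "rb") as f:
            digest = hashlib.md5(f.read()).hexdigest()
        folder = os.path.join(root, digest[0:2], digest[2:4], digest[4:6])
        os.makedirs(folder, exist_ok=True)

        # Keep the original name, adding "_n" before the extension on collision.
        base, ext = os.path.splitext(os.path.basename(src_path))
        candidate = os.path.join(folder, base + ext)
        n = 1
        while os.path.exists(candidate):
            candidate = os.path.join(folder, "%s_%d%s" % (base, n, ext))
            n += 1
        shutil.copyfile(src_path, candidate)
        return candidate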
Of course, then you need tools to do the occasional comparison of the database records to the file system, flagging any missing files and cleaning up any orphaned ones where the database record no longer exists.
You should only store files in the database if you're reasonably sure you know that the sizes of those files aren't going to get out of hand.
I use our database to store small banner images, whose size I always know in advance. Your database will store a pointer to the data inside a row and then plunk the data itself somewhere else, so it doesn't necessarily impact speed.
If there are too many unknowns though, using the filesystem is the safer route.
Use the database for data and the filesystem for files. Simply store the file path in the database.
In addition, your web server can probably serve files more efficiently than your application code can when it has to stream the file from the DB back to the client.
Store the paths in the database. This keeps your database from bloating, and also allows you to separately back up the external files. You can also relocate them more easily; just move them to a new location and then UPDATE the database.
One additional thing to keep in mind: In order to use most of the filetypes you mentioned, you'll end up having to:
Query the database to get the file contents in a blob
Write the blob data to a disk file
Launch an application to open/edit/whatever the file you just created
Read the file back in from disk to a blob
Update the database with the new content
All that as opposed to:
Read the file path from the DB
Launch the app to open/edit/whatever the file
I prefer the second set of steps, myself.
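For what it's worth, the second set of steps boils down to something like this sketch (reusing the hypothetical attachments table from the earlier example; the "open" command varies by platform):

    import os
    import sqlite3
    import subprocess
    import sys

    def open_attachment(attachment_id, db_path="attachments.db"):
        # Step 1: read the file path from the DB.
        db = sqlite3.connect(db_path)
        row = db.execute("SELECT stored_path FROM attachments WHERE id = ?",
                         (attachment_id,)).fetchone()
        if row is None:
            raise KeyError("no such attachment: %r" % attachment_id)
        path = row[0]

        # Step 2: hand the file to the platform's default application.
        if sys.platform.startswith("win"):
            os.startfile(path)                              # Windows
        elif sys.platform == "darwin":
            subprocess.run(["open", path], check=True)      # macOS
        else:
            subprocess.run(["xdg-open", path], check=True)  # most Linux desktops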
The best solution would be to put the documents in the database. This simplifies all the linking, backup, and restore issues - but it might not fit the basic 'we just want to point to documents on our file server' mindset the users may have.
It all depends (in the end) on actual user requirements.
But my recommendation would be to put it all together in the database so you retain control of the documents. Leaving them in the file system leaves them open to being deleted, moved, ACL'd, or any one of hundreds of other changes that could render your links to them pointless or even damaging.
Database bloat is only an issue if you haven't sized for it. Do some tests and see what effects it has. 100GB of files on a disk is probably just as big as the same files in a database.
I would try to store it all in the database. I haven't done it myself, but if you don't, there is a small risk that file names get out of sync with the files on disk. Then you have a big problem.
And now for the completely off-the-wall suggestion: you could consider storing the binaries as attachments in a CouchDB document database. This would avoid the file name collision issues, as you would use a generated UID as each document ID (which is what you would store in your RDBMS), and the actual attachment's file name is kept with the document.
If you are building a web-based system, then the fact that CouchDB uses REST over HTTP could also be leveraged. And there are also the replication facilities, which could prove useful.
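A rough sketch of what that could look like against CouchDB's HTTP API, using the requests library (the server URL, database name, and attachment name are placeholders):

    import uuid
    import requests

    COUCH = "http://localhost:5984"
    DB = "attachments"

    def store_in_couchdb(file_path, metadata):
        # Create a document under a generated UID, then PUT the file
        # as an attachment on that document.
        doc_id = uuid.uuid4().hex
        resp = requests.put("%s/%s/%s" % (COUCH, DB, doc_id), json=metadata)
        resp.raise_for_status()
        rev = resp.json()["rev"]
        with open(file_path, "rb") as f:
            att = requests.put(
                "%s/%s/%s/original" % (COUCH, DB, doc_id),
                params={"rev": rev},
                data=f,
                headers={"Content-Type": "application/octet-stream"},
            )
        att.raise_for_status()
        return doc_id  # this UID is what you keep in your RDBMS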
Of course, CouchDB is still in incubation, although there are some who are already using it 'in the wild'.
I would store the files in the filesystem. But to keep the linking to the files robust, i.e. to avoid the cons of this option, I would generate some hash for each file, and then use the hash to retrieve it from the filesystem, without relying on the filenames and/or their path.
I don't know the details, but I know that this can be done, because this is the way BibDesk works (a BibTeX app for the Mac). It is wonderful software, used to keep track of the PDF attachments to the database of scientific literature that it manages.

BLOB Storage - 100+ GB, MySQL, SQLite, or PostgreSQL + Python

I have an idea for a simple application which will monitor a group of folders and index any files it finds. A GUI will allow me to quickly tag new files and move them into a single database for storage, and also provide an easy mechanism for querying the DB by tag, name, file type, and date. At the moment I have about 100+ GB of files on a couple of removable hard drives; the database will be at least that big. If possible, I would like to support full-text search of the embedded binary and text documents. This will be a single-user application.
Not trying to start a DB war, but what open source DB is going to work best for me? I am pretty sure SQLite is off the table, but I could be wrong.
I'm still researching this option for one of my own projects, but CouchDB may be worth a look.
Why store the files in the database at all? Simply store your meta-data and a filename. If you need to copy them to a new location for some reason, just do that as a file system copy.
Once you remove the file contents then any competent database will be able to handle the meta-data for a few hundred thousand files.
My preference would be to store the document with the metadata. One reason is relational integrity: you can't easily move or modify the files without the action being brokered by the DB. I am sure I can handle these problems, but it isn't as clean as I would like, and my experience has been that most vendors can handle huge amounts of binary data in the database these days. I guess I was wondering if PostgreSQL or MySQL have any obvious advantages in these areas; I am primarily familiar with Oracle. Anyway, thanks for the response. If the DB knows where the external file is, it will also be easy to bring the file in at a later date if I want. Another aspect of the question was whether either database is easier to work with when using Python. I'm assuming that is a wash.
I always hate to answer "don't", but you'd be better off indexing with something like Lucene (PyLucene). That and storing the paths in the database rather than the file contents is almost always recommended.
To add to that, none of those database engines will store LOBs in a separate dataspace (they'll be embedded in the table's data space), so any of those engines should perform nearly equally well (well, except SQLite). You need to move to Informix, DB2, SQL Server, or others to get that kind of binary object handling.
Pretty much any of them would work (even though SQLite wasn't meant to be used in a concurrent multi-user environment, which could be a problem...) since you don't want to index the actual contents of the files.
The only limiting factor is the maximum "packet" size of the given DB (by packet I'm referring to a query/response). Usually this limit is around 2 MB, meaning that your files must be smaller than 2 MB. Of course you could increase this limit, but the whole process is rather inefficient, since, for example, to insert a file you would have to:
Read the entire file into memory
Transform the file into a query (which usually means hex-encoding it, thus doubling the size from the start)
Execute the generated query (which itself means, for the database, that it has to parse it)
I would go with a simple DB and the associated files stored using a naming convention which makes them easy to find (for example based on the primary key). Of course this design is not "pure", but it will perform much better and is also easier to use.
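A minimal sketch of that kind of design, assuming SQLite for the metadata and the primary key as the on-disk file name (table and folder names are made up):

    import os
    import shutil
    import sqlite3

    db = sqlite3.connect("filemeta.db")
    db.execute("""CREATE TABLE IF NOT EXISTS files (
                      id INTEGER PRIMARY KEY,
                      name TEXT,
                      tags TEXT,
                      added TEXT DEFAULT CURRENT_TIMESTAMP)""")

    def add_file(src_path, tags, root="blobs"):
        # Insert the metadata first, then name the on-disk copy after the
        # primary key so it is trivial to find again.
        cur = db.execute("INSERT INTO files (name, tags) VALUES (?, ?)",
                         (os.path.basename(src_path), tags))
        db.commit()
        os.makedirs(root, exist_ok=True)
        shutil.copyfile(src_path, os.path.join(root, str(cur.lastrowid)))
        return cur.lastrowid

    def file_path(file_id, root="blobs"):
        # The naming convention makes lookup a pure string operation.
        return os.path.join(root, str(file_id))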
Why are you wasting time emulating something that the filesystem should be able to handle? More storage + grep is your answer.
