I am implementing an on-disk B-tree, and I have a question about creating new pages. According to the little information I have found, when I need to add a new page I should append a new block to the B-tree file and then read it back through the buffer manager. I read the code of some databases and found that they basically all append a new block to the file before writing the corresponding buffer. I am wondering whether I can defer writing to disk when creating new pages.
Can I just keep a next_page_id counter for each file in memory, create new pages in the buffer pool, write to those pages, and append them to the file in page_id order when flushing? Are there any problems with this approach?
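Roughly what I have in mind, as a Python sketch (BufferPool, PAGE_SIZE and the file layout are just placeholders, not any particular database's design):

import os

PAGE_SIZE = 4096  # assumed fixed page size

class BufferPool:
    def __init__(self, path):
        self.path = path
        self.pages = {}  # page_id -> bytearray (new/dirty pages held only in memory)
        size = os.path.getsize(path) if os.path.exists(path) else 0
        self.next_page_id = size // PAGE_SIZE  # tracked in memory, not on disk

    def new_page(self):
        # hand out an id and a buffer without touching the file
        page_id = self.next_page_id
        self.next_page_id += 1
        self.pages[page_id] = bytearray(PAGE_SIZE)
        return page_id

    def flush(self):
        # write pages in page_id order so the file grows without holes;
        # a crash before this point loses the new pages and their ids
        mode = "r+b" if os.path.exists(self.path) else "w+b"
        with open(self.path, mode) as f:
            for page_id in sorted(self.pages):
                f.seek(page_id * PAGE_SIZE)
                f.write(self.pages[page_id])
        self.pages.clear()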
As far as I can tell, it is not really possible to "update" a single portion of a file; one must overwrite the entire thing or simply append. A database, however, usually has update functionality. How would one design a database that updates in place rather than appending (which causes tombstones)?
Files can be overwritten; it can just be a bit of a tedious process. You have to know the byte offset of whatever you want to update, seek the file pointer to that offset, and then start writing.
Databases are easier to update because they are a combination of many data structures (linked lists, trees, heaps, etc.) that each hold specific data and can be traversed. With these data structures you just need to know which node you need to update, navigate to it, and overwrite the data in place.
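As a minimal Python sketch of that seek-and-overwrite idea (the fixed record size is assumed purely for illustration):

RECORD_SIZE = 32  # pretend fixed-width records

def overwrite_record(path, index, data):
    if len(data) != RECORD_SIZE:
        raise ValueError("record must be exactly RECORD_SIZE bytes")
    with open(path, "r+b") as f:      # r+b opens for update without truncating
        f.seek(index * RECORD_SIZE)   # jump to the start of record `index`
        f.write(data)                 # only this slice of the file is rewritten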
I'm working on a piece of software that stores files in a file system, as well as references to those files in a database. Querying the uploaded files can thus be done in the database without having to access the file system. From what I've read in other posts, most people say it's better to use a file system for file storage rather than storing binary data directly in a database as BLOBs.
So now I'm trying to understand the best way to set this up so that both the database and the file system stay in sync and I don't end up with references to files that don't exist, or files taking up space in the file system that aren't referenced. Here are a couple of options that I'm considering.
Option 1: Add File Reference First
//Adds a reference to a file in the database
database.AddFileRef("newfile.txt");
//Stores the file in the file system
fileStorage.SaveFile("newfile.txt",dataStream);
This option would be problematic because the reference to the file is added before the actual file, so another user may end up trying to download a file before it is actually stored in the system. Although, since the reference to the file is created beforehand, the primary key value could be used when storing the file.
Option 2: Store File First
//Stores the file
fileStorage.SaveFile("newfile.txt",dataStream);
//Adds a reference to the file in the database
//fails if the referenced file does not exist in the file system
database.AddFileRef("newfile.txt");
This option is better, but would make it possible for someone to upload a file to the system that is never referenced. This could be remedied, though, with a "Purge" or "CleanUpFileSystem" function that deletes any unreferenced files. This option also wouldn't allow the file to be stored using the primary key value from the database.
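The purge I have in mind would be something like this rough Python sketch (database.AllRefs(), ref.name() and the grace period are made up for illustration):

import os, time

GRACE_PERIOD = 3600  # seconds; skip files that might still be mid-upload

def purge_unreferenced(storage_dir, database):
    referenced = {ref.name() for ref in database.AllRefs()}  # hypothetical API
    for name in os.listdir(storage_dir):
        path = os.path.join(storage_dir, name)
        if name in referenced:
            continue
        if time.time() - os.path.getmtime(path) < GRACE_PERIOD:
            continue   # too new, its reference may still be on the way
        os.remove(path)  # orphaned file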
Option 3: Pending Status
//Adds a pending file reference to database
//pending files would be ignored by others
database.AddFileRef("newfile.txt");
//Stores the file, fails if there is no
//matching pending file reference in the database
fileStorage.SaveFile("newfile.txt",dataStream);
//marks the file reference as committed after file is uploaded
database.CommitFileRef("newfile.txt");
This option allows the primary key to be created before the file is uploaded, but also prevents other users from obtaining a reference to a file before it is uploaded. However, it would be possible for a file to never be uploaded, leaving its file reference stuck in the pending state. Still, it would be fairly trivial to purge stale pending references from the database.
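Roughly, the flow for option 3 would look like the sketch below; the DeleteFileRef call is hypothetical and only there to show the cleanup:

def upload(name, data_stream, database, file_storage):
    database.AddFileRef(name)                     # 1. create the reference as "pending"
    try:
        file_storage.SaveFile(name, data_stream)  # 2. write the bytes
        database.CommitFileRef(name)              # 3. make the reference visible
    except Exception:
        database.DeleteFileRef(name)              # best-effort cleanup of the pending row
        raise

# a scheduled job can also delete pending references older than some cutoff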
I'm leaning toward option 2, because it's simple, and I don't have to worry about users trying to request files before they are uploaded. Storage is cheap, so it's not the end of the world if I end up with some unreferenced files taking up space. But this also seems like a common problem, and I'd like to hear how others have solved it or other considerations I should be making.
I want to propose another option. Make the filename always equal to the hash of its contents. Then you can safely write any content at all times provided that you do it before you add a reference to it elsewhere.
As contents never change there is never a synchronization problem.
This gives you deduplication for free. Deletes become harder though. I recommend a nightly garbage collection process.
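A rough Python sketch of what I mean (SHA-256 here; store_blob and the flat directory layout are just one possible shape):

import hashlib, os

def store_blob(storage_dir, data):
    digest = hashlib.sha256(data).hexdigest()   # filename == hash of the contents
    path = os.path.join(storage_dir, digest)
    if not os.path.exists(path):                # identical content is stored only once
        tmp = path + ".tmp"
        with open(tmp, "wb") as f:
            f.write(data)
        os.replace(tmp, path)                   # atomic rename, no partially written blob
    return digest                               # store this digest as the reference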
What is the real use of the database? If it's just a list of files, I don't think you need it at all, and not having it saves you the hassle of synchronising.
If you are convinced you need it, then options 1 and 2 are completely identical from a technical point of view: the two resources can get out of sync and you need a regular process to reconcile them. So here you should choose the option that suits the application best.
Option 3 has no advantage whatsoever, but uses more resources.
Note that using hashes, as suggested by usr, bears a theoretical risk of collision. And you'd also need a periodic reconciliation process, as for options 1 and 2.
Another question is how you deal with partial uploads and uploads in progress. Here option 2 could be of use, but you could also use a second "flag" file that is created before the upload starts and deleted when the upload is done. This would help you determine which uploads have been aborted.
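The flag-file idea could look roughly like this in Python (the names are purely illustrative):

import os

def save_with_flag(storage_dir, name, data):
    flag = os.path.join(storage_dir, name + ".uploading")  # marker: upload in progress
    target = os.path.join(storage_dir, name)
    open(flag, "wb").close()        # create the flag before writing the real file
    with open(target, "wb") as f:
        f.write(data)
    os.remove(flag)                 # removed only after the write completed

# anything that still has a ".uploading" companion was aborted and can be purged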
To remedy the drawback you mentioned for option 1, I use something like fileStorage.FileExists("newfile.txt") and filter out the references for which it returns false.
In Python lingo:
import os

op = os.path
# keep only the references whose backing file actually exists on disk
live_refs = [ref for ref in database.AllRefs() if op.exists(ref.path())]
In a Windows Phone 7 application, I would like to query a big XML file (a list of cities) stored in Isolated Storage. If I do it that way, will the whole file (> 5 MB) be loaded into memory? If so, what other solution do I have?
Edit:
More details: I want to use AutoCompleteBox (http://www.jeff.wilcox.name/2008/10/introducing-autocompletebox/), but instead of using a web service (this is fixed data, no need to be online), I want to query a file/database/isolated storage... I have a fixed list of cities. I said in the comments it's 40k, but it now seems closer to 1k rows.
Instead of using isolated storage for this, would it be an option for you to use a web service, or are you designing your app for an offline approach?
Querying a web service (WCF or a JSON-enabled service) is really simple, and will be easier for you to maintain :)
Rather than having one big file containing all the data, can you not break it down into lots of smaller files (one for each city)?
You could have a separate file to keep an index of them all if need be. Alternatively, depending on the naming of the files, you may be able to use IsolatedStorageFile.GetFileNames to get a list of all files.
I would create my own file format, using, for example, a separator between fields, with one row for each record.
That way you can read your file line-by-line to fill your data structure with these advantages:
no need to pull the whole file into memory
no XML overhead (in a desktop application this may not be a problem, but in the phone context dropping the tags can make a 5 MB file quite a bit smaller)
Dumb example:
New York City; 12345
Berlin; 25635
...
EDIT: given that the volume is not that large, you don't need any form of indexing or loading on demand. I would store the cities as stated above (one record per line), load them into a list and use LINQ to select the items you need. This will probably be fast and keep your application very responsive.
In this case, in my opinion, XML is not the best tool for the job. Your structure is very simple, and storing it in XML would probably double the file size, which is a concern on a mobile device, and would also slow down parsing, another concern in this case.
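If it helps, here is the idea as a Python sketch (on the phone you would do the same thing in C# with LINQ; load_cities and suggest are made-up names):

def load_cities(path):
    cities = []
    with open(path, encoding="utf-8") as f:
        for line in f:                                   # one record per line
            name, code = line.rstrip("\n").split(";")
            cities.append((name.strip(), code.strip()))
    return cities

def suggest(cities, prefix):
    prefix = prefix.lower()
    # LINQ equivalent: cities.Where(c => c.Name.StartsWith(prefix))
    return [name for name, _ in cities if name.lower().startswith(prefix)]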
I've been tasked with processing multiple terabytes' worth of SCM data for my company. I set up a Hadoop cluster and have a script to pull data from our SCM servers.
Since I'm processing data in batches through the streaming interface, I came across an issue with block sizes that O'Reilly's Hadoop book doesn't seem to address: what happens to data straddling two blocks? How does the wordcount example get around this? To work around the issue so far, we've resorted to making our input files smaller than 64 MB each.
The issue came up again when thinking about the reducer script; how is aggregated data from the maps stored? And would the issue come up when reducing?
This should not be an issue, provided that each block can cleanly break the data apart for the splits (for example, on line breaks). If your data is not a line-by-line data set then yes, this could be a problem. You can also increase the block size on your cluster (dfs.block.size).
You can also customize, in your streaming job, how the inputs are fed into your mapper:
http://hadoop.apache.org/common/docs/current/streaming.html#Customizing+the+Way+to+Split+Lines+into+Key%2FValue+Pairs
Data from the map step gets partitioned based on a partitioner class applied to the map output key, then sorted by key:
http://hadoop.apache.org/common/docs/r0.15.2/streaming.html#A+Useful+Partitioner+Class+%28secondary+sort%2C+the+-partitioner+org.apache.hadoop.mapred.lib.KeyFieldBasedPartitioner+option%29
The data is then shuffled so that all values for a given map key end up together, and transferred to the reducer. A combiner can also run before the reduce step if you configure one.
Most likely you can create your own custom -inputreader (here is example of how to stream XML documents http://hadoop.apache.org/common/docs/current/api/org/apache/hadoop/streaming/StreamXmlRecordReader.html)
If you have multiple terabytes of input, you should consider setting the block size to even more than 128 MB.
If a file is bigger than one block, it can either be split so that each block of the file goes to a different mapper, or the whole file can go to one mapper (for example, if the file is gzipped and therefore not splittable). I believe you can control this with configuration options.
Splits are taken care of automatically and you should not worry about them. Output from the maps is written to a temporary directory on the local disk of the node running the map task.
Your question about "data straddling two blocks" is what the RecordReader handles. The purpose of a RecordReader is threefold:
Ensure each k,v pair is processed
Ensure each k,v pair is only processed once
Handle k,v pairs which are split across blocks
What actually happens in (3) is that the RecordReader asks the NameNode for a handle to a DataNode where the next block lives, and then reaches out via RPC to read the remaining part of that first record, up to the record delimiter.
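A toy illustration of point (3), ignoring HDFS and RPC entirely and using a plain local file, just to show how a reader skips its partial first line and reads past the end of its split to finish the last record:

def read_split(path, start, length):
    # yield the complete records "owned" by the byte range [start, start + length)
    end = start + length
    with open(path, "rb") as f:
        f.seek(start)
        if start != 0:
            f.readline()           # discard a (possibly partial) first line; the
                                   # previous split reads one line past its end
        while f.tell() <= end:     # <=: read one record past the boundary if needed
            line = f.readline()
            if not line:
                break
            yield line.rstrip(b"\n")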
We have an 18 GB table in MySQL with a column "html_view" that stores HTML source data, which we display on the page. Fetching the HTML from the "html_view" column now takes too long, which makes the page load slowly.
We want an approach that simplifies our existing structure so that the HTML data can be loaded faster, from the database or in some other way.
One idea we are planning is to store the HTML data in .txt files, keep only the path of the txt file in the database, and fetch the data by reading that particular file. But we fear this will cause extensive read/write operations on our server and may slow the server down.
Is there a better approach for making this faster?
First of all, why store HTML in the database at all? Why not render it on demand?
For big text tables, you could store compressed text in a byte array, or compressed and encoded in base64 as plain text.
When you have a table with a large text column, how many other columns does the table have? If there are not too many, you could partition the table and create a two-column key-value store. That should be faster and simpler than reading files from disk.
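For the compressed-text idea, a rough sketch with zlib (the html_view_gz column name is made up):

import zlib

def compress_html(html):
    # store the result in a BLOB/VARBINARY column, e.g. html_view_gz
    return zlib.compress(html.encode("utf-8"), 6)

def decompress_html(blob):
    return zlib.decompress(blob).decode("utf-8")

# HTML markup compresses very well, so each row read moves far less data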
Have a look at the Apache Caching guide.
It explains disk and memory caching. From my point of view, if the content is static (as the database table indicates), you should use Apache's capabilities instead of writing your own slower mechanisms, because otherwise you add multiple layers on top.
The usual "measure instead of estimating" still applies, though ;-).