Multiple JSON files or a single file with multiple arrays

I have a large Azure blob file with 10k JSON objects in a single array. It does not perform well because of its size. As I look to re-architect it, I can either create multiple files, each containing a single array of 500-1000 objects, or keep the one file but split the single array into an array of arrays, maybe 10 arrays of 1000 objects each.
For simplicity, I'd rather break it into multiple files. However, I thought this was worth asking to see whether there was something to be learned in the answers.

I would think this depends strongly on your use case. The multiple files or multiple arrays you create will partition your data somehow: will the partitions be used mostly together or mostly separately? That is, will there be many cases in which you only read one or a small number of the partitions?
If the answer is "yes, I will usually only care about a small number of partitions" then creating multiple files will save you having to deal with most of your data on most of your calls. If the answer is "no, I will usually need either 1.) all/most of my data or 2.) data from all/most of my partitions" then you probably want to keep one file just to avoid having to open many files every time.
I'll add: in the latter case, it may well turn out that the file structure (one array vs. an array of arrays) doesn't change things very much, since a full scan is a full scan either way. If so, you may need to start thinking about how to move to the former case, where you partition your data so that your calls fall neatly within a few partitions, or how to move to a different data format.
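If you do go the multiple-files route, the split itself is mechanical. Here is a minimal sketch, assuming the org.json library and local files standing in for the Azure blobs; the file names and the 1000-object chunk size are placeholder choices rather than recommendations:
import org.json.JSONArray;
import java.nio.file.Files;
import java.nio.file.Path;

public class SplitJsonArray {
    public static void main(String[] args) throws Exception {
        int chunkSize = 1000; // placeholder partition size
        JSONArray all = new JSONArray(Files.readString(Path.of("objects.json")));
        for (int start = 0, part = 0; start < all.length(); start += chunkSize, part++) {
            JSONArray chunk = new JSONArray();
            for (int i = start; i < Math.min(start + chunkSize, all.length()); i++) {
                chunk.put(all.get(i));
            }
            // each output file holds a single array of at most chunkSize objects
            Files.writeString(Path.of("objects-part-" + part + ".json"), chunk.toString());
        }
    }
}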

Related

SQLite vs text file database - size comparison?

I am going to convert a text file into SQLite db form; I am concerned about these points before putting any effort into writing code for it:
Will the text file and its corresponding SQLite db be the same size?
Would SQLite take less space than the text file?
Or would the text file take up the least space?
"Hardware is cheap" - I'd strongly recommend not worrying about size differences, which will likely be insignificant anyway, and instead pick whichever solution best meets the rest of your needs. A text file can work just fine for simple projects, but a database has many more features that can help you organize, backup, and query your data much more efficiently and robustly.
For a more in-depth look at the pros and cons of both options, check out: database vs. flat files
Some things to keep in mind:
(NOTE about this answer: "Files" here refers to internal/external storage, not SharedPrefs)
SQL:
Databases have overhead, which takes up space
If the database or a table becomes corrupt, all data is lost (how bad this is depends on your app: losing several thousand pictures is bad; losing a deletion log is not so bad)
Databases can be compressed (see this)
You can split data into different tables if you have issues with IDs (or whatever way you identify row X), meaning one database can have a separate table for each object type where object X would otherwise have identification conflicts with object Y. That basically means you can keep everything in one file and still avoid naming conflicts. (Read more at the bottom of the answer)
Files:
Every piece of data has to be stored as its own separate file, which takes up space (the file name alone costs something)
You cannot store all attributes in one file without setting up a more advanced reader that distinguishes the different types of data. If you don't do that and instead have one file per attribute, you will use a lot of space.
Reading thousands of lines can be slow, especially if you have several (say 100+) very big files
The OS uses space for each file beyond its content; the name of the file, for instance, takes up space. By contrast, something to keep in mind is that you can keep all the data of an app in a single database file: if you have an app where objects of two different types may have naming issues, you create a new database.
Naming conflicts
Say you have two objects, object X and Y.
Scenario 1
Object X stores two variables. The file names are (x and y in this case are coordinates):
x.txt
y.txt
But in a later version, object Y comes in with the same two files.
So you have to assign an ID to objects X and Y:
0-x.txt
0-y.txt
Every file now uses 3 chars (7 total, including the extension) on the name alone. This grows the more complex the setup gets. See scenario 2.
But saving in the database, you get the row with ID 0 and find column X or Y.
You do not have to worry about the file name.
Further, if every object saves a lot of files, the code referencing each file to load or save it will take up a lot of space. That affects your APK file and slowly pushes you up towards the 50 MB limit (the Google Play limit).
You can create universal methods, but you can do the same with SQL and save space in the APK file. Compared to text files, SQL also saves some space on names.
Note, though, that if you save 2-3 files (just to pick a number), those few bytes going to names aren't going to matter
It is when you start saving hundreds of files, with long names to avoid naming conflicts, that SQL saves you space. And if the table gets too big, you can compress it. You can zip text files to maybe save some space, but with one-line files there is not much to save.
Scenario 2
Objects X and Y have three children each.
Every child has 3 variables it saves to the file system. If there were only one object with 3 children, they could be saved like this:
[id][variable name].txt
But because there is another parent with 3 children (of the same type, saving the same files), the children that get saved last are the ones that stay saved; the first 3 get overwritten.
So you have to add the parent ID:
[parent ID][child ID][variable name].txt
And keep in mind, these examples focus on just a few objects. The amount of space saved is low, but when you save hundreds, if not thousands, of files, that is when you start to save space.
Now, if you create a table, you can store your main objects (X and Y in this case). Then you can either create the first table in a way that makes it recognisable whether the object is a parent or a child, or you can create a second table. The second table has two ID columns: one to identify the parent and one to identify the child. So if you want to find all the children of object 436, you simply write this query:
SELECT * FROM childrentable WHERE `parent_id`='436'
And that will give you all the attributes of all the children that have object 436 as their parent.
And everything is stored in the Cursor when returned.
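To make that concrete, here is a hedged sketch of the query side on Android using the standard SQLiteDatabase API; the childrentable name and parent_id column come from the query above, everything else is an assumed name:
import android.database.Cursor;
import android.database.sqlite.SQLiteDatabase;

class ChildQueries {
    // Returns a Cursor over every row whose parent_id matches, i.e. all the
    // children of the given parent, fetched in a single query.
    static Cursor childrenOf(SQLiteDatabase db, long parentId) {
        return db.rawQuery(
                "SELECT * FROM childrentable WHERE parent_id = ?",
                new String[] { String.valueOf(parentId) });
    }
}
Calling childrenOf(db, 436) and iterating the Cursor replaces having to reconstruct file names by hand.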
If you were to do the same with files, you would use a line like this (where Saver is the file saving and loading class):
Saver.load("0-436-file_name", context);
It is, of course, possible to use a for-loop to cycle through the child IDs (the 0 at the start), but you would also have to save how many children there are: you cannot query the files as easily, so you have to store values about the number of objects and each object's children.
This means you have to save more values in more files just to be able to find the files you saved in the first place, which is a really hard way to do things. A database spares you from writing extra files to keep track of how many files you saved: the database returns [x] results on each query, so if object 436 has no children, SQL returns 0 rows. With files, you would have to save 0 as the number of children, because guessing at file names leads to I/O exceptions.
I would expect the text file to be smaller, as it has no overhead: all the things a database gives you have a cost in terms of space.
It sounds like space is the only thing that matters to you, and that you expect to change the contents of the text file often (you call it a 'text file db'). Please note that there is no such thing as a 'text file db'. Reading and writing to it will be very slow compared to a proper db (such as SQLite). Adding different record types (tables in a db) will complicate your life, and I wouldn't want to try to maintain any sort of referential links between record types in a text file.
Hope that helps -

Is there a way of storing the contents of a file in a database using anything other than CLOB?

For a project, I want to store the contents of my file in a database.
I am aware that using CLOB is one of the options for storing large file contents, but I have heard that it is not an efficient way to do so.
Are there other alternatives?
Thank you for your answers.
CLOBs are inefficient because every access returns the entire contents of the field, and every modification rewrites the entire contents of the field. It also makes searching on the data difficult and inefficient. If you can break the data up into smaller units to save in multiple rows in a table, that can lead to better, more efficient programs.
That said, those inefficiencies come from misusing the feature. It sounds like what you have in mind is probably just fine (provided, as you say, that you can't know where the file will end up getting stored; typically in that case I would store a path to the file in the database rather than the contents of the file itself).
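As a rough illustration of that path-in-the-database idea (a sketch only; the documents table and stored_path column are hypothetical names), plain JDBC would look something like this:
import java.nio.file.Path;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

class DocumentIndex {
    // Records where a file lives instead of copying its contents into a CLOB.
    static void register(Connection conn, Path file) throws SQLException {
        try (PreparedStatement ps = conn.prepareStatement(
                "INSERT INTO documents (stored_path) VALUES (?)")) {
            ps.setString(1, file.toAbsolutePath().toString());
            ps.executeUpdate();
        }
    }
}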

Where should I store a list of stop words?

My function parses texts and removes short words, such as "a", "the", "in", "on", "at", etc.
The list of these words might be modified in the future. Also, switching between different lists (i.e., for different languages) might also be an option.
So, where should I store such a list?
About 50-200 words
Many reads every minute
Almost no writes (modifications) - for example, once in a few months
I have these options in my mind:
A list inside the code (fastest, but it doesn't sound like good practice)
A separate file "stop_words.txt" (how fast is reading from a file? Should I re-read the same data from the same file every few seconds, each time I call the same function?)
A database table. Would it really be efficient when the list of words is supposed to be almost static?
I am using Ruby on Rails (if that makes any difference).
If it's only about 50-200 words, I'd store it in memory in a data structure that supports fast lookup, such as a hash map (I don't know what such a structure is called in Ruby).
You could use option 2 or 3 (persist the data in a file or database table, depending on what's easier for you), then read the data into memory at the start of your application. Store the time at which the data was read and re-read it from the persistent storage if a request comes in and the data hasn't been updated for X minutes.
That's basically a cache. It might be possible that Ruby on Rails already provides such a mechanism, but I know too little about it to answer that.
Since look-up of the stop-words needs to be fast, I'd store the stop-words in a hash table. That way, verifying if a word is a stop-word has amortized O(1) complexity.
Now, since the list of stop-words may change, it makes sense to persist the list in a text file, and read that file upon program start (or every few minutes / upon file modification if your program runs continuously).
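Since the thread has no code, here is the same load-once, reload-on-change idea sketched in Java for concreteness (the question is about Ruby on Rails, where the pattern translates directly to a Set plus a modification-time check); the file name and reload policy are assumptions:
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.attribute.FileTime;
import java.util.HashSet;
import java.util.Set;

class StopWords {
    private final Path file;                       // e.g. Path.of("stop_words.txt"), one word per line
    private Set<String> words = new HashSet<>();
    private FileTime loadedAt = FileTime.fromMillis(0);

    StopWords(Path file) { this.file = file; }

    // O(1) lookup against an in-memory set; the file is only re-read
    // when its modification time says it has changed.
    synchronized boolean isStopWord(String word) throws IOException {
        FileTime modified = Files.getLastModifiedTime(file);
        if (modified.compareTo(loadedAt) > 0) {
            words = new HashSet<>(Files.readAllLines(file));
            loadedAt = modified;
        }
        return words.contains(word.toLowerCase());
    }
}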

Persisting a Large List for Membership Testing in C

Each item is an array of 17 32-bit integers. I can probably produce 120-bit unique hashes for them.
I have an algorithm that produces 9,731,643,264 of these items, and want to see how many of these are unique. I speculate that at most 1/36th of these will be unique but can't be sure.
At this size, I can't really do this in memory (as I only have 4 gigs), so I need a way to persist a list of these, do membership tests, and add each new one if it's not already there.
I am working in C (gcc) on Linux, so it would be good if the solution works from there.
Any ideas?
This reminds me of some of the problems I faced working on a solution to "Knight's Tour" many years ago. (A math problem which is now solved, but not by me.)
Even your hash isn't that much help... at nearly the size of a GUID, they could easily be unique across all the known universe.
It will take approximately 0.75 terabytes just to hold the list on disk... 4 gigs of memory or not, you'd still need a huge disk just to hold them. And you'd need double that much disk or more to do the sort/merge solutions I talk about below.
If you could SORT that list, then you could just go through it one item at a time looking for duplicate copies next to each other. Of course, sorting that much data would require a custom sort routine (that you wrote), since the data is binary (converting to hex would double its size but would allow you to use standard routines)... though even those would probably choke on that much data... so you are back to your own custom routines.
Some things to think about:
Sorting that much data will take weeks, months or perhaps years. While you can do a nice heap sort or whatever in memory, because you only have so much disk space, you will likely be doing a "bubble" sort of the files regardless of what you do in memory.
Depending on what your generation algorithm looks like, you could generate "one memory load" worth of data, sort it in place, then write it out to disk as a (sorted) file. Once that was done, you would just have to "merge" all those individual sorted files, which is a much easier task (even though there would be thousands of files, it would still be relatively easy).
If your generator can tell you ANYTHING about your data, use that to your advantage. For instance, in my case, as I processed the knight's moves, I knew my output values were constantly getting bigger (because I was always adding one bit per move); that small piece of knowledge allowed me to optimize my sort in some unique ways. Look at your data and see if you know anything similar.
Making the data smaller is always good, of course. For instance, you talk about a 120-bit hash, but is that hash reversible? If so, sort the hashes, since they are smaller. If not, the hash might not be that much help (at least for my sorting solutions).
I am interested in the mechanics of issues like this, and I'd be happy to exchange emails on the subject just to bang around ideas and possible solutions.
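To make the "generate one memory load, sort it, write it out, then merge" idea concrete, here is a hedged sketch. The question is about C, so treat this Java version purely as structural pseudocode: 64-bit longs stand in for the 120-bit hashes (which would really be fixed-width byte arrays compared byte-wise), the run file names are made up, and the merge step uses a min-heap over the run files rather than anything prescribed above:
import java.io.BufferedInputStream;
import java.io.BufferedOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.EOFException;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;
import java.util.PriorityQueue;

class ExternalUniqueCount {

    // Sort one in-memory batch and write it out as a sorted run file.
    static Path writeSortedRun(long[] batch, int runIndex) throws IOException {
        Arrays.sort(batch);
        Path run = Path.of("run-" + runIndex + ".bin");   // made-up file name
        try (DataOutputStream out = new DataOutputStream(
                new BufferedOutputStream(Files.newOutputStream(run)))) {
            for (long v : batch) out.writeLong(v);
        }
        return run;
    }

    // Merge the sorted runs with a min-heap, counting distinct values as they
    // stream past instead of materialising the merged data anywhere.
    static long countUnique(List<Path> runs) throws IOException {
        record Head(long value, DataInputStream in) {}
        PriorityQueue<Head> heap =
                new PriorityQueue<>(Comparator.comparingLong(Head::value));
        for (Path run : runs) {
            DataInputStream in = new DataInputStream(
                    new BufferedInputStream(Files.newInputStream(run)));
            try { heap.add(new Head(in.readLong(), in)); }
            catch (EOFException e) { in.close(); }        // empty run
        }
        long unique = 0, previous = 0;
        boolean first = true;
        while (!heap.isEmpty()) {
            Head head = heap.poll();
            if (first || head.value() != previous) {
                unique++;
                previous = head.value();
                first = false;
            }
            try { heap.add(new Head(head.in().readLong(), head.in())); }
            catch (EOFException e) { head.in().close(); } // this run is exhausted
        }
        return unique;
    }
}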
You can probably make your life a lot easier if you can place some restrictions on your input data: Even assuming only 120 significant bits, the high number of duplicate values suggests an uneven distribution, as an even distribution would make duplicates unlikely for a given sample size of 10^10:
2^120 = (2^10)^12 > (10^3)^12 = 10^36 >> 10^10
If you have continuous clusters (instead of sparse, but repeated values), you can gain a lot by operating on ranges instead of atomic values.
What I would do:
fill a buffer with a batch of generated values
sort the buffer in-memory
write ranges to disk, i.e. each entry in the file consists of the start and end value of a continuous group of values
Then you need to merge the individual files, which can be done online - i.e. as the files become available - the same way a stack-based mergesort operates: associate with each file a counter equal to the number of ranges in the file and push each new file onto a stack. When the file on top of the stack has a counter greater than or equal to that of the previous file, merge the two into a new file whose counter is the number of ranges in the merged file.
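For the range idea specifically, a small hedged helper (again with longs standing in for the 120-bit values) might collapse each sorted in-memory batch into ranges before writing it out:
import java.util.ArrayList;
import java.util.List;

class RangeCompress {
    // Collapse a sorted batch into [start, end] ranges of consecutive values,
    // so a dense cluster costs two numbers instead of one number per value.
    static List<long[]> toRanges(long[] sortedBatch) {
        List<long[]> ranges = new ArrayList<>();
        if (sortedBatch.length == 0) return ranges;
        long start = sortedBatch[0], end = sortedBatch[0];
        for (int i = 1; i < sortedBatch.length; i++) {
            long v = sortedBatch[i];
            if (v == end || v == end + 1) {   // duplicate or consecutive: extend the range
                end = v;
            } else {                          // gap: close the current range, start a new one
                ranges.add(new long[] { start, end });
                start = end = v;
            }
        }
        ranges.add(new long[] { start, end });
        return ranges;
    }
}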

How to organize a large number of objects

We have a large number of documents and metadata (xml files) associated with these documents. What is the best way to organize them?
Currently we have created a directory hierarchy:
/repository/category/date(when they were loaded into our db)/document_number.pdf and .xml
We use the path as a unique identifier for the document in our system.
Having a flat structure doesn't seem to be a good option. Also, using the path as an id helps to keep our data independent from our database/application logic, so we can reload the documents easily in case of failure, and all documents will keep their old ids.
Yet it introduces some limitations: for example, we can't move the files once they've been placed in this structure, and it takes work to arrange them this way.
What is the best practice? How websites such as Scribd deal with this problem?
Your approach does not seem unreasonable, but might suffer if you get more than a few thousand documents added within a single day (file systems tend not to cope well with very large numbers of files in a directory).
Storing the .xml document beside the .pdf seems a bit odd - if it's really metadata about the document, shouldn't it be in the database (which it sounds like you already have), where it can be easily queried and indexed, etc.?
When storing very large numbers of files I've usually taken the file's key (say, a URL), hashed it, and then stored it X levels deep in directories based on the first characters of the hash...
Say you started with the key 'How to organize a large number of objects'. The md5 hash for that is 0a74d5fb3da8648126ec106623761ac5 so you might store it at...
base_dir/0/a/7/4/http___stackoverflow.com_questions_2734454_how-to-organize-a-large-number-of-objects
...or something like that which you can easily find again given the key you started with.
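A hedged sketch of that layout (the directory depth and the key sanitising are choices made here, not part of the answer):
import java.nio.charset.StandardCharsets;
import java.nio.file.Path;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

class ShardedPath {
    // Hash the key, then nest the file under one directory per leading hex
    // character of the hash, e.g. base_dir/0/a/7/4/<sanitised key>.
    static Path forKey(Path baseDir, String key, int depth) throws NoSuchAlgorithmException {
        byte[] digest = MessageDigest.getInstance("MD5")
                                     .digest(key.getBytes(StandardCharsets.UTF_8));
        StringBuilder hex = new StringBuilder();
        for (byte b : digest) hex.append(String.format("%02x", b));

        Path p = baseDir;
        for (int i = 0; i < depth; i++) {
            p = p.resolve(String.valueOf(hex.charAt(i)));
        }
        return p.resolve(key.replaceAll("[^A-Za-z0-9._-]", "_"));
    }
}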
This kind of approach has one advantage over your date-based one in that it can be scaled to suit very large numbers of documents (even per day) without any one directory becoming too large; on the other hand, it's less intuitive for someone having to manually find a particular file.
