I have a basic question regarding filesystem usage.
I want to use an embedded, persistent key-value store that is very write-oriented. Say my value size is
a) 10 K
b) 1 M
and read and updates are equal in number
Can't I simply create files containing the values, with their names acting as keys?
Won't that be as fast as using a key-value store such as LevelDB or RocksDB?
Can anybody please help me understand?
In principle, yes, a filesystem can be used as a key-value store. The differences only come in when you look at individual use cases and limitations in the implementations.
Without going into too much detail here, there are some things likely to be very different:
A filesystem splits data into fixed-size blocks. Two files can't typically occupy parts of the same block. Common block sizes are 4-16 KiB; you can calculate how much overhead your 10 KiB example would cause: on 4 KiB blocks a 10 KiB value occupies 12 KiB (about 20% extra), and on 16 KiB blocks a full 16 KiB (about 60% extra). Key/value stores tend to account for smaller-sized pieces of data.
Directory indexes in filesystems are often not capable of efficiently iterating over the filenames/keys in sort order. You can efficiently look up a specific key, but you can't retrieve ranges without reading pretty much all of the directory entries. Some key/value stores, including LevelDB, support efficient ordered iterating.
Some key/value stores, including LevelDB, are transactional. This means you can bundle several updates together, and LevelDB will make sure that either all of these updates make it through, or none of them do. This is very important to prevent your data getting inconsistent. Filesystems make this much harder to implement, especially when multiple files are involved.
Key/value stores usually try to keep data contiguous on disk (so data can be retrieved with less seeking), whereas modern filesystems deliberately do not do this across files. This can impact performance rather severely when reading many records. It's not an issue on solid-state disks, though.
While some filesystems do offer compression features, they are usually either per-file or per-block. As far as I can see, LevelDB compresses entire chunks of records, potentially yielding better compression (though they biased their compression strategy towards performance over compression efficiency).
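To make the comparison concrete, here is a minimal Python sketch of the "filename as key" approach from the question. The class name and paths are made up for illustration; note that it has none of the batching, ordered iteration, or multi-key atomicity discussed above:

```python
import os

class FileKV:
    """Toy key-value store: one file per key under a root directory."""

    def __init__(self, root: str):
        self.root = root
        os.makedirs(root, exist_ok=True)

    def put(self, key: str, value: bytes) -> None:
        tmp = os.path.join(self.root, key + ".tmp")
        dst = os.path.join(self.root, key)
        with open(tmp, "wb") as f:
            f.write(value)
            f.flush()
            os.fsync(f.fileno())      # make the data durable before publishing it
        os.replace(tmp, dst)          # atomic rename: readers see old or new, never partial

    def get(self, key: str) -> bytes:
        with open(os.path.join(self.root, key), "rb") as f:
            return f.read()

kv = FileKV("/tmp/filekv")
kv.put("user:42", b"x" * 10 * 1024)   # the 10 K case from the question
print(len(kv.get("user:42")))         # 10240 bytes, but it occupies whole blocks on disk
```

Each put costs a directory update plus at least one whole block, which is where the overhead and seek patterns described above start to differ from an LSM-based store.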
Let's try to build a minimal NoSQL DB server using Linux and a modern file system in 2022, just for fun, not for a serious environment.
DO NOT TRY THIS IN PRODUCTION
—————————————————————————————————————————————
POSIX file API for reads and writes.
POSIX ACL for native user accounts and group permission management.
POSIX filename as key ((root db folder)/(tablename folder)/(partition folder)/(64bitkey)). Per DB and table we can define read/write permissions using POSIX ACLs. The (64bitkey) is generated in the compute function.
Mount BTRFS/OpenZFS/F2FS as the filesystem to provide compression (LZ4/zstd) and encryption (fscrypt) natively. F2FS is especially suitable, as it is log-structured (akin to the LSM approach many NoSQL DBs use in their low-level architecture).
Metadata is handled by the filesystem, so there is no need to implement it.
Use Linux and/or the filesystem to configure the page, file, or disk-block cache according to the read/write patterns implemented in the business logic written in the compute function or DB procedure.
Use RAID and sshfs for remote replication to create master/slave high availability and/or backups.
The compute function or DB procedure holding the write logic could be a NodeJS file or a Go binary or whatever, along with a standard HTTP/TCP/WS server module which reads and writes content to the DB (a sketch of such a front end follows below).
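Purely as an illustration, here is what such a compute function might look like as a Python sketch: a tiny HTTP front end that maps PUT/GET requests onto the (root db folder)/(table)/(partition)/(64bitkey) layout above. All names here (DB_ROOT, the URL scheme, the hash used for the 64-bit key) are assumptions for the sketch, not part of any real product:

```python
import hashlib, os
from http.server import BaseHTTPRequestHandler, HTTPServer

DB_ROOT = "/tmp/minidb"          # hypothetical "(root db folder)"

def key_path(table: str, raw_key: str) -> str:
    # Hash the user-supplied key down to the "(64bitkey)" from the layout above.
    h = hashlib.blake2b(raw_key.encode(), digest_size=8).hexdigest()
    partition = h[:2]            # "(partition folder)"
    return os.path.join(DB_ROOT, table, partition, h)

class Handler(BaseHTTPRequestHandler):
    def do_PUT(self):            # PUT /<table>/<key> with the value as request body
        _, table, raw_key = self.path.split("/", 2)
        path = key_path(table, raw_key)
        os.makedirs(os.path.dirname(path), exist_ok=True)
        body = self.rfile.read(int(self.headers["Content-Length"]))
        with open(path, "wb") as f:
            f.write(body)
        self.send_response(204)
        self.end_headers()

    def do_GET(self):            # GET /<table>/<key>
        _, table, raw_key = self.path.split("/", 2)
        try:
            with open(key_path(table, raw_key), "rb") as f:
                data = f.read()
        except FileNotFoundError:
            self.send_response(404)
            self.end_headers()
            return
        self.send_response(200)
        self.send_header("Content-Length", str(len(data)))
        self.end_headers()
        self.wfile.write(data)

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8080), Handler).serve_forever()
```

You could then exercise it with something like curl -X PUT --data-binary @value.bin localhost:8080/users/alice followed by curl localhost:8080/users/alice. Everything else (ACLs, compression, caching, replication) is delegated to the filesystem and OS, exactly as the list above proposes.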
Related
Are there any examples with source code of high-speed (at least 10,000 record reads/writes per second) MongoDB reads/updates of a single record at a time?
Alternatively, where could I look in the MongoDB server code for a way to, say, inject a customised put/get of a record, for example with the "WiredTiger" storage engine?
For example, if the Mongo C interface is similar to Oracle's SQL*Net client, I'd need something similar to the sqlldr bulk insert/update tool.
Thank you for any hint where to start from.
Raw performance is highly dependent on hardware. If the only requirement is "10,000 reads/writes of one document per second", then all of the following would help:
Having an empty database.
Storing the database in memory (tmpfs).
Using Linux rather than Windows.
Using a CPU with high clock speed, at the expense of core count.
Using the fastest memory available.
You might notice that some of these conditions are incompatible with many practical systems. For example, if you truly have an empty or nearly empty database that fits in memory, you could probably use something like Redis instead, which is much simpler (i.e. has WAY less functionality) and thus, for a simple operation like reading/writing a small document, would be way faster.
If you try to implement this requirement on a real database, things become way more complicated. For example, "10,000 reads/writes of one document which is part of a 10 GB collection which is also being used in various aggregation pipelines" is a very different problem to solve.
For a real-world deployment there are simply too many variables to account for; you need to look at your system and go through a performance investigation, or hire a consultant who can do this. For example, if you talk to your database over TLS (which is very common), that creates significant overhead in the wire protocol which would seriously affect your peak read/write performance for trivial document sizes.
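For a rough sense of where a given setup stands, a back-of-the-envelope check like the following Python sketch is usually more informative than any general claim. It assumes a local mongod and the PyMongo driver; the database/collection names and document size are arbitrary:

```python
import time
from pymongo import MongoClient          # third-party driver, assumed installed

coll = MongoClient("mongodb://localhost:27017").bench.docs   # arbitrary db/collection
coll.drop()

N = 10_000
t0 = time.perf_counter()
for i in range(N):
    coll.insert_one({"_id": i, "payload": "x" * 256})
writes = N / (time.perf_counter() - t0)

t0 = time.perf_counter()
for i in range(N):
    coll.find_one({"_id": i})
reads = N / (time.perf_counter() - t0)

print(f"~{writes:.0f} writes/s, ~{reads:.0f} reads/s (single client, no batching)")
```

Run against tmpfs-backed storage versus a loaded production replica set, the same loop can easily differ by an order of magnitude, which is the point made above.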
We all know that most operating systems use file systems to store all data, but don't you think it would be more efficient to use databases, as we do in websites/web apps?
tl;dr: Diversity.
First of all, if you look at the original FAT filesystem and the original Unix filesystem, they were both key-value stores; they did not have a directory hierarchy.
Second, this link suggests that there are filesystems implemented with an RDBMS backend, which is tangential to your question.
Having said that, comparing an RDBMS to a filesystem as storage for an OS, there are several drawbacks to using an RDBMS:
First, an RDBMS makes very strong guarantees (ACID) by means of locking, at the cost of performance. However, most programs do not require such guarantees (for example, think of every program that works with a NoSQL DB). In comparison, POSIX makes strong-ish guarantees about metadata, but barely any guarantees about I/O. You can build an RDBMS on top of POSIX and add locking, but you can't build a filesystem on top of an RDBMS and remove locking.
Second, an RDBMS requires a schema. Imagine that you create a new storage volume for an OS. Instead of formatting a filesystem, you need to decide on a schema. What schema will be the most useful?
With filesystems, the "schema" is basically one table with the columns "path" and "data", plus a column for each file attribute such as modification time, type, and size. Using an RDBMS for this schema would let you perform operations like mass truncate, mass rename, mass access control, etc. atomically. However, it would not allow you to modify the data of the same record (file) concurrently, nor would it let you implement hard links. Extended attributes and Alternate Data Streams would still have to be implemented much as they are today rather than leveraging RDBMS capabilities; you would need special index logic for the path column to implement features like changing directory, listing a directory, or checking permissions for every directory in a file's path; and you would need special logic for the data column because files can be TBs in size. At that point, the ROI of the RDBMS goes down the more features you add.
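As a thought experiment, that one-table "filesystem schema" could be sketched with SQLite from Python like this. The column names are illustrative only, and the comments mark exactly where the model stops paying off:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("""
    CREATE TABLE files (
        path  TEXT PRIMARY KEY,
        data  BLOB,
        mtime REAL,
        type  TEXT,
        size  INTEGER
    )
""")

# A "mass rename" becomes a single atomic statement...
db.execute("UPDATE files SET path = replace(path, '/old/', '/new/') "
           "WHERE path LIKE '/old/%'")

# ...but concurrent writers to the same row, hard links (two paths, one data),
# and terabyte-sized BLOBs are exactly where this model stops fitting.
```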
Alternatively, you can make the schema per-program (i.e. every program can do CREATE TABLE etc.), but then your features are again limited by what the RDBMS can do. For example, how do you get the equivalent of find / -size +1G, or md5sum, or even cat or ls? Which columns will these programs read? You'll find that all generic programs now need to take a set of columns that are of interest. It also makes scripting much harder.
Thirdly, hierarchical systems are typically easier to scale.
One example is when you want to add storage. In a hierarchical filesystem, even without any fancy filesystem features, you can simply mount another filesystem onto a directory and you have new storage. The tradeoff versus increasing the capacity of the current filesystem is that hard links and renames don't work across filesystems, and the filesystems don't share storage capacity. On an RDBMS, however, your options are either to create a new table and have your programs/scripts manage both tables, or to add more storage volume, for which you might need to do more advanced things like partitioning.
Another example is ecosystem requirements. As an end user wanting to put some order into your 60,000 pictures, 5,000 songs, hundreds of work spreadsheets, 10,000 memes, hundreds of eBooks, videos, etc. - things that are convenient to arrange in a hierarchy - you currently only need two programs: a file manager (Explorer, bash, Nautilus, etc.) and a search capability (e.g. find(1)). On an RDBMS, you either have different tables with different columns, or one table with generic columns. Either way, you have to have a set of SQL scripts to work with these specific collections, which would be equivalent to having a shell script or a program for each type of collection. In other words, managing large collections requires more programming.
Since hierarchical systems are useful in a generic context (which is the context the major OSes operate in), and since it's easier to build a non-hierarchical system on top of a hierarchical one than the other way around (the hierarchical filesystem cache even makes the job easier for libsqlfs), it is valuable for OSes to support hierarchical systems first-class.
The executive summary is: OSes serve many use cases, and storage access is a major part of that. It is wise for an OS to provide a storage access mechanism that's as minimal as possible, but that allows applications to build more specialized storage access mechanisms on top of it.
That means providing a small but useful set of features (like permissions, locking, mounting, and symlinks) without forcing too many requirements (like mandatory locking, or specifying the data format to the OS).
RDBMSes are just too specific.
I have a very simple program for data acquisition. The data comes in frequently (around 5200 Hz). One piece of data is around 24 kB, so the rate is around 122 MB/s.
What would be more efficient just for storing this data: saving it in raw binary files, or using a database? If a database, then which one? SQLite, or maybe something else?
The database, of course, is more tempting, because when saving to a file I would have to separate the pieces with delimiters (the data can have different sizes), and processing the data would also be much easier with a database. I'm not sure about database performance compared to files, though; I couldn't find any specific information about it.
[EDIT]
I am using a Linux-based OS and an SSD which supports writing at up to 350 MB/s. Data will be acquired at that rate all the time (with a small service break every day to transfer the data to another machine).
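For reference, the "raw binary file" option doesn't actually need delimiters; a common alternative is length-prefixed records. A minimal Python sketch (the file name, sample size, and buffer size are made up, and this is not benchmarked at 122 MB/s):

```python
import struct

def append_record(f, payload: bytes) -> None:
    f.write(struct.pack("<I", len(payload)))   # 4-byte little-endian length header
    f.write(payload)

def read_records(path):
    with open(path, "rb") as f:
        while (header := f.read(4)):
            (length,) = struct.unpack("<I", header)
            yield f.read(length)

with open("acq.bin", "ab", buffering=1024 * 1024) as f:   # hypothetical output file
    append_record(f, b"\x00" * 24_000)                    # one ~24 kB sample
print(sum(1 for _ in read_records("acq.bin")))
```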
The file system is useful if you are looking for a particular file, as operating systems maintain a sort of index. However, the contents of a txt file won't be indexed, which is one of the main advantages of a database.
Another point is understanding the relational model, meaning how you design your database so that data doesn't need to be repeated over and over.
Moreover, understanding types is important as well. If you have a txt file, you'll need to parse numbers, dates, etc.
From the performance point of view, I would say that a DB is slower to start (it is usually faster to open a file than to open a connection to a DB). However, once it is open, a DB will be faster than XML or whatever file format you are thinking of using. BTW, this is the main purpose of a database: managing huge amounts of data; filesystems are made for storing files.
A last point in favour of a DB is that it can usually handle multi-threading and concurrency problems, which a plain file cannot. And, last but not least, with a database you cannot delete a file by mistake and lose your data.
So my choice would be a DB, and anyway I hope that with this info you can decide what is best for you.
-- UPDATE --
Since your needs are more specific now, I tried to dig deeper. I found some solutions that could be interesting for you; however, I don't have experience with any of them, so I can't give a personal recommendation:
SharedHashFile: a lightweight NoSQL key-value store / hash table, a zero-copy IPC queue, and a multiplexed IPC logging library written in C for Linux. There is no server process. Data is read and written directly from/to shared memory or SSD; no sockets are used between SharedHashFile and the application program. APIs for C, C++, and Node.js. However, keep an eye out for issues, because this project seems to be no longer maintained on GitHub.
WhiteDB: another NoSQL database that claims to be really fast; see the speed section of their website.
Symas LMDB: an extraordinarily fast, memory-efficient database.
Just take a look at them, and if you ever use one, please come back and share your feedback with the community.
A database ultimately stores its data in files, and a file system also stores its data in files. In that case, what is the difference between a DB and a file system? Is it in the way data is retrieved, or something else?
A database is generally used for storing related, structured data, with well defined data formats, in an efficient manner for insert, update and/or retrieval (depending on application).
On the other hand, a file system is a more unstructured data store for storing arbitrary, probably unrelated data. The file system is more general, and databases are built on top of the general data storage services provided by file systems. [Quora]
For very complex operations, the filesystem is likely to be very slow.
Main RDBMS advantages:
Tables are related to each other
SQL query/data processing language
Transaction-processing additions to SQL (e.g. Transact-SQL)
Server-client implementation with server-side objects like stored procedures, functions, triggers, views, etc.
Advantages of the file system over a database management system:
When handling small data sets with arbitrary, probably unrelated data, files are more efficient than a database.
For simple operations such as read and write, file operations are faster and simpler.
You can find any number of further differences on the internet.
"They're the same"
Yes, storing data is just storing data. At the end of the day, you have files. You can store lots of stuff in lots of files & folders, there are situations where this will be the way. There is a well-known versioning solution (svn) that finally ended up using a filesystem-based model to store data, ditching their BerkeleyDB. Rare but happens. More info.
"They're quite different"
In a database, you have options you don't have with files. Imagine a textfile (something like tsv/csv) with 99999 rows. Now try to:
Insert a column. It's painful, you have to alter each row and read+write the whole file.
Find a row. You either scan the whole file or build an index yourself.
Delete a row. Find row, then read+write everything after it.
Reorder columns. Again, full read+write.
Sort rows. Full read, some kind of sort - then do it next time all over.
There are lots of other good points but these are the first mountains you're trying to climb when you think of a file based db alternative. Those guys programmed all this for you, it's yours to use; think of the likely (most frequent) scenarios, enumerate all possible actions you want to perform on your data, and decide which one works better for you. Think in benefits, not fashion.
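A tiny Python sketch of just the "find a row" point above, contrasting a full scan of a delimited file with an indexed SQLite lookup (toy data; the file name and row count are made up):

```python
import csv, sqlite3

rows = [(i, f"user{i}") for i in range(99_999)]

# File-based: every lookup rescans the file until it hits the row.
with open("data.tsv", "w", newline="") as f:
    csv.writer(f, delimiter="\t").writerows(rows)

def find_in_file(key: int):
    with open("data.tsv", newline="") as f:
        for rec in csv.reader(f, delimiter="\t"):
            if int(rec[0]) == key:
                return rec
    return None

# Database: the primary-key index turns the same lookup into a B-tree search.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, name TEXT)")
db.executemany("INSERT INTO t VALUES (?, ?)", rows)

print(find_in_file(99_998))
print(db.execute("SELECT * FROM t WHERE id = ?", (99_998,)).fetchone())
```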
Again, if you're storing JPG pictures and only ever look for them by one key (their id, maybe?), well-thought-out filesystem storage is better. Filesystems, by the way, are close to databases today, as many of them use a balanced-tree approach, so on BTRFS you can just put all your pictures in one folder - and the OS will silently run something like an early SQL query each time you access your files.
So, database or files?...
Let's see a few typical examples of when one is better than the other. (These are not complete lists; surely you can add a lot more on both sides.)
DB tables are much better when:
You want to store many rows with the exact same structure (no block waste)
You need lightning-fast lookup / sorting by more than one value (indexed tables)
You need atomic transactions (data safety)
Your users will read/write the same data all the time (better locking)
Filesystem is way better if:
You like to use version control on your data (a nightmare with dbs)
You have big chunks of data that grow frequently (typically, logfiles)
You want other apps to access your data without API (like text editors)
You want to store lots of binary content (pictures or mp3s)
TL;DR
Programming rarely says "never" or "always". Those who say "database always wins" or "files always win" probably just don't know enough. Think of the possible actions (now + future), consider both ways, and choose the fastest / most efficient for the case. That's it.
Something one should be aware of is that Unix filesystems typically have what is called an inode limit. If you are storing millions of records, this can be a serious problem. You should run df -i to view the percentage used, as this is effectively a limit on the number of files - EVEN IF you have plenty of disk space.
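The same check as df -i, done programmatically; a small Python sketch using statvfs (note that some filesystems, e.g. btrfs, report 0 for the inode total because they allocate inodes dynamically):

```python
import os

st = os.statvfs("/")            # pick the mount point holding your records
if st.f_files:                  # 0 means no fixed inode table on this filesystem
    used = st.f_files - st.f_ffree
    print(f"inodes used: {used}/{st.f_files} ({used / st.f_files:.0%})")
```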
The difference between file processing system and database management system is as follow:
A file processing system is a collection of programs that store and manage files on a computer's hard disk. A database management system, on the other hand, is a collection of programs that enables you to create and maintain a database.
A file processing system has more data redundancy; there is less data redundancy in a DBMS.
A file processing system provides less flexibility in accessing data, whereas a DBMS is more flexible.
A file processing system does not provide data consistency, whereas a DBMS provides data consistency through normalization.
A file processing system is less complex, whereas a DBMS is more complex.
Context: I've written a filesystem that has been running in production for 7 years now. [1]
The key difference between a filesystem and a database is that the filesystem API is part of the OS, so filesystem implementations have to implement that API and thus follow certain rules, whereas databases are built by third parties that have complete freedom.
Historically, databases were created when the filesystems provided by the OS were not good enough for the problem at hand. Just think about it: if you had special requirements, you couldn't just call Microsoft or Apple to redesign their filesystem API. You would either go ahead and write your own storage software or you would look around for existing alternatives. So the need created a market for third-party data storage software which ended up being called databases. That's about it.
While it may seem that filesystems have certain rules, like having files and directories, this is not true. The biggest operating systems work like that, but there are many small OSes that work differently. It's certainly not a hard requirement. (Just remember, to build a new filesystem you also need to write a new OS, which will make adoption quite a bit harder. Why not focus on just the storage engine and call it a database instead?)
In the end, both databases and filesystems come in all shapes and sizes. Transactional, relational, hierarchical, graph, tabled; whatever you can think of.
[1] I've worked on the Boomla Filesystem which is the storage system behind the Boomla OS & Web Application Platform.
The main differences between database and file system storage are:
The database is a software application used to insert, update and delete data, while the file system is software used to add, update and delete files.
Saving and retrieving files is simpler in a file system, while SQL needs to be learned to perform any query on the database to get (SELECT), add (INSERT) and update data.
The database provides a proper data recovery process, while the file system does not.
In terms of security, the database is more secure than the file system (usually).
The migration process is very easy in a file system: just copy and paste into the target. For a database, this task is not as simple.
I am writing an application which parses a large file, generates a large amount of data, and does some complex visualization with it. Since all this data can't be kept in memory, I did some research and I'm starting to consider embedded databases as a temporary container for this data.
My question is: is this a traditional way of solving this problem? And is an embedded database (other than structuring data) supposed to manage data by keeping in memory only a subset (like a cache), while the rest is kept on disk? Thank you.
Edit: to clarify: I am writing a desktop application. The application will be given an input file hundreds of MB in size. After reading the file, the application will generate a large number of graphs which will be visualized. Since the graphs may have such a large number of nodes, they may not fit into memory. Should I save them into an embedded database which will take care of keeping only the relevant data in memory? (Do embedded databases do that?) Or should I write my own sophisticated module which does that?
Tough question - but I'll share my experience and let you decide if it helps.
If you need to retain the output from processing the source file, and you use that to produce multiple views of the derived data, then you might consider using an embedded database. The reasons to use an embedded database (IMHO):
To take advantage of RDBMS features (ACID, relationships, foreign keys, constraints, triggers, aggregation...)
To make it easier to export the data in a flexible manner
To enable access to your processed data to external clients (known format)
To allow more flexible transformation of the data when preparing for viewing
Factors which you should consider when making the decision:
What is the target platform(s) (Windows, Linux, Android, iPhone, PDA)?
What technology base? (Java, .Net, C, C++, ...)
What resource constraints are expected or need to be designed for? (RAM, CPU, HD space)
What operational behaviours do you need to take into account (connected to network, disconnected)?
On the typical modern desktop there is enough spare capacity to handle most operations. On eeePCs, PDAs, and other portable devices, maybe not. On embedded devices, very likely not. The language you use may have built-in features to help with memory management - maybe you can take advantage of those. The connectivity aspect (stateful / stateless / etc.) may impact how much you really need to keep in memory at any given point.
If you are dealing with really big files, then you might consider a streaming process approach so you only have in memory a small portion of the overall data at a time - but that doesn't really mean you should (or shouldn't) use an embedded database. Straight text or binary files could work just as well (record based, column based, line based... whatever).
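A minimal sketch of that streaming idea in Python, folding over the source file in fixed-size chunks so only one chunk is in memory at a time (the file name, chunk size, and per-chunk work are placeholders):

```python
def process(chunk: bytes) -> int:
    return len(chunk)                     # placeholder for the real per-chunk transform

total = 0
with open("source.dat", "rb") as f:       # hypothetical input file
    while (chunk := f.read(4 * 1024 * 1024)):   # 4 MiB at a time
        total += process(chunk)
print(total)
```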
Some databases will allow you more effective ways to interact with the data once it is stored - it depends on the engine. I find that if you have a lot of aggregation required in your base files (by which I mean the files you generate initially from the original source) then an RDBMS engine can be very helpful to simplify your logic. Other options include building your base transform and then adding additional steps to process that into other temporary stores for each specific view, which are then in turn processed for rendering to the target (report?) format.
Just a stream-of-consciousness response - hope that helps a little.
Edit:
Per your further clarification, I'm not sure an embedded database is the direction you want to take. You either need to make some sort of simplifying assumptions for rendering your graphs or investigate methods like segmentation (render sections of the graph and then cache the output before rendering the next section).