sqlite3 when disk storage limit is reached

We have 2 MB of storage for logs on our embedded device (Linux-based). As the size is very limited, we have to implement some approach to handle the case where the maximum size is reached. One option is a circular buffer with mmap for persistence. The other option we are considering is sqlite3 (when the maximum size is reached, delete the oldest entries and insert the new ones).
However, as far as I understand, sqlite3 stores data in pages (4096 bytes by default, configurable). My questions are:
How do I calculate disk usage for sqlite3? Besides the database file size, what else needs to be counted?
What happens when the 2 MB limit is reached? Is there a particular error or piece of information I can check so that I know to delete the oldest entries?
Is it a good approach (performance-wise and fragmentation-wise) to delete old entries and then insert new ones?
Any suggestions or feedback are welcome.

It is not possible to calculate the disk usage in advance; you have to monitor the file. Besides the actual database file, there is also the rollback journal, whose size corresponds to the amount of data changed in a transaction.
When the disk (or the configured maximum page count) is full, you get an error code of SQLITE_FULL (or possibly SQLITE_IOERR_WRITE, depending on the OS).
You can limit the database size with PRAGMA max_page_count.
Deleted rows result in more free space in that particular database page. (This never changes the file size unless you run VACUUM.)
When you insert new rows at the other end of the table, a page's space can be reused only once the entire page has been freed, i.e. once all of its rows have been deleted.
So you should try to delete rows in large chunks, if possible.
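For illustration, here is a minimal sketch of that capped-log approach using Python's sqlite3 module. The file path, table name and chunk size are made up, and note that the rollback journal is a separate file, so leave a little headroom under the 2 MB budget. Python reports SQLITE_FULL as an OperationalError whose message contains "database or disk is full", which is what the (approximate) string check below relies on.

    import sqlite3

    PAGE_SIZE = 4096
    MAX_PAGES = (2 * 1024 * 1024) // PAGE_SIZE    # cap the main file at roughly 2 MB

    con = sqlite3.connect("/var/log/applog.db")   # hypothetical path on the device
    con.execute(f"PRAGMA page_size = {PAGE_SIZE}")       # only effective before the first table exists
    con.execute(f"PRAGMA max_page_count = {MAX_PAGES}")  # growth beyond this fails with SQLITE_FULL
    con.execute("CREATE TABLE IF NOT EXISTS log (id INTEGER PRIMARY KEY, msg TEXT)")

    def append_log(msg, chunk=200):
        while True:
            try:
                with con:
                    con.execute("INSERT INTO log (msg) VALUES (?)", (msg,))
                return
            except sqlite3.OperationalError as exc:
                if "full" not in str(exc).lower():   # not SQLITE_FULL: re-raise
                    raise
                # Delete the oldest rows in one large chunk so that whole pages
                # are freed and can be reused by later inserts (no VACUUM needed).
                with con:
                    con.execute(
                        "DELETE FROM log WHERE id IN "
                        "(SELECT id FROM log ORDER BY id LIMIT ?)",
                        (chunk,),
                    )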

Related

What is the best practice for using shrink in SQL Server?

I have read in many places that shrinking a database is not recommended practice, as it causes fragmentation that leads to slower performance.
Refs:
https://www.brentozar.com/archive/2017/12/whats-bad-shrinking-databases-dbcc-shrinkdatabase/
https://straightpathsql.com/archives/2009/01/dont-touch-that-shrink-button/
But that appears to apply to the data file; if the log is full, shrinking the log should not be a problem, right?
If the data file is huge and takes a lot of space, and I need more space to insert and update some new data, shrinking apparently reduces the size of the file on the drive, which I assumed would give me free space to insert new data. But if shrinking is not recommended, how do I resolve this? And when is the best case to use shrink?
If the data file is huge and takes a lot of space, and I need more space to insert and update some new data, shrinking apparently reduces the size of the file on the drive, which I assumed would give me free space to insert new data.
If your data file takes a lot of space, that does not mean this space is empty.
You should use sp_spaceused to determine whether there is unused space within the data file.
If there is unused space, it will already be used "to insert and update some new data"; and if there isn't, doing a shrink will change nothing: shrink does not delete your data, all it does is move data toward the beginning of the file to make space at the end so that it can be given back to the OS.
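For example, a rough sketch of that check driven from Python via pyodbc; the driver, server and database names are placeholders. With no arguments, sp_spaceused returns two result sets: database-level totals (database_size, unallocated space) followed by reserved/data/index_size/unused.

    import pyodbc

    # Placeholder connection details: adjust driver, server and database names.
    con = pyodbc.connect(
        "DRIVER={ODBC Driver 17 for SQL Server};"
        "SERVER=myserver;DATABASE=MyDb;Trusted_Connection=yes;"
    )
    cur = con.cursor()

    cur.execute("EXEC sp_spaceused")
    print(cur.fetchone())       # database_name, database_size, unallocated space
    if cur.nextset():
        print(cur.fetchone())   # reserved, data, index_size, unused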
Shrinking the data file can be useful when you have a 2 TB data file, 1 TB of data has been deleted, and you don't plan to insert another terabyte of data in the next 10 years.
You can imagine your data file as a box measuring 1 m x 1 m x 1 m. If only half of the box is full of toys, you can still put more toys into it (insert/update) even if you don't shrink it. What shrink does instead is gather all the toys into one corner and then cut the box down to 50 cm x 50 cm x 50 cm. This way your room (the OS) has more free space, because your toy box now takes only half the space it took before the shrink.
...And if your box was already full, you cannot add more toys even if you try to shrink it.
If the log is full, shrinking the log should not be a problem, right?
Shrinking the log is a different process; nothing is moved inside the log file, so in this sense shrink cannot do as much harm as it can to the data file: it does not require server resources, it does not cause any fragmentation, etc.
But whether it succeeds depends on why your log is full.
If your log is full because the database uses the full recovery model, shrinking the log file will not change anything: the log is retained to preserve the log backup chain (or to make mirroring, log shipping, etc. possible).
If instead your database uses the simple recovery model, and the log grew because of a transaction that was open for a long period of time, or because of a huge data load (perhaps fully logged, such as INSERT INTO without TABLOCK) until the log file became bigger than the data file, and you have found and fixed the problem and don't need such a huge log file, then yes, you can shrink it to a reasonable size, and it is not harmful.
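In that last case, a sketch of what the shrink itself might look like, again from Python via pyodbc with placeholder connection details and an arbitrary 1024 MB target size:

    import pyodbc

    con = pyodbc.connect(
        "DRIVER={ODBC Driver 17 for SQL Server};"
        "SERVER=myserver;DATABASE=MyDb;Trusted_Connection=yes;",
        autocommit=True,
    )
    cur = con.cursor()

    # Look up the logical name of the log file for the current database.
    cur.execute("SELECT name FROM sys.database_files WHERE type_desc = 'LOG'")
    log_name = cur.fetchone()[0]

    # Shrink the log back to a reasonable size, here 1024 MB (adjust to taste).
    cur.execute(f"DBCC SHRINKFILE ({log_name}, 1024)")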

Why does SQLite store hundreds of null bytes?

In a database I'm creating, I was curious why the size was so much larger than the contents, so I checked out the hex dump. In a 4 kB file (a single row as a test), there are two major chunks that are roughly 900 and 1000 bytes, along with a couple of smaller ones, that are all null bytes (0x00).
I can't think of any logical reason it would be advantageous to store thousands of null bytes, increasing the size of the database significantly.
Can someone explain this to me? I've tried searching, and haven't been able to find anything.
The structure of an SQLite database file (*.sqlite) is described on this page:
https://www.sqlite.org/fileformat.html
SQLite files are partitioned into "pages" which are between 512 and 65536 bytes long - in your case I imagine the page size is probably 1 KiB. If you're storing data that's smaller than 1 KiB (as you are with your single test row, which I imagine is maybe 100 bytes long?), that leaves roughly 900 bytes unused - and unused (deallocated) space is usually zeroed-out before (and after) use.
It's the same way computer working memory (RAM) works - as RAM also uses paging.
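You can check the page accounting yourself with a small sketch like the one below (Python's sqlite3 module; the file and table names are made up): the file size is always page_size x page_count, however little real data those pages hold, and the unused remainder of each page is zero bytes.

    import os
    import sqlite3

    path = "test.db"                               # hypothetical single-row test database
    con = sqlite3.connect(path)
    con.execute("CREATE TABLE IF NOT EXISTS t (id INTEGER PRIMARY KEY, val TEXT)")
    con.execute("INSERT INTO t (val) VALUES ('hello')")
    con.commit()

    page_size = con.execute("PRAGMA page_size").fetchone()[0]
    page_count = con.execute("PRAGMA page_count").fetchone()[0]
    con.close()

    # One page for sqlite_master plus at least one per table; the file size
    # always equals page_size * page_count.
    print(page_size, page_count, page_size * page_count, os.path.getsize(path))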
I imagine you expected the file to be very compact with a terse internal representation; this is the case with some file formats - such as old-school OLE-based Office documents - but others (and especially database files) require a different file layout, one optimized simultaneously for quick access and quick insertion of new data, and arranged to help prevent internal fragmentation - and this comes at the cost of some wasted space.
A quick thought-experiment will demonstrate why mutable (i.e. non-read-only) databases cannot use a compact internal file structure:
Think of a single database table as being like a CSV file (and CSVs themselves are compact enough with very little wasted space).
You can INSERT new rows by appending to the end of the file.
You can DELETE an existing row by simply overwriting the row's space in the file with zeroes. Note that you cannot actually "delete" the space by "moving" data (like using the Backspace key in Notepad) because that means copying all of the data in the file around - this is largely a bad idea.
You can UPDATE a row by checking whether the new row's width will fit in the current space (and overwriting the remaining space with zeros); if not, you append the new version at the end and zero out the existing row (a la INSERT then DELETE).
But what if you have two database tables (with different columns) and need to store them in the same file? One approach is to simply mix each table's rows in the same flat file - but for other reasons that's a bad idea. So instead, inside your entire *.sqlite file, you create "sub-files", that have a known, fixed size (e.g. 4KiB) that store only rows for a single table until the sub-file is full; they also store a pointer (like a linked-list) to the next sub-file that contains the rest of the data, if any. Then you simply create new sub-files as you need more space inside the file and set-up their next-file pointers. These sub-files are what a "page" is in a database file, and is how you can have multiple read/write database tables contained within the same parent filesystem file.
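Here is a toy sketch of that "sub-file" idea - not SQLite's actual format, just the general shape: fixed-size pages, a small header holding a next-page pointer and a used-bytes count, and rows appended until a page is full. Whatever part of a page is not yet used stays as 0x00 bytes, which is exactly the padding seen in the hex dump.

    import struct

    PAGE_SIZE = 4096
    HEADER = struct.Struct("<iH")        # next page number (-1 = none), bytes used

    def new_page():
        page = bytearray(PAGE_SIZE)      # a fresh page is all zero bytes
        HEADER.pack_into(page, 0, -1, 0)
        return page

    def append_row(page, row):
        nxt, used = HEADER.unpack_from(page, 0)
        start = HEADER.size + used
        if start + len(row) > PAGE_SIZE:
            return False                 # caller must allocate and link a new page
        page[start:start + len(row)] = row
        HEADER.pack_into(page, 0, nxt, used + len(row))
        return True

    def link(page, next_page_number):
        _, used = HEADER.unpack_from(page, 0)
        HEADER.pack_into(page, 0, next_page_number, used)

    pages = [new_page()]
    for i in range(500):
        row = (f"row {i}: " + "x" * 100).encode()
        if not append_row(pages[-1], row):
            pages.append(new_page())
            link(pages[-2], len(pages) - 1)
            append_row(pages[-1], row)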
Then, in addition to these pages that store table data, you also need to store the indexes (which are what allow you to locate a table row near-instantly without needing to scan the entire table or file) and other metadata, such as the column definitions themselves - and often they're stored in pages too. Relational (tabular) database files can be considered filesystems in their own right (just encapsulated in a parent filesystem... which could be inside a *.vhd file... which could be buried inside a varbinary database column... inside another filesystem), and database systems themselves have been compared to operating systems: they offer an environment for programs (stored procedures) to run, they offer IO services, and so on. It's almost circular if you look at the old COBOL-based mainframes from the 1970s, when all of your IO operations were restricted to record-management operations (insert, update, delete).

Are table-scan results held in memory negating the benefit of indexes?

Theoretical SQL Server 2008 question:
If a table-scan is performed on SQL Server with a significant amount of 'free' memory, will the results of that table scan be held in memory, thereby negating the efficiencies that may be introduced by an index on the table?
Update 1: The tables in question contain reference data with approx. 100 - 200 records per table (I do not know the average size of each row), so we are not talking about massive tables here.
I have spoken to the client about introducing a memcached / AppFabric Cache solution for this reference data, however that is out of scope at the moment and they are looking for a 'quick win' that is minimal risk.
Every page read in the scan will be read into the buffer pool and only released under memory pressure as per the cache eviction policy.
Not sure why you think that would negate the efficiencies that may be introduced by an index on the table though.
An index likely means that far fewer pages need to be read, and even if all pages are already in cache so that no physical reads are required, reducing the number of logical reads is still a good thing. Logical reads are not free; they still carry overhead for locking and reading the pages.
Besides the performance problem (even when all pages are in memory, a scan is still going to be many, many times slower than an index seek on any table of significant size), there is an additional issue: contention.
The problem with scans is that every operation has to visit every row. This means that any select will block behind any insert/update/delete (since it is guaranteed to visit the rows locked by those operations). The effect is essentially serialization of operations, which adds huge latency, as SELECTs now have to wait for DML to commit every time. Even under mild concurrency the effect is an overall sluggish, slow-to-respond table. With indexes present, operations only look at rows in the ranges of interest and this, by virtue of simple probability, reduces the chances of conflict. The result is a much livelier, more responsive, lower-latency system.
Full table scans also do not scale as the data grows. It's very simple: as more data is added to a table, a full table scan must process more data to complete, and therefore it will take longer. It will also produce more disk and memory requests, further putting strain on your hardware.
Consider a 1,000,000-row table that a full table scan is performed on. SQL Server reads data in the form of 8 KB data pages. Although the amount of data stored within each page can vary, let's assume that on average 50 rows of data fit in each of these 8 KB pages for our example. To perform a full scan that reads every row, 20,000 page reads are required (1,000,000 rows / 50 rows per page). That equates to about 156 MB of data that has to be processed, just for this one query. Unless you have a really fast disk subsystem, it may take a while to retrieve and process all of that data. Now, let's assume this table doubles in size each year. Next year, the same query must read 312 MB of data just to complete.
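Written out as a quick back-of-the-envelope calculation (the 50 rows per page figure is the assumed average from the paragraph above):

    rows = 1_000_000
    rows_per_page = 50                 # assumed average rows per 8 KB page
    page_bytes = 8 * 1024              # SQL Server reads 8 KB pages

    pages_read = rows // rows_per_page              # 20,000 page reads for one scan
    mb_scanned = pages_read * page_bytes / 1024**2  # about 156 MB for one query
    print(pages_read, f"{mb_scanned:.2f} MB")       # 20000 156.25 MB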
Please refer to this link: http://www.datasprings.com/resources/articles-information/key-sql-performance-situations-full-table-scan

How to instantly query a 64 GB database

OK everyone, I have an excellent challenge for you. Here is the format of my data:
ID-1 COL-11 COL-12 ... COL-1P
...
ID-N COL-N1 COL-N2 ... COL-NP
ID is my primary key and index. I only use ID to query my database. The data model is very simple.
My problem is as follows:
I have 64+ GB of data as defined above, and in a real-time application I need to query my database and retrieve the data instantly. I was thinking about two solutions, but they have proven impossible to set up.
First, use sqlite or mysql: one table is needed, with one index on the ID column. The problem is that the database will be too large to give good performance, especially with sqlite.
Second, store everything in memory in a huge hashtable. RAM is the limit.
Do you have another suggestion? How about serializing everything to the filesystem and then, on each query, storing the queried data in a cache system?
When I say real-time, I mean about 100-200 queries/second.
A thorough answer would take into account data access patterns. Since we don't have these, we just have to assume an equal probability that any given row will be accessed next.
I would first try using a real RDBMS, either embedded or a local server, and measure the performance. If this gives 100-200 queries/sec then you're done.
Otherwise, if the format is simple, then you could create a memory mapped file and handle the reading yourself using a binary search on the sorted ID column. The OS will manage pulling pages from disk into memory, and so you get free use of caching for frequently accessed pages.
Cache use can be optimized more by creating a separate index, and grouping the rows by access pattern, such that rows that are often read are grouped together (e.g. placed first), and rows that are often read in succession are placed close to each other (e.g. in succession.) This will ensure that you get the most back for a cache miss.
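As a sketch of that approach - assuming the rows are exported as fixed-size records sorted by ID, here an 8-byte little-endian ID followed by a 120-byte payload, in a hypothetical data.bin - a binary search over a memory-mapped file looks roughly like this:

    import mmap
    import struct

    ID = struct.Struct("<q")           # assumed: 8-byte little-endian integer IDs
    RECORD_SIZE = 8 + 120              # assumed: ID followed by a 120-byte payload

    f = open("data.bin", "rb")         # hypothetical export, sorted by ID
    buf = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
    n_records = len(buf) // RECORD_SIZE

    def lookup(target_id):
        lo, hi = 0, n_records - 1
        while lo <= hi:                            # binary search on the sorted IDs
            mid = (lo + hi) // 2
            off = mid * RECORD_SIZE
            (rec_id,) = ID.unpack_from(buf, off)
            if rec_id == target_id:
                return buf[off + 8:off + RECORD_SIZE]   # the payload bytes
            elif rec_id < target_id:
                lo = mid + 1
            else:
                hi = mid - 1
        return None                                # ID not present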
Given the way the data is used, you should do the following:
Create a record structure (fixed size) that is large enough to contain one full row of data
Export the original data to a flat file that follows the format defined in step 1, ordering the data by ID (incremental)
Do a direct access on the file and leave caching to the OS. To get record number N (0-based), you multiply N by the size of a record (in bytes) and read the record directly from that offset in the file.
Since you're in read-only mode, and assuming you're storing your file on a random-access medium, this scales very well and it doesn't depend on the size of the data: each fetch is a single read from the file. You could try some fancy caching system, but I doubt it would gain you much in terms of performance unless you have a lot of requests for the same data row (and the OS you're using is doing poor caching). Make sure you open the file in read-only mode, though, as this should help the OS figure out the optimal caching mechanism.
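A minimal sketch of step 3, assuming the fixed record layout from step 1 is an 8-byte ID plus a 120-byte payload (128 bytes per record) and the flat file from step 2 is called data.bin:

    import struct

    RECORD_SIZE = 128                      # assumed fixed size from step 1
    RECORD = struct.Struct("<q120s")       # assumed layout: 8-byte ID + 120-byte payload

    def get_record(f, n):
        # Record number n lives at byte offset n * RECORD_SIZE: one seek, one read.
        f.seek(n * RECORD_SIZE)
        data = f.read(RECORD_SIZE)
        if len(data) < RECORD_SIZE:
            return None                    # past the end of the file
        return RECORD.unpack(data)

    with open("data.bin", "rb") as f:      # read-only, as recommended above
        print(get_record(f, 12345))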

How do databases deal with data tables that cannot fit in memory?

Suppose you have a really large table, say a few billion unordered rows, and now you want to index it for fast lookups. Or maybe you are going to bulk load it and order it on the disk with a clustered index. Obviously, when you get to a quantity of data this size you have to stop assuming that you can do things like sorting in memory (well, not without going to virtual memory and taking a massive performance hit).
Can anyone give me some clues about how databases handle large quantities of data like this under the hood? I'm guessing there are algorithms that use some form of smart disk caching to handle all the data but I don't know where to start. References would be especially welcome. Maybe an advanced databases textbook?
Multiway merge sort is the keyword for sorting amounts of data that are too large to fit in memory.
As far as I know, most indexes use some form of B-tree, which does not need to have everything in memory. You can simply put the nodes of the tree in a file and then jump to various positions in the file. This can also be used for sorting.
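A compact sketch of the external (multiway) merge sort idea: sort chunks that do fit in memory, spill each sorted run to a temporary file, then do a k-way merge of the runs so only one line per run is in memory at a time. The chunk size and file names are arbitrary, and it assumes every input line ends with a newline.

    import heapq
    import tempfile

    def _spill(sorted_lines):
        # Write one sorted run to a temporary file and rewind it for merging.
        run = tempfile.TemporaryFile("w+t")
        run.writelines(sorted_lines)
        run.seek(0)
        return run

    def external_sort(lines, max_lines_in_memory=1_000_000):
        runs, chunk = [], []
        for line in lines:
            chunk.append(line)
            if len(chunk) >= max_lines_in_memory:   # this chunk fits in RAM: sort and spill
                runs.append(_spill(sorted(chunk)))
                chunk = []
        if chunk:
            runs.append(_spill(sorted(chunk)))
        return heapq.merge(*runs)                   # k-way merge of the sorted runs

    with open("huge_unsorted.txt") as src, open("sorted.txt", "w") as dst:
        dst.writelines(external_sort(src, max_lines_in_memory=100_000))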
Are you building a database engine?
Edit: I built a disc-based database system back in the mid '90s.
Fixed size records are the easiest to work with because your file offset for locating a record can be easily calculated as a multiple of the record size. I also had some with variable record sizes.
My system needed to be optimized for reading. The data was actually stored on CD-ROM, so it was read-only. I created binary search tree files for each column I wanted to search on. I took an open source in-memory binary search tree implementation and converted it to do random access of a disc file. Sorted reads from each index file were easy and then reading each data record from the main data file according to the indexed order was also easy. I didn't need to do any in-memory sorting and the system was way faster than any of the available RDBMS systems that would run on a client machine at the time.
For fixed-size records, the index can just keep track of the record number. For variable-length data records, the index just needs to store the offset within the file where the record starts, and each record needs to begin with a structure that specifies its length.
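A small sketch of the variable-length case: each record starts with its own length, and the "index" is simply a list of file offsets (names and formats here are made up).

    import struct

    LEN = struct.Struct("<I")          # each record starts with a 4-byte length prefix

    def write_records(path, records):
        offsets = []                   # the "index": record number -> file offset
        with open(path, "wb") as f:
            for payload in records:
                offsets.append(f.tell())
                f.write(LEN.pack(len(payload)))
                f.write(payload)
        return offsets

    def read_record(path, offset):
        with open(path, "rb") as f:
            f.seek(offset)
            (length,) = LEN.unpack(f.read(LEN.size))
            return f.read(length)

    offsets = write_records("records.bin", [b"short", b"a much longer record", b"x" * 500])
    print(read_record("records.bin", offsets[1]))    # b'a much longer record'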
You would have to partition your data set in some way and spread each partition across a separate server's RAM. If you had a billion 32-bit ints, that's 4 GB of RAM right there - and that's only your index.
For low-cardinality data, such as gender (only two possible values: male, female), you can represent each index entry in less than a byte. Oracle uses a bitmap index in such cases.
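A toy sketch of the bitmap-index idea (not Oracle's actual implementation): one bitmap per distinct value, with bit i set when row i holds that value, so each row costs roughly one bit per distinct value and predicates combine with plain bitwise AND/OR.

    # Made-up low-cardinality column.
    rows = ["M", "F", "F", "M", "F", "M", "M"]

    bitmaps = {}
    for i, value in enumerate(rows):
        bitmaps[value] = bitmaps.get(value, 0) | (1 << i)   # set bit i in that value's bitmap

    def matching_rows(value):
        bm = bitmaps.get(value, 0)
        return [i for i in range(len(rows)) if bm >> i & 1]

    print(matching_rows("F"))          # [1, 2, 4]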
Hmm... Interesting question.
I think most widely used database management systems rely on the operating system's memory-management mechanisms, and when physical memory runs out, in-memory tables go to swap.
