Are there drawbacks to the sqlite3 auto_vacuum pragma?

I'm working with a sqlite3 database that could conceivably become quite large. Storage space is a concern, so I was considering setting the auto_vacuum pragma to on, so that space occupied by deleted rows would actually be freed instead of just being marked as available for re-use.
In my scenario, the database could grow by several hundred MB per month, while rows older than ~6 months would decay in a granular fashion. This is achieved by a job queue that randomly tacks on the task of removing the n oldest records in addition to the current task, where n is determined by how many high priority tasks there are in the queue.
I'm hoping that this avoids having to write maintenance jobs that cause protracted RW starvation (on the order of minutes, to delete rows and then run VACUUM) when the same could be achieved a few ms at a time. This might mean that 'old' rows remain in the DB a few days longer than they would otherwise, but that is an acceptable trade-off.
My question is, in your experience (and perhaps opinion), would turning on auto_vacuum be an unacceptable compromise given my description? If so, for what reasons? I have not used sqlite3 extensively, much less the various pragmas it presents for tweaking so I am hoping to solicit the experience I'm lacking prior to making a judgement call that I might regret a few months from now :)
I'm using the C interface, if it makes any difference.

A Liferea developer explains why:
The problem with it is that it also takes very long. With a 50MB DB file I experienced a runtime of over 1 minute. This is why it can only be a tool for experienced users who know how to do it manually and know what to expect. Executing such a long-running operation automatically at runtime would surely be unacceptable to the unsuspecting user. Also, there is no good way to decide when to do a VACUUM to save disk space and improve performance.
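For what it's worth, SQLite also has an incremental mode that fits the "a few ms at a time" approach described in the question. Below is a minimal sketch using Python's built-in sqlite3 module (the same pragmas are available through the C API); the database path and page thresholds are purely illustrative, not from the original setup.

# Minimal sketch: incremental vacuuming in small, bounded steps.
# The path "queue.db" and the page counts are illustrative assumptions.
import sqlite3

conn = sqlite3.connect("queue.db")
conn.isolation_level = None  # autocommit; VACUUM cannot run inside a transaction

# auto_vacuum must be set before the first table is created; switching an
# existing database requires running a full VACUUM once for it to take effect.
conn.execute("PRAGMA auto_vacuum = INCREMENTAL")
conn.execute("VACUUM")

# Later, interleaved with normal work: reclaim a bounded number of free pages,
# keeping each pause short instead of one long exclusive VACUUM.
free_pages = conn.execute("PRAGMA freelist_count").fetchone()[0]
if free_pages > 1000:
    conn.execute("PRAGMA incremental_vacuum(200)")  # free at most 200 pages
conn.close()

With auto_vacuum = FULL, by contrast, truncation happens automatically on every commit, which is closer to what the question proposes but gives less control over when the work is done.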

Related

What operations are O(n) on the number of tables in PostgreSQL?

Let's say, theoretically, I have a database with an absurd number of tables (100,000+). Would that lead to any sort of performance issues, provided that most queries (99%+) will only run on 2-3 tables at a time?
Therefore, my question is this:
What operations are O(n) on the number of tables in PostgreSQL?
Please note, no answers about how this is bad design, or how I need to plan out more about what I am designing. Just assume that for my situation, having a huge number of tables is the best design.
pg_dump and pg_restore and pg_upgrade are actually worse than that, being O(N^2). That used to be a huge problem, although in recent versions the constant on that N^2 has been reduced so low that for 100,000 tables it is probably not enough to be your biggest problem. However, there are worse cases, like dumping tables can be O(M^2) (maybe M^3, I don't recall the exact details anymore) for each table, where M is the number of columns in the table. This only applies when the columns have check constraints or defaults or other additional info beyond a name and type. All of these problems are particularly nasty when you have no operational problems to warn you, but then suddenly discover you can't upgrade within a reasonable time frame.
Some physical backup methods, like barman using rsync, are also O(N^2) in the number of files, which is at least as great as the number of tables.
During normal operations, the stats collector can be a big bottleneck. Every time someone requests updated stats on some table, it has to write out a file covering all tables in that database. Writing this out is O(N) in the number of tables in that database. (It used to be worse, writing out one file for the whole instance, not just the database.) Some filesystems make this even worse, because renaming one file over the top of an existing one implicitly fsyncs the file; putting the stats file on a RAM disk can at least ameliorate that.
The autovacuum workers loop over every table (roughly once per autovacuum_naptime) to decide if it needs to be vacuumed, so a huge number of tables can slow this down. This can also be worse than O(N), because for each table there is some possibility it will request updated stats on it. Worse, it could block all concurrent autovacuum workers while doing so (this last part was fixed in a backpatch for all supported versions).
Another problem you might run into is that each database backend maintains a cache of metadata on each table (or other object) it has accessed during its lifetime. There is no mechanism for expiring this cache, so if each connection touches a huge number of tables, it will start consuming a lot of memory, and there is one copy for each backend, as it is not shared. If you have a connection pooler which holds connections open indefinitely, this can really add up, as each connection lives long enough to touch many tables.
pg_dump with some options, probably -s. Some other options make it depend more on the size of the data.
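If you want to see where these costs start to bite on your own hardware, one rough approach is to generate a scratch database with N throwaway tables and time pg_dump -s as N grows. A minimal sketch, assuming psycopg2 and a disposable local database named tabletest (both names are illustrative):

# Hypothetical benchmark sketch: create N empty tables in a scratch database,
# then time `pg_dump -s` from the shell as N grows to observe super-linear cost.
import psycopg2

N = 10_000  # number of throwaway tables to create

conn = psycopg2.connect(dbname="tabletest")
conn.autocommit = True
with conn.cursor() as cur:
    for i in range(N):
        # include a default and a check constraint, since those are the kind of
        # per-column extras the answer says make dumping each table costlier
        cur.execute(
            f"CREATE TABLE IF NOT EXISTS t_{i} ("
            f"  id int DEFAULT 0 CHECK (id >= 0),"
            f"  payload text"
            f")"
        )
conn.close()

# Then, from a shell:
#   time pg_dump -s tabletest > /dev/null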

Table containing TEXT column growing continuously

We've got a table in a production system which (for legacy reasons) is running SQL 2005 (9.0.5266) and contains a TEXT column (along with a few other columns of various datatypes).
All of a sudden (since a week ago) we noticed the size of this one table increasing linearly by 10-15GB per day (whereas previously it has always remained at a constant size). The table is a queue for a messaging system, and as such the data in it completely refreshes itself every few seconds. At any one time there could be anywhere from 0 to around 1000 rows, but it fluctuates rapidly as messages are inserted, and sent (at which point they're deleted).
We can't find anything that was changed on the day the growth started - so have no obvious potential cause identified at this stage.
One "obvious" culprit is the TEXT column, and so we checked to see if any massive values were now being stored, but (using DATALENGTH) we found no single rows above around ~32k. We've run CHECKDB, updated space usage, rebuild all indexes, etc - nothing reduces the size (and CHECKDB showed no errors).
We've queried sys.allocation_units and the size increase is definitely LOB_DATA (which show total_pages and used_pages increasing together at a constant rate).
To reduce the database size last night we simple created a new table along-side the one in question (which is luckily referenced via a view by the application), dropped the old table, and renamed the new one. We left last night, taking comfort in the fact that we'd alleviated the space issues, and that we had a backup of the dodgy table to investigate further today.
However, this morning the table size is already up to 14GB (and growing), while there are only the usual ~500 rows in the table, and MAX(DATALENGTH(text_column)) is only showing around 35k.
Any ideas as to what could be causing this "runaway" growth, or anything else that we could try or query to get more information about what exactly is using the space?
Cheers,
Dave
This is a general problem in dealing with queues. The linked article talks about Service Broker queues, but the issue is the same for ordinary tables used as queues. If you have a busy system with generous resources (CPU, memory, disk IO) and you push a queue on this system to high throughput, then a large portion of those resources will be used to handle the two operations: enqueue (i.e. INSERT) and dequeue (i.e. DELETE). However, the full lifecycle of a record requires three operations: INSERT, DELETE and ghost purge. They cost roughly the same in terms of CPU/memory/disk IO, so if that queue uses, say, 90% of the system resources, then you should allocate 30% of the resources to each. But only the first two are under your control (i.e. explicit statements running in user sessions). The third one, the ghost purge, is a background process controlled by SQL Server, and there is no chance the ghost cleanup process will be allowed to consume 30% of the resources. This is a fundamental issue and, if you push the pedal to the metal for long enough, you will hit it. Once ghost records accumulate past a system/workload-specific threshold, performance will degrade quickly and the symptoms will spiral toward abysmal performance (a vicious cycle forms).
Luckily, since you do not use Service Broker queues but real tables as queues, you have some better tools at your disposal, like ALTER TABLE REORGANIZE and ALTER TABLE REBUILD. By far the best solution is an online index/table rebuild. SQL Server 2012 supports online operations on tables containing BLOBs, and you can leverage that. Of course, you would have to get rid of the deprecated TEXT type and use VARCHAR(MAX), but that goes without saying.
As a side note:
If you have pages with nothing but ghost records on them, then you will not read those pages again and they won't get marked for cleanup
This is incorrect. Pages with nothing but ghosts will be detected and purged by scans. As I said, the issue is not detection, it is resources. If you push your system hard enough, you will race ahead of the ghost cleanup and it will never catch up.
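One way to watch for this situation (not mentioned in the original answer) is to track the ghost record counters exposed by sys.dm_db_index_physical_stats. A rough monitoring sketch, assuming pyodbc, a DSN named prod, and a hypothetical queue table dbo.MessageQueue:

# Hypothetical monitoring sketch: check whether ghost records are piling up.
# DSN, database, and table names are illustrative assumptions.
import pyodbc

conn = pyodbc.connect("DSN=prod")
cur = conn.cursor()
# 'DETAILED' (or 'SAMPLED') is needed; ghost_record_count is NULL in LIMITED mode.
cur.execute("""
    SELECT index_id, alloc_unit_type_desc, page_count,
           record_count, ghost_record_count
    FROM sys.dm_db_index_physical_stats(
             DB_ID(), OBJECT_ID('dbo.MessageQueue'), NULL, NULL, 'DETAILED')
""")
for row in cur.fetchall():
    # A ghost_record_count that keeps climbing while record_count stays flat
    # suggests cleanup is falling behind the DELETE rate.
    print(row.index_id, row.alloc_unit_type_desc, row.page_count,
          row.record_count, row.ghost_record_count)
conn.close()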
Early this morning I restarted the SQL service on the instance with this "problem queue table". It appears that this has fixed the issue. Immediately following the restart, I monitored the LOB_DATA page-in-use count, and it started dropping straight away. It was being cleaned up quite slowly, so probably took around an hour or two to reclaim the 60+GB of space being held (I went to bed after I'd made sure all was well).
At the moment the table is back to normal as far as in-use allocations (hovering around <100 pages), and is not showing any signs of re-growing.
Given the fact that we have used this table in the same way (i.e. as a queue) for at least 10 years, and it has had busier periods than what we've had over the past week or two, I would've been surprised if it was the issue described by Remus above (although I understand how that can occur; I guess this specific queue just isn't quite busy enough to swamp the ghost cleanup process?). Very strange...
Thanks again for the help guys!

Multiple writes in a relational database

I'm pretty sure that with a relational database, it's faster and better to read 50 records at once than to make 50 calls for one record each. Is there a performance benefit from performing multiple writes all at once? If not, why not?
It probably depends on the RDBMS and the storage engine, but at least in MySQL/InnoDB, multiple writes in one transaction (as well as the multi-insert syntax, which, AFAIK, is a MySQL extension) allow you to defer updating non-unique indexes until the transaction is committed, and the index is then updated once with all the new values (since it's a B-tree, this is much faster). It's possible that the RDBMS optimizes other writes as well, turning random writes into sequential ones.
Also, if there is table-level locking (as in MyISAM), locking the table once, writing multiple records, and then unlocking removes the lock/unlock overhead of every single write.
So generally there is a performance gain, but it depends on the database server used.
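To make the batching claim concrete, here is a rough timing sketch using Python's built-in sqlite3 module (chosen for portability; the same batching idea applies to MySQL/InnoDB and other engines, with different constants). The row counts and file names are illustrative, and the files are assumed not to exist yet.

# Rough timing sketch: per-row commits vs. one batched transaction.
import sqlite3
import time

rows = [(i, f"payload-{i}") for i in range(10_000)]

def one_commit_per_row(path):
    conn = sqlite3.connect(path)
    conn.execute("CREATE TABLE t (id INTEGER, data TEXT)")
    start = time.perf_counter()
    for row in rows:
        with conn:  # each iteration commits its own transaction
            conn.execute("INSERT INTO t VALUES (?, ?)", row)
    conn.close()
    return time.perf_counter() - start

def one_big_transaction(path):
    conn = sqlite3.connect(path)
    conn.execute("CREATE TABLE t (id INTEGER, data TEXT)")
    start = time.perf_counter()
    with conn:  # a single commit for the whole batch
        conn.executemany("INSERT INTO t VALUES (?, ?)", rows)
    conn.close()
    return time.perf_counter() - start

print("per-row commits :", one_commit_per_row("per_row.db"))
print("one transaction :", one_big_transaction("batched.db"))

The per-row version pays the commit (and fsync) cost once per record; the batched version pays it once per batch, which is where most of the difference comes from.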
Doing all your reads at once makes sense, although there are some problems with it, which I'll touch on in a minute.
Doing all your writes at once poses a particular problem: the data isn't in the database until you put it there. If you're waiting for some optimization threshold (let's say 50 writes), then transaction 1 is going to have to wait for (unrelated) transactions 2-50 to complete before it goes to the database. This means that in the meantime (which could be seconds, minutes, or hours) nobody knows what those records are, or, if they're updated, what the new values are. (The same applies to reads, but the other way around: your data may be out of date by the time you get to use it.)
Performance-wise, I cannot imagine that grouping writes closer together would not give some performance benefit. (If that was confusing to read, I mean: you should always get a performance boost by grouping.) If nothing else, you have a better chance of hitting memory caches instead of going to disk than if you do them separately. @Darhazer brings up a good point about locking. So strictly from a total-time-spent-writing point of view, it would be better to group them. From an application-performance point of view, it's difficult to say without intricate knowledge of the business requirements of the app.

Why do DBS not adapt/tune their buffer sizes automatically?

I'm not sure whether there already is a DBS that does this, or whether it is indeed a useful feature, but:
There are a lot of suggestions on how to speed up DB operations by tuning buffer sizes. One example is importing Open Street Map data (the planet file) into a Postgres instance. There is a tool called osm2pgsql (http://wiki.openstreetmap.org/wiki/Osm2pgsql) for this purpose, and also a guide that suggests adapting specific buffer parameters during the import.
In the final step of the import, the database is creating indexes and (according to my understanding when reading the docs) would benefit from a huge maintenance_work_mem whereas during normal operation, this wouldn't be too useful.
This thread http://www.mail-archive.com/pgsql-general#postgresql.org/msg119245.html, on the contrary, suggests that a large maintenance_work_mem would not make too much sense during final index creation.
Ideally (IMO), the DBS should know best which combination of buffer sizes it would profit from most, given a limited amount of total buffer memory.
So, are there some good reasons why there isn't a built-in heuristic that is able to adapt the buffer sizes automatically according to the current task?
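For context, here is what the manual approach looks like in practice: scoping a large maintenance_work_mem to just the index-building session instead of raising it globally. A minimal sketch assuming psycopg2; the database, table, and index names are illustrative stand-ins for an osm2pgsql-style import, not taken from the guide.

# Sketch: session-scoped tuning for the final index-creation step.
import psycopg2

conn = psycopg2.connect(dbname="osm")
conn.autocommit = True
with conn.cursor() as cur:
    # Only this session sees the large value; normal traffic is unaffected.
    cur.execute("SET maintenance_work_mem = '4GB'")
    # IF NOT EXISTS requires PostgreSQL 9.5+; the index itself is hypothetical.
    cur.execute("CREATE INDEX IF NOT EXISTS planet_osm_point_osm_id_idx "
                "ON planet_osm_point (osm_id)")
conn.close()

The question, in effect, asks why a human has to write the SET line at all rather than the database picking a sensible value for the task at hand.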
The problem is the same as with any forecasting software. Just because something happened historically doesn't mean it will happen again. Also, you need to complete a task in order to fully analyze how you should have done it more efficiently. The problem is that the next task is not necessarily anything like the previously completed task. So if your import routine needed 8 GB of memory to complete, would it make sense to assign each read-only user 8 GB of memory? The other way around wouldn't work well either.
By leaving this decision to humans, the database will exhibit performance characteristics that aren't optimal for all cases, but in return it lets us (the humans) optimize each case individually (if we like to).
Another important aspect is that most people/companies value reliable and stable performance over varying but potentially better performance. Having a high cost isn't as big a deal as having large variations in cost. This is of course not always true, as entire companies are built around occasionally hitting that 1%.
Modern databases already make some effort to adapt themselves to the tasks presented, such as increasingly sophisticated query optimizers. At least Oracle has the option to keep track of some of the measures that influence the optimizer's decisions (e.g. the cost of a single-block read, which varies with the current load).
My guess would be that it is awfully hard to get the knobs right by adaptive means. First you would have to query the machine for a lot of unknowns, like how much RAM it has available, but also the unknown "what else do you expect to run on this machine".
Barring that, even by setting only a max_mem_usage parameter, the problem is how to make a system which:
* adapts well to most typical loads,
* doesn't have odd pathological problems with some loads, and
* is somewhat comprehensible code without errors.
For PostgreSQL, however, the answer could also be:
* Nobody wrote it yet because other stuff is seen as more important.
* You didn't write it yet.

Recommended Document Structure for CouchDB

We are currently considering a change from Postgres to CouchDB for a usage monitoring application. Some numbers:
Approximately 2000 connections, polled every 5 minutes, for approximately 600,000 new rows per day. In Postgres, we store this data, partitioned by day:
t_usage {service_id, timestamp, data_in, data_out}
t_usage_20100101 inherits t_usage.
t_usage_20100102 inherits t_usage. etc.
We write data with an optimistic stored proc that presumes the partition exists and creates it if necessary. We can insert very quickly.
For reading of the data, our use cases, in order of importance and current performance are:
* Single Service, Single Day Usage : Good Performance
* Multiple Services, Month Usage : Poor Performance
* Single Service, Month Usage : Poor Performance
* Multiple Services, Multiple Months : Very Poor Performance
* Multiple Services, Single Day : Good Performance
This makes sense because the partitions are optimised for days, which is by far our most important use case. However, we are looking at methods of improving the secondary requirements.
We often need to parameterise the query by hours as well, for example, only giving results between 8am and 6pm, so summary tables are of limited use. (These parameters change with enough frequency that creating multiple summary tables of data is prohibitive).
With that background, the first question is: Is CouchDB appropriate for this data? If it is, given the above use cases, how would you best model the data in CouchDB documents? Some options I've put together so far, which we are in the process of benchmarking are (_id, _rev excluded):
One Document Per Connection Per Day
{
  "service_id": 555,
  "day": 20100101,
  "usage": {"1265248762": {"in": 584, "out": 11342}, "1265249062": {"in": 94, "out": 1242}}
}
Approximately 60,000 new documents a month. Most new data would be updates to existing documents, rather than new documents.
(Here, the objects in usage are keyed on the timestamp of the poll, and the values are the bytes in and bytes out.)
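As a rough illustration of the update pattern this layout implies (not part of the original design), each poll would fetch the day's document, merge the new reading, and write it back. A minimal sketch assuming the requests library and a local CouchDB database named "usage"; the names and URL are illustrative.

# Sketch of the per-connection-per-day update cycle.
import requests

COUCH = "http://localhost:5984/usage"

def record_poll(service_id, day, timestamp, bytes_in, bytes_out):
    doc_id = f"{service_id}-{day}"
    resp = requests.get(f"{COUCH}/{doc_id}")
    if resp.status_code == 404:
        doc = {"service_id": service_id, "day": day, "usage": {}}
    else:
        doc = resp.json()  # includes _id and _rev, which CouchDB needs for updates
    doc["usage"][str(timestamp)] = {"in": bytes_in, "out": bytes_out}
    # CouchDB rejects the PUT with a 409 if _rev is stale; a real poller
    # would re-fetch and retry in that case.
    requests.put(f"{COUCH}/{doc_id}", json=doc).raise_for_status()

record_poll(555, 20100101, 1265248762, 584, 11342)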
One Document Per Connection Per Month
{
  "service_id": 555,
  "month": 201001,
  "usage": {"1265248762": {"in": 584, "out": 11342}, "1265249062": {"in": 94, "out": 1242}}
}
Approximately 2,000 new documents a month. Moderate updates to existing documents required.
One Document Per Row of Data Collected
{
  "service_id": 555,
  "timestamp": 1265248762,
  "in": 584,
  "out": 11342
}
{
  "service_id": 555,
  "timestamp": 1265249062,
  "in": 94,
  "out": 1242
}
Approximately 15,000,000 new documents a month. All data would be an insert to a new document. Faster inserts, but I have questions about how efficient it's going to be after a year or 2 years with hundreds of millions of documents. The file IO would seem prohibitive (though I'm the first to admit I don't fully understand the mechanics of it).
I'm trying to approach this in a document-oriented way, though breaking the RDBMS habit is difficult :) The fact that you can only minimally parameterise views also has me a bit concerned. That said, which of the above would be the most appropriate? Are there other formats that I haven't considered which would perform better?
Thanks in advance,
Jamie.
I don't think it's a horrible idea.
Let's consider your Connection/Month scenario.
Given that an entry is ~40 (that's generous) characters long, and you get ~8,200 entries per month, your final document size will be ~350K long at the end of the month.
That means, going full bore, you'll be reading and writing 2000 350K documents every 5 minutes.
I/O-wise, this is less than 6 MB/s, counting both read and write, averaged over the 5-minute window. That's well within reach of even low-end hardware today.
However, there is another issue. When you store that document, Couch is going to evaluate its contents in order to build its view, so Couch will be parsing those 350K documents. My fear is that (at last check, but it's been some time) Couch didn't scale well across CPU cores, so this could easily pin the single CPU core that Couch will be using. I would like to hope that Couch can read, parse, and process 2 MB/s, but I frankly don't know. With all its benefits, Erlang isn't the fastest straight-line language.
The final concern is keeping up with the database. At the end of the month this means writing 700 MB every 5 minutes, and with Couch's append-only architecture that works out to roughly 8.4 GB per hour and about 201 GB after 24 hours.
After DB compaction, it crushes back down to ~700 MB (for a single month), but during that process the file will be getting big, and quite quickly.
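For reference, a quick back-of-envelope script reproduces the same ballpark figures under the stated assumptions (~40 bytes per usage entry, 5-minute polling, 2000 connections, one document per connection per month):

# Back-of-envelope check of the Connection/Month numbers above.
ENTRY_BYTES = 40
POLLS_PER_DAY = 24 * 60 // 5                       # 288 polls per day
entries_per_month = POLLS_PER_DAY * 30             # ~8,600 entries
doc_size = ENTRY_BYTES * entries_per_month         # ~340 KB per document
CONNECTIONS = 2000

write_per_poll = doc_size * CONNECTIONS            # rewritten every 5 minutes
print(f"doc size at end of month : {doc_size / 1024:.0f} KB")
print(f"written per 5-min poll   : {write_per_poll / 1e6:.0f} MB")
print(f"append-only growth per hr: {write_per_poll * 12 / 1e9:.1f} GB")
print(f"growth per 24 hours      : {write_per_poll * POLLS_PER_DAY / 1e9:.0f} GB")
print(f"avg read+write bandwidth : {2 * write_per_poll / 300 / 1e6:.1f} MB/s")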
On the retrieval side, these large documents don't scare me. Loading up a 350K JSON document, yes, it's big, but it's not that big, not on modern hardware. There are avatars on bulletin boards bigger than that. So anything you want to do regarding the activity of a single connection over a month will be pretty fast, I think. Across connections, obviously the more you grab, the more expensive it will get (700 MB for all 2000 connections). 700 MB is a real number that has real impact. Plus, your process will need to be aggressive about throwing out the data you don't care about (unless you want to load 700 MB of heap in your report process).
Given these numbers, Connection/Day may be a better bet, as you can control the granularity a bit better. However, frankly, I would go for the coarsest document you can, because I think that gives you the best value from the database, solely because today all the head seeks and platter rotations are what kill a lot of I/O performance; many disks stream data very well. Larger documents (assuming well-located data, which shouldn't be a problem since Couch is regularly compacted) stream more than they seek. Seeking in memory is "free" compared to a disk.
By all means run your own tests on your hardware, but take all these considerations to heart.
EDIT:
After more experiments...
A couple of interesting observations:
During import of large documents, CPU is as important as I/O speed. This is because of the amount of marshalling and CPU consumed by converting the JSON into the internal model for use by the views. Using the large (350K) documents, my CPUs were pretty much maxed out (350%). In contrast, with the smaller documents, they were humming along at 200%, even though, overall, it was the same information, just chunked up differently.
For I/O, during the 350K docs, I was charting 11MB/sec, but with the smaller docs, it was only 8MB/sec.
Compaction appeared to be almost I/O bound. It's hard for me to get good numbers on my I/O potential. A copy of a cached file pushes 40+MB/sec. Compaction ran at about 8MB/sec. But that's consistent with the raw load (assuming couch is moving stuff message by message). The CPU is lower, as it's doing less processing (it's not interpreting the JSON payloads, or rebuilding the views), plus it was a single CPU doing the work.
Finally, for reading, I tried to dump out the entire database. A single CPU was pegged for this, and my I/O was pretty low. I made it a point to ensure that the CouchDB file wasn't actually cached; my machine has a lot of memory, so a lot of stuff gets cached. The raw dump through _all_docs was only about 1 MB/sec. That's almost all seek and rotational delay, more than anything else. When I did that with the large documents, the I/O was hitting 3 MB/sec, which just shows the streaming effect I mentioned as a benefit of larger documents.
And it should be noted that there are techniques on the Couch website for improving performance that I was not following. Notably, I was using random IDs. Finally, this wasn't done as a gauge of what Couch's performance is, but rather of where the load appears to end up. I thought the differences between large and small documents were interesting.
Finally, ultimate performance isn't as important as simply performing well enough for your application on your hardware. As you mentioned, you're doing your own testing, and that's all that really matters.
