Postgresql: database is not accepting commands to avoid wraparound data loss

Got the error upon create/delete/update queries:
ERROR: database is not accepting commands to avoid wraparound data
loss in database "mydb" HINT: Stop the postmaster and use a
standalone backend to vacuum that database. You might also need to
commit or roll back old prepared transactions.
So, the database is blocked and it is only possible to perform SELECT queries.
The database is 350 GB in size. One table (my_table) has ~1 billion rows.
system: "PostgreSQL 9.3.4 on x86_64-unknown-linux-gnu, compiled by gcc (GCC) 4.4.7 20120313 (Red Hat 4.4.7-4), 64-bit"
Some postgresql.conf settings:
effective_io_concurrency = 15 # 1-1000; 0 disables prefetching
autovacuum_vacuum_cost_delay = -1
#vacuum_cost_delay = 0 # 0-100 milliseconds
#vacuum_cost_page_hit = 1 # 0-10000 credits
#vacuum_cost_page_miss = 10 # 0-10000 credits
#vacuum_cost_page_dirty = 20 # 0-10000 credits
#vacuum_cost_limit = 200
I do not use prepared transactions. But basic stored procedures are used (which means automatic transactions, right?) 50 million times per day.
Сurrently "autovacuum: VACUUM ANALYZE public.my_table (to prevent wraparound)" is perforing, it is almost 12 hours of that query activity.
As far as I understand, the problem is with not-vacuumed dead tuples, right?
How can I resolve this problem and prevent it in the future? Please help :)
The end of the story (~one month later)
Now my big table is partitioned into thousands of smaller tables. Each small table is vacuumed much faster. The autovacuum configuration was set back closer to the defaults; if needed, it could be made more aggressive again, but so far the database with billions of rows works pretty well.
So the problem from this topic should not appear again.
PS: I'm now looking at Postgres-XL as the next step in data scalability.

The problem isn't dead tuples, it's transaction IDs, which control row visibility. Each transaction gets a sequential XID; since XIDs are 32-bit integers, they will eventually wrap around.
See here for more detail: http://www.postgresql.org/docs/9.3/static/routine-vacuuming.html#VACUUM-FOR-WRAPAROUND, but the short version is that all tables need to be VACUUMed (either manually or with autovacuum) at least every 2 billion transactions. The longer you go without vacuuming the longer it takes.
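To see how close you are, a couple of standard catalog queries (nothing here is specific to your setup) report the XID age per database and per table:
-- transactions since the oldest unfrozen XID, per database
SELECT datname, age(datfrozenxid) AS xid_age
FROM pg_database
ORDER BY xid_age DESC;
-- the same per table inside the current database
SELECT relname, age(relfrozenxid) AS xid_age
FROM pg_class
WHERE relkind = 'r'
ORDER BY xid_age DESC
LIMIT 20;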
To fix your current problem you don't need to do a VACUUM ANALYZE, just a VACUUM - I am not sure how much of a speed difference there is, but it should be faster.
What kind of hardware is this running on, and what's your maintenance_work_mem set to? You may want to raise it (possibly temporarily) to complete the VACUUM faster.
In the future, you basically just need to VACUUM more: either increase autovacuum frequency (see here: https://dba.stackexchange.com/questions/21068/aggressive-autovacuum-on-postgresql, for example) or even schedule manual VACUUMs with cron. Also look at vacuum_freeze_min_age and related settings.
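For illustration only (these are starting points to experiment with, not tuned recommendations for your workload), a more aggressive autovacuum setup in postgresql.conf might look like:
autovacuum_max_workers = 6              # default 3
autovacuum_vacuum_scale_factor = 0.02   # default 0.2; vacuum after ~2% of a table changes
autovacuum_vacuum_cost_delay = 10ms     # default 20ms; lower lets autovacuum work faster
vacuum_freeze_min_age = 10000000        # default 50 million; freeze rows sooner during routine vacuums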
What kind of data is it, and what kind of transactions are you running? That's a pretty big table, can it be partitioned (by date, for instance)?
Edit
You may also want to enable log_autovacuum_min_duration (set it to a small value), to see what autovacuum is actually doing when the database is live, and if there are locking issues preventing it from running.
Responding to Comments
You don't have to run VACUUM standalone; you can run it now, unless that will interfere too much with your other databases. You just need to do it as a superuser, so that system tables are also vacuumed.
Doing a dump/restore seems drastic, and I can't imagine it would be faster than completing the VACUUM.
Switching away from stored procedures will not help: any queries that modify data will generate XIDs, it doesn't matter if you use transactions explicitly, they're still transactions.
You're on the right track - getting autovacuum to keep up with your inserts/updates is the best solution (logging its activity should help you understand what's going wrong now).
Judging by your table structure, this may be the classic case for table partitioning (http://www.postgresql.org/docs/9.3/static/ddl-partitioning.html) - am I right in thinking that it's all inserts, rather than updates/deletes? If you're always writing to one small partition, you can vacuum it more aggressively (autovacuum can be configured per table), and VACUUM FREEZE the others.
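As a sketch of what that could look like (the partition names below are placeholders), autovacuum can be tightened on the hot partition while old, read-only partitions are frozen once:
-- vacuum the partition receiving writes much more eagerly
ALTER TABLE my_table_current SET (
    autovacuum_vacuum_scale_factor = 0.01,
    autovacuum_freeze_min_age = 1000000
);
-- freeze a historical partition so anti-wraparound vacuums have nothing left to do there
VACUUM FREEZE my_table_2014_01;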

I think you have no choice but to stop the database, restart in standalone mode, and do a vacuum. Letting the autovac complete will not help, because once it completes it will go to update the system catalog to reflect that completion, and that update will be rejected because it cannot acquire the needed transaction ID. At least that was my experience.
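For reference, the standalone route the HINT describes looks roughly like this (the data directory path and database name are placeholders):
pg_ctl stop -D /path/to/data
postgres --single -D /path/to/data mydb
backend> VACUUM;
Exit the standalone backend (Ctrl-D) and start the server normally afterwards.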
As for preventing it in the future, do you restart your database on a regular basis? If you restart your database every 24 hours, but you have a table that takes 30 hours to vacuum, then that table will never be vacuumed successfully, and you will get into trouble eventually.


Query compilation and provisioning times

What does it mean when COMPILATION_TIME, QUEUED_PROVISIONING_TIME, or both are longer than usual?
I have a query that runs every couple of minutes and usually takes less than 200 milliseconds for compilation and 0 for provisioning. In two instances over the last couple of days, the values were more than 4,000 for compilation and more than 100,000 for provisioning.
Does that mean the warehouse was being resumed and there was a hiccup?
COMPILATION_TIME:
The SQL is parsed and simplified, and the table metadata is loaded. Thus a compile for select a,b,c from table_name will be fractionally faster than select * from table_name, because the metadata is not needed from every partition to know the final shape.
Super-fragmented tables can give poor compile performance, as there is more metadata to load. Fragmentation comes from many small writes/deletes/updates.
Doing very large INSERT statements can give horrible compile performance. We did a lift-and-shift and did all data loading via INSERT; just avoid it.
PROVISIONING_TIME is the amount of time needed to set up the hardware. This occurs for two main reasons: you are turning on 3X, 4X, 5X, or 6X servers, and it can take minutes just to allocate that volume of servers.
Or there is a failure; sometimes around releases there can be a little instability, where a query fails on the "new" release and is rolled back to older instances, which you would see in the profile as 1, 1001. But sometimes there have been problems in the provisioning infrastructure (I have not seen it for a few years, but I am not monitoring for it presently).
But I would think you will mostly see this on an ongoing basis for the first reason.
The compilation process involves query parsing, semantic checks, query rewrite components, reading object metadata, table pruning, evaluating certain heuristics such as filter push-downs, plan generation based upon cost-based optimization, etc., which together account for the COMPILATION_TIME.
QUEUED_PROVISIONING_TIME refers to the time (in milliseconds) spent in the warehouse queue, waiting for the warehouse compute resources to provision due to warehouse creation, resume, or resize.
https://docs.snowflake.com/en/sql-reference/functions/query_history.html
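For example, a quick way to pull these timings for recent queries is the QUERY_HISTORY table function described in that doc (the result_limit value here is arbitrary):
select query_id, compilation_time, queued_provisioning_time, execution_time
from table(information_schema.query_history(result_limit => 100))
order by start_time desc;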
To understand in detail why the query has recently been taking a long time, the query ID needs to be analysed. You can raise a support case with Snowflake Support, quoting the problematic query ID, to have the details checked.

Table containing TEXT column growing continuously

We've got a table in a production system which (for legacy reasons) is running SQL 2005 (9.0.5266) and contains a TEXT column (along with a few other columns of various datatypes).
All of a sudden (since a week ago) we noticed the size of this one table increasing linearly by 10-15GB per day (whereas previously it has always remained at a constant size). The table is a queue for a messaging system, and as such the data in it completely refreshes itself every few seconds. At any one time there could be anywhere from 0 to around 1000 rows, but it fluctuates rapidly as messages are inserted, and sent (at which point they're deleted).
We can't find anything that was changed on the day the growth started - so have no obvious potential cause identified at this stage.
One "obvious" culprit is the TEXT column, and so we checked to see if any massive values were now being stored, but (using DATALENGTH) we found no single rows above around ~32k. We've run CHECKDB, updated space usage, rebuild all indexes, etc - nothing reduces the size (and CHECKDB showed no errors).
We've queried sys.allocation_units and the size increase is definitely LOB_DATA (which show total_pages and used_pages increasing together at a constant rate).
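For anyone wanting to check the same thing, a query along these lines shows the per-allocation-unit page counts (the table name is a placeholder):
SELECT au.type_desc, SUM(au.total_pages) AS total_pages, SUM(au.used_pages) AS used_pages
FROM sys.allocation_units AS au
JOIN sys.partitions AS p ON p.partition_id = au.container_id
WHERE p.object_id = OBJECT_ID('dbo.MessageQueue')
GROUP BY au.type_desc;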
To reduce the database size last night we simply created a new table alongside the one in question (which is luckily referenced via a view by the application), dropped the old table, and renamed the new one. We left last night taking comfort in the fact that we'd alleviated the space issue, and that we had a backup of the dodgy table to investigate further today.
However, this morning the table size is already up to 14GB (and growing), while there are only the usual ~500 rows in the table, and MAX(DATALENGTH(text_column)) is only showing around 35k.
Any ideas as to what could be causing this "runaway" growth, or anything else that we could try or query to get more information about what exactly is using the space?
Cheers,
Dave
This is a general problem in dealing with queues. The article linked talks about Service Broker queues, but the issue is the same for ordinary tables used as queues. If you have a busy system with generous resources (CPU, memory, disk IO) and you push a queue on this system to high throughput, then a large portion of these resources will be used to handle two operations: enqueue (i.e. INSERT) and dequeue (i.e. DELETE). However, the full lifecycle of a record requires three operations: INSERT, DELETE and ghost purge. They cost roughly the same in terms of CPU/memory/disk IO, so if that queue uses, say, 90% of the system resources, you should allocate 30% to each. But only the first two are under your control (i.e. explicit statements running in user sessions). The third one, the ghost purge, is a background process controlled by SQL Server, and there is no chance the ghost cleanup process will be allowed to consume 30% of resources. This is a fundamental issue, and if you push the pedal to the metal for a long enough time you will hit it. Once ghost records accumulate and pass a system/workload-specific threshold, performance degrades quickly and the symptoms spiral into abysmal performance (a self-reinforcing feedback loop forms).
Luckily, since you do not use Service Broker queues but real tables as queues, you have some better tools at your disposal, like ALTER TABLE ... REORGANIZE and ALTER TABLE ... REBUILD. By far the best solution is an online index/table rebuild. SQL Server 2012 supports online operations on tables containing BLOBs and you can leverage that. Of course you would have to get rid of the deprecated, obsolete TEXT type and use VARCHAR(MAX), but that goes without saying.
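If it helps, the statements would look roughly like this (the table name is a placeholder; note that on SQL 2005 a rebuild of a table with LOB columns can only run offline):
-- compact LOB space in place; available on SQL Server 2005 and later
ALTER INDEX ALL ON dbo.MessageQueue REORGANIZE WITH (LOB_COMPACTION = ON);
-- full rebuild; ONLINE = ON with LOB columns requires SQL Server 2012+
ALTER INDEX ALL ON dbo.MessageQueue REBUILD;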
As a side note:
If you have pages with nothing but ghost records on them, then you will not read those pages again and they won't get marked for cleanup
This is incorrect. Pages with nothing but ghosts will be detected and purged by scans. As I said, the issue is not detection, it's resources. If you push your system hard enough, you will race ahead of the ghost cleanup and it will never catch up.
Early this morning I restarted the SQL service on the instance with this "problem queue table". It appears that this has fixed the issue. Immediately following the restart, I monitored the LOB_DATA pages-in-use count, and it started dropping straight away. It was being cleaned up quite slowly, so it probably took around an hour or two to reclaim the 60+ GB of space being held (I went to bed after I'd made sure all was well).
At the moment the table is back to normal as far as in-use allocations (hovering around <100 pages), and is not showing any signs of re-growing.
Given the fact that we have used this table in the same way (i.e. as a queue) for at least 10 years, and it has had busier periods than what we've had over the past week or two, I would've been surprised if it was the issue described by Remus above (although I understand how that can occur; I guess this specific queue just isn't quite busy enough to swamp the ghost cleanup process?). Very strange...
Thanks again for the help guys!

Terrible SQL reads performance (culprit update stats?)

I'm running on SQL Server 2008 R2 and am trying to fine-tune performance. I did everything I could from:
Code review of SQL code
Create or remove indexes as I think appropriate
Auto create stats ON
Auto update stats ON
Auto update stats async ON
I have a 24/7 system that constantly stores data. Sometimes we do reads and that's where the issue is. Sometimes the reads take a couple of seconds or less (which would be expected and acceptable to us). Other times, the reads take several seconds that could amount to a minute before the stored procedure completes and we render data on the UI.
If we do the read again, it would be faster. The SQL profiler would trace the particular stored procedure or query that took several seconds. We would zoom into that stored procedure, and do everything we can do to optimize it if we can.
I also traced the auto stats event and the recompile event. It's hard to tell if a stat being updated caused the read to take a long time, or if a recompile caused it. Sometimes I see that the profiler traced a recompile of the read query that took several unacceptable minutes; other times it doesn't trace a recompile.
I tried to prevent the query optimizer from blocking the read until it recompiles or updates stats by using OPTION (USE PLAN ...) with XML, etc. But I ran into compile errors complaining that the query plan XML isn't valid; that could be true because the query is quite involved: a select plus joins that involve a local table variable. I sort of hacked the XML and maybe that's why it was deemed invalid. So I gave up on using a plan hint.
We tried periodically (every 15 minutes) running a manual UPDATE STATISTICS in order to keep stats as up-to-date as we could, but that hurt performance. UPDATE STATISTICS blocks writes, and I'm sure even reads; it seemed to maintain a bunch of statistics and on average it was taking around 80-90 seconds. A read that waits that long is unacceptable.
So the idea is to let the reads happen and prevent a situation when a recompile/update stat blocks it, correct? Does it make sense to disable auto statistics altogether? Or perhaps disable auto create statistics after deleting all the auto created stats?
This goes against Microsoft recommendations perhaps, since they enable auto create statistics and auto update statistics by default, and performance may suffer, but any ideas/hints you can give would be appreciated.
From what you are explaining, it looks like the below (all or some) might be happening.
You are doing physical reads. The quick way to avoid this is by increasing the amount of RAM you throw at the box. You haven't mentioned the hardware specs of your server. Please add details.
If you trace the SQL calls then you can easily figure out why the RECOMPILE happened. Look at the EventSubClass to figure out the reason and work towards resolving that.
ref: http://msdn.microsoft.com/en-us/library/ms187105.aspx
You mentioned table variables. These are notorious for causing performance issues when NOT used in the right place. If you use table variables in a JOIN, a parallel plan is out of the question, and there are no stats either. I am NOT sure how and where you are using them, but try replacing them with temp tables. And starting from SQL Server 2005, you will get only statement-level recompilation at best, NOT the complete SP recompile as happened in SQL Server 2000.
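As a quick illustration of the difference (the names here are made up):
-- table variable: no statistics, so the optimizer assumes ~1 row and may pick a bad join plan
DECLARE @recent_orders TABLE (order_id INT PRIMARY KEY, total MONEY);
-- temp table: gets statistics and can participate in parallel plans
CREATE TABLE #recent_orders (order_id INT PRIMARY KEY, total MONEY);
-- ...populate and JOIN against it exactly as before, then:
DROP TABLE #recent_orders;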
You mentioned Update Stats ASYNC option and this won't block the query.
What are the TOP WAIT STATS on this server? Have you identified the expensive procedures based on CPU, Logical reads & execution count?
Have you looked at the Page Life Expectancy, and the amount of IO, using the virtual file stats DMV?
Updating Stats every 15 minutes is NOT a good plan. How often is data inserted into the system? What is the sample rate you are using? What is your index maintenance strategy?
Have you looked at the missing indexes DMV?
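A few starting-point queries for the wait-stats, page-life-expectancy and file-IO checks above (standard DMVs, nothing specific to this server):
-- top waits since the last service restart
SELECT TOP (10) wait_type, wait_time_ms, waiting_tasks_count
FROM sys.dm_os_wait_stats
ORDER BY wait_time_ms DESC;
-- page life expectancy, in seconds
SELECT cntr_value AS page_life_expectancy
FROM sys.dm_os_performance_counters
WHERE counter_name = 'Page life expectancy'
  AND object_name LIKE '%Buffer Manager%';
-- per-file IO stalls
SELECT DB_NAME(database_id) AS database_name, file_id, io_stall_read_ms, io_stall_write_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL);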
There are a bunch of good queries to identify problems in a more granular fashion in the script below.
ref: http://dl.dropbox.com/u/13748067/SQL%20Server%202008%20Diagnostic%20Information%20Queries%20%28April%202011%29.sql
There are so many other things to look at but the above is a good starting point.
OK, here is my take on this, IMHO:
DBCC INDEXDEFRAG is worth trying and is an ONLINE operation, hence it can be used on a live system.
You could be reaching the maximum capacity of your architectural design. You can scale up, which always helps, but more likely you will have to change the architecture to achieve better scalability, sacrificing simplicity.
A common trick is partitioning. You are writing to a table whose index distribution looks nothing like it did a few hours ago, hence degrading performance. This is a massive write; such a table could be divided into a daily-write table and the rest of the data, with nightly batches moving stuff across.
More and more, people are converting to CQRS. You might be next. It solves the problem by separating reads from writes (a very simplistic explanation).

The best way to design a Reservation based table

One of my clients has a reservation-based system, similar to airlines, running on MS SQL 2005.
The way the previous company has designed it is to create an allocation as a set of rows.
Simple Example Being:
AllocationId | SeatNumber | IsSold
1234 | A01 | 0
1234 | A02 | 0
In the process of selling a seat the system will establish an update lock on the table.
We have a problem at the moment where the locking process is running slow and we are looking at ways to speed it up.
The table is already efficiently indexed, so we are looking at a hardware solution to speed up the process. The table has about 5 million active rows and sits on a RAID 50 SAS array.
I am assuming hard disk seek time is going to be the limiting factor in speeding up update locks when you have 5 million rows and are updating 2-5 rows at a time (I could be wrong).
I've heard about people using index partitioning over several disk arrays; has anyone had similar experiences with trying to speed up locking? Can anyone give me some advice on a possible solution, what hardware might be upgraded, or what technology we could take advantage of in order to speed up the update locks (without moving to a cluster)?
One last try…
It is clear that there are too many locks held for too long.
Once the system starts slowing down due to too many locks there is no point in starting more transactions.
Therefore you should benchmark the system to find out the optimal number of concurrent transactions, then use some queue system (or otherwise) to limit the number of concurrent transactions. SQL Server may have some setting (number of active connections etc.) to help; otherwise you will have to write this in your application code.
Oracle is good at allowing reads to bypass writes; however, SqlServer does not do this as standard...
Therefore I would split the stored proc to use two transactions. The first transaction should just:
be a SNAPSHOT (or READ UNCOMMITTED) transaction
find the “Id” of the rows for the seats you wish to sell.
You should then commit (or abort) this transaction,
and use a 2nd (hopefully very short) transaction that:
is most likely READ COMMITTED (or maybe SERIALIZABLE)
selects each row for update (use a locking hint)
checks it has not been sold in the meantime (abort and start again if it has)
sets the “IsSold” flag on the row
(You may be able to do the above in a single update statement using “IN”, and then check that the expected number of rows were updated; see the sketch below.)
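A minimal sketch of that single-statement variant, assuming a hypothetical Allocation table shaped like the example above (all names and values are placeholders):
BEGIN TRANSACTION;

UPDATE Allocation WITH (ROWLOCK, UPDLOCK)
SET IsSold = 1
WHERE AllocationId = 1234
  AND SeatNumber IN ('A01', 'A02')
  AND IsSold = 0;

IF @@ROWCOUNT = 2
    COMMIT TRANSACTION;        -- both seats claimed
ELSE
    ROLLBACK TRANSACTION;      -- a seat was sold in the meantime: report back and retry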
Sorry, sometimes you do need to understand what each type of transaction does and how locking works in detail.
If the table is smaller, then the update is shorter and the locks are held for less time.
Therefore consider splitting the table:
so you have a table that JUST contains “AllocationId” and “IsSold”.
This table could be stored as a single btree (index organized table on AllocationId)
As all the other indexes will be on the table that contains the details of the seat, no indexes should be locked by the update.
I don't think you'd get anything out of table partitioning; the only improvement would be fewer disk reads from a smaller (shorter) index tree (each read hits each level of the index at least once, so the fewer levels, the quicker the read). However, I've got a table with a 4M+ row partition, indexed on 4 columns, with a net 10-byte key length. It fits in three index levels, with the topmost level 42.6% full. Assuming you had something similar, it seems reasonable that partitioning might only remove one level from the tree, and I doubt that's much of an improvement.
Some off-the-cuff hardware ideas:
RAID 5 (and 50) can be slower on writes, because of the parity calculation. Not an issue (or so I'm told) if the disk I/O cache is large enough to handle the workload, but if that's flooded you might want to look at RAID 10.
Partition the table across multiple drive arrays. Take two (or more) RAID arrays, distribute the table across the volumes [files/filegroups, with or without table partitioning or partitioned views], and you've got twice the disk I/O speed, depending on where the data lies relative to the queries retrieving it. (If everything is on array #1 and array #2 is idle, you've gained nothing.)
Worst case, there's probably leading edge or bleeding edge technology out there that will blow your socks off. If it's critical to your business and you've got the budget, might be worth some serious research.
How long is the update lock held for?
Why is the lock on the “table” not just the “rows” being sold?
If the lock is held for more than a fraction of a second, that is likely to be your problem. SqlServer does not like you holding locks while users fill in web forms etc.
With SqlServer, you have to implement a “shopping cart” yourself, by temporarily reserving the seat until the user pays for it. E.g. add “IsReserved” and “ReservedAt” columns; then any seat that has been reserved for more than n minutes should be automatically unreserved.
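A rough sketch of that reservation bookkeeping, reusing the hypothetical Allocation table from the sketch above (the column names and the 10-minute timeout are arbitrary):
ALTER TABLE Allocation ADD IsReserved BIT NOT NULL DEFAULT (0), ReservedAt DATETIME NULL;

-- run from a periodic job: release reservations that were never paid for
UPDATE Allocation
SET IsReserved = 0, ReservedAt = NULL
WHERE IsReserved = 1
  AND IsSold = 0
  AND ReservedAt < DATEADD(MINUTE, -10, GETDATE());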
This is a hard problem, as a shopper does not expect a seat that is in stock to be sold to someone else while he is checking out. However, you don't know if the shopper will ever complete the checkout, so how do you show it in the UI? Think about having a look at what other booking websites do, then copy one that your users already know how to use.
(Oracle can sometimes cope with lock being kept for a long time, but even Oracle is a lot faster and happier if you keep your locking short.)
I would first try to figure out why you are locking the table rather than just a row.
One thing to check out is the Execution plan of the Update statement to see what Indexes it causes to be updated and then make sure that row_level_lock and page_level_lock are enabled on those indexes.
You can do so with the following statement.
SELECT allow_row_locks, allow_page_locks FROM sys.indexes WHERE name = 'IndexNameHere';
Here are a few ideas:
Make sure your data and logs are on separate spindles, to maximize write performance.
Configure your drives to only use the first 30% or so for data, and have the remainder be for backups (minimize seek / random access times).
Use RAID 10 for the log volume; add more spindles as needed for performance (write performance is driven by the speed of the log)
Make sure your server has enough RAM. Ideally, everything needed for a transaction should be in memory before the transaction starts, to minimize lock times (consider pre-caching). There are a bunch of performance counters you can check for this.
Partitioning may help, but it depends a lot on the details of your app and data...
I'm assuming that the T-SQL, indexes, transaction size, etc, have already been optimized.
In case it helps, I talk about this subject in detail in my book (including SSDs, disk array optimization, etc) -- Ultra-Fast ASP.NET.

Maximum transaction size in PostgreSQL

I have a utility in my application where I need to perform a bulk load of INSERT, UPDATE & DELETE operations. I am trying to create a transaction around this so that once this system is invoked and the data is fed to it, it is ensured that it is either all or nothing added to the database.
The concern I have is: what are the boundary conditions here? How many INSERTs, UPDATEs & DELETEs can I have in one transaction? Is the transaction size configurable?
I don't think there's a maximum amount of work that can be performed in a transaction. Data keeps getting added to the table files, and eventually the transaction either commits or rolls back: AIUI this result gets stored in pg_clog; if it rolls back, the space will eventually be reclaimed by vacuum. So it's not as if the ongoing transaction work is held in memory and flushed at commit time, for instance.
A single transaction can run approximately two billion commands (2^31, minus IIRC a tiny bit of overhead; actually, come to think of it, that may be 2^32, as the command counter is unsigned, I think).
Each of those commands can modify multiple rows, of course.
For a project I work on, I perform 20 million INSERTs. I tried with one big transaction and with one transaction for every million INSERTs, and the performance seems exactly the same.
PostgreSQL 8.3
I believe the maximum amount of work is limited by your log file size. The database will never allow itself to get into a state where it cannot roll back, so if you consume all your log space during the transaction, it will halt until you give it more space or roll back. This is generally true for all databases.
I would recommend chunking your updates into manageable batches that take at most a couple of minutes of execution time; that way you know if there's a problem earlier (e.g. what normally takes 1 minute is still running after 10 minutes... hmmm, did someone drop an index?).
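As a rough illustration of the chunking idea (the table names, key ranges and batch size are made up, and the loop would normally be driven from the application):
-- load in batches of ~100k rows so each transaction stays short and easy to monitor
BEGIN;
INSERT INTO target_table SELECT * FROM staging_table WHERE id BETWEEN 1 AND 100000;
COMMIT;

BEGIN;
INSERT INTO target_table SELECT * FROM staging_table WHERE id BETWEEN 100001 AND 200000;
COMMIT;
-- ...and so on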
