Postgres TOAST table grows out of proportion

We are seeing the number of tuples deleted and the number of dead tuples in the TOAST table grow much faster than the number of tuples updated+deleted in the regular table (conveniently named 'regular').
Why is this happening? What would you suggest we do to avoid a bloated TOAST table?
select now(),relname,n_tup_upd,n_tup_del,n_tup_hot_upd,n_live_tup,n_dead_tup,n_mod_since_analyze from pg_stat_all_tables where relname in ('regular','pg_toast_16428');
-[ RECORD 1 ]-------+------------------------------
now                 | 2022-01-27 16:46:11.934005+00
relname             | regular
n_tup_upd           | 100724318
n_tup_del           | 9818
n_tup_hot_upd       | 81957705
n_live_tup          | 3940453
n_dead_tup          | 20268
n_mod_since_analyze | 98221
-[ RECORD 2 ]-------+------------------------------
now                 | 2022-01-27 16:46:11.934005+00
relname             | pg_toast_16428
n_tup_upd           | 0
n_tup_del           | 12774108278
n_tup_hot_upd       | 0
n_live_tup          | 3652091
n_dead_tup          | 3927007666
n_mod_since_analyze | 25550832222
fre=> select now(),relname,n_tup_upd,n_tup_del,n_tup_hot_upd,n_live_tup,n_dead_tup,n_mod_since_analyze from pg_stat_all_tables where relname in ('regular','pg_toast_16428');
-[ RECORD 1 ]-------+------------------------------
now                 | 2022-01-27 16:46:13.198182+00
relname             | regular
n_tup_upd           | 100724383
n_tup_del           | 9818
n_tup_hot_upd       | 81957761
n_live_tup          | 3940453
n_dead_tup          | 20333
n_mod_since_analyze | 98286
-[ RECORD 2 ]-------+------------------------------
now                 | 2022-01-27 16:46:13.198182+00
relname             | pg_toast_16428
n_tup_upd           | 0
n_tup_del           | 12774129076
n_tup_hot_upd       | 0
n_live_tup          | 3652091
n_dead_tup          | 3927028464
n_mod_since_analyze | 25550873818

Big values are divided into multiple rows in the TOAST table (depending on the value size and TOAST_MAX_CHUNK_SIZE). Thus a single row update in the main table may result in multiple row updates in the associated TOAST table.
AFAIK PostgreSQL will not allow a row to exceed the page size, which is 8 kB by default. This applies to both normal and TOAST tables, so for example an 8 MB value in a regular table will result in a thousand or so rows in the TOAST table. That's why the TOAST table is a lot bigger than the regular one.
For more information read the docs:
https://www.postgresql.org/docs/current/storage-toast.html
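To see how much of this relation actually lives in TOAST, you can compare the on-disk sizes directly. A minimal sketch, assuming the main table is named 'regular' (it looks the TOAST table up via pg_class):
select c.relname                                         as main_table,
       c.reltoastrelid::regclass                         as toast_table,
       pg_size_pretty(pg_relation_size(c.oid))           as main_heap_size,
       pg_size_pretty(pg_relation_size(c.reltoastrelid)) as toast_heap_size
from pg_class c
where c.relname = 'regular';    -- assumes the main table is named 'regular'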
So how do you deal with the bloat? The typical method is VACUUM FULL. That, however, locks the entire table. There are methods to reduce bloat without locking, but they require a lot more space (typically twice the size of the table) and are harder to maintain: you create a clone table, set up triggers so that inserts/updates/deletes go to both tables, copy all of the data across, switch the tables and drop the old one. This of course gets messy when foreign keys (and other constraints) are involved.
There are tools that do this semi-automatically. You may want to read about pg_squeeze and/or pg_repack.
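As a rough illustration only (not a recommendation for your workload): the locking route is a single command, and you can also make autovacuum more aggressive on the TOAST table of this one relation so dead tuples are reclaimed sooner. The threshold values below are assumptions you would have to tune.
-- rewrites the table and its TOAST table, but takes an ACCESS EXCLUSIVE lock
vacuum full regular;

-- example settings only: vacuum the TOAST table of 'regular' more aggressively
alter table regular set (
    toast.autovacuum_vacuum_scale_factor = 0.01,
    toast.autovacuum_vacuum_cost_delay   = 0
);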

Related

How best to Calculate Nth subscription renewal?

So I have a transaction table (postgres) that inserts a new row whenever a user renews their subscription for our service. The table subscription looks like this:
+--------+--------+------------+
| userId | prodId | renew_date |
+--------+--------+------------+
| 1      | 1      | 2018-05-01 |
| 1      | 1      | 2018-06-01 |
| 1      | 1      | 2018-07-01 |
| 2      | 3      | 2017-04-16 |
| 2      | 3      | 2017-05-16 |
+--------+--------+------------+
If the analysts want to figure out the Nth or latest renewal for a particular user or product, I have two solutions to offer them:
1.) During my ETL process, I truncate the data warehouse target table and re-populate it with:
select *
, row_number() over (partition by userId, prodId order by renew_date asc) as nth_renewal
from subscription
I can't think of a way to add 1 to the previous renewal number if I were to do incremental updates; what if this is the customer's first renewal?
2.) I just copy the exact OLTP table over to the data warehouse and do incremental updates every day. This way, I let the analysts calculate the Nth renewal themselves. (Also, as a follow-up question: is it ever OK to have a duplicate copy of a transactional table in my data warehouse?)
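For the incremental concern in option 1, one possible sketch (the table names dw_subscription and stg_subscription are hypothetical: the warehouse target and the newly arrived rows respectively) is to offset row_number() by the highest renewal number already stored; coalesce covers a customer's first renewal.
insert into dw_subscription (userId, prodId, renew_date, nth_renewal)
select s.userId,
       s.prodId,
       s.renew_date,
       coalesce(d.max_renewal, 0)
         + row_number() over (partition by s.userId, s.prodId order by s.renew_date) as nth_renewal
from stg_subscription s          -- hypothetical staging table holding only the new rows
left join (
    select userId, prodId, max(nth_renewal) as max_renewal
    from dw_subscription         -- hypothetical warehouse target
    group by userId, prodId
) d on d.userId = s.userId and d.prodId = s.prodId;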

Constraint to prevent overlapping time periods

Temporal tables and time periods are nothing special in the IT world. But somehow it seems my request is rather unique because I cannot find anything useful for it at all.
What I have are two tables: one contains static data and the other contains dynamic data for entries of the first table, which changes over time. Each row in the second table has two columns ValidFrom and ValidUntil, where the latter can be null if there is no planned end of validity.
Simplified, the schema looks like this:
tbStatic
+----+------------+------------+
| Id | Attribute1 | Attribute2 |
+----+------------+------------+
| 1  | foo        | bar        |
| 2  | baz        | foo        |
+----+------------+------------+
tbDynamic
+----+------------+---------------------+---------------------+------------+------------+
| Id | tbStaticId | ValidFrom           | ValidUntil          | Attribute1 | Attribute2 |
+----+------------+---------------------+---------------------+------------+------------+
| 1  | 1          | 2018-01-01 00:00:00 | 2018-01-31 23:59:59 | 1          | 0          |
| 2  | 2          | 2018-04-01 00:00:00 | 2018-04-02 11:59:59 | 2          | 1          |
| 3  | 1          | 2018-02-01 00:00:00 | null                | 2          | 1          |
| 4  | 2          | 2018-05-01 00:00:00 | 2018-06-01 00:00:00 | 23         | 15         |
| 5  | 2          | 2018-07-01 01:23:45 | 2018-07-05 23:12:01 | 80         | 12         |
+----+------------+---------------------+---------------------+------------+------------+
As you might have spotted, there is the possibility of holes between time periods. What we cannot have, though, is overlapping periods: it must be impossible to have overlapping time periods for the same tbStaticId.
Unfortunately, this is only a requirement so far, and although it is enforced in the application using the database, I would prefer to have a constraint on the table that prevents new rows from being inserted, or existing rows from being updated, when they violate this time uniqueness.
As stated, my research up to this point has been rather disappointing, which is also the reason I cannot really show any code I've tried yet. The most promising approach I have followed so far was to create a function that takes a record, or the two time period values and the foreign key, as input and determines whether they overlap with anything else. This function could then be called in a check constraint. But after thinking about the number of cases to check, I gave up because it seemed unreasonable (especially when considering updates as well, which require additional attention).
So my question is whether there is some easy way to constrain time slices in SQL Server without the use of temporal tables (not available in my SQL Server version), and if so, how?
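For what it's worth, the function-plus-check-constraint idea mentioned above can be sketched roughly like this (table and column names are taken from the question; the function and constraint names are hypothetical). It is only a sketch: a scalar UDF in a check constraint is evaluated per modified row and on its own does not protect against two concurrent sessions inserting overlapping rows at the same time.
create function dbo.fnCountOverlappingPeriods
    (@Id int, @tbStaticId int, @ValidFrom datetime2, @ValidUntil datetime2)
returns int
as
begin
    -- count other rows for the same tbStaticId whose period intersects the given one;
    -- a null ValidUntil is treated as open-ended
    return (select count(*)
            from tbDynamic d
            where d.tbStaticId = @tbStaticId
              and d.Id <> @Id
              and d.ValidFrom < isnull(@ValidUntil, '9999-12-31')
              and isnull(d.ValidUntil, '9999-12-31') > @ValidFrom);
end
go

alter table tbDynamic add constraint ckNoOverlappingPeriods
    check (dbo.fnCountOverlappingPeriods(Id, tbStaticId, ValidFrom, ValidUntil) = 0);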

Duplicate amount column for payment transaction design

There are a few payment methods: credit/debit card, cash, and bitcoin.
This is my payment transaction table:
Transaction:
| ID | AMOUNT | METHOD |
| 1  | 80     | credit |
| 2  | 100    | cash   |
Transaction_credit:
| ID | AMOUNT | TYPE     | TRANSACTION_ID |
| 1  | 80     | sale     | 1              |
| 2  | -80    | reversal | 1              |
Transaction_cash:
| ID | AMOUNT | TYPE    | TRANSACTION_ID |
| 2  | 100    | payment | 2              |
| 2  | -100   | refund  | 2              |
Do you think it is a good idea to have the amount in the card, cash, and bitcoin sub-tables?
How can I avoid duplicating the amount in the sub-tables?
I think your database design needs some improvements.
Firstly: the Transaction entity (table) in accounting systems holds all money transactions. If a sale is reversed, you should create a new Transaction row; likewise, if a payment is refunded, you should create a new Transaction row.
Secondly: the details of each transaction should be saved in second-level entities (tables), as you have designed correctly. Transaction types (e.g. card, cash, bitcoin, etc.) have many different attributes, so putting all the types into one entity leads to bad design traps such as a pile of nullable columns.
Thirdly: if you want a complete accounting system that supports all accounting functions (like generating a balance sheet), you will need to add many other entities.
But in this case you should keep Amount on Transaction. Finding the amount in the other tables is difficult when you want to run queries based on the overall amount per Transaction.
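As an illustration of that last point, using the table names from the question (note that Transaction is a keyword in some databases and may need quoting): with Amount on Transaction a per-method total is a single scan, while keeping it only in the sub-tables forces a union that grows with every new payment method.
-- with AMOUNT on Transaction
select METHOD, sum(AMOUNT) as total
from Transaction
group by METHOD;

-- with AMOUNT only in the sub-tables (one more branch per payment method)
select 'credit' as method, sum(AMOUNT) as total from Transaction_credit
union all
select 'cash', sum(AMOUNT) from Transaction_cash;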

Reducing amount of http requests by grouping queries

So I have a bunch of chatty HTTP requests in my Angular 1 app which are bottlenecking many of the other requests.
Imagine I have a list of Users from a totally different data source and I make calls to 5 different tables such as:
user.signup
+-----+------------+
| uid | date       |
+-----+------------+
| 1   | 2016-12-13 |
| 2   | 2016-12-01 |
+-----+------------+
user.favourite_color
+-----+-------+
| uid | color |
+-----+-------+
| 1   | red   |
| 5   | blue  |
| 7   | green |
+-----+-------+
user.location
+-----+-----------+
| uid | location  |
+-----+-----------+
| 2   | uk        |
| 3   | france    |
| 9   | greenland |
+-----+-----------+
The reason they are in different tables are because the fields are optional.
The way I see it I have 3 options:
Put them in 1 table
So I could just group them all in 1 table and have a bunch of null columns but that just doesn't sit right with me in terms of DB design.
+-----+------------+-----------+-------+
| uid | date       | location  | color |
+-----+------------+-----------+-------+
| 1   | 2016-12-13 | null      | red   |
| 2   | 2016-12-01 | uk        | null  |
| 3   | null       | greenland | null  |
| 5   | null       | null      | blue  |
+-----+------------+-----------+-------+
Join them all with 1 request
So I could just have one query that joins all these tables, but the way I see it they would have to be full joins, with the expectation that some uids won't exist in some tables, e.g.
+------+------------+-------+-----------+-------+-------+
| uid  | date       | l_uid | location  | c_uid | color |
+------+------------+-------+-----------+-------+-------+
| 1    | 2016-12-13 | null  | null      | 1     | red   |
| 2    | 2016-12-01 | 2     | uk        | null  | null  |
| null | null       | 3     | greenland | null  | null  |
| null | null       | null  | null      | 5     | blue  |
+------+------------+-------+-----------+-------+-------+
which is probably even worse!
Change the way the requests are made?
Maybe make some clever changes to how the requests are made:
function activate() {
    $q.all([requestSignupDate(), requestFaveColor(), requestLocation(), ....])
        .then(function (data) {
            // do a bunch of stuff with the data
        });
}
which I want to change to:
function activate() {
    requestUserData();
}
Any suggestions?
This is a typical ORM problem: the database design does not provide a good way of storing the user as a single entity.
The entity's properties are spread out over multiple tables, and because they are optional you are taxed with left joins against multiple tables.
So you essentially have to solve that problem first. You have several options (or non-options, without knowing your requirements).
Put them in one table
If you can use nullable columns and refactor your application, this is preferable. I see that your other tables have just one or two more fields. Heavily normalized tables save some space, but with no data repetition the other benefits of normalization are moot.
Join them all with 1 request
Only if your query stays performant. Can you use a left join (see the sketch below)?
You would do this if the option above is difficult. Use it only as a quick fix.
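A rough sketch of that query, assuming a driving list of users is available as a table (hypothetically named users below) and using the optional tables from the question; missing rows simply come back as NULLs instead of the extra *_uid columns shown above.
select u.uid,
       s.date,
       c.color,
       l.location
from users u                                      -- hypothetical master list of uids
left join user.signup s          on s.uid = u.uid
left join user.favourite_color c on c.uid = u.uid
left join user.location l        on l.uid = u.uid;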
Other options to solve the database problem
Use server-side caching if feasible.
If feasible, use a different (NoSQL) database, e.g. MongoDB.
Change the way the requests are made?
Do you really need all the properties upfront? The web is asynchronous, so why not keep things async? Use $q.all only if you need all the properties at once. For example, the user may not even navigate or scroll to a certain part of the page to see some of this data, so why make those queries in the first place?
Along with this, you can cluster your server side and the database so that these queries fall on multiple machines and the load gets distributed. You may get some items retrieved in parallel.
If these columns and tables are all that you have, i.e. the supplemental tables have only a few properties, I would go with the Put them in 1 table option.
Why doesn't using nullable fields sit right with you? NULL exists because it's useful, and a profile table with optional values for a fixed set of fields is practically the textbook case for invoking it. If fields can be dynamically redefined (e.g. swapping out "favorite color" for "favorite food"), it's another story, but that's not a requirement in what you've described.

What's the fastest way to perform large inserts with foreign key relationships and preprocessing?

I need to regularly import large (hundreds of thousands of lines) tsv files into multiple related SQL Server 2008 R2 tables.
The input file looks something like this (it's actually even more complex and the data is of a different nature, but what I have here is analogous):
January_1_Lunch.tsv
+-------+----------+-------------+---------+
| Diner | Beverage | Food        | Dessert |
+-------+----------+-------------+---------+
| Nancy | coffee   | salad_steak | pie     |
| Joe   | milk     | soup_steak  | cake    |
| Pat   | coffee   | soup_tofu   | pie     |
+-------+----------+-------------+---------+
Notice that one column contains a character-delimited list that needs preprocessing to split it up.
The schema is highly normalized -- each record has multiple many-to-many foreign key relationships. Nothing too unusual here...
Meals
+----+-----------------+
| id | name            |
+----+-----------------+
| 1  | January_1_Lunch |
+----+-----------------+
Beverages
+----+--------+
| id | name   |
+----+--------+
| 1  | coffee |
| 2  | milk   |
+----+--------+
Food
+----+-------+
| id | name  |
+----+-------+
| 1  | salad |
| 2  | soup  |
| 3  | steak |
| 4  | tofu  |
+----+-------+
Desserts
+----+------+
| id | name |
+----+------+
| 1  | pie  |
| 2  | cake |
+----+------+
Each input column is ultimately destined for a separate table.
This might seem an unnecessarily complex schema -- why not just have a single table that matches the input? But consider that a diner may come into the restaurant and order only a drink or a dessert, in which case there would be many null rows. Considering that this DB will ultimately store hundreds of millions of records, that seems like a poor use of storage. I also want to be able to generate reports for just beverages, just desserts, etc., and I figure those will perform much better with separate tables.
The orders are tracked in relationship tables like this:
BeverageOrders
+--------+---------+------------+
| mealId | dinerId | beverageId |
+--------+---------+------------+
| 1      | 1       | 1          |
| 1      | 2       | 2          |
| 1      | 3       | 1          |
+--------+---------+------------+
FoodOrders
+--------+---------+--------+
| mealId | dinerId | foodId |
+--------+---------+--------+
| 1      | 1       | 1      |
| 1      | 1       | 3      |
| 1      | 2       | 2      |
| 1      | 2       | 3      |
| 1      | 3       | 2      |
| 1      | 3       | 4      |
+--------+---------+--------+
DessertOrders
+--------+---------+-----------+
| mealId | dinerId | dessertId |
+--------+---------+-----------+
| 1      | 1       | 1         |
| 1      | 2       | 2         |
| 1      | 3       | 1         |
+--------+---------+-----------+
Note that there are more records for Food because the input contained those nasty little lists that were split into multiple records. This is another reason it helps to have separate tables.
So the question is, what's the most efficient way to get the data from the file into the schema you see above?
Approaches I've considered:
Parse the tsv file line-by-line, performing the inserts as I go. Whether using an ORM or not, this seems like a lot of trips to the database and would be very slow.
Parse the tsv file to data structures in memory, or multiple files on disk, that correspond to the schema. Then use SqlBulkCopy to import each one. While it's fewer transactions, it seems more expensive than simply performing lots of inserts, due to having to either cache a lot of data or perform many writes to disk.
Per How do I bulk insert two datatables that have an Identity relationship and Best practices for inserting/updating large amount of data in SQL Server 2008, import the tsv file into a staging table, then merge into the schema, using DB functions to do the preprocessing. This seems like the best option, but I'd think the validation and preprocessing could be done more efficiently in C# or really anything else.
Are there any other possibilities out there?
The schema is still under development so I can revise it if that ends up being the sticking point.
You can import your file into a staging table with the following structure: Diner, Beverage, Food, Dessert, ID (identity, nonclustered primary key, for performance reasons).
After this, simply add the columns Diner_ID, Beverage_ID, and Dessert_ID and fill them from your separate tables (it's simple to group each of the columns, add the missing values to the lookup tables such as Beverages, Desserts, and Meals, and then update the imported table with the IDs for both existing and newly added records).
The situation with the Food table is more complex because foods can be combined, but the same trick can be used: add the data to your lookup table and, alongside it, store the combinations of foods in an additional temp table (with a unique ID) together with their separation into single dishes.
When the parsing is finished, you will have 3 temp tables:
a table with all your imported data and IDs for all text columns
a table with distinct food lists (with IDs)
a table with the IDs of the single foods per food combination
From the above tables you can insert the parsed values into whichever final structure you want.
In this case only one (bulk) insert is done to the DB from the code side. All other data manipulation is performed in the DB.
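A condensed sketch of that flow for the Beverage column only (the staging table name, file path, and header-row setting are assumptions, Beverages.id is assumed to be an identity column, and the Food-splitting step is omitted):
-- 1. bulk load the raw file into a staging table that mirrors the tsv columns
create table Staging (
    Diner    varchar(100),
    Beverage varchar(100),
    Food     varchar(100),
    Dessert  varchar(100)
);
go

bulk insert Staging
from 'C:\imports\January_1_Lunch.tsv'            -- assumed path
with (fieldterminator = '\t', rowterminator = '\n', firstrow = 2);

-- 2. add an ID column to be resolved against the lookup table
alter table Staging add BeverageID int null;
go

-- 3. add any beverages that are not yet in the lookup table...
insert into Beverages (name)
select distinct s.Beverage
from Staging s
where not exists (select 1 from Beverages b where b.name = s.Beverage);

-- 4. ...then write the lookup IDs back into the staging table
update s
set BeverageID = b.id
from Staging s
join Beverages b on b.name = s.Beverage;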
