Postgres: How to clear transaction ID for anonymity and data reduction - database

We're running an evaluation platform where users can comment on certain things. A key feature is that each user can comment only once, and every comment is anonymous.
We're using Postgres for all our data. We want to save a flag in the database that a user created a comment (so they cannot comment again). In a separate table but within the same transaction, we want to save the comment itself without any link to the user.
However, Postgres saves the transaction ID of every inserted tuple (the xmin system column). So now there's a link between the user and their comment, which we have to avoid!
Possible (Non)Solutions
Vacuuming alone does not help, as it does not clear the transaction ID. See the "Note" box in the "24.1.5. Preventing Transaction ID Wraparound Failures" section of the Postgres docs.
Putting the inserts in different transactions doesn't really solve anything either, since transaction IDs are assigned consecutively.
We could aggregate comments from multiple users into one large text field with some separators, but since old versions of that text would be kept by Postgres at least until the next vacuum, that doesn't seem like a full solution. Also, we'd still be storing the order in which users added their comments, which it would be nice not to save either.
Re-writing all the tuples in those tables periodically (by a dummy UPDATE to all of them), followed by a vacuum, would probably erase the "insert history" sufficiently, but that too seems like a crude hack.
Is there any other way within postgres to make it impossible to reconstruct the insertion history of a table?

Perhaps you could use something like dblink or postgres_fdw to write to tables over a remote connection (either to the current database or to another one), thereby separating the xmin values, even though you as a user think you are doing it all in the "same transaction."
Regarding the concern about tracking via reverse-engineering sequential xmin values: since dblink is asynchronous, this issue may become moot at scale, when many users are simultaneously adding comments to the system. This might not work if you need to be able to roll back after encountering an error; it really depends on how important it is for you to confine the operations to one transaction.
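For illustration, a minimal sketch of the dblink variant (it assumes the dblink extension is installed; the users and comments tables and the connection string are hypothetical):
BEGIN;
-- The flag is written in the local transaction...
UPDATE users SET has_commented = true WHERE user_id = 42;
-- ...while the comment goes through a separate connection, so its tuple
-- gets an xmin from an unrelated transaction:
SELECT dblink_exec(
    'dbname=mydb',
    'INSERT INTO comments (body) VALUES (''some anonymous text'')'
);
COMMIT;
-- Caveat: the dblink insert commits on its own and is NOT rolled back
-- if the local transaction aborts.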

I don't think there is a problem.
In your comment you write that you keep a flag with the user (however exactly you store it) that keeps track of which posting the user commented on. To keep that information private, you have to keep that flag private, so that nobody except the user themselves can read it.
If no other user can see that information, then no other user can see the xmin on the corresponding table entries. Then nobody can correlate it with the xmin on the comment, so the problem is not there.
The difficult part is how to keep private the information about which postings a user commented on. I see two ways:
Don't use database techniques to do it, but write the application so that it hides that information from the users.
Use PostgreSQL Row Level Security to do it.
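A minimal Row Level Security sketch (the commented_flags table and its user_name column are assumptions, not from the question):
-- Restrict the flag table so each user sees and writes only their own row:
ALTER TABLE commented_flags ENABLE ROW LEVEL SECURITY;
CREATE POLICY own_flag_only ON commented_flags
    USING (user_name = current_user)
    WITH CHECK (user_name = current_user);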
There is no way you can keep the information from a superuser. Don't even try.

You could store the users with their flags and the comments in different database clusters (and use distributed transactions); then the xmins would be unrelated.
Make sure to disable track_commit_timestamp.
To make it impossible to correlate the transactions in the two clusters, you could issue random
SELECT txid_current();
statements, each in its own transaction (an xid is assigned only once per transaction), which do nothing but consume transaction IDs.

Related

In merging database records, how much paper trail should I leave?

A lot of duplication has crept into a client's database through poor initial design. I am writing some stored procedures to merge users etc. It would be nice to accomplish the merge and still be able to undo or roll back without doing a full database restore.
My original question was how much other housekeeping or record keeping I need to do, and how to do it. I think I've addressed that. The question now is whether anything beyond the following needs to be done. I realize now this was a poor type of question for this site. In compensation, I'll offer to share my experience with anybody who also needs to build a duplicate-record merging tool.
The basic pseudocode for a merge is:
Let from_id = the record to merge (the mergee). Let into_id = the record that all the from_id references need to point to after the merge.
Check the database schema against known parameters and return a schema_changed error if changed.
Use information in merge_config and merge_referrer_config tables to add an entry to merge_log and merge_referrer_log tables to give detailed instructions for every piece of data that needs to be changed to accomplish the merge. This log becomes the instructions for a rollback (undo). The config tables give complete information about everywhere the merging records are referenced in the database.
Follow the instructions just added to the merge log tables to update all pertinent tables (as defined in the merge_config and merge_referrer_config tables), setting each pertinent column = into_id where that column = from_id.
Mark the merged_to column of the record for from_id with the into_id.
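A sketch of the logging and update steps for a single referencing table (all table and column names here are assumptions based on the pseudocode, and parameter syntax varies by RDBMS):
-- Record the planned change so the merge can be undone later:
INSERT INTO merge_log (from_id, into_id, ref_table, ref_column, merged_at)
VALUES (:from_id, :into_id, 'orders', 'customer_id', CURRENT_TIMESTAMP);

-- Repoint the reference from the mergee to the surviving record:
UPDATE orders SET customer_id = :into_id WHERE customer_id = :from_id;

-- Mark the mergee itself:
UPDATE users SET merged_to = :into_id WHERE id = :from_id;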
Thanks,
Tom
Well, you should do a backup anyway, in case something goes horribly wrong.
In terms of an audit trail, I'd be tempted by a duplicates table with an extra column recording when the row got 'merged' out, and then housekeep that: say, chuck anything more than X old from duplicates before a merge run. Another option I've seen is a weighting for how different the records are: "exact duplicate" is 0, everything different but the key is 100. Then chuck or keep based on the weighting.
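A sketch of such a duplicates table (names and dialect are assumptions):
CREATE TABLE merged_duplicates (
    from_id    INT NOT NULL,
    into_id    INT NOT NULL,
    merged_at  TIMESTAMP NOT NULL,
    similarity INT NOT NULL  -- 0 = exact duplicate .. 100 = only the key matches
);

-- Housekeeping before a merge run: chuck anything older than X (here 90 days):
DELETE FROM merged_duplicates
WHERE merged_at < CURRENT_TIMESTAMP - INTERVAL '90' DAY;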
Whatever approach you take, look at it on the basis that you audit every sniff at the start; then, as you get a feel for the data, you can silently bin things, or look at prioritising the critical weaknesses in the system.

Unit of Work - What is the best approach to temporary object storage on a web farm?

I need to design and implement something similar to what Martin Fowler calls the "Unit of Work" pattern. I have heard others refer to it as a "Shopping Cart" pattern, but I'm not convinced the needs are the same.
The specific problem is that users (and our UI team) want to be able to create and assign child objects (with referential integrity constraints in the database) before the parent object is created. I met with another of our designers today and we came up with two alternative approaches.
a) First, create a dummy parent object in the database, and then create dummy children and dummy assignments. We could use negative keys (our normal keys are all positive) to distinguish the sheep from the goats in the database. Then, when the user submits the entire transaction, we have to update the data and get the real keys added and aligned.
I see several drawbacks to this one.
It causes perturbations to the indexes.
We still need to come up with something to satisfy unique constraints on columns that have them.
We have to modify a lot of existing SQL and code that generates SQL to add yet another predicate to a lot of WHERE clauses.
Altering the primary keys in Oracle can be done, but it's a challenge.
b) Create transient tables for objects and assignments that need to be able to participate in these reverse transactions. When the user hits Submit, we generate the real entries and purge the transient ones.
I think this is cleaner than the first alternative, but still involves increased levels of database activity.
Both methods require that I have some way to expire transient data if the session is lost before the user executes a submit or cancel request.
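A sketch of option (b) in Oracle (all names are assumptions): transient rows carry a session key and a timestamp so abandoned sessions can be purged.
CREATE TABLE transient_child (
    session_id VARCHAR2(64) NOT NULL,
    temp_key   NUMBER       NOT NULL,
    payload    CLOB,
    created_at TIMESTAMP DEFAULT SYSTIMESTAMP NOT NULL,
    PRIMARY KEY (session_id, temp_key)
);

-- Periodic cleanup of sessions abandoned for more than a day:
DELETE FROM transient_child
WHERE created_at < SYSTIMESTAMP - INTERVAL '1' DAY;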
Has anyone solved this problem in a different way?
Thanks in advance for your help.
I don't understand why these objects need to be created in the database before the transaction is committed, so you might want to clarify with your UI team before proceeding with a solution. You may find that all they want to do is read information previously saved by the user on another page.
So, assuming that the objects don't need to be stored in the database before the commit, I give you plan C:
Store initialized business objects in the session. You can then create all the children you want, and only touch the database (and set up references) when the transaction needs to be committed. If the session data is going to be large (either individually or collectively), store the session information in the database (you may already be doing this).

What will be the best way to keep track of modified tuples in a database?

I am currently working on a project in which I have to keep track of the tuples that are modified in a relational database. This should include updated tuples, but also inserted and deleted tuples. My question is what will be the best way to accomplish this? I have several ideas of my own, but maybe there are easier/better ways that I did not think of, or there already exists a project that exactly does this.
The final goal of the project is that it will work for relational databases of different vendors, but the first implementation will use a MySQL database. Other database systems can be supported later. But it would be nice if the solution that works for MySQL can be easily adapted to another database.
My first idea was to parse log files. However, I am not certain whether these logfiles contain the actual modified tuples, and furthermore I can imagine that these logfiles will not always be available (e.g. on shared hosting).
My second idea was to intercept the queries at the application level. When an INSERT, DELETE, or UPDATE query is performed, it can be parsed, and the tuples it will affect can be determined beforehand. For an INSERT operation this is simply the inserted tuple, and for a DELETE or UPDATE operation the tuples can be identified by applying the WHERE clause in a new SELECT statement.
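For example (hypothetical table and predicate), before executing an UPDATE the application could capture the tuples it is about to touch with the same WHERE clause:
-- Statement to be intercepted:
--   UPDATE accounts SET balance = 0 WHERE status = 'closed';
-- Capture the affected tuples first, using the same predicate:
SELECT * FROM accounts WHERE status = 'closed';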
As a last remark I want to add that performance is not an important factor at this stage of development.
If more details are needed I am happy to provide them.
Use triggers to capture the INSERT, UPDATE, and DELETE operations and log the entries to a new table. You can use a timestamp column on that table to note when each transaction occurred, and later query that table for your modification information.
This will require some database-dependent features, but you can encapsulate them depending on your architecture. You could use database triggers, which I normally advise against except for this very thing: auditing. In each kind of trigger, you could simply write whatever info you need to a log table. Just one suggestion.
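A minimal MySQL sketch of the trigger approach (the orders table and the log schema are assumptions):
CREATE TABLE row_changes (
    change_id  INT AUTO_INCREMENT PRIMARY KEY,
    table_name VARCHAR(64) NOT NULL,
    action     VARCHAR(6)  NOT NULL,
    row_pk     INT         NOT NULL,
    changed_at TIMESTAMP   NOT NULL DEFAULT CURRENT_TIMESTAMP
);

CREATE TRIGGER orders_after_insert AFTER INSERT ON orders
FOR EACH ROW
INSERT INTO row_changes (table_name, action, row_pk)
VALUES ('orders', 'INSERT', NEW.id);

-- Analogous AFTER UPDATE and AFTER DELETE triggers would log OLD.id.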

Database design question. BIT column for deletions

Part of my table design is to include an IsDeleted BIT column that is set to 1 whenever a user deletes a record. Therefore all SELECTs are inevitably accompanied by a WHERE IsDeleted = 0 condition.
I read in a previous question (I cannot for the love of God re-find that post to reference it) that this might not be the best design, and that an 'Audit Trail' table might be better.
How are you guys dealing with this problem?
Update
I'm on SQL Server. Solutions for other DB's are welcome albeit not as useful for me but maybe for other people.
Update2
Just to encapsulate what everyone has said so far, there seem to be basically three ways to deal with this:
Leave it as it is
Create an audit table to keep track of all the changes
Use of views with WHERE IsDeleted = 0
Therefore all SELECTs are inevitably accompanied by a WHERE IsDeleted = 0 condition.
This is not a really good way to do it; as you have probably noticed, it is quite error-prone.
You could create a VIEW which is simply
CREATE VIEW myview AS SELECT * FROM yourtable WHERE IsDeleted = 0;
Then you just use myview instead of yourtable and you don't have to think about this damn column in your SELECTs.
Or, you could move deleted records to a separate "archive" table which, depending on the proportion of deleted versus active records, might make your "active" table a lot smaller and better cached in RAM, i.e. faster.
If you have to have this kind of deleted-bit column, then you really should consider setting up some VIEWs with the WHERE clause in them, and use those rather than the underlying tables. Much less error-prone.
For example, if you have this view:
CREATE VIEW [Current Product List] AS
SELECT ProductID, ProductName
FROM Products
WHERE Discontinued = No
Then someone who wants to see current products can simply write:
SELECT * FROM [Current Product List]
This is much less error-prone than writing:
SELECT ProductID, ProductName
FROM Products
WHERE Discontinued = No
As you say, people will forget that WHERE clause, and get confusing and incorrect results.
P.S. The example SQL comes from Microsoft's Northwind database. Normally I would recommend NOT using spaces in column and table names.
We're actively using the "Deleted" column in our enterprise software. It is, however, a source of constant errors when someone forgets to add "WHERE Deleted = 0" to an SQL query.
I'm not sure what is meant by "Audit Trail". You may wish to have a table to track all deleted records, or there may be an option of moving the deleted content to paired tables (like Customer_Deleted) to remove the passive content from the main tables, minimizing their size and optimizing performance.
A while ago there was some blog uproar on this issue; Ayende and Udi Dahan both posted on it.
Nai, this is totally up to you.
Do you need to be able to see who has deleted / modified / inserted what and when? If so, you should design the tables for this and adjust your procs to write these values when they are called.
If you don't need an audit trail, don't waste time with one. Just do as you are with IsDeleted.
Personally, I flag things right now, as an audit trail wasn't specified in my spec. That said, I don't like to actually delete things, hence I chose to flag them. I'm not going to waste a client's time writing something they didn't request, and I won't mess about with other tables because that's another thing for me to think about. I'd just make sure my indexes were up to the job.
Ask your manager or client: plan out how long the audit trail would take so they can cost it, and let them make the decision for you ;)
Udi Dahan said this:
Model the task, not the data
Looking back at the story our friend from marketing told us, his intent is to discontinue the product – not to delete it in any technical sense of the word. As such, we probably should provide a more explicit representation of this task in the user interface than just selecting a row in some grid and clicking the ‘delete’ button (and “Are you sure?” isn’t it).
As we broaden our perspective to more parts of the system, we see this same pattern repeating:
Orders aren’t deleted – they’re cancelled. There may also be fees incurred if the order is canceled too late.
Employees aren’t deleted – they’re fired (or possibly retired). A compensation package often needs to be handled.
Jobs aren’t deleted – they’re filled (or their requisition is revoked).
In all cases, the thing we should focus on is the task the user wishes to perform, rather than on the technical action to be performed on one entity or another. In almost all cases, more than one entity needs to be considered.
If you have Oracle DB, then you can use its audit trail for auditing. Check out the Audit Vault tool from OTN. It even supports SQL Server.
Views (or stored procs) to get at the underlying table data are the best way. However, if you have the "too many cooks in the kitchen" problem like we do (too many people have rights to the data and may just use the table without knowing enough to use the view/proc), you should try using another table.
We have a complete mimic of the base table with a few extra columns for tracking. So the Employee table has an EmployeeDeleted table with the same schema, plus extra columns for when it was deleted, who deleted it, and sometimes even the reason for deletion. You can even get fancy and have triggers do the insertion directly instead of going through applications/procs, as in the sketch below.
Biggest Advantage: no flag to worry about during selects
Biggest Disadvantage: any schema changes to the base table also have to be made on the "deleted" table
Best for: situations where, for whatever reason (usually political with us), many not-so-experienced people have rights to the data but still expect it to be accurate without having to understand flags or schemas, etc.
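A SQL Server sketch of the trigger doing the move (table and column names are assumptions):
CREATE TRIGGER trg_Employee_Delete ON Employee
INSTEAD OF DELETE
AS
BEGIN
    -- Copy the rows into the mimic table along with the tracking columns...
    INSERT INTO EmployeeDeleted (EmployeeID, Name, DeletedAt, DeletedBy)
    SELECT EmployeeID, Name, GETDATE(), SUSER_SNAME()
    FROM deleted;
    -- ...then actually remove them from the base table:
    DELETE FROM Employee
    WHERE EmployeeID IN (SELECT EmployeeID FROM deleted);
END;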
I've used soft deletes before on a number of applications I've worked on, and overall it's worked out quite well. Yes, there is the issue of always having to remember to add AND IsActive = 1 to all of your SELECT queries, but really that's not so bad. You can create views if you don't want to have to remember to always do that.
The reason we've done this is because we had very specific business needs to be able to report on records that have been deleted. The reporting needs varied widely - sometimes they'd need to see just the active records, or just the inactive records, or sometimes a mix of both - so pushing all the deleted records into an audit table wasn't a very good option.
So, depending on your particular business needs, I think this approach is certainly a viable option.

Database: To delete or not to delete records

I don't think I am the only person wondering about this. What do you usually practice regarding database behavior? Do you prefer to delete a record from the database physically? Or is it better to just flag the record with a "deleted" flag or a boolean column to denote whether the record is active or inactive?
It definitely depends on the actual content of your database. If you're using it to store session information, then by all means wipe it immediately when the session expires (or is closed); you don't want that garbage lying around, as it cannot really be used again for any practical purpose.
Basically, what you need to ask yourself is: might I need to restore this information? Like deleted questions on SO, they should definitely just be marked 'deleted', as we're actively allowing an undelete. We also have the option to display them to select users, without much extra work.
If you're not actively seeking to fully restore the data, but you'd still like to keep it around for monitoring (or similar) purposes, I would suggest that you figure out (to the extent possible, of course) an aggregation scheme and shove that off to another table. This will keep your primary table clean of 'deleted' data and keep your secondary table optimized for monitoring purposes (or whatever you had in mind).
For temporal data, see: http://talentedmonkeys.wordpress.com/2010/05/15/temporal-data-in-a-relational-database/
Pros of using a delete flag:
You can get the data back later if you need it
The delete operation (updating the flag) is probably quicker than really deleting the row
Cons of using a delete flag:
It is very easy to miss AND DeletedFlag = 'N' somewhere in your SQL
Slower for the database to find the rows that you are interested in amongst all the crap
Eventually, you'll probably want to really delete it anyway (assuming your system is successful: what about when that record is 10 years old and was "deleted" 4 minutes after it was originally created?)
It can make it impossible to use a natural key. You may have one or more deleted rows with the natural key and a real row wanting to use that same natural key.
There may be legal/compliance reasons why you are meant to actually delete data.
As a complement to all the posts...
If you plan to mark records rather than delete them, it's good to consider making a view for active records. This would save you from writing (or forgetting) the flag in your SQL queries. You might consider a view for non-active records too, if you think that would also serve some purpose.
I am glad to have found this thread. I too was wondering what people thought about this issue. I have implemented 'marked as deleted' for about 15 years on many systems. Whenever a user would call to say something was accidentally deleted, it was certainly a lot easier to mark it un-deleted than to recreate it or restore from a backup.
We are using PostgreSQL and Ruby on Rails. It looks like we could do this in one of two ways: modify Rails, or add an ON DELETE trigger with a PL/pgSQL function that marks the record as deleted instead of removing it. I am leaning toward the latter.
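A minimal sketch of the trigger route (the items table and deleted column are assumptions):
-- Intercept DELETE and flip the flag instead of removing the row:
CREATE FUNCTION soft_delete() RETURNS trigger AS $$
BEGIN
    UPDATE items SET deleted = true WHERE id = OLD.id;
    RETURN NULL;  -- returning NULL cancels the actual DELETE
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER items_soft_delete
BEFORE DELETE ON items
FOR EACH ROW EXECUTE PROCEDURE soft_delete();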
As for performance hits, it will be interesting to see the results of EXPLAIN ANALYZE on large tables with few deleted items as well as with many deleted items.
In systems used over time, I have found that new users tend to do silly things like delete records accidentally. When people are new in a position, they have all the access rights of the person previously in that position, but zero experience. Accidentally deleting something and being able to quickly recover it gets everyone back to work quickly.
But as someone said, sometimes you may need that particular key back for some reason; at that point you would need to really delete the row, then re-create the record (or un-delete it and modify it).
I mark them as deleted, and don't really delete. However every once in a while I sweep out all the junk and archive it, so it doesn't kill performance.
There are also legal issues either way if personal data is involved. I think it greatly depends on where you are (or where the database is), and what the terms of use are.
In some cases people can ask to be removed from your system, in which case a hard delete is needed (or at least clearing out all of the personal information).
I would check with your legal department before you adopt a strategy either way if personal information is involved.
If you are concerned about "dormant" records slowing down your database access, you may want to move those rows into another table acting as an "archive" table.
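For instance (hypothetical names; run inside one transaction), a periodic sweep might look like:
-- Move soft-deleted rows out of the hot table, then remove them:
INSERT INTO orders_archive SELECT * FROM orders WHERE is_deleted = 1;
DELETE FROM orders WHERE is_deleted = 1;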
For user-entered/managed data I've used the flag method you describe and given the user an "empty the trash" interface to actually delete items if they choose to.
I have a database with lots of dependencies, so I cannot delete some records because others still depend on the data. This is what I usually do: I try to delete the record; if that works, I know it didn't have any dependencies and didn't matter. If it doesn't, I catch the error and flag the record as inactive:
try
{
    // Attempt a hard delete first.
    _context.SomeTable.Remove(someEntity);
    await _context.SaveChangesAsync();
}
catch (DbUpdateException ex)
    when (ex.InnerException is SqlException sqlEx && sqlEx.Number == 547)
{
    // 547 = SQL Server foreign key violation: something still references
    // this row. Detach the pending delete, then mark the row as inactive.
    _context.Entry(someEntity).State = EntityState.Unchanged;
    someEntity.Active = false;
    await _context.SaveChangesAsync();
}
