Is this a functional syncing algorithm? - database

I'm working on a basic syncing algorithm for a user's notes. I've got most of it figured out, but before I start programming it, I want to run it by people here to see if it makes sense. Usually I end up overlooking one huge, important thing that someone else spots easily. Here's how it works:
I have a table in my database where I insert objects called SyncOperation. A SyncOperation is a piece of metadata describing what each device needs to do to be up to date. Say a user has two registered devices, firstDevice and secondDevice. firstDevice creates a new note and pushes it to the server. A SyncOperation is then created with the note's ID, an operation type, and a processedDeviceList: I give it the type "NewNote" and add the originating device's ID to its processedDeviceList. Now secondDevice checks in with the server to see if it needs to make any updates. It queries for all SyncOperations where secondDeviceId is not in the processedDeviceList, sees that the type is NewNote, fetches the new note, and adds itself to the processedDeviceList. Now this device is in sync.
When I delete a note, I find the already-created SyncOperation in the table with type "NewNote". I change the type to "Delete" and remove all devices from the processedDeviceList except the device that deleted the note. Now when other devices call in to see what they need to update, since their deviceId is not in the processedDeviceList, they'll have to process that SyncOperation, which tells them to delete the corresponding note.
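Roughly, the tables and the check-in query I have in mind look something like this (just a sketch; the names are placeholders, and I've modelled the processedDeviceList as its own table so it can be queried):

CREATE TABLE SyncOperation (
    syncOperationId INTEGER PRIMARY KEY,
    noteId          INTEGER NOT NULL,
    operationType   TEXT    NOT NULL    -- 'NewNote' or 'Delete'
);

-- which devices have already applied which operation
CREATE TABLE ProcessedDevice (
    syncOperationId INTEGER NOT NULL REFERENCES SyncOperation(syncOperationId),
    deviceId        TEXT    NOT NULL
);

-- secondDevice checking in: everything it has not processed yet
SELECT s.*
FROM SyncOperation s
WHERE NOT EXISTS (SELECT 1
                  FROM ProcessedDevice p
                  WHERE p.syncOperationId = s.syncOperationId
                    AND p.deviceId = 'secondDevice');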
And that's generally how it'd work. Is my solution too complicated? Can it be simplified? Can anyone think of a situation where this wouldn't work? Will this be inefficient on a large scale?

Sounds very complicated - the central database shouldn't be responsible for determining which devices have received which updates. Here's how I'd do it:
The database keeps a table of SyncOperations, one row per change. Each SyncOperation has a change_id numbered in ascending order (that is, change_id INTEGER PRIMARY KEY AUTOINCREMENT).
Each device keeps a current_change_id number representing what change it last saw.
When a device wants to update, it does SELECT * FROM SyncOperations WHERE change_id > current_change_id. This gets it the list of all changes it needs to be up-to-date. Apply each of them in chronological order.
This has the charming feature that, if you wanted to, you could initialise a new device simply by creating a new client with current_change_id = 0. Then it would pull in all updates.
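A minimal sketch of what I mean (SQLite-flavoured; table and column names are only examples):

CREATE TABLE SyncOperations (
    change_id      INTEGER PRIMARY KEY AUTOINCREMENT,
    note_id        INTEGER NOT NULL,
    operation_type TEXT    NOT NULL    -- 'NewNote', 'Edit', 'Delete', ...
);

-- a device whose current_change_id is 42 asks for everything newer
SELECT * FROM SyncOperations
WHERE change_id > 42
ORDER BY change_id;

The device applies those rows in order, then stores the highest change_id it received as its new current_change_id.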
Note that this won't really work if two users can be doing concurrent edits (which edit "wins"?). You can try and merge edits automatically, or you can raise a notification to the user. If you want some inspiration, look at the operation of the git version control system (or Mercurial, or CVS...) for conflicting edits.
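If you at least want to detect conflicting edits, one common approach is optimistic concurrency with a version number - a rough sketch, with invented table and column names:

-- each note carries a version that every successful edit increments
UPDATE Notes
SET body = 'new text', version = version + 1
WHERE note_id = 7 AND version = 3;
-- if this affects 0 rows, someone else edited the note first:
-- surface a conflict to the user instead of silently overwriting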

You may want to take a look at SyncML for ideas on how to handle sync operations (http://www.openmobilealliance.org/tech/affiliates/syncml/syncml_sync_protocol_v11_20020215.pdf). SyncML has been around for a while, and as a public standard, has had a fair amount of scrutiny and review. There are also open source implementations (Funambol comes to mind) that can provide some coding clues. You don't have to use the whole spec, but reading it may give you a few "ahah" moments about syncing data - I know it helped me think through what needs to be done.
Mark
P.S. A later version of the protocol - http://www.openmobilealliance.org/technical/release_program/docs/DS/V1_2_1-20070810-A/OMA-TS-DS_Protocol-V1_2_1-20070810-A.pdf

I have seen the basic idea of keeping track of operations in a database elsewhere, so I dare say it can be made to work. You may wish to think about what should happen if different devices are in use at much the same time and end up submitting conflicting changes - e.g. two different attempts to edit the same note. This may surface as a change to the user interface, allowing the user to intervene and resolve such conflicts manually.

Related

Reusing tables for purposes other than the intended one

We need to implement some new functionality for some clients. The functionality is essentially an EULA acceptance interface for the users. Users will open our app and be presented with the corresponding EULA (which varies from client to client). It needs to be able to store different versions of the EULA for the same client, and it also needs to store which users have accepted which version of the EULA. If a new version is saved, it will be presented to the users the next time they log in.
I've written a document suggesting that we add two tables, EULAs and UserAcceptedEULA. That will allow us to store different EULAs and keep track of which versions each user has accepted, both current and previous.
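The two tables in my document look roughly like this (column names simplified for the example):

CREATE TABLE EULAs (
    EULAId   INTEGER PRIMARY KEY,
    ClientID INTEGER NOT NULL,
    Version  INTEGER NOT NULL,
    EULAText TEXT    NOT NULL
);

CREATE TABLE UserAcceptedEULA (
    UserID     INTEGER NOT NULL,
    EULAId     INTEGER NOT NULL REFERENCES EULAs(EULAId),
    AcceptedAt TIMESTAMP NOT NULL,
    PRIMARY KEY (UserID, EULAId)
);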
My problem comes with how some people at the company want to do the implementation. They suggest using a table ConstantGroups (which contains ConstantGroupID, Timestamp, ClientID and Name) that we already use for grouping constants with their values, which are stored in another table, e.g.: the ConstantGroup would be Quality, and the values would be High, Medium, Low.
To me this is a horrible, incredibly wrong way to do it. They're suggesting it because we already have an endpoint where you pass the ClientID and you get back a string, so it "does what we need".
I wrote the document explaining the whole solution, covering DB changes, the APIs needed and UI modifications, but they still don't want to accept it because they think their way will save us time.
How do I make them understand how horribly wrong they are?
This depends somewhat on your assumptions about "good" design.
Many software folk have adopted the SOLID principles as being "good" (I am one of them). While the original thinking is about object oriented design, I think many apply to databases too.
The first element of that is "Single responsibility". A table should do one thing, and one thing only. Your colleagues are trying to get a single entity to manage different concepts; the ConstantGroups table is suddenly responsible for both "constants" and "EULA acceptance".
That leads to "Open for extension, closed for modification" - if you need to change the implementation of "constants" or "EULAs", you have to untangle it from the other. So any time you (might) save now will cost you later.
The second principle I like (especially in database design) is the Principle of Least Astonishment. Imagine a new developer joining the team and having to figure out how EULAs work. They would naturally look for some kind of "EULA" and "Acceptance" tables, and would be astonished to learn that this is actually managed in a thing called "constants". Any time you save now will be spent again onboarding new people (or indeed reminding yourself in 2 years' time when you have to fix a bug).

Postgres: How to clear transaction ID for anonymity and data reduction

We're running an evaluation platform where users can comment on certain things. A key feature is that people can comment only once, and every comment is made in anonymity.
We're using Postgres for all our data. We want to save a flag in the database that a user created a comment (so they cannot comment again). In a separate table but within the same transaction, we want to save the comment itself without any link to the user.
However, Postgres saves the transaction ID of every tuple inserted into the database (the xmin system column). So now there's a link between the user and their comment, which we have to avoid!
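For illustration, this is the correlation we need to avoid (table and column names are only examples):

-- both rows carry the same xmin because they were inserted in one transaction
SELECT xmin, user_id FROM commented_flags;
SELECT xmin, comment_text FROM comments;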
Possible (Non)Solutions
Vacuuming alone does not help as it does not clear the transaction ID. See the "Note" box in the "24.1.5. Preventing Transaction ID Wraparound Failures" section in the postgres docs.
Putting those inserts in different transactions doesn't really solve anything, since transaction IDs are consecutive.
We could aggregate comments from multiple users into one large text in the database with some separators, but since old versions of this large text would be kept by Postgres at least until the next vacuum, that doesn't seem like a full solution. Also, we'd still have the order in which users added their comments, which it would be nice not to store either.
Re-writing all the tuples in those tables periodically (by a dummy UPDATE to all of them), followed by a vacuum, would probably erase the "insert history" sufficiently, but that too seems like a crude hack.
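Concretely, that hack would be something along these lines (names again just illustrative):

-- every tuple gets rewritten with the xmin of this one transaction
UPDATE comments SET comment_text = comment_text;
-- VACUUM FULL rewrites the table so the old row versions are physically gone
VACUUM FULL comments;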
Is there any other way within postgres to make it impossible to reconstruct the insertion history of a table?
Perhaps you could use something like dblink or postgres_fdw to write to tables using a remote connection (either to the current database or another database), and thereby separate xmin values, even though you as a user think you are doing it all in the "same transaction."
Regarding the concerns about tracking via reverse-engineering sequential xmin values: since dblink is asynchronous, this issue may become moot at scale, when many users are simultaneously adding comments to the system. This might not work if you need to be able to roll back after encountering an error - it really depends on how important it is for you to confine the operations to one transaction.
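A rough sketch of the idea, using a loopback dblink connection (the connection string and table names are made up for the example):

-- requires: CREATE EXTENSION dblink;
-- the comment is inserted over a separate connection, in its own transaction,
-- so it gets a different xmin than the flag insert below
SELECT dblink_exec('dbname=mydb user=app',
                   'INSERT INTO comments (comment_text) VALUES (''great lecture'')');

-- the "already commented" flag stays in the local transaction
INSERT INTO commented_flags (user_id) VALUES (42);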
I don't think there is a problem.
In your comment you write that you keep a flag with the user (however exactly you store it) that keeps track of which posting the user commented on. To keep that information private, you have to keep that flag private, so that nobody except the user themselves can read it.
If no other user can see that information, then no other user can see the xmin on the corresponding table entries. Then nobody can correlate it with the xmin on the comment, so the problem does not arise.
The difficult part is how you keep private the information about which postings a user has commented on. I see two ways:
Don't use database techniques to do it, but write the application so that it hides that information from the users.
Use PostgreSQL Row Level Security to do it.
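A minimal sketch of the second option, with invented table and column names:

ALTER TABLE commented_flags ENABLE ROW LEVEL SECURITY;

-- each user can only see (and insert) their own flag rows
CREATE POLICY own_flags ON commented_flags
    USING (user_name = current_user)
    WITH CHECK (user_name = current_user);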
There is no way you can keep the information from a superuser. Don't even try.
You could store the users with their flags and the comments on different database clusters (and use distributed transactions), then the xmins would be unrelated.
Make sure to disable track_commit_timestamp.
To make it impossible to correlate the transactions in the databases, you could issue random calls to
SELECT txid_current();
which do nothing but increment the transaction counter.

Using dependent types to provide a compile-time proof that some integer is a valid row-id in a database?

In my never-ending wanderings in dependent-type land, a strange idea came into my head. I do a lot of database programming, and it would be nice if I could get rid of all that sanity-checking and validity-checking. One especially annoying case is functions that accept an Integer and expect it to be a valid row-id of some particular table. A very silly example is:
function loadStudent(studentId: Integer) : Student
Supposing my language of choice supports dependent types in their full glory, would it be possible to utilize the type system to make loadStudent accept only valid studentId values:
function loadStudent(studentId : ValidRowId("students_table") ) : Student
If yes, how do I write a data constructor for ValidRowId type? All the examples I have seen thus far were pure (no IO involved).
Maybe I'm misunderstanding the question, but I don't see how it's possible without doing IO. How can you know that an id is valid without searching the database to see if there is a record with that id?
I suppose that you could, at program start up time, read all the current IDs into a table in memory and then do your checks against that. But you would have to somehow know if another user had added or deleted records after you created the table.
Okay, you could have all threads on all computers that access the database communicate with some central server that keeps this master list so that it would always be current. But we already have a central place that keeps track of all the IDs currently in use in the database: it's called "the database". What would be the advantage of going to a whole lot of work to maintain a duplicate copy of a subset of the data in the database? It's unlikely you'd get much performance gain, and you'd create the possibility that bugs in your code, bad connections, etc, would result in the data getting out of sync.

Collecting data which wasn't predicted when the system was designed

How do you go about collecting and storing data which was not part of the initial database and software design? For example, suppose you've come up with a points system: you have to collect the points for every user who has already registered. For new users that would be easy, because the changes to the business logic will reflect the points system ... but what about the existing ones?
In general, how does one deal with data which should have been there from the beginning but wasn't? Writing manual queries to collect the missing pieces? Using cron jobs?
Well, you are asking for something that is by definition not possible, I think.
deal with data which should have been there from the beginning, but wasn't?
If you are able to deduce the number of points from the existing data in the database, then there is obviously no missing data. Storing the points separately would merely make it redundant (still a fine option in case you need that for performance).
For example: Stack Overflow rewards consecutive visits. Let's say they did not do that from the start. If they were already logging the date of each visit, you could recalculate the points, so there would be no missing data.
So if that is not possible, you need another solution: either get the data from other sources (parse a web server log) or get the business to draft some extra business rules for determining default values for the existing users (difficult in this particular example).
Writing manual queries to collect the missing pieces? Using crons?
I would populate that in a conversion script, or even in a special conversion application if it is very complex.
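For the visits example above, such a conversion script could be a simple one-off backfill (the schema here is invented for illustration):

-- derive the missing points from data we already have
UPDATE users
SET points = (SELECT count(*)
              FROM visits
              WHERE visits.user_id = users.user_id)
WHERE points IS NULL;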

Delete data or just flag it as deleted?

I'm building a website that lets people create vocabulary lessons. When a lesson is created, a news item is created that references the lesson. When another user practises the lesson, a reference to the lesson is also stored for that user, together with the practice result.
My question is what to do when a user decides to remove the lesson?
The options I've considered are:
1. Actually delete the lesson from the database and remove all referencing news items, practice results etc.
2. Just flag it as deleted and exclude the link from referencing news items, results etc.
What are your thoughts? Should data never be removed, à la Facebook? Should references be avoided altogether?
By the way, I'm using Google App Engine (Python/datastore). As far as I can see, a db.ReferenceProperty is not set to None when the referenced object is deleted?
Thanks!
Where changes to data need to be audited, marking data as deleted (aka "soft deletes") helps greatly, particularly if you record the user who actioned the delete and the time it occurred. It also allows data to be "un-deleted" very easily.
Having said that, there is no reason to prevent "hard deletes" (where data is actually deleted) as an administrative function to help tidy up mistakes.
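In relational terms the soft-delete pattern is roughly this (column names are just an example; the same idea maps onto a datastore property):

ALTER TABLE lessons ADD COLUMN deleted_at TIMESTAMP;    -- NULL means "not deleted"
ALTER TABLE lessons ADD COLUMN deleted_by TEXT;

-- "deleting" a lesson becomes an update
UPDATE lessons
SET deleted_at = CURRENT_TIMESTAMP, deleted_by = 'user_42'
WHERE lesson_id = 7;

-- everything that reads lessons filters out the deleted ones
SELECT * FROM lessons WHERE deleted_at IS NULL;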
Marking the data as "deleted" is simplest. If you currently have no use for it, this keeps everything in your database very tidy and makes it easy to add new functionality.
On the other hand, if you're doing something like showing the user where their "vocabulary points" came from, or how many lessons they've completed, then the reference to soft deleted items might be necessary.
I'd start with the first one and change it later if you need to. Here's why:
If you build soft deletes before anything uses them, assume they won't work in the way that future requirements actually want them to. You'll have to rewrite them anyway.
If you are using them, assume that nobody is using the feature which uses them. Now you've done a lot of work and tied yourself into maintenance of something nobody cares about.
If you create them, you'll find yourself creating a feature to use them. See the above.
If you don't create them, you can always create them later, once you have better knowledge about what the users of your system really want.
Not creating soft deletes gives you more options going forward. Options have value. Options expire. Never commit early unless you know why.
