Database design for text revisions

This question contains some excellent coverage of how to design a database history/revision scheme for data like numbers or multiple choice fields.
However, there is not much discussion of large text fields, as often found in blog/Q&A/wiki/document type systems.
So, what would be considered good practice for storing the history of a text field in a database based editing system? Is storing it in the database even a good idea?

I develop a wiki engine and page/article revisions are stored in a database table. Each revision has a sequential revision number, while the "current" revision is marked with -1 (just to avoid NULL).
Revision text is stored as-is, not diffed or something like that.
I think that performance is not a problem because you are not likely to access older revisions very frequently.

Given the current state of HDD art, it just isn't worth the effort to optimize the text storage mechanism: Document (ID, Name) and DocumentRevision (ID, DocumentID, Contents) tables will do the job. The ID in DocumentRevision may also serve as a "repository"-wide revision number. If this is not the behavior you want, assign a separate VersionID to each DocumentRevision.
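A minimal sketch of that two-table layout, using SQLite purely for illustration (the table and column names come from the paragraph above; everything else is an assumption):

import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Document (
    ID   INTEGER PRIMARY KEY AUTOINCREMENT,
    Name TEXT NOT NULL
);
CREATE TABLE DocumentRevision (
    ID         INTEGER PRIMARY KEY AUTOINCREMENT,  -- doubles as a repository-wide revision number
    DocumentID INTEGER NOT NULL REFERENCES Document(ID),
    Contents   TEXT NOT NULL                       -- full text, stored as-is
);
""")

# Saving a revision is just an INSERT; nothing is diffed or overwritten.
doc_id = conn.execute("INSERT INTO Document (Name) VALUES (?)", ("Home page",)).lastrowid
conn.execute("INSERT INTO DocumentRevision (DocumentID, Contents) VALUES (?, ?)",
             (doc_id, "First draft"))
conn.execute("INSERT INTO DocumentRevision (DocumentID, Contents) VALUES (?, ?)",
             (doc_id, "Second draft"))

# The current text is simply the revision with the highest ID for that document.
current = conn.execute(
    "SELECT Contents FROM DocumentRevision WHERE DocumentID = ? ORDER BY ID DESC LIMIT 1",
    (doc_id,)).fetchone()[0]
print(current)  # -> Second draft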

Often the most sensible way of tracking the versions of a document is to keep track of the changes made to it. Then, if a particular version is requested it can be rebuilt from the current document and the partial set of changes.
So if you have a good method of describing the types of changes to a document (this will depend largely on what the document is and how it is used), then by all means use a database to track the changes and therefore the versions.

Related

Simple version/history on ndb/Google App Engine

I am looking to create a system for tracking versions (history) of content of ndb.Models/Expandos on Google App Engine (Python).
The content can be relatively lengthy and there may be many versions, but diffs between versions may be quite small. I expect others have done something like this and I would like to know how they went about it and what principles may guide the design & development.
It is not known at the time of deployment what the attributes of the data models would be (e.g. "title", "content", "body", "dated", etc.), but the types are known (dates, text, etc).
My initial thought is to have the models arranged something like this:
from google.appengine.ext import ndb

class Version(ndb.Expando):
    version_id = ndb.IntegerProperty()
    # dated, etc.
    # data properties are not known in advance, hence Expando

class MyDoc(ndb.Model):
    head = ndb.KeyProperty(kind=Version)
    instance = ndb.KeyProperty(kind=Version, repeated=True)
    # ^^^ may be a StructuredProperty?
The overview of the algorithms is:
Saving
Every time a user saves a document, put all the latest data into a new Version and point head to that instance.
At that point, or some time after, go through old versions and change full saves into diffs (to save space), e.g. with diff-match-patch. I would expect one full save per hour, per day, or per some set interval - or after some set number of diffs.
Loading
Loading head is trivial.
Older versions would be marked as a full save or a diff, and depending on which the data may be returned directly or compiled from the diff.
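As a rough sketch of that save/load cycle, assuming diff-match-patch text patches and a full copy kept every so many versions (the ndb plumbing is left out and the names here are made up):

from diff_match_patch import diff_match_patch  # pip install diff-match-patch

dmp = diff_match_patch()
FULL_SAVE_EVERY = 10  # assumption: keep a full copy every 10 versions

def make_record(version_id, prev_text, new_text):
    """What to store for this version: a full copy at milestones, otherwise a
    patch that turns the previous version's text into this one."""
    if prev_text is None or version_id % FULL_SAVE_EVERY == 0:
        return ("full", new_text)
    patches = dmp.patch_make(prev_text, new_text)
    return ("diff", dmp.patch_toText(patches))

def load(records, version_id):
    """Rebuild a version by walking forward from the nearest full save."""
    base = version_id
    while records[base][0] != "full":
        base -= 1
    text = records[base][1]
    for i in range(base + 1, version_id + 1):
        text, _ = dmp.patch_apply(dmp.patch_fromText(records[i][1]), text)
    return text

# records[i] holds version i; the head is records[-1].
records, text = [], None
for i, body in enumerate(["v0 text", "v0 text plus a change", "v0 text plus two changes"]):
    records.append(make_record(i, text, body))
    text = body
assert load(records, 1) == "v0 text plus a change"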
Thoughts?
I am sure others have tackled this problem, and I would love to know what ideas and implementations are out there. Obviously, there are full version control systems such as Git, Mercurial, Subversion and CVS - but those are overkill for the intended purpose and would not work on Google App Engine.
A few thoughts:
You'll want a monotonically increasing ID for the versions, so you can do range queries of Version entities. This probably means you'll want all historical data in the same entity group as the document, and keep the latest version ID on the document entity or in a separate entity in the same group. If you want a system-wide monotonically increasing ID (such as to associate or order changes made to multiple entities in different groups), you'll want to look into sharded counters and cross-group transactions.
If space is enough of a concern that you'll be storing diffs, I don't see why you'd reduce full versions to diffs with a background job, and not just on update. If space is not a big concern and a major feature is to be able to diff two arbitrary versions, then it might just be easier to store full data, so the cost of a diff isn't proportional to the number of intermediate versions (or all versions, if your diff is between historical versions). Assuming you don't want to perform queries on properties of past versions, you could save space by serializing the old entity in a compact form, and storing it in a non-indexed blob property. (I assume this is how you'd store each diff anyway, if you used diffs?) You could also keep full docs at milestones every n revisions, so a diff between two historical versions requires at most 2n versions to calculate.
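For the "serialize the old entity in a compact form" idea, a small hedged sketch (zlib over JSON is just one possible encoding; the property names are invented):

import json, zlib

def pack_version(props):
    """Serialize an old version's properties into a compact blob, e.g. for a
    non-indexed blob property."""
    return zlib.compress(json.dumps(props).encode("utf-8"))

def unpack_version(blob):
    return json.loads(zlib.decompress(blob).decode("utf-8"))

old = {"title": "Draft 3", "content": "some long body " * 200, "dated": "2013-05-01"}
blob = pack_version(old)
assert unpack_version(blob) == old
print(len(json.dumps(old)), "->", len(blob), "bytes")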
From your description it sounds like you'd prefer MyDoc to be a reference to a Version entity, which would contain the headmost data. Maybe it'd be easier for MyDoc to contain the headmost data (and have its properties indexed with MyDoc keys, etc.), and an update just creates the Version with the previous data (diff or full).
Don't forget to accommodate deletes. Maybe MyDoc goes away (so it doesn't show up in key and property queries), and the most recent Version for the parent path contains the full last known document.
(This is just off the top of my head. I put a little thought into this for a CMS I work on, but I haven't built it.)

SQL Server Normalisation/Best Practices: Single Data Table

I have inherited the maintenance of a database from a former employee in another department and I believe their database development skills are not really up to snuff.
I have been asked to support or redevelop it.
It appears that all the data for every record is held in one single table (yes, I know), which has hundreds of thousands of rows with empty fields.
TableData:
> RowID
> FieldID
> DateData
> NumberData
> TextData
> YesNoData
Only one field (dependent on the datatype required) appears to be populated in this instance for each row - the rest are empty.
There are two other tables which identify details of the Record (Created by etc) and the Field (Updated On, Field datatype)
Looking through the Access front-end code, it appears that the data for each record and field is retrieved by searching on record and field and then returning whichever data column is populated.
My question: what does this design achieve, or is this type of development simply the work of an inexperienced database developer?
My best guess is that a table like this is used to store arbitrary data (inferred from the other supporting tables) that won't require schema changes to store information that is "unplanned" or not yet implemented in the business logic of the application.
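For illustration only, this is roughly how such a lookup works in practice (the column names are from the question; the helper and sample data are hypothetical):

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE TableData (
    RowID INTEGER, FieldID INTEGER,
    DateData TEXT, NumberData REAL, TextData TEXT, YesNoData INTEGER)""")
conn.execute("INSERT INTO TableData (RowID, FieldID, TextData) VALUES (1, 42, 'Smith')")

def get_value(row_id, field_id):
    """Fetch the one populated typed column for a given record/field pair."""
    row = conn.execute(
        "SELECT DateData, NumberData, TextData, YesNoData "
        "FROM TableData WHERE RowID = ? AND FieldID = ?",
        (row_id, field_id)).fetchone()
    if row is None:
        return None
    return next((value for value in row if value is not None), None)

print(get_value(1, 42))  # -> Smith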
The questions I would start asking (yourself, any programmers, DBA's, project managers, etc.):
Were the requirements so abstract at the time that it was impossible to create a formal schema with data relationships? (Bad, bad, BAD)
Was the database designer lazy or inexperienced?
Was the programmer lazy or inexperienced? (Better yet, was the programmer the DBA?)
Is the reliability/availability of the data so sensitive that making formal schema changes is hard to do on a regular basis?
Has the project gone through plenty of people before you that simply inherited the problems, and this is a hack solution? (While maybe the original programmer knew where it was intended to go eventually...)
I think what you're really trying to get at here is "does this work, or should I change it?". I'd be shocked if any read/search queries are optimized at all, as there couldn't be any indexes for such arbitrary data storage. If the application is simply logging information, it probably isn't as big of a deal, as the originator probably just didn't know yet how the data would be used later on, and writing a one-time applet to loop through and create formal objects out of the data would be better than trying to assume everything at the beginning.
Getting a little more targeted, are you running into any bottlenecks in your process because of this particular table, or are you concerned just out of surprise? If the former, I'd figure out how to change it right away. If the latter, I'd take my time figuring out the long-term requirements of the application first.

Alternatives to Entity-Attribute-Value (EAV)?

Our database is designed based on the EAV (Entity-Attribute-Value) model. Those who have worked with EAV models know all the crap that comes with it in exchange for flexibility.
I asked my client about the reasons for using the EAV model (flexibility), and their response was: their entities change over time. So, today they may have a table with a few attributes, but in a month's time a few new attributes may be added, or an existing attribute may be renamed. They need to produce reports to get back to any stage in time and query the data based on the shape of the entities at that stage.
I understand this is not feasible with a conventional relational model, but I personally see EAV as an anti-pattern. Are there any alternative models that enable us to capture the time dimension in changes to the entities and instances?
Cheers,
Mosh
There is a difference between EAV done faithfully and EAV done badly, just as there is between 5NF done by skilled people and 5NF done by the clueless.
Sixth Normal Form is the Irreducible Normal Form (no further Normalisation is possible). It eliminates many of the problems that are common, such as The Null Problem, and provides the ultimate method of identifying missing values. It is the academically and technically robust NF. There are no products to support it, and it is not commonly used. To be implemented properly and consistently, it requires a catalogue of metadata to be implemented. Of course, the SQL required to navigate it becomes even more cumbersome (SQL already being cumbersome re joins), but this is easily overcome by automating the production of SQL from the metadata.
EAV is a partial set or a subset of 6NF. The problem is, usually it is done for a purpose (to allow columns to be added without having to make DDL changes), and by people who are not aware of the 6NF, and who do not implement metadata. The point is, 6NF and EAV as principles and concepts offer substantial benefits, and performance increases; but commonly it is not implemented properly, and the benefits are not realised. Quite a few EAV implementations are disasters, not because EAV is bad, but because the implementation is poor.
Eg. Some people think that the SQL required to construct the 3NF rows from the 6NF/EAV database is complex: no, it is cumbersome but not complex. More important, an ordinary SQL VIEW can be provided, so that all users and report tools see only the straight 3NF VIEW, and the 6NF/EAV issues are transparent to them. Last, the SQL required can be automated, so the labour cost that many people endure is quite unnecessary.
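To illustrate "automating the production of SQL from the metadata", here is a hedged sketch that generates a 3NF-style VIEW over a generic EAV table; the (EntityID, Attribute, Value) column layout is an assumption:

def build_3nf_view(view_name, entity_table, eav_table, attributes):
    """Generate a VIEW that pivots EAV rows back into one 3NF-style row per
    entity, so users and report tools never see the EAV structure."""
    cols = ",\n    ".join(
        f"MAX(CASE WHEN e.Attribute = '{a}' THEN e.Value END) AS {a}"
        for a in attributes)
    return (
        f"CREATE VIEW {view_name} AS\n"
        f"SELECT t.ID,\n    {cols}\n"
        f"FROM {entity_table} t\n"
        f"LEFT JOIN {eav_table} e ON e.EntityID = t.ID\n"
        f"GROUP BY t.ID;")

print(build_3nf_view("CustomerView", "Customer", "CustomerAttribute",
                     ["FirstName", "LastName", "DateOfBirth"]))

In practice the attribute list would come from the metadata catalogue rather than being hard-coded, which is the whole point of keeping one.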
So the answer really is, Sixth Normal Form, being the father of EAV, and a purer form, is the replacement for it. The Caveat is, ensure it is done properly. I have one large 6NF db, and it suffers none of the problems people post about, it performs beautifully, the customer is very happy (no further work is a sign of complete functional satisfaction).
I have already posted a very detailed answer to another question which applies to your question as well, which you may be interested in.
Other EAV Question
Regardless of the kind of relational model you use, tracking field name changes requires a lot of metadata, which you must keep in either transaction logs or audit tables. Unfortunately, querying either of those for the state at a particular date is very complicated. If your client only requires the state at a particular date, however, meaning the entire state, not just with respect to name changes, you can duplicate the database, roll back the transaction log to the particular time required, and run your queries on the new instance. If entities added after the specified date need to show up in the query with the old field names, however, you have a very large engineering problem ahead of you. In that case, with the information you provided in your question, I would suggest either negotiating alternatives with the client or getting more information about the use of the reports to find alternative solutions.
You could move to a document based datastore, but that still wouldn't solve the problem in the second case. Sorry this isn't really an answer, but having worked through similar situations, the client likely needs a more realistic reporting solution or a number of other investors willing to front the capital for the engineering.
When this problem came up for us, we kept the db schema constant and implemented an entity mapping factory based on a timestamp. In the end, the client continually changed requirements (on a weekly to monthly basis) as to how aggregate fields were calculated and were never fully satisfied.
To add to the answers from @NickLarsen and @PerformanceDBA:
If you need to track historical changes to things like field name, you may want to look into something like Slowly Changing Dimensions. It appears to me like you are using the EAV to model dynamic dimensional models (probably lookup lists).
The simplest (and probably least efficient) way of achieving this would be to include an "as of" date field on the EAV tables, and whenever a change occurs, insert a new record (instead of updating an existing record) with the current date. This means that you need to alter your queries to always include or look for an "as of" date, or default to "now" if none is provided. Your base entity that joins to the EAV objects would then have to query "top 1" from the EAV table where the "as of" date is less than or equal to the 'last updated' date of the row, ordered by "as of" descending. Worst case scenario, if you need to track the most recent change to a given row where both the name (stored in the 'attribute' table) and the value have changed, you would chain this logic to the value table using the 'last modified' date of the row to find the appropriate value for that particular date.
This obviously has the potential to generate LARGE amounts of data if there are a lot of changes. That's why this approach is referred to as "slowly" changing. It's intended for dimensional values that may change, but not very often. To help with query performance, indexes on the "as of" and "last modified" fields should help.
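A tiny sketch of the "as of" lookup described above, done in plain Python over rows already fetched from the EAV table (the column names and sample data are made up):

from datetime import date

def value_as_of(rows, target):
    """rows: (as_of_date, value) pairs for one attribute, in any order.
    Return the value in force on `target`: the newest row whose "as of" date
    is on or before that date (the "top 1 ... ORDER BY as_of DESC" query)."""
    candidates = [(as_of, value) for as_of, value in rows if as_of <= target]
    return max(candidates)[1] if candidates else None

history = [(date(2009, 1, 1), "Phone"), (date(2009, 6, 1), "Mobile Phone")]
print(value_as_of(history, date(2009, 3, 15)))  # -> Phone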
If your client needs such flexibility, then a relational database might not be the right match.
Consider MongoDB where JSON structures are stored. You can add or not add fields without limitations. You can even use nesting.
Create a new table for each version of the entity description, and one additional table that tells you which table corresponds to which version.
The query system should be updated as well.
I think creating a script that generates the tables and queries is your best shot.

How to keep historic details of modification in a database (Audit trail)?

I'm a J2EE developer & we are using hibernate mapping with a PostgreSQL database.
We have to keep track of any changes that occur in the database; in other words, all previous and current values of any field should be saved. A field can be of any type (bytea, int, char...).
With a simple table it is easy, but with a graph of objects things are more difficult.
So, speaking from a UML point of view, we have a graph of objects to store in the database along with every change and the user who made it.
Any idea or pattern how to do that?
A common way to do this is by storing versions of objects.
If you add a "version" and a "deleted" field to each table that you want to store an audit trail on, then instead of doing normal updates and deletes, follow these rules:
Insert - Set the version number to 0 and insert as normal.
Update - Increment the version number and do an insert instead.
Delete - Increment the version number, set the deleted field to true and do an insert instead.
Retrieve - Get the record with the highest version number and return that.
If you follow this pattern, every time you update you will create a new record rather than overwriting the old data, so you will always be able to track back and see all the old objects.
This will work exactly the same for graphs of objects, just add the new fields to each table within the object graph, and handle each insert/update/delete for each table as described above.
If you need to know which user made the modification, you just add a "ModifiedBy" field as well.
(You can either do this processing in your DA layer code, or if you prefer you can use database triggers to catch your update/delete/retrieve calls and re-process them following the rules.)
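A minimal sketch of those insert/update/delete/retrieve rules against a single table, using SQLite purely for illustration (the table and field names are invented):

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE person (
    id INTEGER, version INTEGER, deleted INTEGER DEFAULT 0,
    name TEXT, modified_by TEXT)""")

def save(pid, name, user):
    """Insert and update are the same: always insert a row with the next version."""
    (latest,) = conn.execute(
        "SELECT COALESCE(MAX(version), -1) FROM person WHERE id = ?", (pid,)).fetchone()
    conn.execute("INSERT INTO person (id, version, name, modified_by) VALUES (?, ?, ?, ?)",
                 (pid, latest + 1, name, user))

def delete(pid, user):
    """Delete copies the latest row with the deleted flag set and the next version."""
    (latest,) = conn.execute(
        "SELECT MAX(version) FROM person WHERE id = ?", (pid,)).fetchone()
    conn.execute(
        "INSERT INTO person (id, version, deleted, name, modified_by) "
        "SELECT id, ?, 1, name, ? FROM person WHERE id = ? AND version = ?",
        (latest + 1, user, pid, latest))

def retrieve(pid):
    """Return the highest-version row, or None if the latest row is a delete marker."""
    row = conn.execute(
        "SELECT id, version, deleted, name FROM person WHERE id = ? "
        "ORDER BY version DESC LIMIT 1", (pid,)).fetchone()
    return None if row is None or row[2] else row

save(1, "Alice", "admin")        # version 0
save(1, "Alice Smith", "admin")  # version 1 (the "update")
print(retrieve(1))               # -> (1, 1, 0, 'Alice Smith')
delete(1, "admin")               # version 2, deleted flag set
print(retrieve(1))               # -> None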
Obviously, you need to consider space requirements, as every single update will result in a fully new record. If your application is update heavy, you are going to generate a lot of data. It's common to also include a "last modified time" field so you can process the database offline and delete data older than required.
Current RDBMS implementations are not very good at handling temporal data. That's one reason why maintaining separate journalling tables through triggers is the usual approach. (The other is that audit trails frequently have different use cases to regular data, and having them in separate tables makes it easier to manage access to them). Oracle does a pretty slick job of hiding the plumbing in its Total Recall product, but being Oracle it charges $$$ for this.
Scott Bailey has published a presentation on temporal data in PostgreSQL. Alas it won't help you right now but it seems like some features planned for 8.5 and 8.6 will enable the transparent storage of time-related data. Find out more.

Adding relations to an Access Database

I have an MS Access database with plenty of data. It's used by an application me and my team are developing. However, we've never added any foreign keys to this database because we could control relations from the code itself. Never had any problems with this, probably never will either.
However, as development has progressed, I fear there's a risk of losing sight of all the relationships between the 30+ tables, even though we use well-normalized data. So it would be a good idea to get at least the relations between the tables documented.
Altova has created DatabaseSpy, which can show the structure of a database, but without the relations there isn't much to display. I could still use it to add relations, but I don't want to modify the database itself.
Is there any software that can analyse a database by its structure and data and then make a best guess about its relations? (Just as documentation, not to modify the database.)
This application was created more than 10 years ago and has over 3000 paying customers who all use it. It's actually document-based, using an XML document for its internal storage. The database is just used as storage, and a single import/export routine converts it to and from XML. Unfortunately, the XML structure isn't very practical to use for documentation, and there's a second layer around this XML document to expose it as an object model. This object model is far from perfect too, but that's what 10 years of development can do to an application. We do want to improve it, but this takes time and we can't disappoint the current users by delaying new updates. Basically, we're stuck with its current design, and to improve it we need to make sure things are well-documented. That's what I'm working on now.
Only 30+ tables? Shouldn't take but a half hour or an hour to create all the relationships required. Which I'd urge you to do. Yes, I know that you state your code checks for those. But what if you've missed some? What if there are indeed orphaned records? How are you going to know? Or do you have bullet proof routines which go through all your tables looking for all these problems?
Use a largish 23" LCD monitor and have at it.
If your database does not have relationships defined somewhere other than code, there is no real way to guess how tables relate to each other.
Worse, you can't know the type of relationship and whether cascading of update and deletion should occur or not.
Having said that, if you followed some strict rules for naming your foreign key fields, then it could be possible to reconstruct the structure of the relationships.
For instance, I use a scheme like this one:
Table Product
- Field ID /* The Unique ID for a Product */
- Field Designation
- Field Cost
Table Order
- Field ID /* the unique ID for an Order */
- Field ProductID
- Field Quantity
The relationship is easy to detect when looking at the Order: Order.ProductID is related to Product.ID, and this can easily be ascertained from code by going through each field.
If you have a similar scheme, then how much you can get out of it depends on how well you followed your own convention. It could get close to 100% accuracy, although you'll probably have some exceptions (which you can build into your code or, better, look up somewhere).
The other solution is if each of your tables' unique IDs follows a different numbering scheme.
Say your Order.ID is in fact following a scheme like OR001, OR002, etc and Product.ID follows PD001, PD002, etc.
In that case, going through all fields in all tables, you can search for FK records that match each PK.
If you're following a sane convention for naming your fields and tables, then you can probably automate the discovery of the relations between them, store that in a table and manually go through to make corrections.
Once you're done, use that result table to actually build the relationships from code using the Database.CreateRelation() method (look up the Access documentation, there is sample code for it).
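As a rough illustration of the naming-convention idea (the "<Table>ID points at <Table>.ID" scheme described above), here is a hedged sketch that only does the guessing step; feeding the resulting list into Database.CreateRelation would still be done from the VBA side:

def guess_relations(tables):
    """tables: mapping of table name -> list of field names.
    Return (child_table, child_field, parent_table) triples guessed from
    fields named '<SomeTable>ID' where that table exists and has an ID field."""
    relations = []
    for table, fields in tables.items():
        for field in fields:
            if field.endswith("ID") and field != "ID":
                parent = field[:-2]  # 'ProductID' -> 'Product'
                if parent in tables and "ID" in tables[parent]:
                    relations.append((table, field, parent))
    return relations

schema = {
    "Product": ["ID", "Designation", "Cost"],
    "Order":   ["ID", "ProductID", "Quantity"],
}
print(guess_relations(schema))  # -> [('Order', 'ProductID', 'Product')]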
You can build a small piece of VBA code, divided in 2 parts:
Step 1 implements the database relations with the Database.CreateRelation method
Step 2 deletes all created relations with the database.delete command
As Tony said, 30 tables are not that much, and the script should be easy to set up. Once this is set up, stop the process after step 1, run the Access documenter (Tools\Analyse\Documenter) to get your documentation ready, then launch step 2. Your database will then be unchanged and your documentation ready.
I advise you to keep this code and run it regularly against your database to check that your relational model sticks to the data.
There might be a tool out there that might be able to "guess" the relations, but I doubt it. Frankly, I am scared of databases without proper foreign keys in particular, and of multi-user apps that use Access as a DBMS as well.
I guess the app must be some sort of internal tool, otherwise I would suggest that you move to a proper DBMS (SQL Express is free) and add the foreign keys.
