A) Is it possible to have Grandfather-Father-Son archiving? For example we would like to have the following precalculated at all times, and nothing else precalculated:
Daily totals of last week
Weekly totals of previous 5-6 weeks
Monthly totals of all previous months
Note that we don't want daily totals of a day that was 2 months ago for example. We want that daily total to be deleted.
Will indexed views be good enough for this purpose? We want all the fields to be precalculated and stored.
B) We would like to have some features like Stack Exchange's (and, more generally, wikis') versioning. Are there ways to archive the older versions somehow on the production environment, and keep the newer versions more readily available? We have been looking into partitioning, but it doesn't seem to handle such an intricate scenario (we don't want ALL posts prior to date X to be partitioned; rather, we need all versions that are older than the newest version).
What are the best practices on these matters?
A: You are describing the storage of time-series data with a varying retention period/granularity, features SQL Server does not provide in any form natively. You can of course build this strategy yourself fairly easily, and luckily you have a few great examples in open source projects for guidance (a rough rollup-and-purge sketch follows the list below):
RRD Tool - a time-series database and graphing library.
Graphite - a time-series storage and graphing system inspired by RRD Tool.
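For question A, the usual home-grown approach is a set of summary tables plus a scheduled job that rolls data up and purges summaries once they age past their retention window. A minimal sketch, assuming SQL Server 2008+ (for the date type) and hypothetical tables RawEvents, DailyTotals, WeeklyTotals and MonthlyTotals; the weekly and monthly rollups follow the same pattern as the daily one:

    -- Nightly job: roll yesterday's raw rows into a daily total.
    INSERT INTO dbo.DailyTotals (TotalDate, Amount)
    SELECT CAST(EventTime AS date), SUM(Amount)
    FROM dbo.RawEvents
    WHERE EventTime >= CAST(DATEADD(day, -1, GETDATE()) AS date)
      AND EventTime <  CAST(GETDATE() AS date)
    GROUP BY CAST(EventTime AS date);

    -- Enforce the retention windows: daily totals live for a week,
    -- weekly totals for six weeks, monthly totals forever.
    DELETE FROM dbo.DailyTotals  WHERE TotalDate < DATEADD(day,  -7, GETDATE());
    DELETE FROM dbo.WeeklyTotals WHERE WeekStart < DATEADD(week, -6, GETDATE());
    -- dbo.MonthlyTotals is never purged.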
B: It's hard to speak in such general terms, but perhaps you can maintain a Version column on your table, where 0 always reflects the latest version. Each time you promote a new version, you demote all the other versions by incrementing this value. Having version numbers increase as rows get older allows you to create a filtered index on a deterministic value (Version = 0), making it very performant to get the current version.
You can then purge based on the number of versions ago rather than on date: just DELETE FROM YourTable WHERE Version > 5. A partitioning scheme could work with this method as well.
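A minimal sketch of that pattern, assuming SQL Server 2008+ (for filtered indexes) and a hypothetical Posts table; @PostId and @NewBody stand in for procedure parameters declared elsewhere:

    CREATE TABLE dbo.Posts
    (
        PostId   int           NOT NULL,
        Version  int           NOT NULL,  -- 0 = current, higher = older
        Body     nvarchar(max) NOT NULL,
        EditedAt datetime      NOT NULL,
        CONSTRAINT PK_Posts PRIMARY KEY (PostId, Version)
    );

    -- Filtered index so fetching the current version stays cheap.
    CREATE UNIQUE INDEX IX_Posts_Current
        ON dbo.Posts (PostId) WHERE Version = 0;

    -- Promote a new version: demote every existing row, insert the new one as 0.
    UPDATE dbo.Posts SET Version = Version + 1 WHERE PostId = @PostId;
    INSERT INTO dbo.Posts (PostId, Version, Body, EditedAt)
    VALUES (@PostId, 0, @NewBody, GETDATE());

    -- Purge by number of versions ago rather than by date.
    DELETE FROM dbo.Posts WHERE Version > 5;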
Related
At what scale of database growth does archiving become a necessity, and are there guidelines to show when it is required?
I manage an intranet which provides short news articles via about 40 targeted news groups. I have been asked to remove browsing access to articles older than 2 years, but to maintain access to these by an existing search interface.
One proposal is to hide records by using scheduled overnight tasks that move old news items out to parallel archive tables. Given that the entire database is only about 5 GB, the entire set of 13,000 news articles takes up 17 MB, and there are indexes on the publication dates, is this approach advisable or will WHERE clauses based on dates suffice? Is there a rule of thumb here?
The db in question is SQL 2008, we add maybe 2000 news items per year, and there are no reported performance issues at present - this is purely 'future proofing'.
This is definitely a candidate for doing the simplest thing possible, because the data involved is quite manageable. A WHERE clause should be enough. You should have an index on the date column used in that WHERE clause, since the query will be run online as users browse.
I don't know about your setup, but 5 GB is small enough to load into memory and still have room to spare, so you're well within what the system can handle.
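A minimal sketch of what that looks like, with hypothetical table and column names:

    -- One index on the publication date is all the browsing query needs.
    CREATE INDEX IX_NewsArticles_PublishedOn
        ON dbo.NewsArticles (PublishedOn);

    -- Browsing interface: hide anything older than two years.
    SELECT ArticleId, Title, PublishedOn
    FROM dbo.NewsArticles
    WHERE PublishedOn >= DATEADD(year, -2, GETDATE())
    ORDER BY PublishedOn DESC;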
I have found that a great number of time-series databases have emerged in recent years. There is even a long list of TS databases, updated in Jan 2015.
I wonder what different capabilities they have, and what their strengths and weaknesses are - otherwise we would just need one TS database, wouldn't we?
http://go-lang.cat-v.org/pure-go-libs lists two PostgreSQL drivers, but they haven't been updated in months and look like one-man shows. So I wonder if they are reliable / ready for production, or if there are other recommended drivers.
Would you use Go with PostgreSQL for production and with what driver?
In the year and a half since this question was asked, pq has matured significantly and is actively maintained (multiple commits by multiple people in the last week, consistent weekly updates for the last several months).
Docs are here: http://godoc.org/github.com/lib/pq
A list of SQL DB drivers can be found here
On this basis I probably wouldn't go for it in production...
The newest one seems to be https://github.com/jbarham/gopgsqldriver
But one of the advantages of open source is that you have all the source so you can maintain it yourself, contribute patches or even take over the maintainer's role.
I am working on a product (ASP.NET web site) developed for educational institutions. There are around 20 educational institutions using my site. For each of them, the academic year start and end dates vary. There is a huge number of records in the database for attendance and results.
Now I need to show all previous years' data (like attendance, results, etc.) whenever a student or teacher logs in. There are some reports which compare student performance across academic years.
Now my problem is: how do I maintain that huge amount of data?
I wanted to go with two databases: one for the current academic year, another for all previous years.
But my current-year DB schema may change for enhancements, so whenever I move the current year's data to the archive database, it creates problems for me. Please suggest a good way to implement this.
Thanks,
seshu.
Have you thought about table partitioning? It allows you to rapidly move data through sliding windows - so that at the start of a new year, you slide last year's details into an archive partition. (You will need to check which SQL Server edition you have to see whether partitioning is available.)
MSDN details:
http://msdn.microsoft.com/en-us/library/ms345146(SQL.90).aspx
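A minimal sketch of a year-based sliding window, with hypothetical table and partition names (partitioning requires Enterprise or Developer edition):

    -- Partition attendance rows by academic year.
    CREATE PARTITION FUNCTION pfAcademicYear (int)
        AS RANGE RIGHT FOR VALUES (2008, 2009, 2010);

    CREATE PARTITION SCHEME psAcademicYear
        AS PARTITION pfAcademicYear ALL TO ([PRIMARY]);

    CREATE TABLE dbo.Attendance
    (
        StudentId      int      NOT NULL,
        AcademicYear   int      NOT NULL,
        AttendanceDate datetime NOT NULL,
        Present        bit      NOT NULL
    ) ON psAcademicYear (AcademicYear);

    -- At the start of a new year, switch the oldest partition into an archive
    -- table with an identical structure on the same filegroup; the switch is a
    -- metadata-only operation, so it is effectively instant.
    ALTER TABLE dbo.Attendance
        SWITCH PARTITION 1 TO dbo.AttendanceArchive;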
If you want to keep two databases in sync, schema-wise, there are plenty of tools available for that. Here is mine, here is Red Gate's and here is Apex's. There are many more available, including one which comes with Visual Studio Team System Database edition (if you have that already - if you don't then one of the ones I have previously mentioned will be a lot cheaper).
I have a project involving a web voting system. The current values and related data are stored in several tables. Historical data will be an important aspect of this project, so I've also created audit tables to which current data will be moved on a regular basis.
I find this strategy highly inefficient. Even archiving only once a day, the number of rows will become huge, even if only one or two users make updates on a given day.
The next alternative I can think of is storing only entries that have changed. This will mean having to build logic to automatically reconstruct a view of a given day. This means fewer stored rows, but considerable complexity.
My final idea is a bit less conventional. Since the historical data will be for reporting purposes, there's no need for web users to have quick access. I'm thinking that my db could have no historical data in it. DB only represents current state. Then, daily, the entire db could be loaded into objects (number of users/data is relatively low) and then serialized to something like XML or JSON. These files could be diffed with the previous day and stored. In fact, SVN could do this for me. When I want the data for a given past day, the system has to retrieve the version for that day and deserialize into objects. This is obviously a costly operation but performance is not so much a concern here. I'm considering using LINQ for this which I think would simplify things. The serialization procedure would have to be pretty organized for the diff to work well.
Which approach would you take?
Thanks
If you're basically wondering how revisions of data are stored in relational databases, then I would look into how wikis do it.
Wikis are all about keeping detailed revision history. They use simple relational databases for storage.
Consider Wikipedia's database schema.
All you've told us about your system is that it involves votes. As long as you store a timestamp for when each vote was cast, you should be able to generate a report describing the vote tally at any point in time... no?
For example, say I have a system that tallies favorite features (eyes, smile, butt, ...). If I want to know how many votes there were for a particular feature as of a particular date, then I would simply tally all the votes for the feature with a timestamp less than or equal to that date.
If you want to have a history of other things, then you would follow a similar approach.
I think this is the way it is done.
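A minimal sketch of that idea, with a hypothetical votes table:

    CREATE TABLE dbo.Votes
    (
        VoteId    int      IDENTITY(1,1) PRIMARY KEY,
        FeatureId int      NOT NULL,
        CastAt    datetime NOT NULL DEFAULT GETDATE()
    );

    -- Tally per feature as it stood at the end of a given day;
    -- no separate audit/history tables are required.
    SELECT FeatureId, COUNT(*) AS VotesAsOf
    FROM dbo.Votes
    WHERE CastAt <= '2010-06-30T23:59:59'
    GROUP BY FeatureId;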
Have you considered using a real version control system rather than trying to shoehorn a database in its place? I myself am quite partial to git, but there are many options. They all have good support for differences between versions, and they tend to be well optimised for this kind of workload.