Altering database tables on updating website

This seems to be an issue that keeps coming back in every web application: you're improving the back-end code and need to alter a database table in order to do so. That's no problem to do manually on the development system, but when you deploy your updated code to production servers, they'll need to alter the database tables automatically too.
I've seen a variety of ways to handle these situations; each comes with its own benefits and problems. Roughly, I've narrowed it down to the following two possibilities:
Dedicated update script. Requires manually initiating the update. Requires all table alterations to be done in a predefined order (rigid release planning, no easy quick fixes on the database). Typically requires maintaining a separate updating process and some way to record and manage version numbers. Benefit is that it doesn't impact running code.
Checking table properties at runtime and altering them if needed. No manual interaction required, and table alters may happen in any order (so a quick fix on the database is easy to deploy). Another benefit is that the code is typically a lot easier to maintain. The obvious problem is that it checks table properties far more often than it needs to; a rough sketch of such a check follows.
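For illustration, a minimal sketch of that runtime check, assuming a hypothetical orders table and status column (information_schema syntax as in MySQL):

-- Check whether the column already exists before altering (hypothetical names)
SELECT COUNT(*) AS col_exists
FROM information_schema.columns
WHERE table_schema = DATABASE()
  AND table_name   = 'orders'
  AND column_name  = 'status';

-- Only if col_exists = 0 does the application then run:
ALTER TABLE orders ADD COLUMN status VARCHAR(20) NULL;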
Are there any other general possibilities or ways of dealing with altering database tables upon application updates?

I'll share what I've seen work best. It's just expanding upon your first option.
The steps I've usually seen when updating schemas in production:
1. Take down the front-end applications. This prevents any data from being written during a schema update. We don't want writes to fail because relationships are messed up or a table is suddenly out of sync with the application.
2. Potentially disconnect the database so no connections can be made. Sometimes there is code out there using your database that you don't even know about!
3. Run the scripts as you described in your first option. It definitely takes careful planning, and you're right that you need a pre-defined order to apply the changes. I'd also note that you often need two sets of scripts, one for schema updates and one for data updates. For example, if you want to add a field that is not nullable, you might add a nullable field first and then run a script to put in a default value (see the sketch after this list).
4. Have rollback scripts on hand. This is crucial because you might make all the changes you think you need (since it all worked great in development) and then discover the application doesn't work before you bring it back online. It's good to have an exit strategy so you aren't in that horrible place of "oh crap, we broke the application and we've been offline for hours and hours and what do we do?!"
5. Make sure you have backups ready to go in case step 4 goes really badly.
6. Coordinate the application update with the database updates. Usually you do the database updates first and then roll out the new code.
7. (Optional) A lot of companies do partial rollouts to test. I've never done this, but if you have 5 application servers and 5 database servers, you can first roll out to 1 application/1 database server and see how it goes. Then, if it's good, you continue with the rest of the production machines.
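For step 3, a minimal sketch of the two-script split, assuming a hypothetical Orders table and a new non-nullable Status field:

-- Schema script, first pass: add the new field as nullable
ALTER TABLE Orders ADD Status varchar(20) NULL;

-- Data script: backfill existing rows with a default value
UPDATE Orders SET Status = 'unknown' WHERE Status IS NULL;

-- Schema script, second pass: tighten the constraint once the data is in place
ALTER TABLE Orders ALTER COLUMN Status varchar(20) NOT NULL;

(The ALTER COLUMN syntax is SQL Server's; other engines spell it differently, e.g. MODIFY in MySQL.)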
It definitely takes time to find out what works best for you. From my experience doing lots of production database updates, there is no silver bullet. The most important thing is taking your time and being disciplined in tracking changes (versioning like you mentioned).

Related

Adding primary keys to a production database

I just inherited a relatively small SQL Server database. We have a decentralized system operating on about ten sites, with each site being pounded all day by between sixty and one hundred clients. Upon inspecting the system, a couple of things jumped out at me: there are no maintenance plans or keys defined.
I have dozens of different applications that are already accessing the database. The majority of them are written in C with inline SQL. Part of what I was brought in to do was write stored procedures for everything and have our applications move to that. Before I do this, however, I really think I should be focusing on these seemingly glaring issues.
Also, we'll eventually be looking into replication to a central site, so I really think these things should be addressed before we even think of that.
Figuring out a redesign scheme and maintenance plan will be time-consuming but not problematic - I've done it before at single sites. But, how am I going to go about implementing these major changes to the database across ten (or more) production sites while ensuring data integrity and not breaking the applications?
I would suspect that with no keys officially defined, this database probably has tons of data integrity problems. Lucky you.
For replication you will need GUIDs. I would do this: add the GUIDs and PK definitions in the dev environment and test, test, test. You'll probably find a lot of crap where people did SELECT * and adding the columns will cause problems, or cause things to show up on reports that you don't want. Find and fix all these things. Be sure to script all the changes to the data and put them in source control, along with any code changes you need to make to the application. Then schedule downtime for maintenance of the database during the lowest-usage hours, and let the users know ahead of time that the application will be down. During the downtime: have the application show a down message, change the database to single user mode so no one except the team making the change can affect the database, make a full backup, run the scripts to make the changes to the database, deploy the code changes to the application, test, take the database out of single user mode, and turn the application back on.
Under no circumstances would I try to make a change this major without going to single user mode.
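A rough sketch of what that change window can look like in SQL Server (all names hypothetical, and assuming the PK column is already NOT NULL and duplicate-free):

-- Kick everyone out and take a full backup first
ALTER DATABASE MyDb SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
BACKUP DATABASE MyDb TO DISK = 'D:\backups\MyDb_premigration.bak';

-- Add the GUID column replication needs, plus the missing primary key
ALTER TABLE dbo.Orders ADD RowGuid uniqueidentifier NOT NULL
    CONSTRAINT DF_Orders_RowGuid DEFAULT NEWSEQUENTIALID();
ALTER TABLE dbo.Orders ADD CONSTRAINT PK_Orders PRIMARY KEY (OrderID);

-- Reopen for business once everything tests out
ALTER DATABASE MyDb SET MULTI_USER;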
First ensure you have valid backups of every db, and test-restore them to make sure they restore OK.
Consider using Ola Hallengren's maintenance solution instead of Maintenance Plans if you need to deploy identical, consistent, scripted solutions to all your sites (see Ola Hallengren's site).
Then I'd say look at getting some basic indexing in place, starting with the heavy-hitter tables first. You can identify them with various methods - I presume you know how, but just to throw out a few thoughts: code review, SQL Trace, query plan analysis, and then there are 3rd-party tools, e.g. Idera SQLdm, Confio Ignite, Quest's Spotlight on SQL Server or Foglight Performance Analysis for SQL Server.
I think this will get you rolling.
Some additional ideas.
One of the first things I'd check is: are all the database instances alike, as far as database objects are concerned? Do they all have exactly the same tables, columns (and their order in the tables), nullability, etc.? Be sure to check pretty much everything listed in sys.objects. Once you know that the database structures are all in sync, you know that any database modification scripts you generate will work on all the instances.
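One way to do that is to dump column metadata from each instance and diff the results; for example (SQL Server):

-- Run on every instance and compare the output to spot schema drift
SELECT t.name AS table_name, c.column_id, c.name AS column_name,
       ty.name AS data_type, c.max_length, c.is_nullable
FROM sys.tables t
JOIN sys.columns c ON c.object_id = t.object_id
JOIN sys.types ty ON ty.user_type_id = c.user_type_id
ORDER BY t.name, c.column_id;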
Once you modify your test environment with your planned changes, you have to ensure that they don't break existing functionality. Can you accurately emulate "...being pounded all day by between sixty and one hundred clients" on your test environment? If you can't, then you of course cannot know if your changes will break anything until they go live. (An assumption I'd avoid: just because a given instance has no duplicates in the columns you wish to build a primary key on does not mean that there are never any duplicates present...)

YAGNI and database creation scripts

Right now, I have code which creates the database (just a few CREATE queries on a SQLite database) in my main database access class. This seems unnecessary as I have no intention of ever using the code. I would just need it if something went wrong and I needed to recreate the database. Should I...
1. Leave things as they are, even though the database-creation code is about a quarter of my file size.
2. Move the database-creation code to a separate script. It's likely I'll be running it manually if I ever need to run it again anyway, and that would put it out-of-sight-out-of-mind while working on the main code.
3. Delete the database-creation code and rely on revision control if I ever find myself needing it again.
I think it is best to keep the code. Even more importantly, you should maintain this code (or generate it) every time the database schema changes.
It is important for the following reasons.
You will probably be surprised how many times you need it: if you need to migrate your server, or set up another environment (e.g. TEST or DEMO), and so on.
I also find that I refer to the DDL SQL quite often when coding, particularly if I have not touched the system for a while.
You have a reference for the decisions you made, such as the indexes you created, unique keys, etc etc.
If you do not have a disciplined approach to this, I have found that the database schema can drift over time as ad-hoc changes are made, and this can cause obscure issues that are not found until you hit the database. Even worse, without a disciplined approach (i.e. a reference definition of the schema) you may find that different databases have subtly different schemas.
"I would just need it if something went wrong and I needed to recreate the database."
Recreating the database is absolutely not an exceptional case. That code is part of your deployment process on a new / different system, and it represents the DB structure your code expects to work with. You should actually have integration tests that confirm this. Working indefinitely with a single DB server whose schema was created incrementally via manually dispatched SQL statements during development is not something you should rely on.
But yes, it should be separated from the access code; so option 2 is correct. The separate script can then be used by tests as well as for deployment.
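As a minimal sketch of option 2 (hypothetical tables; SQLite syntax as in the question):

-- schema.sql: the reference definition of the database,
-- run by the deployment process and by integration tests
CREATE TABLE IF NOT EXISTS users (
    id         INTEGER PRIMARY KEY,
    email      TEXT NOT NULL UNIQUE,
    created_at TEXT NOT NULL DEFAULT (datetime('now'))
);

CREATE TABLE IF NOT EXISTS posts (
    id      INTEGER PRIMARY KEY,
    user_id INTEGER NOT NULL REFERENCES users(id),
    body    TEXT NOT NULL
);

CREATE INDEX IF NOT EXISTS idx_posts_user ON posts (user_id);

Kept in source control next to the code, it can be applied with something like sqlite3 app.db < schema.sql.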

Database Refresh

How often do you refresh your development database from the production database? Since there are many types of projects (targeting different domains), I would like to know how it is being done, and at what intervals (days/months/years)?
While working at Callaway Golf we had an automated build that would completely refresh the database from a baseline. This baseline would be updated (from production) almost daily. We had a set of scripts (DTS) that would do this for us. So if there was some new and interesting information we could easily do it a couple of times a day, once a week, etc. The key here is automation to perform the task. If it is easy to do, then when it is done really only depends on how performing the task impacts the load on the production database, the network, and the amount of time it takes to complete. This could of course be set up as a scheduled task to run at off-peak hours and before the dev team gets in in the morning.
The key things in refreshing your development database are:
(1) Automate the refresh through a script.
(2) Obfuscate the data in your development database, since you do not want your developers to see the real data; alternatively, you could sample your production database (a sketch follows this list).
(3) Decide the frequency of the refresh -- I usually do it once a week.
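For (2), a minimal sketch of an obfuscation pass run after each restore, assuming a hypothetical customers table (CONCAT as in MySQL):

-- Replace personally identifiable data with synthetic values keyed off the id
UPDATE customers
SET email     = CONCAT('user', id, '@example.invalid'),
    phone     = '555-0100',
    last_name = CONCAT('Customer-', id);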
Depends on what kind of work you're doing. If you're debugging issues that are closely related to the data, then frequent updates are good.
If you're doing data Quality Assurance (which often involves writing code to detect and repair bad data, code that you have to develop and test away from the production server), then you need extremely fresh data. The bad data that is the most valuable to fix is the data that was just inserted or updated today.
If you are writing client code, then infrequent updates are good. Often when I'm writing C# UI code, I couldn't care less what the data is, I just care if it shows up in the right box on the screen.
If you have data with any security issues, you should stop using production data--i.e. never update from production--and get a good sample data generator. Writing a good sample data generator is hard, so 3rd-party products are the way to go. MS Data Dude comes to mind, and I recommend Red Gate's SQL Data Generator.
And finally, how hard is it to get a copy of the production data? If it is cheap and automatable, just get a new copy every night. If it is expensive (requires the attention of a very busy DBA), resource constraints might answer the question for you regardless of these other concerns.
We tend to refresh every couple of days, or perhaps once a week or so if things are "normal," though if we're investigating something amiss we may do so much more often.
Our production database is about 1GB, so it's not a trivial thing to copy around. Also, for us there's generally no burning need to get current data from production into the dev systems.
The "how" is simply a MySQL "backup" and "restore"
In a lot of cases, refreshing the dev database really isn't that important. Often production systems have far more data than is required for development, and working with such a large dataset can be a hassle for several reasons. An example is development on the interface, where it's more important to have some data than any specific data. In this case, it's more customary to thin out the production database to a smaller subset of real data. After you do this once, it's not really that important to update, as long as all schema changes are pushed through to the dev database(s).
On the other hand, performance bugs may often require production-sized databases to be able to reproduce and identify bottlenecks, so in this scenario it is extremely useful to have an almost-realtime database. Many issues may only present themselves with the exact data used in production.
We tend to always go back to an on-demand schedule. We have many different databases that are used in a suite of applications. We stay away from automatically refreshed DEV databases because many of our code changes involve database changes, and I don't want anything overwritten.
Not at all; the dev databases (one per developer) get set up by a script or similar a couple of times a day, possibly a couple hundred times when running db tests locally. This includes a couple of records for playing around.
Of course we still need a database with at least parts of production in it, for integration and scalability tests. We are aiming for a daily automated refresh. But we aren't there yet.

Django AuditTrail vs Reversion

I am working on a new web app where I need to store any changes in the database to audit table(s). The purpose of such audit tables is that later on, in a real physical audit, we can ascertain what happened in a situation, who edited what, and what the state of the db was at the time of e.g. a complex calculation.
So mostly the audit tables will be written to and not read. Reports may be generated from them sometimes, though.
I have looked at the available solutions:
AuditTrail - simple, and that is why I am leaning towards it; I can understand its single-file code.
Reversion - looks simple enough to use but not sure how easy it would be to modify it if needed.
rcsField - seems to be very complex and too much for my needs.
I haven't tried any of these, so I wanted to hear about some real experiences and which one I should be using - e.g. which one is faster, uses less space, and is easy to extend and maintain?
Personally, I prefer to create audit tables in the database and populate them through triggers, so that any change, even an ad hoc query from the query window, is stored. I would never consider an audit solution that is not based in the database itself. This is important because people who are making malicious changes to the database or committing fraud are not likely to do so through the web interface, but on the backend directly. Far more of this stuff happens from disgruntled or larcenous employees than outside hackers. If you are using an ORM already, your data is at risk because the permissions are at the table level rather than the sp level where they belong. Therefore it is even more important that you capture any possible change to the data, not just what came from the GUI. We have a dynamic proc to create audit tables that is run whenever new tables are added to the database. Since our audit tables populate only the changes and not the whole record, we do not need to change them every time a field is added.
Also, when evaluating possible solutions, make sure you consider how hard it will be to revert the data to undo a specific change. Once you have audit tables, you will find that this is one of the most important things you need to do with them. Also consider how hard it will be to maintain the information as the database schema changes.
Choosing a solution because it appears to be the easiest to understand is not generally a good idea. That should be the lowest of your selection criteria, after meeting the requirements, security, etc.
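To make the trigger-based approach concrete, here is a minimal sketch (SQL Server syntax, hypothetical widget table), not the dynamic proc itself:

-- Shadow audit table capturing who changed what, and when
CREATE TABLE widget_audit (
    audit_id     int IDENTITY PRIMARY KEY,
    widget_id    int          NOT NULL,
    name         varchar(100) NULL,
    audit_action char(1)      NOT NULL,  -- 'I', 'U' or 'D'
    changed_by   sysname      NOT NULL DEFAULT SUSER_SNAME(),
    changed_at   datetime     NOT NULL DEFAULT GETDATE()
);

CREATE TRIGGER trg_widget_audit ON widget
AFTER INSERT, UPDATE, DELETE
AS
BEGIN
    SET NOCOUNT ON;
    -- Rows in the inserted pseudo-table are new or updated values
    INSERT INTO widget_audit (widget_id, name, audit_action)
    SELECT id, name,
           CASE WHEN EXISTS (SELECT 1 FROM deleted) THEN 'U' ELSE 'I' END
    FROM inserted;
    -- Rows only in deleted were removed
    INSERT INTO widget_audit (widget_id, name, audit_action)
    SELECT d.id, d.name, 'D'
    FROM deleted d
    WHERE NOT EXISTS (SELECT 1 FROM inserted i WHERE i.id = d.id);
END;

Because it runs in the database, it captures changes from any client, not just the web app.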
I can't give you real experience with any of them but would like to make an observation.
I assume by AuditTrail you mean AuditTrail on the Django wiki. If so, I think you'll want to instead look at HistoricalRecords developed by the same author (Marty Alchin aka #gulopine) in his book Pro Django. It should work better with Django 1.x.
This is the approach I'll be using on an upcoming project, not because it necessarily beats the others from a technical standpoint, but because it matches the "real world" expectations of the audit trail for that application.
As I stated in my question, rcsField seems to be too much for my needs, which are simple: I want to store any changes to my tables, and maybe come back to those changes later to generate some reports.
So I tested AuditTrail and Reversion.
Reversion seems to be the more full-blown application, with many features (which I do not need). Also, as far as I know, it saves data in a single table in XML or YAML format, which I think:
will generate too much data in a single table;
means that to read that data I may not be able to use already-present db tools.
AuditTrail wins in that regard: for each table it generates a corresponding audit table, so changes can be tracked easily, the per-table data is smaller, and it can be easily manipulated and used for report generation.
So I am going with AuditTrail.

Database triggers

In the past I've never been a fan of using triggers on database tables. To me they always represented some "magic" that was going to happen on the database side, far far away from the control of my application code. I also wanted to limit the amount of work the DB had to do, as it's generally a shared resource and I always assumed triggers could get to be expensive in high load scenarios.
That said, I have found a couple of instances where triggers have made sense to use (at least in my opinion they made sense). Recently though, I found myself in a situation where I sometimes might need to "bypass" the trigger. I felt really guilty about having to look for ways to do this, and I still think that a better database design would alleviate the need for this bypassing. Unfortunately this DB is used by multiple applications, some of which are maintained by a very uncooperative development team who would scream about schema changes, so I was stuck.
What's the general consensus out there about triggers? Love 'em? Hate 'em? Think they serve a purpose in some scenarios?
Do you think that having a need to bypass a trigger means that you're "doing it wrong"?
Triggers are generally used incorrectly, introduce bugs, and therefore should be avoided. Never design a trigger to do integrity constraint checking that crosses rows in a table (e.g. "the average salary by dept cannot exceed X").
Tom Kyte, VP at Oracle, has indicated that he would prefer to remove triggers as a feature of the Oracle database because of their frequent role in bugs. He knows it is just a dream and triggers are here to stay, but if he could remove triggers from Oracle, he would (along with the WHEN OTHERS clause and autonomous transactions).
Can triggers be used correctly? Absolutely.
"The problem is - they are not used correctly in so many cases that I'd be willing to give up any perceived benefit just to get rid of the abuses (and bugs) caused by them." - Tom Kyte
Think of a database as a great big object - after each call to it, it ought to be in a logically consistent state.
Databases expose themselves via tables, and keeping tables and rows consistent can be done with triggers. Another way to keep them consistent is to disallow direct access to the tables, and only allowing it through stored procedures and views.
The downside of triggers is that any action can invoke them; this is also a strength - no-one is going to screw up the integrity of the system through incompetence.
As a counterpoint, allowing access to a database only through stored procedures and views still allows the backdoor access of permissions. Users with sufficient permissions are trusted not to break database integrity, all others use stored procedures.
As to reducing the amount of work: databases are stunningly efficient when they don't have to deal with the outside world; you'd be really surprised how much even process switching hurts performance. That's another upside of stored procedures: rather than a dozen calls to the database (and all the associated round trips), there's one.
Business logic has to go somewhere, and there's a lot of implied domain rules embedded in the design of a database - relations, constraints and so on are an attempt to codify business rules by saying, for example, a user can only have one password. Given you've started shoving business rules onto the database server by having these relations and so on, where do you draw the line? When does the database give up responsibility for the integrity of the data, and start trusting the calling apps and database users to get it right? Stored procedures with these rules embedded in them can push a lot of political power into the hands of the DBAs. It comes down to how many tiers are going to exist in your n-tier architecture; if there's a presentation, business and data layer, where does the separation between business and data lie? What value-add does the business layer add? Will you run the business layer on the database server as stored procedures?
Yes, I think that having to bypass a trigger means that you're "doing it wrong"; in this case a trigger isn't for you.
I work with web and WinForms apps in C# and I HATE triggers with a passion. I have never come across a situation where I could justify using a trigger over moving that logic into the business layer of the application and replicating the trigger logic there.
I don't do any DTS-type work or anything like that, so there might be some use cases for using triggers there, but if anyone on any of my teams says that they might want to use a trigger, they'd better have prepared their arguments well, because I refuse to stand by and let triggers be added to any database I'm working on.
Some reasons why I don't like triggers:
They move logic into the database. Once you start doing that, you're asking for a world of pain because you lose your debugging, your compile time safety, your logic flow. It's all downhill.
The logic they implement is not easily visible to anyone.
Not all database engines support triggers, so your solution creates dependencies on specific database engines.
I'm sure I could think of more reasons off the top of my head but those alone are enough for me not to use triggers.
"Never design a trigger to do integrity constraint checking that crosses rows in a table" -- I can't agree. The question is tagged 'SQL Server' and CHECK constraints' clauses in SQL Server cannot contain a subquery; worse, the implementation seems to have a 'hard coded' assumption that a CHECK will involve only a single row so using a function is not reliable. So if I need a constraint which does legitimately involve more than one row -- and a good example here is the sequenced primary key in a classic 'valid time' temporal table where I need to prevent overlapping periods for the same entity -- how can I do that without a trigger? Remember this is a primary key, something to ensure I have data integrity, so enforcing it anywhere other than the DBMS is out of the question. Until CHECK constraints get subqueries, I don't see an alternative to using triggers for certain kinds of integrity constraints.
Triggers can be very helpful. They can also be very dangerous. I think they're fine for house cleaning tasks like populating audit data (created by, modified date, etc) and in some databases can be used for referential integrity.
But I'm not a big fan of putting lots of business logic into them. This can make support problematic because:
it's an extra layer of code to research
sometimes, as the OP learned, when you need to do a data fix, the trigger might be doing things on the assumption that the data change always comes via an application directive and not from a developer or DBA fixing a problem, or even from a different app
As for having to bypass a trigger to do something, it could mean you are doing something wrong, or it could mean that the trigger is doing something wrong.
The general rule I like to use with triggers is to keep them light, fast, simple, and as non-invasive as possible.
I find myself bypassing triggers when doing bulk data imports. I think it's justified in such circumstances.
If you end up bypassing the triggers very often though, you probably need to take another look at what you put them there for in the first place.
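A minimal sketch of that kind of bypass in SQL Server (hypothetical trigger, table, and file names):

-- Suspend the trigger only for the duration of the bulk load
DISABLE TRIGGER trg_orders_audit ON dbo.Orders;

BULK INSERT dbo.Orders
FROM 'D:\imports\orders.csv'
WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n', FIRSTROW = 2);

ENABLE TRIGGER trg_orders_audit ON dbo.Orders;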
In general, I'd vote for "they serve a purpose in some scenarios". I'm always nervous about performance implications.
I'm not a fan, personally. I'll use them, but only when I uncover a bottleneck in the code that can be cleared by moving actions into a trigger. Generally, I prefer simplicity and one way to keep things simple is to keep logic in one place - the application. I've also worked on jobs where access is very compartmentalized. In those environments, the more code I pack into triggers the more people I have to engage for even the simplest fixes.
I first used triggers a couple of weeks ago. We changed over a production server from SQL 2000 to SQL 2005 and we found that the drivers were behaving differently with NText fields (storing a large XML document), dropping off the last byte. I used a trigger as a temporary fix to add an extra dummy byte (a space) to the end of the data, solving our problem until a proper solution could be rolled out.
Other than this special, temporary case, I would say that I would avoid them, since they hide what is going on, and the function they provide should be handled explicitly by the developer rather than as some hidden magic.
Honestly, the only time I use triggers is to simulate a unique index that is allowed to have NULLs that don't count toward the uniqueness.
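A sketch of what that can look like (hypothetical users table; SQL Server syntax):

CREATE TRIGGER trg_users_unique_email ON users
AFTER INSERT, UPDATE
AS
BEGIN
    -- Enforce uniqueness only among non-NULL emails
    IF EXISTS (
        SELECT email FROM users
        WHERE email IS NOT NULL
        GROUP BY email
        HAVING COUNT(*) > 1
    )
    BEGIN
        RAISERROR('Duplicate non-NULL email.', 16, 1);
        ROLLBACK TRANSACTION;
    END
END;

(On SQL Server 2008 and later, a filtered unique index - CREATE UNIQUE INDEX ... ON users(email) WHERE email IS NOT NULL - does the same thing declaratively.)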
"As to reducing the amount of work: databases are stunningly efficient when they don't have to deal with the outside world; you'd be really surprised how much even process switching hurts performance. That's another upside of stored procedures: rather than a dozen calls to the database (and all the associated round trips), there's one."
This is a little off topic, but you should also be aware that you're only looking at this from one potential positive.
Bunching stuff up in a single stored proc is fine, but what happens when something goes wrong? Say you have 5 steps and the first step fails, what happens to the other steps? You need to add a whole bunch of logic in there to cater for that situation. Once you start doing that you lose the benefits of the stored procedure in that scenario.
Total fan, but you really have to use them sparingly, when you:
Need to maintain consistency (especially when dimension tables are used in a warehouse and we need to relate the data in the fact table with the proper dimension). Sometimes the proper row in the dimension table can be very expensive to compute, so you want the key to be written straight to the fact table; one good way to maintain that "relation" is with a trigger.
Need to log changes (in an audit table, for instance; it's useful to know which user made the change and when it occurred).
Some RDBMSs, like SQL Server 2005, also provide you with triggers on CREATE/ALTER/DROP statements (so you can know who created what table and when, who dropped what column and when, etc.) - a sketch follows.
Honestly, using triggers in those three scenarios, I don't see why you would ever need to "disable" them.
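For the third scenario, a minimal sketch of a DDL trigger (SQL Server 2005+; hypothetical names):

-- Log table plus a database-scoped trigger capturing schema changes
CREATE TABLE ddl_log (
    event_time datetime NOT NULL DEFAULT GETDATE(),
    login_name sysname  NOT NULL DEFAULT ORIGINAL_LOGIN(),
    event_data xml      NOT NULL
);

CREATE TRIGGER trg_ddl_log ON DATABASE
FOR CREATE_TABLE, ALTER_TABLE, DROP_TABLE
AS
BEGIN
    INSERT INTO ddl_log (event_data) VALUES (EVENTDATA());
END;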
The general rule of thumb is: do not use triggers. As mentioned before, they add overhead and complexity that can easily be avoided by moving logic out of the DB layer.
Also, in MS SQL Server, triggers are fired once per SQL statement, not once per row. For example, the following SQL statement will execute the trigger only once.
UPDATE tblUsers
SET Age = 11
WHERE State = 'NY'
Many people, including myself, were under the impression that triggers are fired on every row, but this isn't the case. If you have a SQL statement like the one above that may change data in more than one row, you might want to include a cursor to update all records affected by the trigger. You can see how this can get convoluted very quickly.
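That said, the more common pattern is to write the trigger set-based against the inserted and deleted pseudo-tables instead of opening a cursor; a minimal sketch, assuming a hypothetical tblUsersAudit table:

CREATE TRIGGER trg_users_age_audit ON tblUsers
AFTER UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    -- One set-based pass covers every row touched by the UPDATE
    INSERT INTO tblUsersAudit (UserID, OldAge, NewAge, ChangedAt)
    SELECT d.UserID, d.Age, i.Age, GETDATE()
    FROM inserted i
    JOIN deleted d ON d.UserID = i.UserID
    WHERE d.Age <> i.Age;
END;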
