NHibernate: Audit logging using Interceptor or Triggers?

Triggers seem like a simple solution for audit logging. Why should I use interceptors?
Database portability is one con of triggers...
What are the others?

The con of using anything except a trigger is that not all data changes may take place through the GUI and therefore might not get logged. You have to consider that databases are changed from many sources, including data imports and set-based queries from the query window (for instance when someone is asked to update all prices by 10%). If you use another method, you had better make sure that it captures any way data can be changed. If you use dynamic SQL at all, then all your tables are open to the users to make changes directly in the database, including fraudulent changes designed to steal from the company. Users committing fraud are one of the key things audit triggers are designed to catch. If you think your audit solution is OK because it captures everything from the user interface, and that that is all it needs to capture, you are very, very wrong. I don't know how interceptors work, but you had better test with SSIS (or DTS) imports and queries from the query window before you conclude the solution will work. Also, if it works just from the GUI, remember there might be more than one GUI connecting to the database.

I think that the reason for using interceptors is twofold:
So that you don't tie yourself to a particular database; porting to a different DBMS is significantly easier.
So that your domain model doesn't bleed into other areas of your code, i.e. the database doesn't need to know whether a record has changed.
But all this depends on the context. If it is vital that all changes to particular records are captured, then I think HLGEM is correct: triggers are the best for handling that type of scenario.

I agree with HLGEM.
A good alternative that gives you the advantages of both triggers and DBMS portability is to use some auditing tool that:
Given an audit plan: generates the triggers for the appropriate DBMS
Pablo Javier

Another small issue is with triggers doing any DML. nHibernate uses the affected row count to determine success on a lot of its operations. If you are doing any inserts/updates/etc. inside your triggers, then you'll need to turn NOCOUNT on inside the triggers to prevent those false rowcounts from bubbling up.
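For illustration, a minimal sketch of that NOCOUNT fix; the table and column names here are hypothetical:

CREATE TRIGGER trg_Orders_Audit
ON dbo.Orders
AFTER UPDATE
AS
BEGIN
    -- Suppress the trigger's own "rows affected" messages so NHibernate's
    -- affected-row-count check sees only the count from the original UPDATE.
    SET NOCOUNT ON;

    INSERT INTO dbo.Orders_Audit (OrderId, ModifiedDate, Action)
    SELECT OrderId, GETDATE(), 'U'
    FROM inserted;
END;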
Not that this, in any way, prevents you from making triggers work, but I've spent enough time refactoring away from this problem I thought it was worth mentioning. Interceptors, or EventListeners, are a simple, portable way to satisfy auditing requirements.
Plus, no more icky T-SQL code...

Triggers aren't easily testable and they're actually quite difficult to write properly. And if your audit data is to be consumed by business users, it's often difficult to translate from database row-level operations back into the domain model.
I'm of the opinion that a database is really just the persistence area of an application. Of a single application. In other words, I don't think other systems should use my database directly, and so I think auditing should be done outside of the database (i.e. not with triggers).

Related

How to update a bulk of data in an Oracle db without bypassing any triggers?

We are running a pretty uncommon ERP system from a small IT business which doesn't allow us to modify data in an extensive way. We thought about doing a data update by exporting the data we wanted to change directly from the db and using Excel VBA to update a bunch of data across different tables. Now we have the data updated in Excel, and it is supposed to be written into the Oracle DB.
The IT business's support told us not to do so, because of all the triggers running in the background during a regular data update in their program. We are pretty afraid of damaging the db, so we are looking for the best way to do the data update without bypassing any triggers. To be more specific, there are some thousands of changes we've made in different columns and tables, merged all together in one Excel file. Now we have to be sure to insert the modified data into the db while firing all the triggers the ERP software fires during a data update.
Is there anyone who knows a good way to do so?
I don't know what ERP system you are using, but I can relate some experiences from Oracle's E-Business Suite.
Nowadays, Oracle's ERP includes a robust set of APIs that will allow your custom programs to safely maintain ERP data. For example, if you want to modify a sales order, you use Oracle's API for that purpose and it makes sure all the necessary, related validations and logic are applied.
So, step #1 -- find out if your ERP system offers any APIs to allow you to safely update your data.
Back in the early days of Oracle's ERP, there were not so many APIs. In those days, when we needed to update a lot of tables and had no API available, the next approach would be to use some sort of data loader tool. The most popular was, in fact, called "Data Loader". What this would do is read your data from an Excel spreadsheet and send it to the ERP's user interface -- exactly as though it were being typed in by a user. Since the data went through the ERP's UI, all the necessary validations and logic would automatically be applied.
In really extreme cases, when there was no API and DataLoader was, for whatever reason, not practical, it was still sometimes deemed necessary and worth the risk to attempt our own direct update of the ERP tables. This is, in general, risky and a bad practice, but sometimes we do what we must.
In these cases, we would start a database trace going on a user's session as they keyed in a few updates via the ERP's user interface. Then, we would use the trace to figure out what validations and related logic we needed to apply during our custom direct updates. We would also analyze the source code of the ERP system (since we had it available in the case of Oracle's ERP). Then, we would test it extensively. And, after all that, it was still risky and also prone to break after upgrades. But, in general, it worked as a last resort.
Now my problem is that I need to do the work fast by automating my processes. The work is already done in Excel, that's true, but it needed the modification anyway. The only question is whether I put it into the db manually with copy & paste through our ERP, or all at once through some other means.
But I guess Mathew is right. There are validation processes in the ERP, so we can't write it directly into the db.
I don't know, maybe you could contact me if you have an idea for bypassing the ERP in a non-risky manner.

Using NOLOCK Hint in EF4?

We're evaluating EF4 and my DBA says we must use the NOLOCK hint in all our SELECT statements. So I'm looking into how to make this happen when using EF4.
I've read the different ideas on how to make this happen in EF4, but all seem like a work around and not sanctioned by Microsoft or EF4. What is the "official Microsoft" response to someone who wants their SELECT statement(s) to include the NOLOCK hint when using LINQ-to-SQL / LINQ-to-Entities and EF4?
By the way, the absolute best information I have found was right here and I encourage everyone interested in this topic to read this thread.
Thanks.
NOLOCK = "READ UNCOMMITTED" = dirty reads
I'd assume MS knows why they chose the default isolation level as "READ COMMITTED"
NOLOCK, in fact any hint, should be used very judiciously: not by default.
Your DBA is a muppet. See this (SO): What can happen as a result of using (nolock) on every SELECT in SQL Server?. If you happen to work at a bank, or any institution where I may have an account, please let me know so I can close it.
I'm a developer on a tools team in the SQL org at Microsoft. I'm in no way authorized to make any official statement, and I'm sure there are people on SO who know more about these things than I do. Nevertheless, I'll offer a friendly rule of thumb, along the theme of "Premature optimization is the root of all evil":
Don't use NOLOCK (or any other query hint for that matter), until you have to. If you have a select statement which has a decent query plan, and it runs fine when there is very little other load on the system, but then it slows down when other queries are accessing the same table, try adding some NOLOCK hints. But always understand that when you do, you run the risk of getting inconsistent data. If you are writing some mission critical app that does online banking or controls an aircraft, this may be unacceptable. However, for many applications the perf speedup is worth the risk. Evaluate on a case-by-case basis, though. Don't just use them willy nilly all over the place.
If you do choose to use NOLOCK, I have blogged a solution in C# using extension methods, so that you can easily change a LINQ query to use NOLOCK hints. If you can adapt this to EF4, please post your adaptation.
EF4 does not currently have a built-in way to do it, if EF4 is generating all your queries.
There are ways around this, such as using stored procedures or a more extended inline query model; however, this can be time consuming to say the least.
I believe (and I don't speak for Microsoft on this) that caching is Microsoft's intended solution for lightening the load on the server in EF4 sites. Having read uncommitted (or nolock) built into a framework would create unpredictable issues for the expected behaviour of EF4 when 2 contexts are run at the same time. That doesn't mean your situation needs that level of concurrency.
It sounds like you were asked for nolock on ALL selects. While I agree with the earlier poster that this can be dangerous if you have ANY transactions that need to be transactions, I don't agree that it automatically makes the DBA a muppet. You might just be running a CMS, which is totally fine for dirty reads. You can change the ISOLATION LEVEL on your whole database, which can have the same effect.
The DBA may have recommended nolock for operations that were ONLY selects (which is fine, especially if there's an ORM being misused and doing some dodgy data dumps). The funniest thing about that muppet comment is that Stack Overflow itself runs SQL Server in READ UNCOMMITTED mode. Guess you need to find somewhere else to get answers for your problems then?
Talk to your DBA about the possibility of setting this at the database level, or consider a caching strategy if you only need it in a few places. The web is stateless after all, so concurrency can often be an illusion anyway unless you address it directly.
Info about isolation levels
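As a hedged sketch of those database-level options (the database name here is hypothetical):

-- Option 1: give readers a consistent snapshot instead of blocking on
-- writers, without the dirty reads of READ UNCOMMITTED.
ALTER DATABASE MyAppDb SET READ_COMMITTED_SNAPSHOT ON;

-- Option 2: per-session READ UNCOMMITTED, equivalent to a NOLOCK hint on
-- every table the session reads.
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;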
Having worked with EF4 for over a year now, I will offer that using stored procedures for specific tasks is not a hack, and is absolutely necessary for performance under certain situations.
Our platform gets a lot of traffic through our web site, APIs and ETL data feeds. We use EF primarily on our web side, but also for some back-end processes. Sometimes EF does a great job with its query generation, sometimes it is terrible. You need to look at the queries being generated, load them into query analyzer, and decide whether you might be better off writing the operation in another way (stored procedure, etc.).
If you find that you need to make data available via EF and need NOLOCKs, you can always create views with the NOLOCK hints included, and expose the view to EF instead of the underlying table. The same can be done with Stored Procedures. These methods are probably a bit easier when you are using the Code First approach.
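A minimal sketch of the view approach, with hypothetical object names:

CREATE VIEW dbo.vw_Orders_Dirty
AS
SELECT OrderId, CustomerId, Total
FROM dbo.Orders WITH (NOLOCK);

Mapping the EF entity to dbo.vw_Orders_Dirty for read-only queries keeps the hint in the database layer and out of application code.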
But I think that one mistake a lot of people make with EF is believing that the EF object model has to map directly to the physical (table) model in the database. It doesn't and this is where your DBA comes into play. Let him design your physical model and you work together to abstract your logical data model which is mapped to your object model in EF.
Although this would be a major PITA to do, you can always drop your SQL in a stored procedure and get the functionality you need (or are forced into). It's definitely a hack though!
I know this isn't an answer to your question, but I just wanted to throw this in.
It seems to me that this is (at least partially) the DBA's job. It's fine to say that an application should behave a certain way, and you can and should certainly attempt to program it the way that he would like.
The only way to be sure though, is for the DBA to work on the application with you and construct the DB surface that he would like to present to the app. If he wants critical tables to be queried as READ UNCOMMITTED, then he should help to provide a set of stored procedures with the correct access and isolation level.
Relying on the application code to construct every ad-hoc query correctly is not a scalable approach.

History tables pros, cons and gotchas - using triggers, sproc or at application level [closed]

I am currently playing around with the idea of having history tables for some of my tables in my database. Basically I have the main table and a copy of that table with a modified date and an action column to store what action was performed, e.g. Update, Delete and Insert.
So far I can think of three different places that you can do the history table work.
Triggers on the main table for update, insert and delete. (Database)
Stored procedures. (Database)
Application layer. (Application)
My main question is: what are the pros, cons and gotchas of doing the work in each of these layers?
One advantage I can think of with the trigger approach is that integrity is always maintained no matter what is implemented on top of the database.
I'd put it this way:
Stored procs: they're bypassed if you modify the table directly. Security on the database can control this
Application: same deal. Also if you have multiple applications, possibly in different languages, it needs to be implemented in each stack, which is somewhat redundant; and
Triggers: transparent to the application and will capture all changes. This is my preferred method.
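A minimal sketch of such a history trigger, assuming a hypothetical main table and a copy of it with ModifiedDate and Action columns:

CREATE TRIGGER trg_Customers_History
ON dbo.Customers
AFTER INSERT, UPDATE, DELETE
AS
BEGIN
    SET NOCOUNT ON;

    -- Rows in inserted cover INSERTs and the new state of UPDATEs.
    INSERT INTO dbo.Customers_History (CustomerId, Name, ModifiedDate, Action)
    SELECT i.CustomerId, i.Name, GETDATE(),
           CASE WHEN EXISTS (SELECT 1 FROM deleted) THEN 'Update' ELSE 'Insert' END
    FROM inserted AS i;

    -- Rows only in deleted are pure DELETEs.
    INSERT INTO dbo.Customers_History (CustomerId, Name, ModifiedDate, Action)
    SELECT d.CustomerId, d.Name, GETDATE(), 'Delete'
    FROM deleted AS d
    WHERE NOT EXISTS (SELECT 1 FROM inserted);
END;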
Triggers are the quickest and easiest way to achieve simple history. The following information assumes a more complex example where history processing may include some business rules and may require logging information not found in the table being tracked.
To those who think that triggers are safer than sprocs because they cannot be bypassed, I remind them that they are making the following assumptions:
1) Permissions exist that stop users from executing DISABLE TRIGGER [but then permissions could also exist to limit all access to the database except for EXECUTE on sprocs, which is a common pattern for enterprise applications] - therefore one must assume correct permissions, and with correct permissions sprocs equal triggers in terms of security and ability to be bypassed.
2) Depending on the database, it may be possible to execute update statements that do not fire triggers. I could take advantage of knowledge of nested trigger execution depth to bypass a trigger. The only sure solution includes security in the database and limiting access to data using only approved mechanisms - whether these be triggers, sprocs or data access layers.
I think the choices are clear here. If the data is being accessed by multiple applications then you want to control the history from the lowest common layer and this will mean the database.
Following the above logic, the choice of triggers or stored procedures depends again on whether the stored procedure is the lowest common layer. You should prefer the sproc over the trigger, as you can control performance and side effects better, and the code is easier to maintain.
Triggers are acceptable, but try to make sure that you do not increase locks by reading data outside of the tables being updated. Limit triggers to inserts into the log tables, and log only what you need to.
If the application uses a common logical access layer and it is unlikely that this would change over time, I would prefer to implement the logic there. Use a Chain of Responsibility pattern and a plug-in architecture, driven by Dependency Injection, to allow for all manner of processing in your history module, including logging to completely different types of technology, different databases, a history service or anything else that you could imagine.
I have been using the trigger-based approach for years and it has definitely worked well for us, but you do have the following points to ponder:
Triggers on a heavily used database (say, a multi-tenant SaaS application) can be extremely expensive
In some scenarios, a few fields can become redundant. Triggers are good only when you are crystal clear on the fields to be logged; using the application instead, you could have an interceptor layer that helps you log certain fields based on "configuration", though with its own share of overheads
Without adequate database control, a person could easily disable the triggers, modify the data and re-enable the triggers, all without raising any alarms
In the case of web applications, where connections are established from a pool, tracking the actual users who made the changes can be tedious. A possible solution is to have an "EditedBy" field in every transaction table.
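One hedged sketch of working around that pooled-connection problem, assuming SQL Server 2016+ (the key and user names are hypothetical): the application stamps the acting user into session context when it takes a connection, and audit triggers read it back.

EXEC sp_set_session_context @key = N'app_user', @value = N'jsmith';

-- Inside an audit trigger, recover the application user:
DECLARE @AppUser nvarchar(128) =
    CAST(SESSION_CONTEXT(N'app_user') AS nvarchar(128));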
A late answer, but it adds a couple more options that can be considered.
Change Data Capture: This feature is available in SQL Server 2008+, but only in Enterprise edition. It allows you to select the tables you want to track, and SQL Server will do the job for you. It works by reading the transaction log and populating history tables with data (a sketch of enabling it follows at the end of this answer).
Reading the transaction log: If the database is in full recovery mode, then the transaction log can be read and details on almost all transactions can be found.
The downside is that this is not supported by default. The options are to read the transaction log using undocumented functions like fn_dblog, or third-party tools such as ApexSQL Log.
Triggers: Work just fine for a small number of tables, where there are not too many triggers to manage. If you have a lot of tables you want to audit, then you should consider some third-party tool for this.
All of these work at the database level and are completely transparent to the application.
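The promised Change Data Capture sketch; the database and table names are hypothetical:

USE MyAppDb;
EXEC sys.sp_cdc_enable_db;

EXEC sys.sp_cdc_enable_table
    @source_schema = N'dbo',
    @source_name   = N'Orders',
    @role_name     = NULL; -- NULL: no gating role is required to read changes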
Triggers are the only reliable way to capture changes. If you do it in stored procs or the app, you can always go in and SQL away a change that you don't have a log for (inadvertently). Of course, somebody who doesn't want to leave a log can disable triggers. But you'd rather force somebody to disable the logging than hope that they remember to include it.
Usually if you choose the application layer, you can design your app code to do the logging at a single point that handles all your history tables consistently. Triggers, by contrast, are a more complicated approach to maintain because they are (depending on the DB technology) replicated for every table: with hundreds of tables, the amount of trigger code could be a problem.
If you have a support organization that will maintain the code you are writing now, and you don't know who will maintain your code (typical for big industries), you cannot assume the skill level of the person who will fix your application. In that case it is better, in my opinion, to make the working principle of the history tables as simple as possible, and the application layer is probably the best place for this purpose.

django AuditTrail vs Reversion

I am working on a new web app where I need to store any changes in the database to audit table(s). The purpose of such audit tables is that later on, in a real physical audit, we can ascertain what happened in a situation, who edited what, and what the state of the db was at the time of, e.g., a complex calculation.
So the audit table will mostly be written and not read. Reports may be generated sometimes, though.
I have looked at the available solutions:
AuditTrail - simple, and that is why I am inclining towards it; I can understand its single-file code.
Reversion - looks simple enough to use but not sure how easy it would be to modify it if needed.
rcsField seems to be very complex and too much for my needs
I haven't tried any of these, so I wanted to hear about some real experiences and which one I should be using, e.g. which one is faster, uses less space, and is easy to extend and maintain?
Personally I prefer to create audit tables in the database and populate them through triggers, so that any change, even ad hoc queries from the query window, is stored. I would never consider an audit solution that is not based in the database itself. This is important because people who are making malicious changes to the database or committing fraud are not likely to do so through the web interface, but on the backend directly. Far more of this stuff happens from disgruntled or larcenous employees than outside hackers. If you are using an ORM already, your data is at risk because the permissions are at the table level rather than the sp level where they belong. Therefore it is even more important that you capture any possible change to the data, not just what came from the GUI. We have a dynamic proc to create audit tables that is run whenever new tables are added to the database. Since our audit tables store only the changes and not the whole record, we do not need to change them every time a field is added.
Also, when evaluating possible solutions, make sure you consider how hard it will be to revert the data to undo a specific change. Once you have audit tables, you will find that this is one of the most important things you need to do with them. Also consider how hard it will be to maintain the information as the database schema changes.
Choosing a solution because it appears to be the easiest to understand is not generally a good idea. That should be the lowest of your selection criteria, after meeting the requirements, security, etc.
I can't give you real experience with any of them but would like to make an observation.
I assume by AuditTrail you mean AuditTrail on the Django wiki. If so, I think you'll want to instead look at HistoricalRecords developed by the same author (Marty Alchin, aka @gulopine) in his book Pro Django. It should work better with Django 1.x.
This is the approach I'll be using on an upcoming project, not because it necessarily beats the others from a technical standpoint, but because it matches the "real world" expectations of the audit trail for that application.
As I stated in my question, rcsField seems to be too much for my needs, which are simple: I want to store any changes to my table, and maybe come back later to those changes to generate some reports.
So I tested AuditTrail and Reversion.
Reversion seems to be a fuller-blown application with many features (which I do not need). Also, as far as I know, it saves data in a single table in XML or YAML format, which I think:
will generate too much data in a single table
means that to read that data I may not be able to use already-present db tools.
AuditTrail wins in that regard: for each table it generates a corresponding audit table, so changes can be tracked easily, the per-table data is smaller, and it can be easily manipulated and used for report generation.
So I am going with AuditTrail.

Database triggers

In the past I've never been a fan of using triggers on database tables. To me they always represented some "magic" that was going to happen on the database side, far far away from the control of my application code. I also wanted to limit the amount of work the DB had to do, as it's generally a shared resource and I always assumed triggers could get to be expensive in high load scenarios.
That said, I have found a couple of instances where triggers have made sense to use (at least in my opinion they made sense). Recently though, I found myself in a situation where I sometimes might need to "bypass" the trigger. I felt really guilty about having to look for ways to do this, and I still think that a better database design would alleviate the need for this bypassing. Unfortunately this DB is used by multiple applications, some of which are maintained by a very uncooperative development team who would scream about schema changes, so I was stuck.
What's the general consensus out there about triggers? Love 'em? Hate 'em? Think they serve a purpose in some scenarios?
Do you think that having a need to bypass a trigger means that you're "doing it wrong"?
Triggers are generally used incorrectly, introduce bugs and therefore should be avoided. Never design a trigger to do integrity constraint checking that crosses rows in a table (e.g. "the average salary by dept cannot exceed X").
Tom Kyte, VP of Oracle, has indicated that he would prefer to remove triggers as a feature of the Oracle database because of their frequent role in bugs. He knows it is just a dream, and triggers are here to stay, but if he could, he would remove triggers from Oracle (along with the WHEN OTHERS clause and autonomous transactions).
Can triggers be used correctly? Absolutely.
The problem is - they are not used correctly in so many cases that I'd be willing to give up any perceived benefit just to get rid of the abuses (and bugs) caused by them. - Tom Kyte
Think of a database as a great big object - after each call to it, it ought to be in a logically consistent state.
Databases expose themselves via tables, and keeping tables and rows consistent can be done with triggers. Another way to keep them consistent is to disallow direct access to the tables, and only allowing it through stored procedures and views.
The downside of triggers is that any action can invoke them; this is also a strength - no-one is going to screw up the integrity of the system through incompetence.
As a counterpoint, allowing access to a database only through stored procedures and views still allows the backdoor access of permissions. Users with sufficient permissions are trusted not to break database integrity, all others use stored procedures.
As to reducing the amount of work: databases are stunningly efficient when they don't have to deal with the outside world; you'd be really surprised how much even process switching hurts performance. That's another upside of stored procedures: rather than a dozen calls to the database (and all the associated round trips), there's one.
Business logic has to go somewhere, and there's a lot of implied domain rules embedded in the design of a database - relations, constraints and so on are an attempt to codify business rules by saying, for example, a user can only have one password. Given you've started shoving business rules onto the database server by having these relations and so on, where do you draw the line? When does the database give up responsibility for the integrity of the data, and start trusting the calling apps and database users to get it right?
Stored procedures with these rules embedded in them can push a lot of political power into the hands of the DBAs. It comes down to how many tiers are going to exist in your n-tier architecture; if there's a presentation, business and data layer, where does the separation between business and data lie? What value-add does the business layer add? Will you run the business layer on the database server as stored procedures?
Yes, I think that having to bypass a trigger means that you're "doing it wrong"; in this case a trigger isn't for you.
I work with web and winforms apps in c# and I HATE triggers with a passion. I have never come across a situation where I could justify using a trigger over moving that logic into the business layer of the application and replicating the trigger logic there.
I don't do any DTS-type work or anything like that, so there might be some use cases for triggers there, but if anyone in any of my teams says that they might want to use a trigger, they had better have prepared their arguments well, because I refuse to stand by and let triggers be added to any database I'm working on.
Some reasons why I don't like triggers:
They move logic into the database. Once you start doing that, you're asking for a world of pain because you lose your debugging, your compile time safety, your logic flow. It's all downhill.
The logic they implement is not easily visible to anyone.
Not all database engines support triggers so your solution creates dependencies on database engines
I'm sure I could think of more reasons off the top of my head but those alone are enough for me not to use triggers.
"Never design a trigger to do integrity constraint checking that crosses rows in a table" -- I can't agree. The question is tagged 'SQL Server' and CHECK constraints' clauses in SQL Server cannot contain a subquery; worse, the implementation seems to have a 'hard coded' assumption that a CHECK will involve only a single row so using a function is not reliable. So if I need a constraint which does legitimately involve more than one row -- and a good example here is the sequenced primary key in a classic 'valid time' temporal table where I need to prevent overlapping periods for the same entity -- how can I do that without a trigger? Remember this is a primary key, something to ensure I have data integrity, so enforcing it anywhere other than the DBMS is out of the question. Until CHECK constraints get subqueries, I don't see an alternative to using triggers for certain kinds of integrity constraints.
Triggers can be very helpful. They can also be very dangerous. I think they're fine for house cleaning tasks like populating audit data (created by, modified date, etc) and in some databases can be used for referential integrity.
But I'm not a big fan of putting lots of business logic into them. This can make support problematic because:
it's an extra layer of code to research
sometimes, as the OP learned, when you need to do a data fix, the trigger might be doing things on the assumption that the data change always comes via an application directive and not from a developer or DBA fixing a problem, or even from a different app
As for having to bypass a trigger to do something, it could mean you are doing something wrong, or it could mean that the trigger is doing something wrong.
The general rule I like to use with triggers is to keep them light, fast, simple, and as non-invasive as possible.
I find myself bypassing triggers when doing bulk data imports. I think it's justified in such circumstances.
If you end up bypassing the triggers very often though, you probably need to take another look at what you put them there for in the first place.
In general, I'd vote for "they serve a purpose in some scenarios". I'm always nervous about performance implications.
I'm not a fan, personally. I'll use them, but only when I uncover a bottleneck in the code that can be cleared by moving actions into a trigger. Generally, I prefer simplicity and one way to keep things simple is to keep logic in one place - the application. I've also worked on jobs where access is very compartmentalized. In those environments, the more code I pack into triggers the more people I have to engage for even the simplest fixes.
I first used triggers a couple of weeks ago. We changed over a production server from SQL 2000 to SQL 2005 and we found that the drivers were behaving differently with NText fields (storing a large XML document), dropping off the last byte. I used a trigger as a temporary fix to add an extra dummy byte (a space) to the end of the data, solving our problem until a proper solution could be rolled out.
Other than this special, temporary case, I would say that I would avoid them, since they do hide what is going on, and the function they provide should be handled explicitly by the developer rather than as some hidden magic.
Honestly, the only time I use triggers is to simulate a unique index that is allowed to have NULLs that don't count for the uniqueness.
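As an aside, on SQL Server 2008+ a filtered unique index covers that exact case without a trigger; a minimal sketch with hypothetical names:

-- Unique over non-NULL values only; NULLs don't count toward uniqueness.
CREATE UNIQUE NONCLUSTERED INDEX ux_Employees_Ssn
ON dbo.Employees (Ssn)
WHERE Ssn IS NOT NULL;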
"As to reducing the amount of work: databases are stunningly efficient when they don't have to deal with the outside world; you'd be really surprised how much even process switching hurts performance. That's another upside of stored procedures: rather than a dozen calls to the database (and all the associated round trips), there's one."
This is a little off topic, but you should also be aware that you're only looking at this from one potential positive.
Bunching stuff up in a single stored proc is fine, but what happens when something goes wrong? Say you have 5 steps and the first step fails, what happens to the other steps? You need to add a whole bunch of logic in there to cater for that situation. Once you start doing that you lose the benefits of the stored procedure in that scenario.
Total fan,
but you really have to use them sparingly, when:
You need to maintain consistency (especially when dimension tables are used in a warehouse and we need to relate the data in the fact table with its proper dimension). Sometimes the proper row in the dimension table can be very expensive to compute, so you want the key to be written straight to the fact table; one good way to maintain that "relation" is with a trigger.
You need to log changes (in an audit table, for instance; it's useful to know which user made the change and when it occurred).
Some RDBMSs like SQL Server 2005 also provide you with triggers on CREATE/ALTER/DROP statements (so you can know who created what table and when, who dropped what column and when, etc.) - a sketch follows below.
Honestly, using triggers in those 3 scenarios, I don't see why you would ever need to "disable" them.
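A minimal sketch of such a DDL trigger (SQL Server 2005+; the log table is hypothetical):

CREATE TRIGGER trg_LogDdl
ON DATABASE
FOR CREATE_TABLE, ALTER_TABLE, DROP_TABLE
AS
BEGIN
    -- EVENTDATA() returns the full event as XML: who, when, and the DDL text.
    INSERT INTO dbo.DdlLog (EventTime, LoginName, EventXml)
    VALUES (GETDATE(), ORIGINAL_LOGIN(), EVENTDATA());
END;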
The general rule of thumb is: do not use triggers. As mentioned before, they add overhead and complexity that can easily be avoided by moving logic out of the DB layer.
Also, in MS SQL Server, triggers are fired once per SQL statement, not once per row. For example, the following SQL statement will execute the trigger only once.
UPDATE tblUsers
SET Age = 11
WHERE State = 'NY'
Many people, including myself, were under the impression that triggers are fired on every row, but this isn't the case. If you have a SQL statement like the one above that may change data in more than one row, you might want to include a cursor to update all records affected by the trigger (though a set-based approach using the inserted pseudo-table, sketched below, is usually cleaner). You can see how this can get convoluted very quickly.
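A hedged sketch of that set-based alternative - the inserted pseudo-table already holds every affected row, so no cursor is needed (the audit table here is hypothetical):

CREATE TRIGGER trg_tblUsers_Audit
ON dbo.tblUsers
AFTER UPDATE
AS
BEGIN
    SET NOCOUNT ON;

    -- One INSERT covers all rows changed by the statement, however many.
    INSERT INTO dbo.tblUsers_Audit (UserId, Age, ModifiedDate)
    SELECT UserId, Age, GETDATE()
    FROM inserted;
END;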
