Currently I have two UPDATE triggers on the same table. One of them is responsible for maintaining the table's audit fields; the other is responsible for checking the updated records to perform validation. The problem is that although the validation trigger is ordered to run before the audit trigger, the UPDATE the audit trigger performs fires the validation trigger again, creating a small "loop" that runs the checks twice instead of once.
I know that the IF (UPDATE(Camp1)) BEGIN ... clause will let my logic run only when certain columns are modified, but is there any other way?
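For reference, a minimal sketch of that IF UPDATE() pattern (dbo.MyTable is a hypothetical table; Camp1 stands in for the real column):

CREATE TRIGGER trg_MyTable_Validate
ON dbo.MyTable
AFTER UPDATE
AS
BEGIN
    SET NOCOUNT ON;

    -- UPDATE() is true only if the column appeared in the SET list
    -- of the statement that fired the trigger; bail out otherwise.
    IF NOT UPDATE(Camp1)
        RETURN;

    -- ... validation against the inserted/deleted pseudo-tables ...
END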
I'm looking for changes to a very specific subset of records from one of our corporate tables. I realize that an update trigger would accomplish this, but it would be called thousands of times a day.
While I'm fairly proficient at SQL, T-SQL and triggers are a little outside of my normal playground. I don't want to write something that will be running constantly and accomplishing very little (or annoy our DBA).
It seems like the solution would be a view that only pulls the rows I'm interested in and then writing a trigger based on that. It seems like that's "bad form", which tells me there is a better way to accomplish the same thing.
I'm looking to use a trigger because I'm looking for a change in a value that will likely reset quickly. (e.g. it will change to 1, and then change back to 0 a few hours later).
This trigger would be running on SQL Server Express 10.50.4000 (SQL Server 2008 R2).
I mostly just need a direction - not so much specific code.
You can't create a trigger on a view that fires on changes to the base table rows.
It is not an issue of "bad form" it is simply impossible (the only triggers allowed on views are INSTEAD OF triggers that don't do what you need).
A table trigger checking if a column has changed from 0 to 1 shouldn't cause massive overhead - especially if only called a few thousand times daily.
It should call the UPDATE() function to check whether the column of interest was touched by the statement, and if not, exit immediately. Otherwise it should join the INSERTED and DELETED pseudo-tables on the primary key and check whether the old and new values for any row match the pattern you are trying to track, as in the sketch below.
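A minimal sketch of that shape, assuming a hypothetical dbo.Orders table keyed on OrderId, a watched bit column Flag, and a dbo.FlagChangeLog table to record the 0-to-1 transitions:

CREATE TRIGGER trg_Orders_FlagWatch
ON dbo.Orders
AFTER UPDATE
AS
BEGIN
    SET NOCOUNT ON;

    -- Cheap early exit: the column wasn't in the statement's SET list.
    IF NOT UPDATE(Flag)
        RETURN;

    -- Join the new (inserted) and old (deleted) row images on the
    -- primary key and keep only rows that actually went from 0 to 1.
    INSERT INTO dbo.FlagChangeLog (OrderId, ChangedAt)
    SELECT i.OrderId, GETDATE()
    FROM   inserted i
    JOIN   deleted  d ON d.OrderId = i.OrderId
    WHERE  d.Flag = 0
      AND  i.Flag = 1;
END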
I have an API that I'm trying to read that gives me just the updated fields. I'm trying to take that and update my tables using a stored procedure. So far the only way I've been able to figure out how to do this is with dynamic SQL, but I would prefer not to do that if there is a way to avoid it.
If it were just a couple of columns, I'd just write a proc for each, but we are talking about 100 fields, and any of them could be updated together. One ticket might just need a timestamp updated, the next might need a timestamp and who modified it, while the next might just need a note.
Everything I've read and have been taught has told me that dynamic SQL is bad, and while I'll write it if I have to, I'd prefer to have a proc.
You can perhaps do something like this:
IF EXISTS (SELECT 1
           FROM   NEWTABLE n
           JOIN   OLDTABLE o ON o.PRIMARYKEY = n.PRIMARYKEY
           WHERE  o.OLDRECORDS <> n.NEWRECORDS)
BEGIN
    UPDATE o
    SET    o.OLDRECORDS = n.NEWRECORDS
    FROM   OLDTABLE o
    JOIN   NEWTABLE n ON n.PRIMARYKEY = o.PRIMARYKEY
END
The best way to solve your problem is using MERGE:
Performs insert, update, or delete operations on a target table based on the results of a join with a source table. For example, you can synchronize two tables by inserting, updating, or deleting rows in one table based on differences found in the other table.
As you can see, your update could be more complex, but more efficient as well. MERGE requires some proficiency, but once you start to use it you'll use it with pleasure again and again.
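A minimal sketch of the shape, assuming a hypothetical dbo.Tickets target and a table variable holding the incoming values:

DECLARE @Incoming TABLE (
    TicketId   int PRIMARY KEY,
    ModifiedAt datetime2,
    ModifiedBy nvarchar(50),
    Note       nvarchar(max));

MERGE dbo.Tickets AS t
USING @Incoming AS s
    ON t.TicketId = s.TicketId
WHEN MATCHED THEN
    UPDATE SET t.ModifiedAt = s.ModifiedAt,
               t.ModifiedBy = s.ModifiedBy,
               t.Note       = s.Note
WHEN NOT MATCHED BY TARGET THEN
    INSERT (TicketId, ModifiedAt, ModifiedBy, Note)
    VALUES (s.TicketId, s.ModifiedAt, s.ModifiedBy, s.Note);  -- MERGE must end with a semicolon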
I am not sure how your business logic works that determines what columns are updated at what time. If there are separate business functions that require updating different but consistent columns per function, you will probably want to have individual update statements for each function. This will ensure that each process updates only the columns that it needs to update.
On the other hand, if your API is such that you really don't know ahead of time what needs to be updated, then building a dynamic SQL query is a good idea.
Another option is to build a save proc that sets every user-configurable field. As long as the calling process has all of that data, it can call the save procedure and pass every updateable column. There is no harm in an UPDATE MyTable SET MyCol = @MyCol that has the same value on each side.
Note that even if all of the values are the same, any rowversion (or timestamp) column will still be updated, if present.
With our software, the tables that users can edit have a widely varying range of columns. We chose to create a single save procedure for each table that has all of the update-able columns as parameters. The calling processes (our web servers) have all the required columns in memory. They pass all of the columns on every call. This performs fine for our purposes.
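A cut-down sketch of such a save procedure, with dbo.Tickets and its columns standing in for the real table:

CREATE PROCEDURE dbo.SaveTicket
    @TicketId   int,
    @ModifiedAt datetime2,
    @ModifiedBy nvarchar(50),
    @Note       nvarchar(max)
    -- ... one parameter per updateable column ...
AS
BEGIN
    SET NOCOUNT ON;

    -- Every column is set on every call; callers simply pass the
    -- current value for anything they did not change.
    UPDATE dbo.Tickets
    SET    ModifiedAt = @ModifiedAt,
           ModifiedBy = @ModifiedBy,
           Note       = @Note
    WHERE  TicketId = @TicketId;
END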
I have a table that contains more than a million records (products).
Now, daily, I need to update existing records and/or add new ones.
Instead of doing it one-by-one (which takes a couple of hours), I managed to use SqlBulkCopy to work with batches of records and got my inserts done in a matter of seconds, but it can handle only new inserts. So I am thinking about creating a staging table that contains both the new and the existing records, and then using that temporary table (on the SQL end) to update/add to the main table.
Any advice how can I perform that update?
One of the better ways to handle this is with the MERGE command in SQL. Mssqltips has a good tutorial on it; it can be a bit trickier to use than some of the other commands.
Also, due to locking you may want to break this up into multiple smaller transactions, unless you know you can tolerate blocking during the update.
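A sketch of one way to break it up, assuming a hypothetical dbo.Products target and a #Staging table keyed on ProductId:

DECLARE @BatchSize int = 5000;

WHILE 1 = 1
BEGIN
    -- The WHERE clause guarantees progress: rows already brought up
    -- to date no longer qualify on the next pass. Nullable columns
    -- would need explicit NULL handling here.
    UPDATE TOP (@BatchSize) p
    SET    p.Price = s.Price
    FROM   dbo.Products p
    JOIN   #Staging s ON s.ProductId = p.ProductId
    WHERE  p.Price <> s.Price;

    IF @@ROWCOUNT < @BatchSize BREAK;  -- last batch done
END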
We handle this situation in our code in the way you described; we have a temp table, then run an update where the ID in the temp table matches the table to be updated, then run an insert where the ID in the table to be updated is null. We normally do this for updates to library/program settings, though, so it is only run infrequently, on smaller tables. Performance may not be up to par for that many records, or daily runs.
The main "gotcha" I've encountered with this method is that for the update, we did a comparison to make sure at least one of several fields changed before actually running the update. (Our initial reason for this was to avoid overwriting some defaults, which could affect server behavior. Your reason for this might be performance, if your temp table could contain records that haven't actually changed). We encountered a case where we did actually want to update one of the defaults, but our old script didn't catch that. So if you do any comparisons to determine which products you want to update, make sure it is either complete from the start, or document well any fields you don't compare, and why.
In SQL Server 2008, I have a scenario where I have a table with complex validation upon insert / update. This includes needing to temporarily convert an XML input into a table in order to validate its data against a permanent table.
However, I also have the scenario where I will often update simple integer columns that require no validation. From what I have read here, it seems that SQL Server is going to return the entire row in the in-memory "inserted" pseudo-table, not just the affected columns, when I perform an update. If this is so, then it means that for every simple integer update I perform, complex XML validation will be needlessly done.
Do I understand this correctly and if so, how do I get around this short of requiring inserts / updates via a stored proc?
Yes, first off, the triggers will fire for EVERY INSERT or UPDATE operation - you cannot limit that to only fire when certain columns will be affected. You can check inside the trigger to see whether or not certain columns have been affected and make decisions based on that - but you cannot prevent the trigger from firing in the first place.
And secondly, yes, when the trigger fires, you will have ALL columns of the underlying table in the INSERTED and/or DELETED pseudo tables.
One way you might want to change this is by moving the large XML column into a separate table and putting that big, heavy XML validation trigger only on that table. In that case, if you update or insert into the base table only, you'll move less data and run less validation logic. Only when you insert or update into the XML table will the whole big validation run.
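A rough sketch of that split, with all names hypothetical:

-- Base table: simple integer columns, no validation trigger at all.
CREATE TABLE dbo.Items (
    ItemId int IDENTITY PRIMARY KEY,
    Qty    int NOT NULL,
    Status int NOT NULL);

-- Side table: the XML payload lives here, so the expensive trigger
-- fires only when the XML itself is inserted or updated.
CREATE TABLE dbo.ItemsXml (
    ItemId  int PRIMARY KEY REFERENCES dbo.Items (ItemId),
    Payload xml NOT NULL);
GO
CREATE TRIGGER trg_ItemsXml_Validate
ON dbo.ItemsXml
AFTER INSERT, UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    -- ... shred inserted.Payload into rows and validate against the
    -- permanent lookup table here ...
END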
In Microsoft SQL Server:
I've added an insert trigger to my table ACCOUNTS that does an insert into table BLAH based upon the inserted values.
The inserts come from only one place, and those happen one at a time. (By that I mean that there are never two inserts in a transaction; two web users could, theoretically, click submit and have their inserts done in a near-simultaneous way.)
Do I need to adapt the trigger to handle more than one row being in inserted, the special table created for triggers - or does each individual insert transaction launch the trigger separately?
Each insert calls the trigger. However, if a single insert adds more than one row the trigger is only called once, so your trigger has to be able to handle multiple records.
The granularity is at the INSERT statement level not at the transaction level.
So no, if you have two transactions inserting into the same table, each will fire the trigger separately for its own INSERT statement.
In your situation each insert happens in its own statement and fires off the trigger individually, so you should be fine. If there were ever a circumstance where a single statement inserted multiple rows, you would have to modify the trigger to do either a set-based insert from the 'inserted' table, or some kind of cursor if additional processing is necessary (see the sketch below).
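A sketch of the set-based version, with the column lists on ACCOUNTS and BLAH invented for illustration:

CREATE TRIGGER trg_Accounts_Insert
ON dbo.ACCOUNTS
AFTER INSERT
AS
BEGIN
    SET NOCOUNT ON;

    -- Set-based: reads everything in inserted at once, so it behaves
    -- correctly whether the firing statement added one row or many.
    INSERT INTO dbo.BLAH (AccountId, CreatedAt)
    SELECT i.AccountId, GETDATE()
    FROM   inserted i;
END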
If you do only one insert in a transaction, I don't see any reason for more rows to be in inserted, except if there was a possibility of recursive trigger calls.
Still, it could cause you trouble if you changed the behavior of your application in the future and forgot to change the triggers. So just to be sure, I would rather implement the trigger as if inserted could contain multiple rows.