Suppose I have stored procedures that perform insert/update/delete operations on a table.
Depending on some criteria, I want to perform some additional operations.
Should I create a trigger, or do the operation in the stored procedure itself?
Does using triggers decrease performance?
Do the two tables, Inserted and Deleted, exist persistently, or are they created dynamically?
If they are created dynamically, is there a performance issue?
If they are persistent tables, then where are they?
Also, if they exist, can I access the Inserted and Deleted tables in stored procedures?
Will it be less performant than doing the same thing in a stored proc? Probably not, but as with all performance questions the only way to really know is to test both approaches with a realistic data set (if you have a 2,000,000-record table, don't test with a table of 100 records!).
That said, the choice between a trigger and another method depends entirely on the need for the action in question to happen no matter how the data is updated, deleted, or inserted. If this is a business rule that must always happen no matter what, a trigger is the best place for it or you will eventually have data integrity problems. Data in databases is frequently changed from sources other than the GUI.
When writing a trigger, though, there are several things you should be aware of. First, the trigger fires once per statement, not once per row, so whether you inserted one record or 100,000 records the trigger only fires once. You can never assume that only one record will be affected, nor can you assume that it will always be a small record set. This is why it is critical to write all triggers as if you are going to insert, update, or delete a million rows: that means set-based logic and no cursors or WHILE loops if at all possible. Do not take a stored proc written to handle one record and call it in a cursor inside a trigger.
Also, do not send emails from a trigger; you do not want to stop all inserts, updates, or deletes if the email server is down.
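To make that concrete, here is a minimal sketch of set-based trigger logic (the table and column names are made up for the example). One INSERT handles the whole inserted/deleted set, whether it holds one row or a million:

    -- Hypothetical audit requirement: record every price change on dbo.Product.
    CREATE TRIGGER dbo.trg_Product_PriceAudit
    ON dbo.Product
    AFTER UPDATE
    AS
    BEGIN
        SET NOCOUNT ON;

        -- One INSERT covers 1 or 1,000,000 updated rows; no cursor, no loop.
        INSERT INTO dbo.ProductPriceAudit (ProductId, OldPrice, NewPrice, ChangedAt)
        SELECT d.ProductId, d.Price, i.Price, SYSUTCDATETIME()
        FROM inserted AS i
        JOIN deleted  AS d ON d.ProductId = i.ProductId
        WHERE i.Price <> d.Price;   -- only rows whose price actually changed
    END;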
Yes, a table with a trigger will not perform as well as it would without it. Logic dictates that doing something is more expensive than doing nothing.
I think your question would be more meaningful if you asked in terms of whether it is more performant than some other approach that you haven't specified.
Ultimately, I'd select the tool that is most appropriate for the job and only worry about performance if there is a problem, not before you have even implemented a solution.
The inserted and deleted tables are only available inside the trigger, so accessing them from stored procedures is a no-go.
It decreases performance on the query by definition: the query is then doing something it otherwise wasn't going to do.
The other way to look at it is this: if you were going to do manually whatever the trigger does anyway, then the trigger increases performance by saving a round trip.
Take it a step further: that advantage disappears if you use a stored procedure, since you're already running within one server round trip anyway.
So it depends on how you look at it.
Performance on what? The trigger will perform an update on the DB after the event, so the user of your system won't even know it's going on; it happens in the background.
Your question is phrased in a manner quite difficult to understand.
If your operation is important and must never be missed, then you have two choices:
Execute the operation immediately after the update/delete, with durability.
Delay the operation by making it loosely coupled, with durability (see the queue-table sketch below).
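As a rough sketch of the second option (the table, trigger, and column names here are hypothetical): the trigger only enqueues a row inside the same transaction, which makes the hand-off durable, and a separate process does the real work later.

    -- Hypothetical queue table; a queued row commits in the same transaction
    -- as the original DML, so it is never lost.
    CREATE TABLE dbo.PendingOperations
    (
        QueueId     bigint IDENTITY(1,1) PRIMARY KEY,
        SourceKey   int       NOT NULL,
        Operation   char(1)   NOT NULL,            -- 'I', 'U' or 'D'
        EnqueuedAt  datetime2 NOT NULL DEFAULT SYSUTCDATETIME(),
        ProcessedAt datetime2 NULL
    );
    GO

    -- The trigger only enqueues; the expensive work happens elsewhere.
    CREATE TRIGGER dbo.trg_Customer_Enqueue
    ON dbo.Customer
    AFTER UPDATE
    AS
    BEGIN
        SET NOCOUNT ON;
        INSERT INTO dbo.PendingOperations (SourceKey, Operation)
        SELECT i.CustomerId, 'U'
        FROM inserted AS i;
    END;
    GO

    -- A scheduled job or Service Broker reader later processes rows
    -- WHERE ProcessedAt IS NULL, in QueueId order.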
We also faced the same issue. Our production MSSQL 2016 DB is > 1 TB with > 500 tables, and we needed to send changes (insert, update, delete) of a few columns from 20 important tables to a 3rd party. The number of business processes that update those few columns in the 20 important tables was > 200, and modifying them all is a tedious task because it's a legacy application. Our existing processes had to keep working without any dependency on the data sharing. The order of data sharing is important: FIFO must be maintained.
e.g. a user's mobile no. is 123-456-789, it changes to 123-456-123, and then again to 123-456-456;
the order of sending must be 123-456-789 --> 123-456-123 --> 123-456-456, and a subsequent request can only be sent if the response to the previous request was successful.
We created 20 new tables containing only the columns we want. We compare each main table with its new table (MainTable1 JOIN MainTale_LessCol1) using a checksum of all columns plus a TimeStamp column to identify changes.
Changes are logged in APIrequest tables and written back to MainTale_LessCol1. This logic runs in a scheduled job every 15 minutes.
A separate process picks rows from APIrequest and sends the data to the 3rd party.
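A simplified version of the comparison we run every 15 minutes (the real tables have far more columns; the column names here are just placeholders):

    -- Detect rows in the main table whose tracked columns differ from the
    -- last copy stored in the slim "_LessCol" table (simplified to 2 columns).
    SELECT m.Id, m.MobileNo, m.Email
    FROM dbo.MainTable1 AS m
    JOIN dbo.MainTale_LessCol1 AS c ON c.Id = m.Id
    WHERE CHECKSUM(m.MobileNo, m.Email) <> CHECKSUM(c.MobileNo, c.Email)
       OR m.LastUpdated > c.LastSynced;   -- timestamp column narrows the scan

    -- Matching rows are then written to the APIrequest table, and the slim
    -- table is updated so the next run starts from the new values.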
We explored:
Triggers
CDC (Change Data Capture)
Changing the 200+ processes
Since our deadlines were strict, the cumulative changes on those 20 tables were > 1,000/sec, and our system was already at peak capacity, our current design works for us.
You can try CDC and share your experience.
I have three tables: Table1, Table2, and Table3. Every day, data is updated in these tables. Table1 takes 30 min to update completely, Table2 takes 45 min, and Table3 takes 1 hr. I have to show updated data to users only after the update process for all three tables has completed.
What would be a possible way to achieve this?
Your most likely approach here is to use triggers on the tables to create logs. But there are some warnings that go along with that.
Firstly, triggers are like hidden functionality; they commonly get overlooked, and in the future it's likely that someone will be scratching their head wondering where the log data came from. They aren't bad practice, but that's human nature.
Secondly, triggers add overhead to the process. Every insert/update/delete on the initial tables will fire the trigger and write to the logs, and those writes take resources.
Thirdly, depending on how you decide to build your logs, there will be the overhead of comparing the "inserted" and "deleted" tables behind each initial write.
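As a rough illustration of that kind of log trigger (the table and column names are invented; adapt it to what you actually need to keep):

    -- One way to log: keep the old and new images side by side per row.
    CREATE TABLE dbo.Table1_Log
    (
        LogId    bigint IDENTITY(1,1) PRIMARY KEY,
        Action   char(1)       NOT NULL,          -- 'I', 'U' or 'D'
        Id       int           NOT NULL,
        OldValue nvarchar(100) NULL,
        NewValue nvarchar(100) NULL,
        LoggedAt datetime2     NOT NULL DEFAULT SYSUTCDATETIME()
    );
    GO

    CREATE TRIGGER dbo.trg_Table1_Log
    ON dbo.Table1
    AFTER INSERT, UPDATE, DELETE
    AS
    BEGIN
        SET NOCOUNT ON;

        INSERT INTO dbo.Table1_Log (Action, Id, OldValue, NewValue)
        SELECT CASE WHEN d.Id IS NULL THEN 'I'
                    WHEN i.Id IS NULL THEN 'D'
                    ELSE 'U' END,
               COALESCE(i.Id, d.Id),
               d.SomeValue,            -- NULL for inserts
               i.SomeValue             -- NULL for deletes
        FROM inserted AS i
        FULL OUTER JOIN deleted AS d ON d.Id = i.Id;
    END;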
Things to consider are: Do you want to keep the whole previous record and whole new record? Do you want to only store changes? How much space is this log going to take in the db? Do you want these triggers to only be active during the process or 24/7?
Triggers can cause issues but are likely to be the best option provided you are careful.
I am trying to run a lot of update statements from code, and we have a requirement to summarize what changed for every operation for an audit log.
The update basically persists an entire graph consisting of dozens of tables to SQL Server. Right now, before we begin, we collect the data from all the tables, assemble the graph(s) as a "before" picture, apply the updates, then re-collect the data from all the tables, re-assemble the graph(s) for the "after", serialize the before and after graph(s) to JSON, then create a message to an ESB queue for an off-process consumer to crunch through the graphs, identify the deltas, and update the audit log. All the sql operations occur in a single transaction.
Needless to say, this is an expensive and time-consuming process.
I've been playing with the OUTPUT clause in T-SQL. I like the idea of getting the results of the operation in the same command as the update, but it seems to have some limitations. For example, ideally it'd be great if I could get the INSERTED and DELETED result sets back at the same time, but there doesn't seem to be a concept of UNION between the two table sets, so that gets unwieldy very quickly. Also, because the updates don't actually modify every column, I can't take the changes I made and compare them to DELETED, since we'd show deltas for columns we didn't change.
...but maybe I'm missing some syntax with the OUTPUT command, or I'm not using it correctly, so I figured I'd ask the SO community.
What is the most efficient way to collect the deltas of an update operation in SQL Server? The goal is to minimize the calls to SQL Server, and collect the minimum necessary amount of information for writing an accurate audit log, without writing a bunch of custom code for every single operation.
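For context, this is roughly the shape of what I've been trying (the table and column names are placeholders); listing a deleted/inserted pair for every column is what gets unwieldy:

    DECLARE @PersonId int = 42,
            @Name     nvarchar(100) = N'New Name',
            @Email    nvarchar(200) = N'new@example.com';

    DECLARE @Audit TABLE
    (
        PersonId int,
        OldName  nvarchar(100),
        NewName  nvarchar(100),
        OldEmail nvarchar(200),
        NewEmail nvarchar(200)
    );

    UPDATE dbo.Person
    SET Name  = @Name,
        Email = @Email
    OUTPUT deleted.PersonId,
           deleted.Name,  inserted.Name,   -- one old/new pair per column...
           deleted.Email, inserted.Email   -- ...for dozens of tables
    INTO @Audit (PersonId, OldName, NewName, OldEmail, NewEmail)
    WHERE PersonId = @PersonId;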
We have a fairly large stored proc that merges two people found in our system with similar names. It deletes and updates rows in many different tables (about 10 or so, all very different). It's all wrapped in a transaction and rolls back if it fails, of course. This may be a dumb question, but is it possible to somehow store and roll back just this specific transaction at a later time, without having to create and insert into many "history" tables that track exactly what happened? I don't want to restore the whole database, just the results of one stored procedure's specific transaction, and at a later date.
It sounds like you may want to investigate Change Data Capture.
It will still be capturing data as it changes, and if you're only doing it for one execution or for a very small amount of data, other methods may be better.
Once a transaction has been committed, it's not possible to roll back just that one transaction at a later date. You're "committed" in quite a literal sense. You can obviously restore from a backup and roll back every transaction since a particular point, but that's probably not what you are looking for.
So making audit tables is about your only option. As another answer pointed out, you can use Change Data Capture, but unless you have forked out the big money for Enterprise Edition, that isn't an option for you. If you're just interested in undoing this particular type of change, it's probably easiest to add some code to the procedure that does the merge to store the data records necessary to re-split them, and to create a procedure that does the actual split. But keep in mind that you must handle any changes to the "merged" data that might break your ability to perform the split. This is why SQL can't do it for you automatically: it doesn't know how you might want to handle changes to the data that occur after your original transaction.
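A rough sketch of that idea, with made-up table and column names: before the merge re-points or deletes anything, snapshot the affected rows keyed by a merge id so a later "split" procedure can read them back.

    -- Hypothetical: dbo.PhoneNumber is one of the ~10 tables the merge touches.
    CREATE TABLE dbo.PhoneNumber_MergeHistory
    (
        MergeId  uniqueidentifier NOT NULL,
        PhoneId  int              NOT NULL,
        PersonId int              NOT NULL,   -- the pre-merge owner
        Number   varchar(20)      NOT NULL,
        SavedAt  datetime2        NOT NULL DEFAULT SYSUTCDATETIME()
    );
    GO

    -- Inside the merge procedure, before re-pointing or deleting rows:
    DECLARE @MergeId uniqueidentifier = NEWID(),
            @LoserPersonId int = 123;         -- the person being merged away

    INSERT INTO dbo.PhoneNumber_MergeHistory (MergeId, PhoneId, PersonId, Number)
    SELECT @MergeId, PhoneId, PersonId, Number
    FROM dbo.PhoneNumber
    WHERE PersonId = @LoserPersonId;

    -- A matching "split" procedure can later read these rows back by MergeId,
    -- provided nothing since the merge has invalidated them.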
Suppose I have a table which contains relevant information. However, the data is only relevant for, let's say, 30 minutes.
After that it's just database junk, so I need to get rid of it asap.
If I wanted, I could clean this table periodically, setting an expiration date/time for each record individually and deleting expired records through a job or something. This is my #1 option, and it's what will be done unless someone convinces me otherwise.
But I think this solution may be problematic. What if someone stops the job from running and no one notices? I'm looking for something like a built-in way to insert temporary data into a table. Or a table that has "volatile" data itself, in a way that it automagically removes data after x amount of time after its insertion.
And last but not least, if there's no built-in way to do that, could I be able to implement this functionality in SQL server 2008 (or 2012, we will be migrating soon) myself? If so, could someone give me directions as to what to look for to implement something like it?
(Sorry if the formatting ends up bad, first time using a smartphone to post on SO)
As another answer indicated, TRUNCATE TABLE is a fast way to remove the contents of a table, but it's aggressive; it will completely empty the table. Also, there are restrictions on its use; among others, it can't be used on tables which "are referenced by a FOREIGN KEY constraint".
Any more targeted removal of rows will require a DELETE statement with a WHERE clause. Having an index on relevant criteria fields (such as the insertion date) will improve performance of the deletion and might be a good idea (depending on its effect on INSERT and UPDATE statements).
You will need something to "trigger" the DELETE statement (or TRUNCATE statement). As you've suggested, a SQL Server Agent job is an obvious choice, but you are worried about the job being disabled or removed. Any solution will be vulnerable to someone removing your work, but there are more obscure ways to trigger an activity than a job. You could embed the deletion into the insertion process-- either in whatever stored procedure or application code you have, or as an actual table trigger. Both of those methods increase the time required for an INSERT and, because they are not handled out of band by the SQL Server Agent, will require your users to wait slightly longer. If you have the right indexes and the table is reasonably-sized, that might be an acceptable trade-off.
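Whichever mechanism fires it, the statement itself stays simple; a sketch, assuming a hypothetical table with an InsertedAt column:

    -- An index on the expiry criterion keeps the DELETE cheap.
    CREATE INDEX IX_RelevantData_InsertedAt ON dbo.RelevantData (InsertedAt);

    -- Run from a SQL Server Agent job, or embedded in the insert path:
    -- remove anything older than 30 minutes.
    DELETE FROM dbo.RelevantData
    WHERE InsertedAt < DATEADD(MINUTE, -30, SYSUTCDATETIME());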
There isn't any other capability that I'm aware of for SQL Server to just start deleting data. There isn't automatic data retention policy enforcement.
See @Yuriy's comment; it's relevant.
If you really need to implement it DB-side...
TRUNCATE TABLE is a fast way to get rid of records.
If all you need is ONE table that you just fill with data, use, and dispose of ASAP, you can consider truncating a (permanent) "CACHE_TEMP" table.
The scenario becomes more complicated if you are running concurrent threads/jobs and each is handling its own data.
If the data only exists for a single "job"/context, you can consider using #TEMP tables. They are a bit volatile and may be what you are looking for.
You could also use table variables; they are a bit more volatile than temporary tables, but it depends on details you haven't posted, so I cannot say which is really better.
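A quick sketch of both (the table and column names are made up): the #temp table lives for the session or procedure that created it, while the table variable lives only for the batch.

    -- Session/procedure-scoped: dropped automatically when the session ends
    -- (or when the creating stored procedure finishes).
    CREATE TABLE #WorkingSet (Id int PRIMARY KEY, Payload nvarchar(200));
    INSERT INTO #WorkingSet (Id, Payload) VALUES (1, N'only lives for this job');

    -- Batch-scoped: gone as soon as the batch completes.
    DECLARE @WorkingSet TABLE (Id int PRIMARY KEY, Payload nvarchar(200));
    INSERT INTO @WorkingSet (Id, Payload) VALUES (1, N'even more short-lived');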
Greetings,
Recently I've started to work on an application where 8 different modules use the same table at some point in the workflow. This table has an INSTEAD OF trigger which is 5,000 lines long (the first 500 and last 500 lines are common to all modules, and then each module has its own 500 lines of code).
Since the number of modules is going to grow and I want to keep things as clear (and separate) as possible, I was wondering: is there some sort of best practice for splitting the trigger into stored procedures, or should I leave it all in one place?
P.S. Are there going to be any performance penalties for calling procedures from the trigger and passing 15+ parameters to them?
Bearing in mind that the inserted and deleted pseudo-tables are only accessible from within trigger code, and that they can contain multiple rows, you're facing two choices:
Process the rows in inserted and deleted in a RBAR1 fashion, so that you can pass scalar parameters to the stored procedures, or,
Copy all of the data from inserted and deleted into table variables that are then passed to the procedures as appropriate (a sketch follows at the end of this answer).
I'd expect either approach to impose some2 performance overhead, just from the copying.
That being said, it sounds like too much is happening inside the triggers themselves - does all of this code have to be part of the same transaction that performed the DML statement? If not, consider using some form of queue (a table of requests or Service Broker, say) in which to place information on work to perform, and then process the data later - if you use Service Broker, you could have it inspect a shared message and then send appropriate messages to dedicated endpoints for each of your modules, as appropriate.
1 Row By Agonizing Row: using either a cursor or something else that simulates one to access each row in turn; usually frowned upon in a set-based language like SQL.
2 How much is impossible to know without getting into the specifics of your code and probably trying all possible approaches and measuring the result.
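A minimal sketch of the second option (the table type, table, trigger, and procedure names are all assumptions, and an AFTER trigger is shown for brevity; the same pattern works inside an INSTEAD OF trigger). Passing a table variable to a procedure requires a user-defined table type and a READONLY parameter:

    -- Table type used to hand the inserted rows to the module procedures.
    CREATE TYPE dbo.PersonChangeList AS TABLE
    (
        PersonId  int          NOT NULL,
        FirstName nvarchar(50) NULL,
        LastName  nvarchar(50) NULL
    );
    GO

    CREATE PROCEDURE dbo.Module1_HandleChanges
        @Changes dbo.PersonChangeList READONLY   -- TVP parameters must be READONLY
    AS
    BEGIN
        SET NOCOUNT ON;
        -- module-specific, set-based work against @Changes goes here
    END;
    GO

    CREATE TRIGGER dbo.trg_Person_Modules
    ON dbo.Person
    AFTER INSERT, UPDATE, DELETE
    AS
    BEGIN
        SET NOCOUNT ON;

        DECLARE @Ins dbo.PersonChangeList;
        INSERT INTO @Ins (PersonId, FirstName, LastName)
        SELECT PersonId, FirstName, LastName
        FROM inserted;

        EXEC dbo.Module1_HandleChanges @Changes = @Ins;
        -- ...one call per module, or a dispatcher that decides which to call.
    END;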
I don't think there is a meaningful performance penalty in this case.
Anyway, it is bad practice to write it all inside the trigger (when it is 5,000 lines long...).
I think the main consideration is maintainability, which will be much better if you split it into several SPs.