I need to know what "asynchronous trigger" means. Is there a difference between asynchronous triggers and the normal triggers used in SQL Server before or after inserting, updating, or deleting?
I think you're getting confused with Service Broker.
Triggers always execute synchronously, in the context of a given transaction. If you need to invoke an asynchronous process from within a trigger, use Service Broker.
It's basically a queue: you send messages to it, then go on about your business without waiting for them to be processed.
There is a lot more to it than that, though, so have a read of the link.
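To make that concrete, here is a minimal sketch of a trigger that hands work off to a Service Broker queue. All the service, contract, and message-type names here are made up for illustration, and the sketch assumes those objects already exist:

```sql
CREATE TRIGGER trg_Orders_Insert ON dbo.Orders
AFTER INSERT
AS
BEGIN
    DECLARE @h UNIQUEIDENTIFIER;

    -- Capture the inserted rows as a single XML payload.
    DECLARE @body XML =
        (SELECT OrderId FROM inserted FOR XML PATH('Order'), TYPE);

    -- Open a dialog to the processing service (hypothetical names).
    BEGIN DIALOG CONVERSATION @h
        FROM SERVICE [//Demo/SenderService]
        TO SERVICE   '//Demo/ProcessingService'
        ON CONTRACT  [//Demo/Contract]
        WITH ENCRYPTION = OFF;

    -- Queue the work and return immediately. The message is only
    -- delivered if the surrounding transaction commits, so the
    -- asynchronous work stays consistent with the data change.
    SEND ON CONVERSATION @h
        MESSAGE TYPE [//Demo/Message] (@body);
END;
```

The trigger itself still runs synchronously; only the processing of the message happens asynchronously, after the transaction commits.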
Related
I'm optimizing a legacy application and only have access to the database, not the UI code.
There is a specific table insert occurring that I need to catch, so I can do some additional processing, but without adding any obvious lag.
I'm thinking of adding an INSERT trigger which places an entry on a queue, so that processing can continue after returning.
SQL queues using the Service Broker seem to require a 2-way conversation, therefore 2 queues. I don't need this so another possibility could be to use a table as a queue.
But if I use a table, is there a way for entries to be processed very soon after they arrive, as they would be with a Service Broker queue? Or is the only option to run checks on a schedule? That would be a pain if I need to add more of these later.
A trigger on that "Queue" table would also not be a great idea as it would add to performance lag.
Of course I could always just ignore the response queue, but that doesn't feel right.
This is more of a philosophical question than a programmatic one. I have been tasked with implementing various and sundry security enhancements to our MS SQL 2014 servers. I have a simple trigger as follows:
CREATE TRIGGER send_db_email_notification
ON ALL SERVER
AFTER CREATE_DATABASE,
      ALTER_DATABASE,
      DROP_DATABASE,
      CREATE_FUNCTION,
      ALTER_FUNCTION,
      CREATE_TRIGGER,
      ALTER_TRIGGER,
      DROP_TRIGGER
AS
    -- do stuff to send mail
GO
ENABLE TRIGGER send_db_email_notification ON ALL SERVER;
The trigger works as expected: I receive an email whenever a database is created, altered, or dropped, or a function is created or altered. The problem is the DROP_TRIGGER event.
If someone comes in and drops this same trigger, it cannot execute itself to notify that it was dropped, because it no longer exists. In other words, the trigger cannot signal its own deletion. At least that is what I assume is happening.
What is the best way to detect that the trigger which watches for trigger deletions has itself been deleted? Am I really supposed to create a second trigger that watches for trigger deletions, so that if either one is dropped, the other can still fire? That seems awkward and clunky, and I would also get two notifications whenever any other trigger was deleted, unless I coded the second trigger to look only for deletion of the first.
Is there a more elegant way to handle this scenario?
I need to POST (HTTP method) some info to an external URL when a trigger executes.
I know there are a lot of security and performance implications to using triggers, so I am afraid a trigger is not the place to do this kind of processing. But I am posting this anyway to get feedback or ideas on how to approach the problem. Some considerations:
The work fired by the trigger can run asynchronously.
The process must take care of authorization.
The end URL is a php script on the internet.
What really triggers this execution should be an insert or an update of one record to a table, so I must use this trigger since I can't touch the (third party) application.
On a side note, could the Service Broker be something to consider ?
Any ideas will be welcome.
You are right, this is not something you want to do in a trigger. The last thing you want in your application is to introduce the latency of an HTTP request into every update, insert, and delete; it will be very visible even when things work well. And when things go wrong, they will go very wrong: the added coupling will cause your application to fail whenever the HTTP resource has availability problems, and even worse are the correctness issues around rollbacks (the transaction that fired the trigger may roll back, but the HTTP call has already been made).
This is why it is paramount to introduce a layer that decouples the trigger from the HTTP call, and this is done via a queue. Whether it is a table used as a queue, a Service Broker queue, or even an MSMQ queue is your call to make. The simplest solution is to use a table as a queue:
the trigger enqueues (inserts) a request for the HTTP call to be made
after the transaction that run the trigger commits, the request is available to dequeue
an external application that monitors (polls) the queue picks up the request and places the HTTP call
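The steps above could be sketched like this. All table, trigger, and column names are hypothetical:

```sql
-- A queue table: the trigger only does a cheap local INSERT.
CREATE TABLE dbo.HttpCallQueue (
    QueueId    INT IDENTITY PRIMARY KEY,
    Payload    NVARCHAR(MAX) NOT NULL,
    EnqueuedAt DATETIME2 NOT NULL DEFAULT SYSUTCDATETIME()
);
GO
CREATE TRIGGER trg_Target_Enqueue ON dbo.TargetTable
AFTER INSERT
AS
    -- Step 1: enqueue a request. It commits or rolls back together
    -- with the transaction that fired the trigger.
    INSERT INTO dbo.HttpCallQueue (Payload)
    SELECT SomeColumn FROM inserted;
GO
-- Steps 2-3: an external poller dequeues atomically and makes the
-- HTTP call. READPAST + ROWLOCK lets several pollers run without
-- blocking each other on locked rows.
DELETE TOP (10)
FROM dbo.HttpCallQueue WITH (ROWLOCK, READPAST)
OUTPUT deleted.QueueId, deleted.Payload;
```

The DELETE ... OUTPUT pattern both removes the rows and returns them to the caller in a single atomic statement, which is what makes a plain table usable as a queue.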
The advantage of Service Broker over custom tables-as-queues is internal activation, which lets your HTTP-handling code run on demand when there are items in the queue, instead of polling. But making the HTTP call from inside the engine, via SQLCLR, is ill-advised. An external process is much better suited to something like HTTP, so the added complexity of Service Broker is not warranted here.
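For completeness, internal activation is just a property of the queue. A minimal sketch, assuming the queue and a handler procedure (both names hypothetical) already exist:

```sql
-- Ask the engine to launch the handler procedure whenever messages
-- arrive on the queue, instead of having an external poller.
ALTER QUEUE dbo.HttpRequestQueue
WITH ACTIVATION (
    STATUS = ON,
    PROCEDURE_NAME = dbo.usp_ProcessHttpRequests, -- your handler proc
    MAX_QUEUE_READERS = 4,  -- cap on concurrent activated readers
    EXECUTE AS OWNER
);
```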
I'm going to do async auditing on my SQL Server 2008 as shown here: http://auoracle.blogspot.com/2010/02/service-broker-master-audit-database.html
What it does is:
a trigger sends a message to a queue in the service broker
a stored procedure in another database receives the messages and processes them
The possible problem I see is that it's using a single conversation to send all the messages in order, which is a requirement.
I'm just a little concerned about the fact it's using a single conversation, I guess it's not the common usage. Do you know if there's any problem on doing so?
Thanks!
There's nothing wrong with using a single conversation. Some people use conversation pooling with several pre-created conversations, but unless you're hitting a performance bottleneck, I wouldn't worry about it.
One thing you should get right is error handling: closing the conversation and opening a new one in case of error.
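A sketch of that pattern, with all object names assumed for illustration: reuse one cached conversation handle and, if a send fails, end the broken dialog and start a fresh one.

```sql
DECLARE @payload XML = N'<audit/>';  -- placeholder payload
DECLARE @h UNIQUEIDENTIFIER =
    (SELECT TOP (1) Handle FROM dbo.CachedConversation);

BEGIN TRY
    -- Normal path: reuse the single cached conversation.
    SEND ON CONVERSATION @h
        MESSAGE TYPE [//Audit/Message] (@payload);
END TRY
BEGIN CATCH
    -- The old conversation is unusable: close it and open a new one.
    END CONVERSATION @h WITH CLEANUP;

    BEGIN DIALOG CONVERSATION @h
        FROM SERVICE [//Audit/Sender]
        TO SERVICE   '//Audit/Target'
        ON CONTRACT  [//Audit/Contract]
        WITH ENCRYPTION = OFF;

    UPDATE dbo.CachedConversation SET Handle = @h;

    -- Retry the send on the fresh conversation.
    SEND ON CONVERSATION @h
        MESSAGE TYPE [//Audit/Message] (@payload);
END CATCH;
```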
I have an application that consists of a database and several services. One of these services adds information to the database (triggered by a user).
Another service periodically queries the database for changes and uses the new data as input for processing.
Until now I have used a configurable timer that queries the database every 30 seconds or so. I read about SQL Server 2005 featuring notification of changes. However, in SQL Server 2008 this feature is deprecated.
What is the best way of getting notified of changes that occurred in the database directly in code? What are the best practices?
Notification Services was deprecated, but you don't want to use that anyway.
You might consider Service Broker messages in some scenarios; the details depend on your app.
In most cases, you can probably use SqlDependency or SqlCacheDependency. The way they work is that you include a SqlDependency object with your query when you issue it. The query can be a single SELECT or a complex group of commands in a stored procedure.
Sometime later, if another web server or user or web page makes a change to the DB that might cause the results of the previous query to change, then SQL Server will send a notification to all servers that have registered SqlDependency objects. You can either register code to run when those events arrive, or the event can simply clear an entry in the Cache.
Although you need to enable Service Broker to use SqlDependency, you don't need to interact with it explicitly. However, you can also use it as an alternative mechanism; think of it more as a persistent messaging system that guarantees message order and once-only delivery.
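Enabling Service Broker for SqlDependency is a one-time database setting. The database name here is a placeholder:

```sql
-- SqlDependency requires Service Broker to be enabled on the database.
-- ROLLBACK IMMEDIATE kicks out other connections so the ALTER can
-- take the exclusive lock it needs; run this in a maintenance window.
ALTER DATABASE MyAppDb SET ENABLE_BROKER WITH ROLLBACK IMMEDIATE;
```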
The details of how to use these systems are a bit long for a forum post. You can either Google for them, or I also provide examples in my book (Ultra-Fast ASP.NET).
Yes, this blog post explains that Notification Services is now deprecated, and also what the replacements or alternatives are, going forward.
For your purposes - getting notified of changes that occurred in the database - it sounds like you want SQL Server Change Tracking. But the notification is a pull model: your app has to query the change table.
I failed to figure out whether SqlDependency continues to work now that Notification Services is deprecated.
There are a number of different ways of tracking changes in the database: triggers that maintain temporal structures such as backlogs, tracking logs (aka "audit tables"), or the change-tracking facilities in SQL Server 2008 referenced in another answer. Irrespective of which mechanism you use, you have the problem of notifying your homegrown service of the change. For this, you can use Service Broker and event-based activation. From what you describe, it sounds like having the application wait on an event from the queue would fit.
http://msdn.microsoft.com/en-us/library/ms171581.aspx
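The waiting itself is a blocking RECEIVE rather than a polling loop. A sketch, with a hypothetical queue name:

```sql
-- Block for up to 5 seconds waiting for a notification message,
-- instead of re-querying the tables on a timer. Returns an empty
-- result set if the timeout expires with no message.
WAITFOR (
    RECEIVE TOP (1)
        conversation_handle,
        message_type_name,
        CAST(message_body AS XML) AS body
    FROM dbo.ChangeNotificationQueue
), TIMEOUT 5000;
```

The application loops on this call: each iteration either gets a message to process or wakes up after the timeout to check for shutdown.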
If you don't want the service hanging around sleeping on the queue, you can look into firing it into life on demand using Service Broker's external activation mechanism.
You can use the System.Data.SqlClient.SqlDependency (which works with Service Broker on) to subscribe to changes in a table.