How to avoid paradox of DDL trigger notifying of dropped triggers - sql-server

This is more of a philosophical question than a programmatic one. I have been tasked with implementing various and sundry security enhancements on our MS SQL Server 2014 servers. I have a simple trigger as follows:
CREATE TRIGGER send_db_email_notification
ON ALL SERVER
AFTER CREATE_DATABASE, ALTER_DATABASE, DROP_DATABASE,
      CREATE_FUNCTION, ALTER_FUNCTION,
      CREATE_TRIGGER, ALTER_TRIGGER, DROP_TRIGGER
AS
BEGIN
    -- do stuff to send mail
END;
GO

ENABLE TRIGGER send_db_email_notification ON ALL SERVER;
The trigger works as expected: I receive an email whenever a database is created, altered, or dropped, or a function is created or altered. The problem is the DROP_TRIGGER event.
If someone comes in and drops this same trigger, it cannot execute itself to notify that it was dropped, because it no longer exists. In other words, the trigger cannot signal its own deletion. At least that is what I assume is happening.
What is the best way to test whether my trigger that watches for trigger deletions has not itself been deleted? Am I really supposed to create a second trigger to watch for trigger deletions, so that if either one is deleted, the other can still fire? That seems awkward and clunky, and I would also get two notifications whenever some other trigger was deleted, unless I coded the second trigger to look specifically for deletion of the first.
Is there a more elegant way to handle this scenario?
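One hedged workaround (not from the original thread): instead of asking the trigger to report its own demise, have an independent watchdog, such as a scheduled SQL Agent job, verify that the trigger still exists. The mail profile and recipient below are assumptions for illustration:

```sql
-- Sketch of a periodic existence check, e.g. run from a SQL Agent job.
-- 'AlertProfile' and the recipient address are assumed names.
IF NOT EXISTS (SELECT 1
               FROM sys.server_triggers
               WHERE name = N'send_db_email_notification')
BEGIN
    EXEC msdb.dbo.sp_send_dbmail
        @profile_name = N'AlertProfile',
        @recipients   = N'dba@example.com',
        @subject      = N'DDL audit trigger is missing',
        @body         = N'send_db_email_notification was dropped.';
END
```

Because the check lives outside the trigger, dropping the trigger cannot silence it; an attacker would have to disable the job as well.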

Related

SQL table queue - run procedure on data receipt

I'm optimizing a legacy application and only have access to the database, not the UI code.
There is a specific table insert occurring that I need to catch, so I can do some additional processing, but without adding any obvious lag.
I'm thinking of adding an INSERT trigger which places an entry on a queue, so that processing can continue after returning.
SQL queues using Service Broker seem to require a two-way conversation, and therefore two queues. I don't need that, so another possibility could be to use a table as a queue.
But if I use a table, is there a way for entries to be processed very soon after they arrive, as would be the case with a Service Broker queue? Is the only option to have checks running on a schedule? That would be a pain if I need to add more of these.
A trigger on that "Queue" table would also not be a great idea as it would add to performance lag.
Of course I could always just ignore the response queue, but that doesn't feel right.
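If you do go the table-as-queue route, the usual pattern is to claim rows atomically with `READPAST` and `UPDLOCK` hints so multiple consumers can drain the queue without blocking each other. A minimal sketch, with the table and column names as assumptions:

```sql
-- Sketch: atomically claim one row from a table used as a queue.
-- dbo.WorkQueue(id, payload, processed) is an assumed schema.
DECLARE @claimed TABLE (id int, payload nvarchar(max));

;WITH next_item AS (
    SELECT TOP (1) id, payload, processed
    FROM dbo.WorkQueue WITH (ROWLOCK, READPAST, UPDLOCK)
    WHERE processed = 0
    ORDER BY id
)
UPDATE next_item
SET processed = 1
OUTPUT inserted.id, inserted.payload INTO @claimed;

-- Zero or one row lands in @claimed and is now safe to process;
-- READPAST lets concurrent consumers skip locked rows instead of waiting.
```

The consumer still needs a driver (a loop with a short `WAITFOR DELAY`, or an Agent job), which is the latency trade-off versus Service Broker's activation.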

Is there equivalent of SQLdependency in AzureSQL?

I have two apps. One inserts into an Azure SQL DB and the other reads from it. I want the second app to cache query results and invalidate the cache only when something has changed in the table/query results. In standalone SQL Server this was possible via the SqlDependency (or SqlCacheDependency) mechanism. As far as I understand, in Azure SQL this mechanism is unavailable: it requires the Service Broker component to be enabled, and there is no such component in Azure SQL.
I apologize if I am repeating already-asked questions, but all the answers date from 2012 or so. Have there been any changes? It's 2017.
So the question is: what is the mechanism for informing an application (say, ASP.NET) about changes in Azure SQL?
PS: I know there is the related "Change Tracking" feature, but that is about inserting records about changes into a special table; it stays "within" the database. I need to inform an app outside of the DB.
To my understanding, SqlDependency works by using a DependencyListener, which is an implementation of RepositoryListener and relies on Service Broker; as you stated, Azure SQL does not support Service Broker. But you could use the PollingListener implementation of RepositoryListener to detect a change.
"The PollingListener will run until cancelled and will simply compare the result of the query against until change is detected. Once it is detected, a callback method will be called"
(Source1)
(Source 2)
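The polling comparison can be kept cheap on the database side by computing a change token over the watched rows and only re-running the real query when the token moves. A sketch, with the table name as an assumption (note that checksum-based tokens can, rarely, miss offsetting changes):

```sql
-- Sketch: a cheap change signature for a polled table. dbo.MyTable is assumed.
SELECT CHECKSUM_AGG(CHECKSUM(*)) AS change_token
FROM dbo.MyTable;

-- The app caches the token; when a later poll returns a different value,
-- it invalidates the cached query results and re-reads the data.
```

A `rowversion` column with `MAX(rowversion_col)` is a more robust token if you can alter the table.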

Asynchronous triggers in SQL Server

I need to know the meaning of "asynchronous trigger", and whether there is a difference between asynchronous triggers and the normal triggers used in SQL Server after or before inserting, updating, or deleting.
I think you're getting confused with Service Broker.
Triggers always execute synchronously, in the context of a given transaction. If you need to invoke an asynchronous process from within a trigger, use a Service Broker.
It's basically like a queue: you send things to it, then go on about your business without waiting for them to finish.
However, there is a lot more to it than that, have a read of the link.
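To make the hand-off concrete, here is a minimal sketch of the Service Broker objects a trigger would need to queue work and return immediately. All object names are assumptions; a real setup would also attach an activation procedure to the queue to process messages:

```sql
-- Sketch: minimal Service Broker plumbing for async work. Names assumed.
CREATE QUEUE dbo.AsyncWorkQueue;
CREATE SERVICE AsyncWorkService
    ON QUEUE dbo.AsyncWorkQueue ([DEFAULT]);
GO

-- Inside the trigger body: enqueue and return without waiting.
DECLARE @h uniqueidentifier;
BEGIN DIALOG CONVERSATION @h
    FROM SERVICE AsyncWorkService
    TO SERVICE 'AsyncWorkService'
    ON CONTRACT [DEFAULT]
    WITH ENCRYPTION = OFF;
SEND ON CONVERSATION @h (N'<work item details>');
```

The trigger's transaction commits as usual; the queued message is processed later, outside the caller's transaction.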

Sending mail when a table is updated

My name is Tayyeb; I have recently finished my course in SQL Server 2005 and am currently working as a Windows system administrator.
I am a newbie to databases. My question: we have a database, and if a table gets updated I'd like to receive an email saying what has been updated.
Can anyone help me on this solution?
Thanks in advance
You would want to set up insert and update triggers on the table and have them call the msdb.dbo.sp_send_dbmail stored procedure.
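A minimal sketch of such a trigger, kept to SQL Server 2005-compatible syntax. The table name, mail profile, and recipient are assumptions; Database Mail must already be configured:

```sql
-- Sketch: notify by email when rows are inserted or updated.
-- dbo.Orders, 'AlertProfile', and the recipient are assumed names.
CREATE TRIGGER dbo.trg_Orders_Notify
ON dbo.Orders
AFTER INSERT, UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    DECLARE @rows int, @body nvarchar(200);
    SELECT @rows = COUNT(*) FROM inserted;
    SET @body = N'dbo.Orders changed: '
              + CAST(@rows AS nvarchar(10)) + N' row(s) affected.';

    EXEC msdb.dbo.sp_send_dbmail
        @profile_name = N'AlertProfile',
        @recipients   = N'dba@example.com',
        @subject      = N'Table update notification',
        @body         = @body;
END;
```

Note that sp_send_dbmail runs inside the triggering transaction, so a busy table can generate a lot of mail; for high-volume tables the polling approach below scales better.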
Create a table that stores the datetime for the last update in that particular table.
Set up a trigger for your table that updates the datetime on an update.
Have an external application poll the datetime at a regular interval, and if it is changed, send an e-mail.
Using a trigger is a given, and either solution (Database Mail or a polling process) will work. If you go with a polling process, make the polling interval something you can change while the poller is running, if possible. The problem you will run into is testing and debugging: you won't want to wait out the full polling interval. If the interval is 5 minutes, you either have to restart the poller or have a separate polling interval just for checking whether the main interval has changed (can we say recursive?). So write the poller with debugging and testing in mind.
That might be enough to convince you to use the Database Mail solution. I've never used it myself, so others will have to speak to that.

How do I rollback a transaction that has already been committed?

I am implementing an undo button for a lengthy operation on a web app. Since the undo will come in another request, I have to commit the operation.
Is there a way to issue a hint on the transaction like "maybe rollback"? So after the transaction is committed, I could still rollback in another processes if needed.
Otherwise the Undo function will be as complex as the operation it is undoing.
Is this possible? Other ideas welcome!
Another option you might want to consider:
If it's possible for the user to 'undo' the operation, you may want to implement an 'intent' table where you can store pending operations. Once you go through your application flow, the user would need to Accept or Undo the operation, at which point you can just run the pending transaction and apply it to your database.
We have a similar system in place on our web application, where a user can submit a transaction for processing and has until 5pm on the day it's scheduled to run to cancel it. We store this in an intent table and process any transactions scheduled for that day after the daily cutoff time. In your case you would need an explicit 'Accept' or 'Undo' operation from the user after the initial 'lengthy operation', so that would change your process a little bit.
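An intent table along those lines might look like this; the schema and status values are assumptions for illustration:

```sql
-- Sketch: pending operations awaiting Accept or Undo. Names assumed.
CREATE TABLE dbo.PendingOperations (
    id            int IDENTITY PRIMARY KEY,
    payload       nvarchar(max) NOT NULL,  -- description of the work to apply
    scheduled_for date          NOT NULL,
    status        varchar(10)   NOT NULL DEFAULT 'Pending'
                  -- 'Pending' / 'Accepted' / 'Undone'
);

-- "Undo" is then just a status change, never a data rollback:
-- UPDATE dbo.PendingOperations
-- SET status = 'Undone'
-- WHERE id = @id AND status = 'Pending';
```

The guarded `status = 'Pending'` predicate keeps an Undo from racing the batch job that applies accepted operations.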
Hope this helps.
The idea in this case is to log, for each operation, a counter-operation that does the opposite, in a special log; when you need to roll back, you actually run the commands you have logged.
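A sketch of such a compensating-action log; the table schema and the example undo statement are assumptions:

```sql
-- Sketch: a log of compensating statements recorded alongside each operation.
CREATE TABLE dbo.UndoLog (
    id           int IDENTITY PRIMARY KEY,
    operation_id uniqueidentifier NOT NULL,
    undo_sql     nvarchar(max)    NOT NULL,
    logged_at    datetime2 NOT NULL DEFAULT SYSUTCDATETIME()
);

-- While performing the operation, record its inverse:
DECLARE @opId uniqueidentifier = NEWID();
INSERT INTO dbo.UndoLog (operation_id, undo_sql)
VALUES (@opId,
        N'UPDATE dbo.Accounts SET balance = balance + 100 WHERE id = 42;');

-- To "roll back" after commit, replay the inverses newest-first:
-- SELECT undo_sql FROM dbo.UndoLog
-- WHERE operation_id = @opId ORDER BY id DESC;
-- and run each one with sp_executesql.
```

The caveat is that the inverses are only valid if no later operation has touched the same rows, so this works best when undo windows are short.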
Some databases have flashback technology, where you can ask the DB to go back to a certain date and time. But you need to understand how it works, and make sure it will only affect the data you want and not other stuff...
Oracle Flashback
I don't think there is a similar technology in SQL Server, and there is an SO answer that says there isn't, but SQL keeps evolving...
I'm not sure what technologies you're using here, but there are certainly better ways of going about this. You could start by storing the data in the session and committing when they are on the final page. Many frameworks these days also allow for long running transactions that span multiple requests.
However, you will probably want to commit at the end of every page and simply set some sort of flag for when the process is completed. That way if something goes wrong in the middle of it the user can recover and all is not lost.