Calling a PowerShell script from a SQL Server trigger - sql-server

Is it possible to use PowerShell's "Send-MailMessage -SmtpServer" command from a SQL Server trigger?
I am trying to send emails when rows in the database are updated or new rows are created. I am not able to use Database Mail due to security restrictions. However, I can send emails through PowerShell's Send-MailMessage command.

First off, this is almost certainly a very bad idea. Keep in mind that triggers can cause unexpected issues in terms of transaction escalation and holding locks longer than necessary while they're processing. Also keep in mind that people will probably not expect there to be triggers of this sort on your table, and that they'll try to do CRUD operations on it like it's a normal table and not understand why their applications are timing out.
That said, you could do this at least three ways:
Enable xp_cmdshell, and use that to shell out to PowerShell, as explained here: Running Powershell scripts through SQL - but don't do this, because xp_cmdshell is a security risk and this is very likely to cause problems for you in one way or another (whether because someone uses it in a damaging manner or because PowerShell just fails and you don't even know why). If you can't use Database Mail due to security restrictions, you should definitely not be using xp_cmdshell, which has even more security concerns! (Purely for illustration, a sketch of what this would look like appears after this list of options.)
Instead of using PowerShell, configure Database Mail and have your trigger call sp_send_dbmail - but don't do this either, because it could easily fail or cause problems for your updates (e.g. the SMTP server goes down and your table can't be updated anymore). (I wrote this part before I saw you can't use it because of security restrictions.)
One other option comes to mind that may be more secure, but still not ideal - create a SQL CLR library that actually sends the mail using the .NET SmtpClient. This could be loaded into your instance and exposed as a regular SQL function that could be called from your trigger. This can be done more securely than just enabling xp_cmdshell, but if you don't have the ability to configure Database Mail, this probably violates the same policy.
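For completeness only, this is roughly what the first (not recommended) option looks like when shelled out from T-SQL; the SMTP server and addresses below are placeholders, not anything from the question:

-- Enable xp_cmdshell (requires sysadmin; again, not recommended)
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'xp_cmdshell', 1;
RECONFIGURE;
GO

-- Shell out to PowerShell from T-SQL; all names/addresses are made up
EXEC master..xp_cmdshell
    'powershell.exe -NoProfile -Command "Send-MailMessage -SmtpServer smtp.example.com -From alerts@example.com -To dba@example.com -Subject ''Row changed'' -Body ''A row was inserted or updated.''"';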
Instead of these options, I'd recommend one of the following:
Instead of sending an email every time there's an update, have your trigger write to a table (or perhaps to a Service Broker queue); create a job to send emails periodically with the latest data from that table, or create some kind of report off of it. This would be preferable because writing to a table or SSB queue should be faster and less prone to error than trying to send an email from within a trigger. (A minimal sketch of this pattern follows this list.)
Configure and use Change Data Capture. You could even write some agent jobs or something to regularly email users when there are updates. If your version supports this, it may be a bit more powerful and configurable for you, and solve some problems that triggers can cause more easily.
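Here is a minimal sketch of the trigger-plus-queue-table pattern from the first recommendation; the table and column names are made up for illustration, not taken from the question:

-- Staging table the trigger writes to; a scheduled job or report
-- reads from it later and does the actual (slow) email sending.
CREATE TABLE dbo.ChangeNotificationQueue (
    QueueId     int IDENTITY(1,1) PRIMARY KEY,
    RowId       int       NOT NULL,
    ChangeType  char(1)   NOT NULL,  -- 'I' = insert, 'U' = update
    QueuedAtUtc datetime2 NOT NULL DEFAULT SYSUTCDATETIME()
);
GO

CREATE TRIGGER dbo.trg_MyTable_Notify
ON dbo.MyTable
AFTER INSERT, UPDATE
AS
BEGIN
    SET NOCOUNT ON;

    -- Only record what changed; keep the trigger fast and let a job
    -- send the emails asynchronously.
    INSERT INTO dbo.ChangeNotificationQueue (RowId, ChangeType)
    SELECT i.Id,
           CASE WHEN EXISTS (SELECT 1 FROM deleted) THEN 'U' ELSE 'I' END
    FROM inserted AS i;
END;
GO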

Related

TADOConnection.OnExecuteComplete / OnWillExecute event not called with TADOTable

I am trying to trace SQL commands. I read this post: How can I monitor the SQL commands send over my ADO connection?
It works for SELECT but not for DELETE/INSERT/UPDATE...
Configuration: a TADOConnection (MS SQL Server), a TADOTable, a TDataSource, a TDBGrid with a TDBNavigator.
So I can trace the SELECT that occurs when the table is opened, but nothing occurs when I use the DBNavigator to UPDATE, INSERT, or DELETE records.
When I use a TADOCommand to delete a record, that works too. It seems it only fails when I use the DBNavigator, so maybe that's a clue, but I haven't found anything about it.
Thanks in advance
Hopefully someone will be able to point you in the direction of a pre-existing library that does your logging for you. In particular, if FireDAC is an option, you might take a look at what it says here:
http://docwiki.embarcadero.com/RADStudio/XE8/en/Database_Alerts_%28FireDAC%29
Of course, converting your app from ADO to FireDAC may not be an option for you, but depending on how great your need is, you could conceivably extract the SQL-Server-specific method of event alerting that FireDAC uses into an ADO application. I looked into this briefly a while ago and it looked like it would be fairly straightforward.
Prior to FireDAC, I implemented a server-side solution that caught Inserts, Updates and Deletes. I had to do this about 10 years ago (for SQL Server 2000) and it was quite a performance to set up.
In outline it worked like this:
SQL Server supports what MS used to call "extended stored procedures", which are implemented in custom DLLs (MS may refer to them by a different name these days, or may even have stopped supporting them). There are Delphi libraries around that provide a wrapper to enable these to be written in Delphi. Of course, these days, if your SQL Server is 64-bit, you need to generate a 64-bit DLL.
You write Extended Stored Procedures to log the changes any way you want, then write custom triggers in the database for Inserts, Updates and Deletes that feed the data of the rows involved to your XSPs.
As luck would have it, my need for this fell away just as I was completing the project, before I got to stress-testing and performance-profiling it, but it did work.
Of course, not every environment will allow you to install software and trigger code on the SQL Server.
For interest, you might also take a look at https://msdn.microsoft.com/en-us/library/ms162565.aspx, which provides an SMO object for tracing SQL Server activity, though it seems to be 32-bit only at the moment.
For amusement, I might have a go at implementing an event handler for the recordset object that underlies a TAdoTable/TAdoQuery, which should be able to catch the changes you're after, but don't hold your breath ...
And, of course, if you're only interested in client-side logging, one way to do it is to write handlers for your dataset's AfterEdit, AfterInsert and AfterDelete events. Those wouldn't guarantee that the changes are ever actually applied at the server, of course, but could provide an accurate record of the user's activity, if that's sufficient for your needs.

Audit on Oracle schema for DML statements

I need to secure an Oracle user by auditing inserts/updates/deletes that come from outside the programs I have written.
I googled around a bit to find what I need. I know you can use your own hand-written database triggers.
And I know there are two major systems from Oracle (at least, that is what I found):
You can use fine-grained auditing, and you can use the audit trail.
I think in my case the audit trail comes close, but it just isn't what I am looking for, because I need to know from which program the connection to the DB is coming. For example, I need to register all connections that are doing inserts/updates/deletes, together with the statements they executed, that are coming from SQL Developer or Toad. But all the other connections may pass without audit.
On a daily basis I have lots of connections, so registering everything is too much overhead.
I hope one of you has a good idea on how to set this up.
Regards
You can use an Oracle product for this: Oracle Audit Vault and Database Firewall. Because you also want to know which program the connection comes from, you need the Database Firewall. It can monitor all traffic to the database, recording the IP address and the client program from which the connection was started. You can also specify whether you want to audit DML, DDL, or other statements. Data is stored locally in the product's own database, not in the secured target (your database). Just have a look, it is just what you need: http://www.oracle.com/technetwork/products/audit-vault-and-database-firewall/overview/overview-1877404.html
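If a separate product is not an option, a rough alternative along the lines of the fine-grained auditing mentioned in the question is an FGA policy whose condition checks the connecting program. This is only a sketch under assumptions: the schema/table (SCOTT.EMP) is a placeholder, and the exact module strings reported by SQL Developer and Toad should be verified against V$SESSION before relying on them:

-- Fine-grained auditing policy that records DML only when the client
-- module looks like an ad-hoc tool (all names here are illustrative).
BEGIN
  DBMS_FGA.ADD_POLICY(
    object_schema   => 'SCOTT',
    object_name     => 'EMP',
    policy_name     => 'AUDIT_ADHOC_DML',
    audit_condition => 'SYS_CONTEXT(''USERENV'',''MODULE'') IN (''SQL Developer'', ''TOAD.exe'')',
    statement_types => 'INSERT,UPDATE,DELETE'
  );
END;
/
-- Audited statements can then be reviewed in DBA_FGA_AUDIT_TRAIL.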

SQL Server to send email notifying of data inserted into table - options?

I have a SQL Server database which is shared between several ASP.NET (VB.NET) websites. The database has a table which stores customer enquiries - I wish to provide email notifications informing the relevant people within the organisation of new enquiries (and possibly include some of the enquiry data in the body of the email). Please could someone outline my options in terms of how this could be implemented?
My thoughts are:
SQLMail
Database Mail
VB.NET Stored Procedure using System.Net.Mail library?
Other options?
Option 3 seems desirable because I know how to do this in VB.NET and it is trivial.
What are my other options?
What are the caveats?
Can I do this in real time, i.e. with a trigger (the volume of inserts is low)?
Or is it vital that this is done in a batch, i.e. as a job?
Any help very welcome!
Thanks.
Between 1), 2) and 3), the only one worth considering is 2). 1) is stillborn, using a deprecated feature notorious for its problems and issues. 3) is a bad idea because it breaks transactional consistency (the SMTP mail is sent even if the insert rolled back) and is a performance hog, as the caller (presumably your web page being rendered) has to wait until the SMTP exchange completes (i.e. it is synchronous). Only 2) offers transactional consistency and is asynchronous (it does not block the caller waiting for the SMTP exchange to complete).
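As a rough sketch of what option 2 could look like inside a trigger - assuming Database Mail is already configured, and with a made-up table, profile name and recipient:

CREATE TRIGGER dbo.trg_CustomerEnquiry_Email
ON dbo.CustomerEnquiry
AFTER INSERT
AS
BEGIN
    SET NOCOUNT ON;

    DECLARE @body nvarchar(max) = N'';

    -- Build a simple body from the inserted row(s); row order is not
    -- guaranteed with this concatenation trick, but that is fine for
    -- a notification email.
    SELECT @body = @body + N'Enquiry #' + CAST(EnquiryId AS nvarchar(10))
                 + N' from ' + CustomerName + NCHAR(13) + NCHAR(10)
    FROM inserted;

    -- sp_send_dbmail only queues the message (via Service Broker),
    -- so the INSERT is not blocked waiting on the SMTP server.
    EXEC msdb.dbo.sp_send_dbmail
        @profile_name = N'NotificationsProfile',
        @recipients   = N'enquiries@example.com',
        @subject      = N'New customer enquiry',
        @body         = @body;
END;
GO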
Normally, though, such a task is better delegated to Reporting Services. You would create a data-driven subscription with email delivery. This works out of the box; no need to reinvent the wheel.
I have worked with a similar situation to this, and the solution I went with was to have a C# Windows service checking a SQL Server job queue every minute. This job queue would include jobs such as sending an email. You could then use a TRIGGER on your table to insert a new "Email Alert" job, which would get picked up on the next cycle.

Specify service_broker_guid instead of getting random one from NEW_BROKER

In SQL 2008, is there a way to specify a service_broker_guid instead of simply taking whatever GUID is given to you by:
ALTER DATABASE MyDB SET NEW_BROKER
Our current (broken, in my opinion) release methodology is to restore two databases which have a codependent relationship. One is a "source" database, and one is a star-schema BI database. Part of the regression test plan is to restore backups of both databases in a "gold" state across different servers and even on the same server.
We generally do not include BROKER_INSTANCE variables on our routes because in most places we do not need them (i.e. the combination of SERVICE_NAME and ADDRESS is enough to guarantee delivery). However, when we have two databases running on the same instance, both with broker enabled, one of them will need a new broker instance GUID. In addition, all routes to these databases will now require a BROKER_INSTANCE qualifier, since the same SERVICE_NAME now exists twice at the same ADDRESS.
We use Visual Studio Database Professional to generate our build output scripts, and there's no simple way to include a BROKER_INSTANCE as part of its SQLCMD variable replacement technique unless you know it beforehand.
No. NEW_BROKER would generate a new guid.
There is no way to use a specific guid and that is very much intentional. If you explain what is the underlying problem that is making you ask this question, perhaps we can work toward a solution to that problem.
After your edit.
The broker_instance, as well as routing information, is considered runtime, deployment specific information. As such, it was not designed to accept fixed, predetermined values, which is what a VS GDR project or a set of SQLCMD scripts would like to use. Besides, the broker_instance_id is really meant to be a unique, database instance specific value, and allowing users to specify their own would quickly result in duplicates, which would confuse the heck out of conversation endpoints trying to exchange messages.
The problem you're facing, though, is a legitimate one: how to automate the deployment of routing information (and its associated problems of automating the deployment and exchange of certificates, configuring users without login and granting appropriate permissions, and configuring remote service bindings and Service Broker transport endpoints). There simply is no wizard to do this. Quest actually has a set of tools that handle this.
Once upon a time I made a tool, called ssbslm.exe, that automated this entire process and was designed to be usable from scripts. This tool did everything to set up the routes, certificates and endpoints between two arbitrary services. While this tool is no longer available (long and boring story why that is the case), the gist of this story is that it is not that hard to write one. It took me a few days back in the day.
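Since the broker instance is deployment-specific, one way the routing side can be scripted is to read the GUID after the restore and build the route dynamically, rather than hard-coding it. This is only a sketch; the database, route, service name and address below are all placeholders:

-- Read the broker instance GUID of the freshly restored database
DECLARE @broker uniqueidentifier, @sql nvarchar(max);

SELECT @broker = service_broker_guid
FROM sys.databases
WHERE name = N'MyDB';

-- Create the route using the discovered GUID instead of a fixed one
SET @sql = N'CREATE ROUTE MyDbRoute
    WITH SERVICE_NAME = N''//Example/TargetService'',
         BROKER_INSTANCE = ''' + CAST(@broker AS nvarchar(36)) + N''',
         ADDRESS = N''TCP://myserver:4022'';';

EXEC (@sql);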

Anyone else heard of a ColdFusion T-SQL "use database" bug?

In the admin section of our company's production site, we have a little query dumping tool, and, in trying to get data from a database different from the main one, I unknowingly used the USE database command.
And here's the kicker: it then made every ColdFusion page with its own query instantly fail,
since it somehow caches that USE database command.
Has anyone else heard of this weird bug?
How can we stop this behavior?
If I use a "USE database" command, I want it to apply only to the current query I am running; after I am done, it should go back to the normal database usage.
This is weird and a potentially damaging problem.
Any thoughts?
I imagine that this has something to do with connection pooling. When you call close, it doesn't close the connection, it just puts it back into the pool. When you call open, it doesn't have to open a new connection, it just grabs an existing one from the pool. If you change the database that the connection is pointing to, ColdFusion may be unaware of this. This is why some platforms (MySQL on .Net for instance) reset the connection each time you retrieve it from the pool, to ensure that you are querying the correct database, and to ensure that you don't have any temporary tables and other session info hanging around. The downside of this kind of behaviour is that it has to make a round trip to the database, even when using pooled connections, which really may not be necessary.
Kibbee is on the right track, but to extend that a little further with three possible workarounds:
Create a different DSN for use by that one query so the "USE DATABASE" statement would only persist for any queries using that DSN.
Uncheck "Maintain connections across client requests" in the CF admin
Always remember to reset the database to the one you intend to use at the end of the request (or avoid USE altogether; see the sketch below). It kinda goes without saying that this is a very dangerous utility to have on your production server!
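On that last point, one way to avoid the problem entirely is to fully qualify object names in the ad-hoc query so the pooled connection's default database never changes; a small sketch with made-up names:

-- Instead of:
--   USE OtherDb;
--   SELECT * FROM SomeTable;
-- qualify the object with database and schema, so the connection's
-- default database is never switched for later requests:
SELECT *
FROM OtherDb.dbo.SomeTable
WHERE SomeColumn = 'some value';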
It's not a bug nor is it really unexpected behavior - if the query is cached, then everything inside the cfquery block is going along for the ride. Which database platform are you using?
