I have a SQL Server database shared between several ASP.NET (VB.NET) websites. The database has a table which stores customer enquiries, and I wish to send email notifications about new enquiries to the relevant people within the organisation (possibly including some of the enquiry data in the body of the email). Could someone outline my options for implementing this?
My thoughts are:

1) SQL Mail
2) Database Mail
3) A VB.NET stored procedure using the System.Net.Mail library?

Other options?
Option 3 seems desirable because I know how to do this in VB.NET and it is trivial.
What are my other options?
What are the caveats?
Can I do this in real-time, i.e. with a trigger (the volume of inserts is low)? Or is it vital that this is done as a batch, i.e. a job?
Any help very welcome!
Thanks.
Between 1), 2) and 3), the only one worth considering is 2). 1) is a non-starter, a deprecated feature notorious for its problems and issues. 3) is a bad idea because it breaks transactional consistency (the SMTP mail is sent even if the insert rolled back) and is a performance hog: the caller (presumably your web page being rendered) has to wait for the SMTP exchange to complete (i.e. it is synchronous). Only 2) offers transactional consistency and is asynchronous (it does not block the caller until the SMTP completes).
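For a sense of what 2) looks like in practice, a minimal sketch, assuming a Database Mail profile has already been configured (the profile name and addresses below are invented):

-- Queue a notification through Database Mail; the send itself happens
-- asynchronously, so the caller is not blocked on the SMTP exchange.
EXEC msdb.dbo.sp_send_dbmail
    @profile_name = 'EnquiryProfile',   -- assumed, pre-configured profile
    @recipients   = 'sales@example.com',
    @subject      = 'New customer enquiry',
    @body         = 'A new enquiry has been recorded.';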
Normally, though, such a task is better delegated to Reporting Services. You would create a data-driven subscription with email delivery. This works out of the box; no need to reinvent the wheel.
I have worked with a similar situation, and the solution I went with was a C# Windows service checking a SQL Server job-queue table every minute. This job queue would include jobs such as "send an email". You could then use a TRIGGER on your table to insert a new "Email Alert" job, which would get picked up on the next cycle.
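A rough sketch of that shape (the queue table, its columns, and the source-table name below are invented for illustration):

-- The trigger only records that work is needed; the Windows service polls
-- this table and performs the actual send outside the transaction.
CREATE TABLE dbo.JobQueue
(
    JobId     int IDENTITY(1,1) PRIMARY KEY,
    JobType   varchar(50) NOT NULL,
    EnquiryId int         NOT NULL,
    CreatedOn datetime    NOT NULL DEFAULT GETDATE(),
    Processed bit         NOT NULL DEFAULT 0
);
GO

CREATE TRIGGER trg_Enquiry_EmailAlert ON dbo.CustomerEnquiry
AFTER INSERT
AS
BEGIN
    SET NOCOUNT ON;
    -- One queue row per inserted enquiry (handles multi-row inserts).
    INSERT INTO dbo.JobQueue (JobType, EnquiryId)
    SELECT 'Email Alert', i.EnquiryId
    FROM inserted AS i;
END;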
Is it possible to use PowerShell's "Send-MailMessage -SMTPServer" command from a SQL Server trigger?
I am trying to send emails when rows in the database are updated or new rows are created. I am not able to use Database Mail due to security restrictions. However, I can send emails through PowerShell's Send-MailMessage command.
First off, this is almost certainly a very bad idea. Keep in mind that triggers can cause unexpected issues in terms of transaction escalation and holding locks longer than necessary while they're processing. Also keep in mind that people will probably not expect there to be triggers of this sort on your table, and that they'll try to do CRUD operations on it like it's a normal table and not understand why their applications are timing out.
That said, you could do this in at least three ways:
Enable xp_cmdshell, and use that to shell out to PowerShell, as explained here: Running Powershell scripts through SQL - but don't do this, because xp_cmdshell is a security risk and this is very likely to cause problems for you in one way or another (whether because someone uses it in a damaging manner or because PowerShell just fails and you don't even know why). If you can't use database mail due to security restrictions, you should definitely not be using xp_cmdshell, which has even more security concerns!
Instead of using PowerShell, configure Database Mail and have your trigger call sp_send_dbmail - but don't do this either, because it could easily fail or cause problems for your updates (e.g. the SMTP server goes down and your table can't be updated anymore). (I wrote this part before I saw you can't use it because of security restrictions.)
One other option comes to mind that may be more secure, but still not ideal: create a SQLCLR library that actually sends the mail using the .NET SmtpClient. This could be loaded into your instance and exposed as a regular SQL function that could be called from your trigger. This can be done more securely than just enabling xp_cmdshell, but if you don't have the ability to configure Database Mail, it probably violates the same policy.
Instead of these options, I'd recommend something like these:
Instead of sending an email every time there's an update, have your trigger write to a table (or perhaps to a Service Broker queue), and create a job that periodically sends emails with the latest data from that table, or build some kind of report off it. This is preferable because writing to a table or SSB queue should be faster and less error-prone than trying to send an email from inside a trigger.
Configure and use Change Data Capture. You could then write agent jobs or something similar to regularly email users when there are updates. If your edition supports it, this may be a bit more powerful and configurable for you, and it sidesteps some of the problems triggers can cause.
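If Change Data Capture is available to you, enabling it is only a couple of system procedure calls. A minimal sketch, with placeholder database and table names (CDC also needs an edition that supports it and SQL Server Agent running):

-- Enable CDC for the database, then for the table to track.
EXEC sys.sp_cdc_enable_db;

EXEC sys.sp_cdc_enable_table
    @source_schema = N'dbo',
    @source_name   = N'MyTable',
    @role_name     = NULL;            -- no gating role in this sketch

-- A periodic job could then read the captured changes:
DECLARE @from_lsn binary(10) = sys.fn_cdc_get_min_lsn('dbo_MyTable'),
        @to_lsn   binary(10) = sys.fn_cdc_get_max_lsn();

SELECT *
FROM cdc.fn_cdc_get_all_changes_dbo_MyTable(@from_lsn, @to_lsn, N'all');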
My first ever question on Stack Overflow, so please go easy. I have a long-running Windows application that continually processes SQL Server commands. I also have a web front end that users occasionally use to update the same DB. I've noticed that sometimes (depending on what the Windows application is processing at the time), if a user submits something to the DB, I receive out-of-memory exceptions on the server. I realise I need to dig around a bit more and optimise the code. However, I cannot afford for the server to go down, and I expect that in the future I'll be allowing more and more users on the front end. What I really need is a system that will queue the users' requests (they are not time-critical) and process them when the DB is ready.
I'm using SQL Server 2012 Express.
Is SQL Service Broker the best solution? I've also looked into MSMQ.
If so, can someone point me in the right direction; it would be appreciated. In my search I'm just finding a lot of things it does that I don't think I need.
Cheers
It depends on where you're doing the persistence work and/or calculations. If you're doing the hard work in your Windows application, then a Service Broker queue won't be worthwhile: all you would be doing is receiving a message from the queue in your Windows application, doing your calculations and/or queries there, and then persisting the results to the database. As your database is already under memory pressure, this seems like unnecessary extra load when you could just as easily queue and retrieve the message from MSMQ (or any other queueing technology).
If, however, you are doing all the work in the database and your Windows application just acts as a marshalling service - e.g. taking the request and handing it off to a stored procedure for actioning - then Service Broker queues may be worth using: because they already operate within the context of the database, they can be very efficient at persisting and querying data.
You would also want to take failure modes into account, depending on whether or not you can afford to lose any messages. To ensure message persistence in MSMQ you have to use transactional messaging. Service Broker is more efficient at transactional queue processing than MSMQ (because it has transaction support built in, unlike MSMQ, which has to use DTC, adding overhead) - but if your volume of messages is low, this may not be an issue.
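For a feel of the Service Broker route, a minimal sketch (all object names below are illustrative):

-- One-time setup: message type, contract, queue, and service.
CREATE MESSAGE TYPE WorkRequest VALIDATION = WELL_FORMED_XML;
CREATE CONTRACT WorkContract (WorkRequest SENT BY INITIATOR);
CREATE QUEUE dbo.WorkQueue;
CREATE SERVICE WorkService ON QUEUE dbo.WorkQueue (WorkContract);
GO

-- Enqueue a request (e.g. from the web front end):
DECLARE @h uniqueidentifier;
BEGIN DIALOG CONVERSATION @h
    FROM SERVICE WorkService TO SERVICE 'WorkService'
    ON CONTRACT WorkContract WITH ENCRYPTION = OFF;
SEND ON CONVERSATION @h MESSAGE TYPE WorkRequest (N'<request id="1"/>');
GO

-- Dequeue one message transactionally: if processing fails and the
-- transaction rolls back, the message goes back on the queue.
DECLARE @msg xml;
BEGIN TRANSACTION;
WAITFOR (
    RECEIVE TOP (1) @msg = CAST(message_body AS xml)
    FROM dbo.WorkQueue
), TIMEOUT 5000;
-- ... process @msg here (e.g. call the stored procedure) ...
COMMIT;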
Disclaimer: I must use a Microsoft Access database and I cannot connect my app to a server to subscribe to any service.
I am using VB.NET to create a WPF application. I populate a ListView from records in an Access database, which I query once when the application loads to fill a DataSet. I then use LINQ to DataSet to display data to the user depending on filters and whatnot.
However, the Access table is modified many times throughout the day, which means the user will be looking at stale data as the day progresses unless they reload the application. Is there a way to connect the Access database to the VB.NET application such that it can raise an event when a record is added, removed, or modified in the database? I am fine with whatever code is required in the event handler; I just need a way to trigger a VB.NET application event from the Access table.
Think of what I am trying to do as viewing real-time edits to a database table, but within the application. Any help is MUCH appreciated, and let me know if you require any clarification - I just need a general direction and I am happy to research more.
My solution idea:

1. Create an audit table for MS Access changes.
2. Create a separate worker thread within the user's application to query the audit table for changes every 60 seconds.
3. If changes are found, modify the affected DataSet records.
4. Raise an event on DataSet record update to refresh any affected objects/properties.
Couple of ways to do what you want, but you are basically right in your process.
As far as I know, there is no direct way to get events from the database drivers to let you know that something changed, so polling is the only solution.
If the MS Access database is an Access 2010 ACCDB database, and you are using the ACE drivers for it (needed if Access is not installed on the machine where the app is running), you can use the new data macro triggers to automatically record changes to the tables in an audit table - capturing inserts, updates, deletes, etc. as needed.
This approach is the best, since data macros run at the ACE database engine level, so they will be as efficient as possible and transparent to your application.
If you are using older versions of Access, then you will have to implement the auditing yourself. Allen Browne has a good article on that, and a bit of searching will turn up other solutions as well.
You can also just run a query against the tables you need to monitor.
In any case, you will need to monitor your audit or data table as you mentioned.
You can monitor for changes much more frequently than every 60 seconds; depending on the load on the database, the number of clients, etc., you could easily check every few seconds.
I would recommend though that you:
Keep a permanent connection to the database while your app is running: open a dummy table for reading, and don't close it until you shut down your app. This has no performance cost to anyone, but it ensures that the expensive lock-file creation is done only once, not for every query you run. This can have a huge performance impact. See this article for more information on why.
Make it easy for your audit table (or your data table) to be monitored: include a timestamp column that records when a record was created and last modified. This makes checking for changes very quick and efficient: you just need to check whether the most recent modified date is later than the last one you read (see the sketch after these recommendations).
With Access 2010, it's easy to add a data macro to do that. With older versions, you'll need to do it at the form level.
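The periodic check the worker thread runs can then stay trivial. A sketch, assuming an invented audit-table layout and an OleDb-style ? parameter:

-- Fetch only audit rows newer than the last ModifiedOn this client saw;
-- the worker thread supplies the ? parameter and remembers the new maximum.
SELECT AuditId, TableName, RecordId, ChangeType, ModifiedOn
FROM tblAudit
WHERE ModifiedOn > ?
ORDER BY ModifiedOn;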
If you are using SQL Server:

- Up to SQL Server 2005 you could use Notification Services.
- Since SQL Server 2008 R2 it has been replaced by StreamInsight.

Other database management systems and alternatives:

- Oracle
- Handle changes in a middle tier and signal the client.
- Or poll. This requires you to tune the interval so you do not lag too far behind a change.

In general:

When a server has to be able to push messages to clients, it needs to keep a channel/socket open to each client, and this can become very expensive when there are a lot of clients. I would advise against server push and would instead try intelligent polling. Intelligent polling means an interval that is as big as possible, plus appropriate caching on the server to prevent hitting the database too many times for the same data.
I am looking for a best practice or example of how I might generate events for all update events on a given SQL Server 2008 R2 DB. To be more descriptive: I am working on a POC where I would essentially publish update events to a queue (RabbitMQ in my case) that could then be consumed by various consumers. This would be the first part of implementing a CQRS query-only data model via event sourcing. By placing events on the queue, anybody could subscribe to them for replication into any number of query-only data models. This part is clear and fairly well-defined. The problem I am having is determining the best approach for generating the events out of SQL Server. I have been given a few ideas, such as monitoring the transaction log and SSIS. However, I'm not entirely sure whether these options are advisable or even feasible.
Does anybody have any experience with this sort of thing, or any notions on how to go about such an adventure? Any help or guidance would be greatly appreciated.
You cannot monitor the log because, even if you were able to understand it, you have the problem of the log being recycled before you had a chance to read it. Unless the log is somehow marked not to be truncated, it will be reused. For instance, when transactional replication is enabled, the log is pinned until it is read by the replication agent and only then truncated.
SSIS is a very broad concept, and saying 'use SSIS to detect changes' is akin to saying 'I'll use a programming language to solve my problem'. The detail is in how you would use SSIS. There is no way, with or without SSIS, to reliably detect data changes on an arbitrary schema. Even data models specifically designed to allow detecting changes have issues, especially at detecting deletes.
However, there are viable alternatives. You can deploy Change Data Capture and delegate the change tracking to the engine itself. Consuming these detected changes and publishing them to consumers (via RabbitMQ if that's your fancy) is something SSIS would be good at. But you have to understand that SSIS does not fare well at continuous, real-time tasks. It is designed to run periodically on batches, so your change-notification consumers will be notified in spikes, with long delays (minutes), whenever the SSIS jobs run.
For a real-time approach, a better solution is Service Broker. One possibility is to SEND Service Broker messages from triggers, but I would not recommend it. A better design is to have the application itself publish the changes by SEND-ing the message explicitly when it does the data modification. With SQL Server 2012 it is possible to multicast Service Broker messages to other SQL Server consumers (including SQL Server Express). SSB message delivery is fully transactional (no message gets sent if the transaction rolls back) and does not require a two-phase commit with a message-store resource manager. But to broadcast via RabbitMQ you would need to bridge the communication, i.e. RECEIVE the SSB messages and transform them into RabbitMQ notifications.
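A sketch of that recommended shape - the data modification and the SEND share one transaction, so a rollback retracts both (the table, service, contract, and message-type names are invented, and the Service Broker objects are assumed to already exist):

DECLARE @OrderId int = 42;            -- placeholder for the real key
DECLARE @h uniqueidentifier, @body nvarchar(max);

BEGIN TRANSACTION;

UPDATE dbo.Orders SET Status = 'Shipped' WHERE OrderId = @OrderId;

-- Publish the change inside the same transaction as the update.
SET @body = N'<change table="Orders" id="'
          + CAST(@OrderId AS nvarchar(20)) + N'"/>';

BEGIN DIALOG CONVERSATION @h
    FROM SERVICE ChangePublisher
    TO SERVICE 'ChangeFeed'
    ON CONTRACT ChangeContract
    WITH ENCRYPTION = OFF;

SEND ON CONVERSATION @h MESSAGE TYPE ChangeEvent (@body);

COMMIT;   -- a rollback here would discard both the update and the message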
We have lots of SSRS reports that we would like to automatically deliver via email to employees based on data in the database. SSRS has the data-driven subscriptions feature, but the key missing component is any way to indicate that a report was successfully processed/sent.
For example, let's say that some event causes a record to go into a database table saying that employee X needs a copy of the cost report for project Y.
It's very easy to schedule a data-driven subscription for the cost report that gets X his report, but there is apparently no way to then flag the X-Y record as "don't send this report again".
We could, and in some instances have, changed the SP that selects the data for the report to also make this update, but the result is that if there is a problem with the report, or a problem delivering the email (say, the network is down), then the system thinks the report has been sent when it hasn't.
And yes, we could, and in some instances have, written our own little services/apps to read the DB, grab the report, send it in an email, and update the database again, but this seems like something that shouldn't be as difficult as it is.
I'm looking for any suggestions here. This must be one of the most common things people want to do with reports, yet I see nothing anywhere about addressing this problem, or products we could buy to remove the shortcoming, or open-source projects we could use, etc. It seems we are left with either totally rolling our own system or sticking with a really crappy method of delivering these reports.
Caveat: I've only got access to SQL Server 2005 & 2008 R2 Standard editions. I'm not lucky enough to have access to the Enterprise editions.
However, I know you can check the LastStatus and LastRunTime fields in the Subscriptions table of the ReportServer database (I presume data-driven subscriptions still store their data in the Subscriptions table).
With this, you could create and schedule a stored proc that checks the status and last send time of the subscription(s); if one needs to be sent again, you can pass the GUID of its SQL Agent job to the sp_start_job stored proc, which will fire the subscription as normal. You would probably need to adjust the subscription's own schedule so it won't send itself - only when triggered from your scheduled stored proc.
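A hedged sketch of what that stored proc's core could look like. It leans on the (undocumented) detail that SSRS names its Agent jobs after the ScheduleID GUID and that the ReportSchedule table links schedules to subscriptions; the status and time conditions are placeholders to adapt:

DECLARE @SubscriptionID uniqueidentifier;   -- set to the subscription to check
DECLARE @job sysname;

-- Find the Agent job behind the subscription if the last run looks
-- failed or stale.
SELECT @job = CAST(rs.ScheduleID AS sysname)
FROM ReportServer.dbo.Subscriptions AS s
JOIN ReportServer.dbo.ReportSchedule AS rs
    ON rs.SubscriptionID = s.SubscriptionID
WHERE s.SubscriptionID = @SubscriptionID
  AND (s.LastStatus LIKE '%error%'
       OR s.LastRunTime < DATEADD(DAY, -1, GETDATE()));

IF @job IS NOT NULL
    EXEC msdb.dbo.sp_start_job @job_name = @job;   -- re-fires the subscription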
It's something I've used in the past, but once you get to having hundreds of subscriptions, it can be cumbersome at best to manage.
You can look in msdb if you want to see whether a person was sent a specific report. The query below, against one of the system views, might give you an idea; there are also other system views related to attachments and to whether an email is in an unsent status. This applies if you are utilizing Database Mail.

Chris
use [msdb]
GO

-- Database Mail logs every sent message in sysmail_sentitems, including
-- recipients, subject/body, attachments, and when it went out.
select
    recipients
    ,copy_recipients
    ,blind_copy_recipients
    ,subject
    ,body
    ,file_attachments
    ,sent_status
    ,convert(varchar, sent_date, 109) as Date_Sent
from dbo.sysmail_sentitems
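For the unsent and failed statuses Chris mentions, the companion Database Mail views (also in msdb) can be queried the same way:

-- Mail still queued, i.e. not yet sent by Database Mail:
select mailitem_id, recipients, subject, send_request_date
from dbo.sysmail_unsentitems

-- Mail that failed, joined to the event log for the error text:
select f.mailitem_id, f.recipients, f.subject, l.description
from dbo.sysmail_faileditems as f
left join dbo.sysmail_event_log as l
    on l.mailitem_id = f.mailitem_id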
Could you not just deliver the reports to a central file share instead, and then tell the users that's where the reports are? We went down this route to get round issues with full mailboxes, problems with email, etc.