Service Broker: Async procedure execution

I have read this great article by Remus Rusanu:
http://rusanu.com/2009/08/05/asynchronous-procedure-execution/
How can I implement the following idea?
I have a big main table, and users can mark records in it as deleted (by setting a flag field to 1).
I cannot use SQL Agent jobs because customers may be running SQL Server Express.
The idea is: when a user 'deletes' or 'undeletes' records in the big table, a message is sent to a queue.
An activation procedure then executes, fire-and-forget style, the real DELETE statement for the marked records in the main table (all of them or only a part, depending on the situation).
But I need to avoid blocking as much as possible, hence the question:
How can I execute the real deletion when SQL Server is under the lowest load, or when the database has the least activity?
How can I detect these 'low database load' moments in the async procedure?

There is no way to link Service Broker activation directly to workload and only activate during periods of 'low activity'.
I cannot use SQL Jobs because customers can use SQLExpress
While it is true that SQL Server Express Edition lacks SQL Agent scheduling, there are workarounds using Service Broker conversation timers; see Scheduling Jobs in SQL Server Express (and part 2). A minimal sketch of the technique is shown below.
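To illustrate, here is a rough sketch of the conversation-timer technique, loosely following those articles. Every object name, the 600-second interval, and the dbo.BigTable/IsDeleted purge target are assumptions for the example; production code would also loop over the queue and handle errors and poison messages:

    -- One queue/service pair acts as the "scheduler" (names are illustrative).
    CREATE QUEUE SchedulerQueue;
    CREATE SERVICE SchedulerService ON QUEUE SchedulerQueue ([DEFAULT]);
    GO
    CREATE PROCEDURE usp_SchedulerActivation
    AS
    BEGIN
        DECLARE @handle UNIQUEIDENTIFIER, @msgtype SYSNAME;
        RECEIVE TOP (1) @handle = conversation_handle,
                        @msgtype = message_type_name
            FROM SchedulerQueue;
        IF @msgtype = N'http://schemas.microsoft.com/SQL/ServiceBroker/DialogTimer'
        BEGIN
            -- Do a slice of the deferred work (assumed purge target).
            DELETE TOP (1000) FROM dbo.BigTable WHERE IsDeleted = 1;
            -- Re-arm the timer so the procedure fires again in 600 seconds.
            BEGIN CONVERSATION TIMER (@handle) TIMEOUT = 600;
        END
    END;
    GO
    ALTER QUEUE SchedulerQueue
        WITH ACTIVATION (STATUS = ON,
                         PROCEDURE_NAME = usp_SchedulerActivation,
                         MAX_QUEUE_READERS = 1,
                         EXECUTE AS OWNER);
    GO
    -- Start the never-ending conversation once:
    DECLARE @h UNIQUEIDENTIFIER;
    BEGIN DIALOG CONVERSATION @h
        FROM SERVICE SchedulerService TO SERVICE 'SchedulerService'
        ON CONTRACT [DEFAULT] WITH ENCRYPTION = OFF;
    BEGIN CONVERSATION TIMER (@h) TIMEOUT = 600;

Note that this still does not detect 'low activity'; it only gives you Agent-like scheduling on Express.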

Related

"Fire and forget" T-SQL query in SSMS

I have an Azure SQL Database where I sometimes want to execute ad-hoc SQL statements, that may take a long time to complete. For example to create an index, delete records from a huge table or copy data between tables. Due to the amounts of data involved, these operations can take anywhere from 5 minutes to several hours.
I noticed that if a SQL statement is executed in SSMS and SSMS loses its connection to the server before execution completes, the entire transaction is automatically rolled back. This is problematic for very long-running queries, for example in case of local wifi connectivity issues, or if I simply want to shut down my computer and leave the office.
Is there any way to instruct SQL Server or SSMS to execute a SQL statement without requiring an open connection? We cannot use SQL Server Agent jobs, as this is an Azure SQL DB, and we would like to avoid solutions based on other Azure services if possible, as this is just for simple ad-hoc needs.
We tried the "Discard results after execution" option in SSMS, but this still keeps an open connection until the statement finishes executing.
It is not an asynchronous solution I am looking for, as I don't really care about the execution result (I can always check if the query is still running using for example sys.dm_exec_requests). So in other words, a simple "fire and forget" mechanism for T-SQL queries.
While my initial requirements stated that we didn't want to use other Azure services, I have found that using Azure Data Factory seems to be the most cost-efficient and simple way to solve the problem. The other solutions proposed here seem to suffer from either high cost (spinning up VMs) or timeout limitations (Azure Functions, Azure Automation Runbooks), none of which apply to ADF when used for this purpose.
The idea is:
Put the long-running SQL statement into a Stored Procedure
Create a Data Factory pipeline with a Stored Procedure activity to execute the SP on the database. Make sure to set the Timeout and Retry values of the activity to sensible values.
Trigger the pipeline
Since no data movement is taking place in Data Factory, this solution is very cheap, and I have had queries running for 10+ hours using this approach, which worked fine.
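For reference, the Stored Procedure activity only needs a procedure to invoke. A minimal sketch of such a wrapper (the procedure name, table, and retention rule are hypothetical) that deletes in batches so no single transaction runs for hours:

    CREATE PROCEDURE dbo.usp_PurgeOldRows
    AS
    BEGIN
        SET NOCOUNT ON;
        -- Delete in batches to keep each transaction (and log usage) small.
        WHILE 1 = 1
        BEGIN
            DELETE TOP (10000) FROM dbo.HugeTable
            WHERE CreatedAt < DATEADD(YEAR, -2, SYSUTCDATETIME());
            IF @@ROWCOUNT = 0 BREAK;
        END
    END;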
If you can put the ad-hoc query in a stored procedure, you can then schedule it to run on the server, assuming you have the necessary privileges.
Note that this may not be a good idea, but it should work.
Unfortunately, I don't think you will be able to complete the query without an open connection in SSMS.
I can suggest the following approaches:
Pass the query to an Azure Function / AWS Lambda to execute on your behalf (perhaps expose it as a service via REST) and have it store or send the results somewhere accessible.
Start up a VM in the cloud and run the query from the VM via RDP. Once you are ready, re-establish your RDP connection to the VM and you will be able to view the outcome of the query.
Use an Azure automation runbook to execute the query on a scheduled trigger.

Best practice to recover a CRUD statement to a linked server if the connection is lost

I am looking for the best practice for the following scenario.
We have a CRM in our company. When an employee updates the record of a company, a trigger fires a stored procedure that runs a CRUD statement against the linked server hosting the SQL DB of our website.
Question:
What happens when the connection is lost in the middle of the CRUD statement and the SQL DB of the website does not get updated? What would be the best way to have the SQL statement processed again once the connection is back?
I read about Service Broker or Transactional Replication. Is one of these more appropriate for that situation?
The configuration:
Local: SQL server 2008 R2
Website: SQL server 2008
Here's one way of handling it, assuming that the CRUD statement isn't modal, in the sense that you have to give a response from the linked server to the user before anything else can happen:
The trigger stores, in a local table, all the meta-information you need to run the CRUD statement on the linked server.
A job runs every n minutes that reads the table, attempts to do the CRUD statements stored in the table, and flags them as done if the linked server returns any kind of success message. The ones that aren't successful stay as they are in the table until the next time the job runs.
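A minimal sketch of that pattern, with every table, server, and procedure name hypothetical (note that some connection-level linked-server errors can still abort the batch despite TRY/CATCH):

    -- Local "outbox" table populated by the trigger.
    CREATE TABLE dbo.PendingWebsiteChanges
    (
        Id        INT IDENTITY PRIMARY KEY,
        CompanyId INT       NOT NULL,
        Processed BIT       NOT NULL DEFAULT 0,
        CreatedAt DATETIME2 NOT NULL DEFAULT SYSUTCDATETIME()
    );
    GO
    -- Called from a job every n minutes; retries rows that previously failed.
    CREATE PROCEDURE dbo.usp_PushPendingChanges
    AS
    BEGIN
        SET NOCOUNT ON;
        DECLARE @id INT, @companyId INT;
        DECLARE cur CURSOR LOCAL FAST_FORWARD FOR
            SELECT Id, CompanyId FROM dbo.PendingWebsiteChanges WHERE Processed = 0;
        OPEN cur;
        FETCH NEXT FROM cur INTO @id, @companyId;
        WHILE @@FETCH_STATUS = 0
        BEGIN
            BEGIN TRY
                -- Four-part name call to the linked server (requires RPC Out).
                EXEC [WebServer].[WebsiteDb].dbo.usp_UpsertCompany @companyId;
                UPDATE dbo.PendingWebsiteChanges SET Processed = 1 WHERE Id = @id;
            END TRY
            BEGIN CATCH
                -- Leave the row flagged as unprocessed; the next run retries it.
            END CATCH;
            FETCH NEXT FROM cur INTO @id, @companyId;
        END
        CLOSE cur; DEALLOCATE cur;
    END;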
If the transaction failed in the middle of the trigger, everything would still be inside that transaction, and the data would not be written to either the CRM database or the web database. There is also a potential performance problem: the data modification query wouldn't return control to the client until both the local and the remote change had completed. The latter wouldn't be a problem if the query were executed asynchronously, but fire-and-forget isn't a good pattern for writing data.
Service Broker would allow you to write the modification into some binary data and take care of ensuring that it was delivered in order and properly processed at the remote end. Performance would not be so bad as the insert into the queue is designed to be completed quickly returning control to the trigger and allowing the original CRM query to complete.
However, it is quite a bit to set up. Even using Service Broker for simple tasks on a local server takes some work, partly because it is designed to handle secure, multicast, reliable, ordered conversations, so it needs a few layers to function. Once it is all there, it is very reliable and does a lot of the work you would otherwise have to do yourself to set up this sort of distributed conversation.
I have used it in the past to create a callback system from a website: a customer enters their number on the site and requests a callback, the request is sent via Service Broker over a VPN from the web server to the back-office server, and a client application waits for calls on the Service Broker queue. It worked very efficiently once it was set up.
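For orientation, the sending side of such a setup could look roughly like this sketch. Every object name here is hypothetical; the message type, contract, queues, services, and (for cross-server use) routes and security all have to be created first, and production code would typically reuse dialogs instead of opening one per trigger fire:

    CREATE TRIGGER trg_Company_Update ON dbo.Company AFTER UPDATE
    AS
    BEGIN
        SET NOCOUNT ON;
        DECLARE @h UNIQUEIDENTIFIER,
                @body XML = (SELECT CompanyId, Name FROM inserted
                             FOR XML PATH('Change'), TYPE);
        BEGIN DIALOG CONVERSATION @h
            FROM SERVICE [//crm/ChangeSenderService]
            TO SERVICE '//web/ChangeReceiverService'
            ON CONTRACT [//crm/ChangeContract]
            WITH ENCRYPTION = OFF;
        -- Queuing the message is fast; delivery and processing happen asynchronously.
        SEND ON CONVERSATION @h MESSAGE TYPE [//crm/ChangeMessage] (@body);
    END;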

Azure SQL Database trigger to insert audit info into Azure Table

I am working on a database auditing solution and was thinking of having SQL Server triggers take care of changes and inserting them into an auditing table. Since this is a SQL Azure Database and will be fairly large I am concerned about the cost of a growing database due to auditing.
In order to cut down on the costs needed for auditing purposes, I am considering storing the audit table (or tables) in Azure Tables instead of Azure SQL databases. So the question becomes, how to get the SQL Server trigger to get the changed data into Azure Tables?
The only thing I can come up with is to have an audit table (or tables) in SQL Database so the trigger can insert the rows locally, and then have a Worker Role pull rows from it every X seconds, move them to Azure Tables, and delete them from the SQL Database table so it doesn't grow too large.
Is there a better way to do this integration? Can I somehow put a message in a queue from a trigger?
Azure SQL Database (formerly SQL Azure) doesn't support CLR (hence no EXTERNAL NAME trigger parameter), so there's no way for your triggers to do anything outside of T-SQL. If you want audit content to go to a table, you could take the approach you came up with (temporarily write to a SQL table, then periodically move the content to an Azure Table). There are other approaches you could take (and this would be opinion/subjective, frowned upon here), but going with the queue concept for a minute, since you asked about queues, here is what you could do with Azure Queues:
You could use an Azure queue to specify an item to insert/update in your SQL database. The queue processing code could then be responsible for performing the update and writing to the Azure table. Since the queue messages must be explicitly deleted after processing, you could simply repeat the queue message processing if something failed during execution (e.g. you write to SQL but fail before writing to table storage). The message eventually becomes visible for reading again, if you don't delete it before its timeout value. As long as your operations are idempotent, you'd be ok with this pattern.
A cheaper solution than using worker roles would be a combination of Azure Scheduled Tasks (you can enable them for free to run every 15 minutes within Mobile Apps) and Azure Web Sites. Basically, the scheduled job runs every 15 minutes and makes an HTTP call to some code running within your Azure Web Site; that code does the same work you had outlined for your worker role.
Alternatively, use SQL Server system-versioned temporal tables to automatically handle the writing of audited records (i.e., changes) to corresponding history tables.
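A minimal sketch of a system-versioned table (dbo.Customer is an illustrative name; this requires SQL Server 2016+ or Azure SQL Database):

    CREATE TABLE dbo.Customer
    (
        CustomerId INT PRIMARY KEY,
        Name       NVARCHAR(200) NOT NULL,
        ValidFrom  DATETIME2 GENERATED ALWAYS AS ROW START NOT NULL,
        ValidTo    DATETIME2 GENERATED ALWAYS AS ROW END   NOT NULL,
        PERIOD FOR SYSTEM_TIME (ValidFrom, ValidTo)
    )
    WITH (SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.CustomerHistory));

    -- Every UPDATE/DELETE now writes the prior row version to dbo.CustomerHistory.
    -- Past states can be queried with the FOR SYSTEM_TIME clause:
    SELECT * FROM dbo.Customer FOR SYSTEM_TIME AS OF '2024-01-01T00:00:00';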

Pause SQL server replication temporarily

We have a transactional replication setup using 3 SQL Servers, 1st as publisher, 2nd as distributor, and 3rd as subscriber.
We have an activity to change the location of the replicated DB (the subscriber) using the detach and attach method. During this activity I will need to stop the SQL Server service, and hence all replicated transactions will fail.
What's the proper way to pause replication during this activity, so that when I attach the DB again and start the SQL Server service, replication resumes normally?
Thanks
Please see the following link for details of how to accomplish this:
Start and Stop a Replication Agent
The above article doesn't appear to give information on stopping the distribution agent; this can be achieved using the stored procs detailed in the link below:
Start/Stop SQL Server Replication Agent using TSQL
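As a rough sketch: replication agents run as SQL Server Agent jobs, so on the server hosting the distribution agent you can pause and resume it with msdb's job procedures. The job name below is illustrative; find the real one in msdb.dbo.sysjobs (distribution agent jobs are typically in the 'REPL-Distribution' category):

    -- Pause the distribution agent before detaching the subscriber DB.
    EXEC msdb.dbo.sp_stop_job  @job_name = N'MYPUB-MyDb-MYSUB-1';
    -- ...detach, move, and re-attach the database, restart the service...
    -- Resume; commands queued in the distribution database are then delivered.
    EXEC msdb.dbo.sp_start_job @job_name = N'MYPUB-MyDb-MYSUB-1';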

SQL Server Agent Job monitoring and notification

I want to create a stored procedure/job that will monitor all the SQL Server Agent jobs. When any SQL Server Agent job fails, it will send an email with the job name to the admin.
What is the best way to create such a job that will monitor all jobs?
You can approach this issue in several ways. Off the top of my head, you can use either an SSRS report scheduled for automatic delivery, or a SQL Agent job that runs periodically.
In both cases, an underlying stored procedure needs to be built that queries sysjobs and related tables in msdb.
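A sketch of what that underlying query could look like; the one-day look-back window and the Database Mail step are assumptions:

    -- step_id = 0 rows in msdb.dbo.sysjobhistory hold the overall job outcome;
    -- run_status = 0 means the job failed.
    SELECT j.name AS job_name, h.run_date, h.run_time, h.message
    FROM msdb.dbo.sysjobs AS j
    JOIN msdb.dbo.sysjobhistory AS h ON h.job_id = j.job_id
    WHERE h.step_id = 0
      AND h.run_status = 0
      AND h.run_date >= CONVERT(INT,
            CONVERT(CHAR(8), DATEADD(DAY, -1, GETDATE()), 112));
    -- Wrap this in a stored procedure and pass any rows found to
    -- msdb.dbo.sp_send_dbmail (Database Mail) to notify the admin.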
