Triggering Hangfire Jobs using SQL - sql-server

I have a Hangfire service running on one of my servers. I'm a DBA, and I'm sometimes asked to trigger jobs using the Dashboard, but connecting to the jobs' server takes me a long time due to connectivity and security issues.
To get around that, I want to trigger those jobs by inserting into Hangfire's tables in the database. I can already query those tables to find which job executed when, and whether it failed, succeeded, or is still enqueued. Does anyone know an approach to do this?
I've included a sample of two tables which I think will be used for this trick; their names are Hash and Set respectively:

Hangfire normally exposes a web dashboard, similar to Swagger in .NET (http://localhost:5000/hangfire); there should be an immediate "Trigger now" feature there. If not, a second option is changing the cron expression to run every minute, or maybe every 30 seconds.
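If the dashboard really is unreachable, one community workaround for *recurring* jobs is to move the job's next-run timestamp into the past so Hangfire's scheduler enqueues it on its next poll. A hedged sketch, assuming the default HangFire schema and a hypothetical job id; the internal table layout is storage-version dependent and not a supported API, so verify against your Hangfire.SqlServer version on a non-production copy first:

```sql
-- Hangfire keeps each recurring job's next execution time as a
-- Unix-epoch Score in the [Set] table under the key 'recurring-jobs'.
-- Setting the Score into the past makes the scheduler pick the job up
-- on its next poll. This touches Hangfire internals -- use with care.
UPDATE [HangFire].[Set]
SET    [Score] = 0                       -- any timestamp in the past
WHERE  [Key]   = 'recurring-jobs'
AND    [Value] = 'my-recurring-job-id';  -- hypothetical job id
```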

Related

Sending a "Select functionname001" to SQL to easily identify long running queries and where called from

We had a performance issue with one of our queries in our application that was taking 20 seconds to run. Using Azure Data Studio we figured out which SQL was long-running, and then eventually traced that back to the Entity Framework query that was executed.
I had an idea of adding a logging function to our code that is called before any data access is done (insert, select, delete, update, etc.) in the Entity Framework code.
What the function would do is simply execute a "Select user_functionname_now" SQL statement.
Then in the Azure Data Studio profiler we would see:
The image tells me that the user ran the load invoice function and it took 2717 milliseconds.
Granted if you have 100 users doing things in the app the logs might get mixed up a bit but it would go a long way in being able to figure out where in the code the long running query is executing from.
I was also thinking that we could add a fixed column to each query run so that you could see something like this:
But the issue with adding a column is that you return extra data every time a query is run, which means more traffic between the SQL Server and the application, and that is certainly not a good thing.
So my question is: Is adding a "Select XYZ" before every CRUD call a bad idea? If we add this logging call to some or all of our code where it executes our queries will it cause a performance issue/slowdown that I haven't thought about?
I don't think using any "select ..." is reasonable in your case.
Perhaps, SET CONTEXT_INFO or sp_set_session_context would be better.
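A sketch of the session-context alternative: the application tags the session once per logical operation, and the tag is then visible server-side without injecting an extra result set into every query (the key and value names here are illustrative):

```sql
-- Application side: tag the session once per logical operation
-- (sp_set_session_context requires SQL Server 2016+).
EXEC sp_set_session_context @key = N'app_function', @value = N'LoadInvoice';

-- Server side (e.g. from an audit or another query on the same session):
-- read the tag back.
SELECT SESSION_CONTEXT(N'app_function') AS running_function;
```

With the older SET CONTEXT_INFO approach, the value is also visible to a DBA via sys.dm_exec_sessions.context_info, which makes it easy to correlate with long-running requests.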
This is the scenario that EF Query Tags are for.

"Fire and forget" T-SQL query in SSMS

I have an Azure SQL Database where I sometimes want to execute ad-hoc SQL statements, that may take a long time to complete. For example to create an index, delete records from a huge table or copy data between tables. Due to the amounts of data involved, these operations can take anywhere from 5 minutes to several hours.
I noticed that if a SQL statement is executed in SSMS, then the entire transaction will be automatically rolled back, in case SSMS loses its connection to the server, before the execution is complete. This is problematic for very long running queries, for example in case of local wifi connectivity issues, or if I simply want to shut down my computer to leave the office.
Is there any way to instruct SQL Server or SSMS to execute a SQL statement without requiring an open connection? We cannot use SQL Server Agent jobs, as this is an Azure SQL DB, and we would like to avoid solutions based on other Azure services if possible, as this is just for simple ad-hoc needs.
We tried the "Discard results after execution" option in SSMS, but this still keeps an open connection until the statement finishes executing:
It is not an asynchronous solution I am looking for, as I don't really care about the execution result (I can always check if the query is still running using for example sys.dm_exec_requests). So in other words, a simple "fire and forget" mechanism for T-SQL queries.
While my initial requirements stated that we didn't want to use other Azure services, I have found that using Azure Data Factory seems to be the most cost-efficient and simple way to solve the problem. Other solutions proposed here seem to suffer from either high cost (spinning up VMs) or timeout limitations (Azure Functions, Azure Automation Runbooks), none of which apply to ADF when used for this purpose.
The idea is:
Put the long-running SQL statement into a Stored Procedure
Create a Data Factory pipeline with a Stored Procedure activity to execute the SP on the database. Make sure to set the Timeout and Retry values of the activity to sensible values.
Trigger the pipeline
Since no data movement is taking place in Data Factory, this solution is very cheap, and I have had queries running for 10+ hours using this approach, which worked fine.
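The first step above might look like this: wrap the ad-hoc statement in a procedure that the Stored Procedure activity can call. The procedure and index names are illustrative only:

```sql
CREATE OR ALTER PROCEDURE dbo.usp_RebuildInvoiceIndex  -- hypothetical name
AS
BEGIN
    SET NOCOUNT ON;
    -- The long-running ad-hoc statement goes here, e.g.:
    CREATE INDEX IX_Invoices_CustomerId
        ON dbo.Invoices (CustomerId)
        WITH (ONLINE = ON);  -- ONLINE requires the appropriate tier/edition
END;
```

The ADF pipeline then only needs the Stored Procedure activity pointing at this procedure, with its Timeout raised above the expected runtime.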
If you could put the ad-hoc query in a stored procedure, you could then schedule it to run on the server, assuming you have the necessary privileges.
Note that this may not be a good idea, but it should work.
Unfortunately I don't think you will be able to complete the query without an open connection in SSMS.
I can suggest the following approaches:
Pass the query to an Azure Function / AWS Lambda to execute on your behalf (perhaps exposed as a service via REST) and have it store or send the results somewhere accessible.
Start up a VM in the cloud and run the query from the VM via RDP. Once you are ready, re-establish your RDP connection to the VM and you will be able to view the outcome of the query.
Use an Azure automation runbook to execute the query on a scheduled trigger.

Implications of using waitfor delay task in ssis package on scheduled server

I have a question regarding the implications of using WAITFOR DELAY in an Execute SQL Task in an SSIS package. Here's what's going on: I have source data tables that, due to the amount of data and a linked server connection, yada yada, are dropped and re-created every night. Before my package that utilizes this data runs, I have a For Loop container. In this container I have an Execute SQL Task that checks whether my source tables exist; if they do not, it sends me an email via an email task, then goes to an Execute SQL Task with a WAITFOR DELAY of 30 minutes (before looping and checking for the source tables again). Now I thought I was pretty slick with this design, but others on my team are concerned because they do not know enough about this WAITFOR task. They are concerned that my package could possibly interfere with theirs, slow down the server, use resources, etc.
From my google searches I didn't see anything that actually seemed like it would cause issues. Can anyone here speak to the implications of using this task?
SQL WAITFOR is ideal for this requirement IMO - I've been using it in production SSIS packages for years with no issues. You can monitor it via SSMS Activity Monitor and see that it doesn't consume any resources.
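For reference, the Execute SQL Task's statement is just the plain T-SQL WAITFOR, which sleeps without burning CPU -- the session simply sits suspended until the delay elapses:

```sql
-- Sleep for 30 minutes, then return control to the SSIS For Loop.
WAITFOR DELAY '00:30:00';
```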

Efficient way to delete records every 10 mins

Problem at hand
Need to delete a few thousand records every 10 minutes from a SQL Server database table. This is part of a cleanup for older records.
Solutions under consideration
There's a .NET service running for some other functionality. The same service can be used with a timer to execute a SQL delete command on the DB.
SQL server job
Trigger
Key consideration for providing solution
Ours is a web product which gets deployed at different client locations. We want minimal operational overhead, as the resources doing deployment have very limited technical skill, and we also want to make sure there is little to no configuration requirement for our product.
Performance is very important, as this runs on a live transactional database.
This sounds like exactly the sort of work that SQL Server Agent jobs were intended to provide: database maintenance.
A scheduled job can execute a basic T-SQL statement that will delete the records you don't want any more, on whatever schedule you want it to run on. The job creation can be scripted to be part of your standard deployment scripts, which should negate the deployment costs.
Additionally, by utilizing an established part of SQL Server, you capitalize on the knowledge of other database administrators that will understand SQL jobs and be able to manage them.
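As a sketch, the job step's T-SQL could delete in small batches so the job never holds long locks on the live table (the table, column, and retention window here are illustrative):

```sql
-- Delete expired rows in batches to keep each transaction (and its locks)
-- short on the live transactional table.
DECLARE @batch int = 5000;
WHILE 1 = 1
BEGIN
    DELETE TOP (@batch) FROM dbo.EventLog        -- hypothetical table
    WHERE CreatedAt < DATEADD(DAY, -30, SYSUTCDATETIME());
    IF @@ROWCOUNT < @batch BREAK;                -- nothing (or little) left
END;
```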
I would not use a trigger... and would stick with SQL Server DTS or SSIS. Obviously you will need some kind of identifier, so I would use a timestamp column with an index... if that's not required, just fire off a TRUNCATE once nightly.
The efficiency of the delete comes from indexes; it has nothing to do with how the timer is triggered. It is very important that the 'old' records be easily identifiable by a range scan. If the DELETE has to scan the whole table to find these 'old' records, it will block all other activity. Usually in such cases the table is clustered by the datetime value first, and the unique primary key is delegated to a nonclustered index, if needed.
Now for how to pop the timer, you really have three alternatives:
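Following that range-scan advice, a sketch of the layout: cluster on the datetime so old rows form a contiguous range, and move the primary key to a nonclustered index (names are illustrative):

```sql
CREATE TABLE dbo.EventLog                       -- hypothetical table
(
    Id        bigint IDENTITY   NOT NULL,
    CreatedAt datetime2         NOT NULL,
    Payload   nvarchar(max)     NULL,
    CONSTRAINT PK_EventLog PRIMARY KEY NONCLUSTERED (Id)
);
-- Clustering on CreatedAt lets a DELETE on "CreatedAt < @cutoff"
-- touch only the leading pages of the clustered index.
CREATE CLUSTERED INDEX CX_EventLog_CreatedAt ON dbo.EventLog (CreatedAt);
```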
SQL Agent job
Conversation Timers
Application timer
A SQL Agent job is the best option for 10-minute intervals. The only drawback is that it does not work on SQL Express deployments. If that is a concern, then conversation timers and activated procedures are a viable alternative.
The last option has the disadvantage that the application must be running for the timer to trigger the deletion. If this is not a concern (i.e. if the application is not running, it doesn't matter that the records are not deleted), then it is OK. Note that ASP.NET applications are very bad hosts for such timers, because of the way IIS and ASP.NET may choose to recycle and put app pools to sleep.

Sending a summary of SQL Server Agent job failures

SQL Server Agent allows you to create Notifications at the moment a Job succeeds or fails, but I'd like to create a regular notification that sends a summary of the events for those who are one step removed from server maintenance.
Is there a way to setup summary notifications that spell out which jobs failed over the last 24 hours?
There are several system stored procedures that you might be able to use, or you could query directly against the system tables. If you have SSRS available, put together a report using those queries; you can then have it on-demand, or scheduled to be emailed to the necessary people at whatever time is right for you.
Specifically, check out sp_help_jobhistory and sp_help_jobhistory_full in msdb.
I'd be surprised if you couldn't use Google to find a demo of setting this up.
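As a starting point, you can query msdb directly for failed job steps in the last 24 hours. In sysjobhistory, run_status = 0 means failure, and run_date/run_time are stored as integers, decoded below with msdb's agent_datetime helper function:

```sql
SELECT  j.name      AS job_name,
        h.step_name,
        msdb.dbo.agent_datetime(h.run_date, h.run_time) AS run_at,
        h.message
FROM    msdb.dbo.sysjobhistory AS h
JOIN    msdb.dbo.sysjobs       AS j ON j.job_id = h.job_id
WHERE   h.run_status = 0        -- 0 = Failed
AND     msdb.dbo.agent_datetime(h.run_date, h.run_time)
            >= DATEADD(HOUR, -24, GETDATE())
ORDER BY run_at DESC;
```

Wrap that in an SSRS report (or a Database Mail job step) and you have the daily summary for the people one step removed from server maintenance.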
