Get SQL Agent job status without polling?

I'm trying to find a way to have the SQL Server 'SQL Agent' run a particular piece of code on job step events. I had hoped that there was a way using SMO to register a callback method so that when job steps begin or change status, my code is called. I'm not having any success. Is there any way to have these events pushed to me, rather than polling?

There is no SMO, DDL, or trace event exposed for job execution (as far as I can see in Books Online), so I don't think you can do what you want directly. It would help if you could explain exactly what your goal is (and your SQL Server version); someone may have a useful suggestion. For example, a trace may be a better idea if you want to gather audit or performance data.
In the meantime, here are some ideas (mostly not very 'nice' ones):
Convert your jobs into SSIS packages (they have a full event model)
Build something into the job steps themselves
Log job step completion to a table, and use a trigger on the table to run your code (a sketch follows this list)
Run a trace with logging to a table and use a trigger on the table to run your code
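To make the logging-table idea concrete, here is a minimal T-SQL sketch. The table, trigger, and notification procedure names are all assumptions; the job steps themselves would insert rows into the table:

    -- Assumed log table; each job step INSERTs a row here on completion.
    CREATE TABLE dbo.JobStepLog (
        LogId     int IDENTITY(1,1) PRIMARY KEY,
        JobName   sysname   NOT NULL,
        StepName  sysname   NOT NULL,
        Succeeded bit       NOT NULL,
        LoggedAt  datetime2 NOT NULL DEFAULT SYSUTCDATETIME()
    );
    GO
    -- The trigger "pushes" the event to your code as soon as the row arrives.
    CREATE TRIGGER dbo.trg_JobStepLog_Insert
    ON dbo.JobStepLog
    AFTER INSERT
    AS
    BEGIN
        SET NOCOUNT ON;
        DECLARE @job sysname, @step sysname, @ok bit;
        -- Assumes single-row inserts, which is how the job steps write here.
        SELECT TOP (1) @job = JobName, @step = StepName, @ok = Succeeded
        FROM inserted;
        EXEC dbo.NotifyJobStepChange @job, @step, @ok;  -- placeholder procedure
    END;

A job step would then end with something like: INSERT INTO dbo.JobStepLog (JobName, StepName, Succeeded) VALUES (N'NightlyLoad', N'Load staging', 1);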

Related

How will I know specifically when the tasks in my packages have run successfully in SSIS, and how can I show that in a log table in SQL?

Let's say there is a master package in which several tasks run on a daily basis. I want to determine specifically when those tasks have finished, like "table load completed" and "cube load completed". These steps run daily, but I have to show in a SQL table that, on this particular day, the table load started at this time and ended at that time, etc.
SSIS event handlers are the simplest means of turning an SSIS script into a reliable system that is auditable, reacts appropriately to error conditions, reports progress and allows instrumentation and monitoring your SSIS packages. They are easy to implement, and provide a great deal of flexibility. Rob Sheldon once again provides the easy, clear introduction.
You can use the OnPostExecute event handler, which fires after a task has run (a sketch follows below).
Or you can use containers and then precedence constraints with Success and Failure conditions.
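As a rough sketch of the event-handler route: drop an Execute SQL Task into the task's OnPostExecute event handler and have it write an audit row, mapping the SSIS system variables System::PackageName and System::SourceName to the statement parameters. The table name here is an assumption:

    -- Assumed audit table for task completions.
    CREATE TABLE dbo.SsisTaskLog (
        PackageName nvarchar(200) NOT NULL,
        TaskName    nvarchar(200) NOT NULL,
        FinishedAt  datetime2     NOT NULL DEFAULT SYSUTCDATETIME()
    );

    -- SQLStatement for the Execute SQL Task in the OnPostExecute handler;
    -- map System::PackageName -> parameter 0, System::SourceName -> parameter 1.
    INSERT INTO dbo.SsisTaskLog (PackageName, TaskName) VALUES (?, ?);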

SSIS - what is being processed right now?

For a report, I need to know which sub-components (parts of the control flow) of an SSIS package are being processed right now. I know about the catalog view catalog.executable_statistics, but it seems a record is added there only after the execution of a given execution path has finished.
Is there a way to check what execution paths already entered pre-execute phase and not yet entered the post-execute phase? In other words, what is the package working on right now?
We use SQL Server 2016 Enterprise edition.
Edit:
I prefer a solution that would work with the default logging level.
One option is querying catalog.execution_component_phases, which will display the most recently run execution phase of each sub-component within a Data Flow Task while the package is executing. This lets you see which component has started a phase such as PreExecute but hasn't yet begun a subsequent one like PostExecute or ReleaseConnections. To use this, you'll need to set the logging level to either Performance or Verbose as well.
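A minimal sketch of that query, assuming you know the execution id you want to watch (the id value below is a placeholder):

    -- Components of a running execution that have started a phase (e.g.
    -- PreExecute) but not yet finished it: end_time is still NULL.
    -- Requires the Performance or Verbose logging level.
    DECLARE @execution_id bigint = 12345;  -- placeholder execution id

    SELECT package_name, task_name, subcomponent_name, phase, start_time
    FROM   SSISDB.catalog.execution_component_phases
    WHERE  execution_id = @execution_id
      AND  end_time IS NULL
    ORDER  BY start_time;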
As far as I know, there isn't any logging out of the box that will tell you the exact state of all sub-components in the package while executing it from SQL Server.
I simply use an SQL task at the start of some steps that inserts a row and, when done, updates it with specifics like start/end datetime, package name, and number of rows processed. You could add a column specifying the sub-components affected this way (a sketch follows).
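As a rough illustration of that pattern (all names are assumptions), the step's opening Execute SQL Task inserts the row and the closing one updates it:

    -- Assumed log table.
    CREATE TABLE dbo.PackageStepLog (
        LogId       int IDENTITY(1,1) PRIMARY KEY,
        PackageName nvarchar(200) NOT NULL,
        StepName    nvarchar(200) NOT NULL,
        StartedAt   datetime2 NOT NULL,
        EndedAt     datetime2 NULL,
        RowsLoaded  int       NULL
    );

    -- At the start of the step (capture the returned LogId in a variable):
    INSERT INTO dbo.PackageStepLog (PackageName, StepName, StartedAt)
    VALUES (?, ?, SYSUTCDATETIME());
    SELECT CAST(SCOPE_IDENTITY() AS int) AS LogId;

    -- When the step finishes:
    UPDATE dbo.PackageStepLog
    SET    EndedAt = SYSUTCDATETIME(), RowsLoaded = ?
    WHERE  LogId = ?;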

TADOConnection.OnExecuteComplete / OnWillExecute event not called with TADOTable

I'm trying to trace SQL commands. I read this post: How can I monitor the SQL commands sent over my ADO connection?
It works for SELECT, but not for DELETE/INSERT/UPDATE...
Configuration : A TADOConnection (MS SQL Server), a TADOTable, a TDatasource, a TDBGrid with TDBNavigator.
So I can trace the SELECT which occurs when the table is open, but nothing occurs when I use the DBNavigator to UPDATE, INSERT, or DELETE records.
When I use a TADOCommand to delete a record, that works too. It seems it only fails when I use the DBNavigator, which may be a clue, but I haven't found anything about that.
Thanks in advance
Hopefully someone will be able to point you in the direction of a pre-existing library that does your logging for you. In particular, if FireDAC is an option, you might take a look at what it says in here:
http://docwiki.embarcadero.com/RADStudio/XE8/en/Database_Alerts_%28FireDAC%29
Of course, converting your app from using ADO to FireDAC may not be an option for you, but depending on how great your need is, you could conceivably extract the Sql-Server-specific method of event alerting FireDAC uses into an ADO application. I looked into this briefly a while ago and it looked like it would be fairly straightforward.
Prior to FireDAC, I implemented a server-side solution that caught Inserts, Updates and Deletes. I had to do this about 10 years ago (for Sql Server 2000) and it was quite a performance to set up.
In outline it worked like this:
Sql Server supports what MS used to call "extended stored procedures" which are implemented in custom DLLs (MS may refer to them by a different name these days or have even stopped supporting them). There are Delphi libraries around that provide a wrapper to enable these to be written in Delphi. Of course, these days, if your Sql Server is 64-bit, you need to generate a 64-bit DLL.
You write Extended Stored Procedures to log the changes any way you want, then write custom triggers in the database for Inserts, Updates and Deletes, that feed the data of the rows involved to your XSPs.
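A rough sketch of what one of those triggers might look like (the table, column, and XSP names are all invented for illustration; xp_logchange stands in for the Delphi-implemented extended stored procedure):

    -- Hypothetical trigger feeding updated rows to a logging XSP.
    CREATE TRIGGER dbo.trg_Customers_Update
    ON dbo.Customers
    AFTER UPDATE
    AS
    BEGIN
        SET NOCOUNT ON;
        DECLARE @msg nvarchar(400);
        -- inserted can hold many rows, so walk them one at a time.
        DECLARE log_cur CURSOR LOCAL FAST_FORWARD FOR
            SELECT N'UPDATE Customers id=' + CAST(CustomerId AS nvarchar(12))
            FROM   inserted;
        OPEN log_cur;
        FETCH NEXT FROM log_cur INTO @msg;
        WHILE @@FETCH_STATUS = 0
        BEGIN
            EXEC master..xp_logchange @msg;  -- assumed Delphi-built XSP
            FETCH NEXT FROM log_cur INTO @msg;
        END
        CLOSE log_cur;
        DEALLOCATE log_cur;
    END;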
As luck would have it, my need for this fell away just as I was completing the project, before I got to stress-testing and performance-profiling it but it did work.
Of course, not in every environment will you be allowed/able to install s/ware and trigger code on the Sql Server.
For interest, you might also take a look at https://msdn.microsoft.com/en-us/library/ms162565.aspx, which provides an SMO object for tracing Sql Server activity, though it seems to be 32-bit only at the moment.
For amusement, I might have a go at implementing an event handler for the recordset object that underlies a TAdoTable/TAdoQuery, which should be able to catch the changes you're after, but don't hold your breath ...
And, of course, if you're only interested in client-side logging, one way to do it is to write handlers for your dataset's AfterEdit, AfterInsert and AfterDelete events. Those wouldn't guarantee that the changes are ever actually applied at the server, of course, but could provide an accurate record of the user's activity, if that's sufficient for your needs.

Implications of using WAITFOR DELAY task in SSIS package on scheduled server

I have a question regarding the implications of using WAITFOR DELAY in an Execute SQL Task in an SSIS package. Here's what's going on: I have source data tables that, due to the amount of data, the linked server connection, yada yada, are dropped and re-created every night. Before my package that utilizes this data runs, I have a For Loop container. In this container an Execute SQL Task checks whether my source tables exist; if they do not, it sends me an email via an email task, then moves on to an Execute SQL Task with a WAITFOR DELAY of 30 minutes (before looping and checking for the source tables again). Now I thought I was pretty slick with this design, but others on my team are concerned because they don't know enough about this WAITFOR task. They are concerned that my package could interfere with theirs, slow down the server, use resources, etc.
From my google searches I didn't see anything that actually seemed like it would cause issues. Can anyone here speak to the implications of using this task?
SQL WAITFOR is ideal for this requirement IMO - I've been using it in production SSIS packages for years with no issues. You can monitor it via SSMS Activity Monitor and see that the waiting session simply sleeps; it holds a connection but consumes essentially no CPU.
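For reference, the wait step can be as small as this (the table name is a placeholder):

    -- Re-check for the source table; if it isn't there yet, sleep 30 minutes.
    -- WAITFOR DELAY suspends the session without busy-waiting.
    IF OBJECT_ID(N'dbo.SourceTable', N'U') IS NULL
        WAITFOR DELAY '00:30:00';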

SQL Server Nested Triggers not firing as expected

I have upgraded a SQL Server 6.5 database to SQL Server 2012 by scripting the schema from 6.5, fixing any syntax issues in that script, and then using the script to create a 2012 database.
At the same time I have upgraded the front-end application from PowerBuilder 6 to 12.5.
When I perform a certain action in the application, it inserts data into a given table. This table has a trigger associated with the INSERT action, and within this trigger other tables are updated. This causes additional triggers to fire on those tables as well.
Initially the PowerBuilder application reports the following error:
Row changed between retrieve and update.
No changes made to database.
Now I understand what this error message means but this is where it gets really 'interesting'!
In order to understand what is happening in the triggers, I decided to insert data into a logging table from within the triggers so that I could better understand the flow of events. This had a rather unexpected side effect: the PowerBuilder application no longer reports any errors, and when I check the database, all data is written as expected.
If I remove these lines of logging, the application once again fails with the error message previously listed.
My question is: can anyone explain why adding a few lines of logging could possibly have this side effect? It almost seems like the act of adding logging that writes data to a logging table slows things down or somehow serializes the triggers to fire in the correct order...
Thanks in advance for any insight you can offer :-)
Well, let's recap why this message comes up (I have a longer explanation at http://www.techno-kitten.com/PowerBuilder_Help/Troubleshooting/Database_Irregularities/database_irregularities.html). It's basically because the database can no longer find the data it's trying to UPDATE, based on the WHERE clause generated by the DataWindow. Triggers cause this by changing data in columns of the updated table, so the WHERE <column> = <original value> logic fails.
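To illustrate with an invented table: the DataWindow generates an optimistic UPDATE whose WHERE clause repeats the originally retrieved values, so a trigger that has already modified one of those columns makes the UPDATE match zero rows:

    -- Invented example of a DataWindow-generated optimistic UPDATE.
    UPDATE dbo.Orders
    SET    Status = 'SHIPPED'
    WHERE  OrderId    = 42
      AND  Status     = 'OPEN'                  -- value as originally retrieved
      AND  ModifiedAt = '2024-01-01 10:00:00';  -- if a trigger already bumped
                                                -- ModifiedAt, zero rows match and
                                                -- PB reports "Row changed between
                                                -- retrieve and update"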
If I were troubleshooting this, I'd do the following for both versions of the trigger:
retrieve your data in the app and also in a DBMS tool, cache the data from both (data from the Original buffer in PB; a debugger breakpoint in PB may help), and compare the columns you expect to be in the WHERE clause (client-side manipulation of data and status flags can also cause this problem)
make your data changes and initiate the save
from a breakpoint in the SQLPreview events (this is likely multiple rows if it's trigger-caused), cache the pending UPDATE statements
while still paused in SQLPreview, use the WHERE clause in the UPDATE statements to SELECT the data with the DBMS tool
Somewhere through all this, you'll identify where the process is breaking down in the failure case, and figure out why it passes in the good case. I suspect you'll find a much simpler solution than you're hypothesizing.
Good luck,
Terry
