SSIS 2008 task dependency configuration - sql-server

I have a package that, before any ETL happens, checks the source tables to ensure they exist. If they do not exist, it sends me an email via a Send Mail Task, then waits 30 minutes via an Execute SQL Task before trying again via a For Loop Container.
I'm trying to configure this package so that if it loops and then finally succeeds, I get an email telling me it succeeded. But I don't want an email EVERY time it succeeds, just when the loop occurred and then finished.
So if the source data does not exist, do not proceed to the next container; instead send me an email, wait 30 minutes, and try again. If the source tables finally appear, then proceed to the next container and send me an email.

If I understand your steps correctly, you have an Execute SQL Task which checks for the schema; if the schema is not present, it sends an email, waits 30 minutes, and loops back to check the schema again. You can add a boolean variable, say SendSuccessEmail, which can be set with something like this:
DECLARE @SendSuccessEmail BIT = 0;

WHILE NOT EXISTS (
    SELECT TOP 1 1
    FROM sys.tables
    WHERE name = 'checktable'
)
BEGIN
    SET @SendSuccessEmail = 1;
    WAITFOR DELAY '00:30:00';
END

SELECT @SendSuccessEmail AS SuccessEmailVariable;
In your package, you can get this value and use it to decide whether to send your email.
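For example, if the Result Set of that Execute SQL Task maps the SuccessEmailVariable column to a Boolean package variable (assumed here to be named User::SendSuccessEmail), the precedence constraint leading to the Send Mail Task can use an expression like:
@[User::SendSuccessEmail] == TRUE
That way the email path is only taken when at least one wait cycle actually occurred.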

Related

Microsoft SQL Server Management Studio - Alerts with additional lock information

We want to have an alert when a lock is waiting longer than 60 seconds. The alert script below is performing as expected.
But we'd like to have more information like the locked Session ID, Locking Status, login name, etc.
Is there a way to include this in the @notification_message?
USE [msdb]
GO

EXEC msdb.dbo.sp_update_alert
    @name = N'Total Lock Wait Time (ms) > 60000',
    @message_id = 0,
    @severity = 0,
    @enabled = 1,
    @delay_between_responses = 0,
    @include_event_description_in = 1,
    @database_name = N'',
    @notification_message = N'',
    @event_description_keyword = N'',
    @performance_condition = N'MSSQL$DB:Locks|Lock Wait Time (ms)|_Total|>|60000',
    @wmi_namespace = N'',
    @wmi_query = N'',
    @job_id = N'00000000-0000-0000-0000-000000000000'
GO

EXEC msdb.dbo.sp_update_notification
    @alert_name = N'Total Lock Wait Time (ms) > 60000',
    @operator_name = N'me',
    @notification_method = 1
GO
The msdb.dbo.sp_update_alert system stored procedure updates records in the msdb.dbo.sysalerts table. The nvarchar(512) parameter, "@notification_message", gets stored in the msdb.dbo.sysalerts.notification_message column. When an alert is triggered, the contents of that column are pulled for the message. I have not tried this before, but one thing you could try is to create a SQL Agent job that modifies the value in msdb.dbo.sysalerts.notification_message and attach that job to the notification by using either the @job_id or @job_name parameters. If you're lucky, the job will be executed before the notification is sent out, thus "dynamically" changing the text of the notification. What I expect is more likely is that the job will be run at the same time and would only affect the next time that this alert is triggered. But depending on what you're looking to see, this might be good enough.
For more information, run sp_helptext 'sp_update_alert' in the msdb database and you can see what it's doing.
One other option is to have your SQL Agent job send a message using sp_send_dbmail. Then you can customize your message all you want.
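For example, a job step along these lines could gather blocking details from the DMVs and mail them directly. This is a minimal sketch; the profile name and recipient are assumptions, and the columns chosen are just a starting point:
DECLARE @body NVARCHAR(MAX);

-- Build one line per blocked request, naming the blocker and the wait.
SELECT @body = STUFF((
    SELECT '; session_id=' + CAST(r.session_id AS VARCHAR(10))
         + ', blocked_by=' + CAST(r.blocking_session_id AS VARCHAR(10))
         + ', login=' + s.login_name
         + ', wait=' + ISNULL(r.wait_type, '')
    FROM sys.dm_exec_requests r
    JOIN sys.dm_exec_sessions s ON s.session_id = r.session_id
    WHERE r.blocking_session_id <> 0
    FOR XML PATH('')), 1, 2, '');

EXEC msdb.dbo.sp_send_dbmail
    @profile_name = 'DBA',                -- assumed Database Mail profile
    @recipients = 'me@example.com',       -- assumed recipient
    @subject = 'Total Lock Wait Time (ms) > 60000',
    @body = ISNULL(@body, 'No blocking sessions found at send time.');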

I need to truncate and replace data in a table that is frequently being queried

Primary Question:
I want to truncate and refresh a table in SQL Server, but want to wait until any queries currently accessing the table have finished. Is this a simple setting in SQL Server, or do I need to create some logic to accomplish it?
Detailed Description:
I have a VB application that sits on about 300 terminals. The application calls a SQL Server (2008 R2) stored procedure ([spGetScreenData]) every 2 minutes to get the latest sales data.
[spGetScreenData] creates a series of temp tables and returns a select query of about 200 rows and 100 columns. It takes about 8 seconds to execute.
My goal is to create a new stored procedure ([spRefreshScreenData]) that executes every two minutes which will refresh the data in a table ([SCREEN_DATA]). I will then change [spGetScreenData] to simply query [SCREEN_DATA].
The job that refreshes [SCREEN_DATA] first sets a flag in a status table to 'RUNNING' while it executes. Once complete, it sets that status to 'COMPLETED'.
[spGetScreenData] checks the status of the flag before querying and waits (for a period of time) until it's ready. Something like...
DECLARE @Condition AS BIT = 0
      , @Count AS INT = 0
      , @CycleCount AS INT = 10  -- 10 cycles (20 seconds)

WHILE @Condition = 0 AND @Count < @CycleCount
BEGIN
    SET @Count = @Count + 1
    IF EXISTS (SELECT Status
               FROM tbl_Process_Status
               WHERE Process = 'POS_Table_Refresh'
                 AND Status = 'Running')
        WAITFOR DELAY '00:00:02'  -- wait 2 seconds
    ELSE
        SET @Condition = 1
END

SELECT *
FROM SCREEN_DATA
WHERE (Store = @Store OR @Store IS NULL)
My concern has to do with [spRefreshScreenData]. When [spRefreshScreenData] begins its truncation, there could be dozens of requests for the data currently running.
Will SQL Server simply wait until the requests are done before truncating? Is there a setting I have to set so these queries aren't disrupted?
Or do I have to build some mechanism to wait until all requests are completed before starting the truncation?
The job that refreshes [SCREEN_DATA] first sets a flag in a status table to 'RUNNING' while it executes. Once complete, it sets that status to 'COMPLETED'.
[spGetScreenData] checks the status of the flag before querying and waits (for a period of time) until it's ready
Don't. Use app locks. The readers (spGetScreenData) acquire the app lock in shared mode; the writer (the refresh job) requests it in exclusive (X) mode. See sp_getapplock.
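A minimal sketch of the app-lock pattern (the resource name is an assumption; both procedures just have to agree on it, and production code should also check the return codes):
-- Reader side (spGetScreenData): take the lock shared, query, release.
EXEC sp_getapplock @Resource = 'ScreenDataRefresh', @LockMode = 'Shared',
     @LockOwner = 'Session', @LockTimeout = 20000;
SELECT * FROM dbo.SCREEN_DATA WHERE (Store = @Store OR @Store IS NULL);
EXEC sp_releaseapplock @Resource = 'ScreenDataRefresh', @LockOwner = 'Session';

-- Writer side (spRefreshScreenData): take it exclusive before truncating.
EXEC sp_getapplock @Resource = 'ScreenDataRefresh', @LockMode = 'Exclusive',
     @LockOwner = 'Session', @LockTimeout = 60000;
TRUNCATE TABLE dbo.SCREEN_DATA;
-- ... reload the table here ...
EXEC sp_releaseapplock @Resource = 'ScreenDataRefresh', @LockOwner = 'Session';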
But even this is not necessary. You can build the new data online, while the queries continue, without affecting them, by using a staging table, i.e. a different table than the one queried by the apps. When the rebuild is complete, simply swap the original table with the staging one, using either fast SWITCH operations (see Transferring Data Efficiently by Using Partition Switching) or the good ole sp_rename trick.
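A minimal sketch of the sp_rename variant, assuming a staging table dbo.SCREEN_DATA_STAGE that has just been rebuilt with fresh data:
BEGIN TRAN;
    EXEC sp_rename 'dbo.SCREEN_DATA', 'SCREEN_DATA_OLD';
    EXEC sp_rename 'dbo.SCREEN_DATA_STAGE', 'SCREEN_DATA';
    EXEC sp_rename 'dbo.SCREEN_DATA_OLD', 'SCREEN_DATA_STAGE';
COMMIT;
-- Readers now see the new data; the old rows sit in SCREEN_DATA_STAGE,
-- ready to be truncated before the next rebuild.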

How to control SSIS package flow based on record count returned by a query?

I'm trying to first check if there are any new records to process before I execute my package. I have a bit field called "processed" in a SQL Server 2008 R2 table that has a value of 1 if processed and 0 if not.
I want to query it thus:
select count(processed) from dbo.AR_Sale where processed = 0
If the result is 0 I want to send an e-mail saying the records are not there. If greater than zero, I want to proceed with package execution. I am new to SSIS and can't seem to figure out what tool to use for this.
My package has a data flow item with an OLE DB connection inside it to the database. The connection uses a query to return the records. Unfortunately, the query completes successfully (as it should) even if there are no records to process. Here is the query:
Select * from dbo.AR_Sale where processed = 0
I copy these records to a data warehouse and then run another query to update the source table by changing the processed field from 0 to 1.
Any help would be greatly appreciated.
One option would be to make use of a precedence constraint in conjunction with an Execute SQL Task to achieve this functionality. Here is an example of how to achieve this in SSIS 2008 R2.
I created a simple table based on the information provided in the question.
Create table script:
CREATE TABLE dbo.AR_Sale(
    Id int NOT NULL IDENTITY PRIMARY KEY,
    Item varchar(30) NOT NULL,
    Price numeric(10, 2) NOT NULL,
    Processed bit NOT NULL
)
GO
Then I populated the new table with some sample data. You can see that one of the rows has the Processed flag set to zero.
Populate table script:
INSERT INTO dbo.AR_Sale (Item, Price, Processed) VALUES
('Item 1', 23.84, 1),
('Item 2', 72.19, 0),
('Item 3', 45.73, 1);
On the SSIS package, create the following two variables.
Processed of data type Int32
SQLFetchCount of data type String with value set to SELECT COUNT(Id) ProcessedCount FROM dbo.AR_Sale WHERE Processed = 0
On the SSIS project, create an OLE DB data source that points to the database of your choice. Add the data source to the package's connection manager. In this example, I have named the data source Practice.
On the package's Control Flow tab, drag and drop Execute SQL Task from the toolbox.
Configure the General page of the Execute SQL Task as shown below:
Give a proper Name, say Check pre-execution
Change ResultSet to Single row because the query returns a scalar value
Set the Connection to the OLE DB datasource, in this example Practice
Set the SQLSourceType to Variable because we will use the query stored in the variable
Set the SourceVariable to User::SQLFetchCount
Click Result Set page on the left section
Configure the Result Set page of the Execute SQL Task as shown below:
Click Add button to add a new variable which will store the count value returned by the query
Change the Result Name to 0 to indicate the first column value returned by query
Set the Variable Name to User::Processed
Click OK
On the package's Control Flow tab, drag and drop Send Mail Task and Data Flow Task from the toolbox. The Control Flow tab should look something like this:
Right-click on the green arrow that joins the Execute SQL Task and Send Mail Task. Click Edit... (the green arrow is called a Precedence Constraint).
On the Precedence Constraint Editor, perform the following steps:
Set Evaluation operation to Expression
Set the Expression to @[User::Processed] == 0. It means that this path is taken only when the variable Processed is set to zero.
Click OK
Right-click on the green arrow that joins the Execute SQL task and Data Flow Task. Click Edit... On the Precedence Constraint Editor, perform the following steps:
Set Evaluation operation to Expression
Set the Expression to @[User::Processed] != 0. It means that this path is taken only when the variable Processed is not zero.
Click OK
The Control Flow tab would look like this. You can configure the Send Mail Task to send email and the Data Flow Task to update the data according to your requirements.
When I execute the package with the data from the populate table script, the package executes the Data Flow Task because there is one row that is not processed.
When I execute the package after setting the Processed flag to 1 on all the rows in the table using the script UPDATE dbo.AR_Sale SET Processed = 1, the package executes the Send Mail Task.
Your SSIS design could be:
Source:
SELECT COUNT(processed) Cnt FROM dbo.AR_Sale WHERE processed = 0
Conditional Split transformation (under Data Flow Transformations):
Output 1: Order 1, Name - EmailCnt, Condition - Cnt == 0
Output 2: Order 2, Name - ProcessRows, Condition - Cnt > 0
Output links:
EmailCnt link: send email
ProcessRows link: Data Flow Task

How to accurately detect if a SQL Server job is running and deal with the job already running?

I'm currently using code like this to detect if a SQL server job is running. (this is SQL Server 2005, all SP's)
return (select isnull(
    (select top 1 CASE
                      WHEN current_execution_status = 4 THEN 0
                      ELSE 1
                  END
     from openquery(devtestvm, 'EXEC msdb.dbo.sp_help_job')
     where current_execution_status = 4
       and name = 'WQCheckQueueJob' + cast(@Index as varchar(10))
    ), 1)
)
No problems there, and generally speaking, it works just fine.
But.... (always a but)
On occasion, I'll invoke this, get back a "job is not running" result, at which point I'll try and start the job, via
exec msdb.dbo.sp_start_job @JobName
and SQL will return that "SQLAgent has refused to start the job because it already has a pending request".
Ok. Also not a problem. It's conceivable that there's a slight window where the target job could get started after this code checks whether it's running, but before it tries to start it. However, I can just wrap that up in a try catch and ignore the error, right?
begin try
    if dbo.WQIsQueueJobActive(@index) = 0 begin
        exec msdb.dbo.sp_start_job @JobName
        break
    end
end try
begin catch
    -- nothing here
end catch
here's the problem, though.
9 times out of 10, this works just fine. SQL agent will raise the error, it's caught, and processing just continues on, since the job is already running, no harm no foul.
But occasionally, I'll get a message in the Job History view (keep in mind the above code to detect if a specific job is running and start it if not is actually running from another job) saying that the job failed because "SQLAgent has refused to start the job because it already has a pending request".
Of course, this is the exact error that TRY CATCH is supposed to be handling!
When this happens, the executing job just dies, but not immediately from what I can tell, just pretty close. I've put logging all over the place and there's no consistency. One time it fails, it'll be at place A, the next time at place B. In some cases, place A and place B have nothing but a
select @var = 'message'
in between them. Very strange. Basically, the job appears to be unceremoniously dumped, and anything left to execute in the job is not executed at all.
However, if I remove the "exec StartJob" (or have it invoked exactly one time, when I KNOW that the target job can't already be running), everything works perfectly and all my processing in the job runs through.
The purpose behind all this is to have a job started as a result of a trigger (among other things), and, if the job is already started, there's really no need to "start it again".
Anyone ever run into behavior like this with SQL Agent's Job handling?
EDIT:
Current flow of control is like so:
Change to a table (update or insert)...
fires trigger which calls...
a stored proc which calls...
sp_Start_Job which...
starts a specific job which...
calls another stored proc (called CheckQueue) which...
performs some processing and...
checks several tables and depending on their contents might...
invoke sp_start_job on another job to start up a second, simultaneous job
to process the additional work (this second job calls the CheckQueue sproc also
but the two invocations operate on completely separate sets of data)
First of all, have you had a chance to look at service broker? From your description, it sounds like that's what you actually want.
The difference would be that instead of starting a job, you put your data into an SB queue, and SB will call your processing proc asynchronously, completely side-stepping issues with already-running jobs etc. It will auto-spawn/terminate additional threads as demand dictates, and it takes care of ordering etc.
Here's a good (and vaguely related) tutorial. http://www.sqlteam.com/article/centralized-asynchronous-auditing-with-service-broker
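For orientation, here is a minimal monolog-style sketch (all object names are illustrative, Service Broker must be enabled in the database, and the assumed dbo.CheckQueue proc would need a RECEIVE loop to drain the queue):
-- One-time setup: a queue, a service on the default contract, and
-- internal activation so SB launches the processing proc on demand.
CREATE QUEUE dbo.WorkQueue;
CREATE SERVICE WorkService ON QUEUE dbo.WorkQueue ([DEFAULT]);
ALTER QUEUE dbo.WorkQueue WITH ACTIVATION (
    STATUS = ON,
    PROCEDURE_NAME = dbo.CheckQueue,   -- assumed processing proc
    MAX_QUEUE_READERS = 2,             -- up to 2 parallel readers
    EXECUTE AS OWNER);
GO

-- In the trigger: enqueue a message instead of calling sp_start_job.
DECLARE @h UNIQUEIDENTIFIER;
BEGIN DIALOG CONVERSATION @h
    FROM SERVICE WorkService TO SERVICE 'WorkService'
    WITH ENCRYPTION = OFF;
SEND ON CONVERSATION @h (CAST('check queue' AS VARBINARY(MAX)));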
Let's assume that you can't use SB for whatever reason (but seriously, do!).
What about using the job spid's context_info?
Your job calls a wrapper proc that execs each step individually.
The first statement inside the wrapper proc is
DECLARE @context_info VARBINARY(30)
SET @context_info = CAST('MyJob1' AS VARBINARY)
SET CONTEXT_INFO @context_info
When your proc finishes (or in your catch block)
SET CONTEXT_INFO 0x0
When you are looking at calling your job, do this:
IF NOT EXISTS (SELECT * FROM master..sysprocesses WITH (NOLOCK) WHERE context_info=CAST('MyJob1' AS VARBINARY))
EXEC StartJob
When your wrapper proc terminates or the connection is closed, your context_info goes away.
You could also use a global temp table (e.g. ##JobStatus). It will disappear when all spids that reference it disconnect, or when it's explicitly dropped.
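A minimal sketch of the global temp table variant (names are illustrative; note there is still a small race between the existence check and the CREATE):
-- At the top of the wrapper proc: bail out if another instance is running.
IF OBJECT_ID('tempdb..##JobStatus') IS NOT NULL
    RETURN;
CREATE TABLE ##JobStatus (JobName SYSNAME, StartedAt DATETIME DEFAULT GETDATE());
INSERT INTO ##JobStatus (JobName) VALUES ('MyJob1');

-- ... do the work ...

DROP TABLE ##JobStatus;  -- also vanishes on its own if the spid disconnects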
Just a few thoughts.
I have a query that gives me the running jobs; maybe it can help you. It has been working for me, but if you find any fault in it, let me know and I will try to rectify it. Cheers.
-- get the running jobs
--marcelo miorelli
-- 10-dec-2013
SELECT sj.name
,DATEDIFF(SECOND,aj.start_execution_date,GetDate()) AS Seconds
FROM msdb..sysjobactivity aj
JOIN msdb..sysjobs sj on sj.job_id = aj.job_id
WHERE aj.stop_execution_date IS NULL -- job hasn't stopped running
AND aj.start_execution_date IS NOT NULL -- job is currently running
--AND sj.name = 'JobName'
and not exists( -- make sure this is the most recent run
select 1
from msdb..sysjobactivity new
where new.job_id = aj.job_id
and new.start_execution_date > aj.start_execution_date )
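One way to fold this into the original problem (the job name is illustrative) is to use it as the guard before sp_start_job; note that the same small check-then-start race window still applies:
IF NOT EXISTS (
    SELECT 1
    FROM msdb..sysjobactivity aj
    JOIN msdb..sysjobs sj ON sj.job_id = aj.job_id
    WHERE sj.name = 'WQCheckQueueJob1'          -- target job
      AND aj.start_execution_date IS NOT NULL   -- has started
      AND aj.stop_execution_date IS NULL        -- hasn't stopped
      AND NOT EXISTS (                          -- most recent run only
          SELECT 1 FROM msdb..sysjobactivity new
          WHERE new.job_id = aj.job_id
            AND new.start_execution_date > aj.start_execution_date))
    EXEC msdb.dbo.sp_start_job @job_name = 'WQCheckQueueJob1';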
To deal with a job already running:
1. Open Task Manager.
2. Check if a process with image name "DTExec.exe" is running.
3. If the process is running and it is the problematic job, execute "End Process".

How to send email from execute sql task?

How can I send an email if an Execute SQL Task executes and loads a table? So, if the table is loaded with any records, send an email; if it is not loaded, no email.
Appreciate any help.
After that task, add another task that checks if there are any rows in the table, something like:
IF EXISTS (SELECT 1 FROM Yourtable)
BEGIN
    EXEC msdb.dbo.sp_send_dbmail ---------
END
Read up on sp_send_dbmail here: http://msdn.microsoft.com/en-us/library/ms190307.aspx
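Filled in, the call might look like this (the profile name and recipient are assumptions; sp_send_dbmail requires a configured Database Mail profile):
IF EXISTS (SELECT 1 FROM dbo.Yourtable)
BEGIN
    EXEC msdb.dbo.sp_send_dbmail
        @profile_name = 'DefaultProfile',   -- assumed Database Mail profile
        @recipients = 'me@example.com',     -- assumed recipient
        @subject = 'Table load completed',
        @body = 'Yourtable received new rows.';
END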
It's cool, I solved it. I used two Execute SQL Tasks: the first one loads data into the table, the second one counts the records. I put an expression on the green arrow (@MyVariable > 0) and connected the Send Mail Task.
Thanks to all.
