SSIS: How do I run a SQL script inside SSIS? - sql-server

My question is: how do I take a table that holds QC (quality control) SQL scripts, run each of them within SSIS, and set a flag for each script based on its result?
There are 15 scripts in total, each checking something different, so I will need to loop through them.
--example--
Id | SQL_Statement                                    | Bypass
1  | Select count(*) from "" where firstName is null; | 0
2  | Select count(*) from "" where lastName is null;  | 1
Depending on whether each script's check passes or fails, I will need to set flags to write back into another table later.
Is this even possible, and how would I approach it?

@Liqudid715
I'm not sure I understand your question. Also, for SO, you should show what you have already tried and why it didn't work.
There is an Execute SQL Task that you can use in the Control Flow to run the scripts. Set its result set to Single row, create a user variable, and map the result to it (on the Result Set page, with the Result Name set to 0). Then you can use that variable in another Execute SQL Task to update the other table.
I don't know what you mean by "loop through", but you can chain all of the Execute SQL Tasks together; if you are only using them to set variables, though, I don't think you need to do that.
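To make that concrete, here is a rough T-SQL sketch of the two statements involved. The dbo.Customers table, the QC_Results table, and its columns are made-up names used only for illustration:

-- Hypothetical QC check run by the first Execute SQL Task;
-- the single-row count is mapped to an SSIS variable on the Result Set page.
SELECT COUNT(*) AS BadRowCount
FROM dbo.Customers
WHERE firstName IS NULL;

-- Hypothetical flag update run by a second Execute SQL Task;
-- the ? markers are OLE DB parameters mapped to the SSIS variables (count, script Id).
UPDATE dbo.QC_Results
SET Flag = CASE WHEN ? > 0 THEN 1 ELSE 0 END
WHERE ScriptId = ?;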

Related

How to transform a column into an array using ADF

I need to read a column from a database in ADF and use all its values as parameters in a ForEach.
I tried reading the column using a data flow with a cache sink, then using Set Variable in a pipeline followed by the ForEach... but instead of an array of values I get an array with a single value that contains all the others I want (which I can't iterate over).
I am using:
@array(activity('myDataflow').output.runStatus.output.columName)
Any help is appreciated; it seems simple enough (column to array) but I am stuck.
Use a Lookup activity to get the data from SQL Server, and run a ForEach loop on the output of the Lookup.
Example:
Create a new pipeline
Add a lookup activity
Choose your source dataset (in this example, an Azure SQL database)
Remove the checkbox from “First row only”
Choose a table, stored procedure or type in a query
SELECT 1 AS result UNION ALL
SELECT 2 AS result UNION ALL
SELECT 3 AS result UNION ALL
SELECT 4 AS result
Add a foreach activity
In the ForEach activity, under the Settings tab, set “Items” to
@activity('Lookup SQL query').output.value
where 'Lookup SQL query' is the name of the Lookup activity
Inside the ForEach loop, add a Wait activity
In its Settings tab, set “Wait time in seconds” to @item().result
item() is the current loop item, and result is the name of the SQL column
Debug the pipeline. You can see that the ForEach activity iterates 4 times, once for every row returned from the SQL query.
You can also use an Append Variable activity inside the ForEach after the Lookup.
First create an array variable in the pipeline.
Then use the Append Variable activity inside the ForEach and give
@item().<your_column_name>
Regarding "I tried reading the column using a dataflow and a cache sink to then in a pipeline use Set Variable and then the foreach":
If you want to do it with data flows instead of a Lookup, use the same procedure as above and give the below dynamic content in the ForEach.
@activity('Data flow1').output.runStatus.output.sink1.value

How do I configure a foreach loop container in SSIS to take defined start and end dates and run for each date in between?

I'd like to define start_date and end_date parameters in my SSIS package, and have a foreach container that runs for each date between these two (inclusive) and executes a SQL query that takes the current date value (i.e. starting at start_date) as a parameter.
I'm quite new to SSIS programming and I cannot find information on how to do this.
You can simply add a For Loop Container and configure its expressions with the variables @[User::Loop], @[User::MinDate], and @[User::MaxDate], all of type System.DateTime.
You can refer to the following posts for more details:
How do I loop through date values stored as numbers within For Loop container?
Passing parameters to Execute SQL Task
Passing Variables to and from an SSIS task
How to pass variable as a parameter in Execute SQL Task SSIS?
A For Loop would be the better option for this. Assuming the start and end dates are supplied as parameters to the package, as indicated in your question, be aware that parameters cannot be updated within an SSIS package, whereas variables can be. This, along with an example of the process outlined in your question, is detailed below.
Create an SSIS DateTime variable. As mentioned earlier, this will be used to store the initial value of the start date parameter.
Next add a For Loop on the Control Flow. In the InitExpression of the For Loop, the variable @[User::vStartDate] is set to the same value as the package parameter @[$Package::pStartDate]. Iterations of the loop continue while the start date variable is less than or equal to the end date parameter, which is specified in the EvalExpression field (see the example settings below).
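For example, the For Loop settings might look like this (the end date parameter name pEndDate is assumed here; use whatever your package parameter is actually called):
InitExpression: @[User::vStartDate] = @[$Package::pStartDate]
EvalExpression: @[User::vStartDate] <= @[$Package::pEndDate]
AssignExpression: (left empty; the Script Task described below increments the variable instead)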
After the Execute SQL Task (or however the SQL query is executed), add a Script Task. This will increment the value of the start date variable, so make sure it is the last task in the loop. An example C# script is below: it reads the start date SSIS variable into a C# variable, increments that by one day, then writes the value back to the SSIS variable. Make sure to add the SSIS start date variable to the ReadWriteVariables field on the Script Task. The code goes in the Main method of the script. Although the Script Task only increments the date and updates the variable, having it in place makes the package easier to maintain in the long term, since C# provides much more functionality if more logic needs to be added later.
Script Task:
public void Main()
{
    // Get the value for the current iteration of the loop
    DateTime currentIterationValue = Convert.ToDateTime(Dts.Variables["User::vStartDate"].Value);

    // Increment by one day
    currentIterationValue = currentIterationValue.AddDays(1);

    // Update the SSIS variable
    Dts.Variables["User::vStartDate"].Value = currentIterationValue;

    Dts.TaskResult = (int)ScriptResults.Success;
}
I used an Execute SQL Task to store the dates (results) as a result set in a user-defined variable. Then, inside the Foreach Loop Container, I used the Foreach ADO Enumerator on the user-defined variable that holds the set of dates. Using the Variable Mappings in the Foreach Loop Container, you can map the start_date and end_date columns from the user-defined variable and pass them to other variables.
For example:
I have a SELECT statement which returns 2 rows with columns start_date and end_date. This is stored as a result set in a variable called "main_dates". The Foreach ADO Enumerator enumerates over this "main_dates" variable (the loop body runs once for each row in main_dates). Then in the Variable Mappings section, you can create 2 new variables called u_start_date and u_end_date and map columns 0 and 1 to them.
Inside the Foreach Loop, whenever you execute a stored procedure, you can pass the u_start_date and u_end_date variables as parameters.
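If you would rather have one loop iteration per individual date between the two package parameters (as the original question describes), the Execute SQL Task can generate that list itself. A rough T-SQL sketch, assuming OLE DB-style ? markers mapped to the start and end date parameters and the ResultSet set to Full result set:

-- Hypothetical query for the Execute SQL Task: one row per date between the parameters.
-- The ? markers are mapped to the start and end date package parameters.
DECLARE @start date = ?, @end date = ?;

WITH dates AS (
    SELECT @start AS run_date
    UNION ALL
    SELECT DATEADD(day, 1, run_date)
    FROM dates
    WHERE run_date < @end
)
SELECT run_date
FROM dates
OPTION (MAXRECURSION 0);   -- lift the default 100-level recursion cap

The Foreach ADO Enumerator can then loop over this result set and map the single run_date column (column 0) to a date variable.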

Auto-generating destinations of split files in SSIS

I am working on my first SSIS package. I have a view with data that looks something like:
Loc Data
1 asd
1 qwe
2 zxc
3 jkl
And I need all of the rows to go to different files based on the Loc value. So all of the data rows where Loc = 1 should end up in the file named Loc1.txt, and the same for each other Loc.
It seems like this can be accomplished with a conditional split to flat file, but that would require a destination for each Location. I have a lot of Locations, and they all will be handled the same way other than being split in to different files.
Is there a built-in way to do this without creating a bunch of destination components? Or can I at least use a Script Component to handle it?
You should be able to set an expression on the connection manager using a variable: define your path up to the directory, and then set the variable equal to that column.
You'll need an Execute SQL Task that returns a single-row result set, and loop it in a container once for every row in your original result set.
I don't have access at the moment to post screenshots, but this link should help outline the steps.
So when your package runs, the connection manager's expression will look like:
"C:\\Documents\\MyPath\\location" + @[User::LocationColumn] + ".txt"
It should end up feeding your directory with files named according to location.
Set @[User::LocationColumn] from the Location column in your result set, and group your result set by Location, so all the records for a Location are written to a single file.
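As a minimal sketch of the two queries this approach needs (dbo.MyView is a placeholder for your actual view), one feeds the loop and one is the source inside it:

-- Hypothetical: returns the list of locations the loop iterates over
SELECT DISTINCT Loc
FROM dbo.MyView;

-- Hypothetical: source query inside the loop; the ? marker maps to the current location variable
SELECT Loc, Data
FROM dbo.MyView
WHERE Loc = ?;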
I spent some time trying to complete this task using the method @Phoenix suggested, but stumbled upon this video along the way.
I ended up going with the method shown in the video. I was hoping I wouldn't have to separate it into multiple SELECT statements, one for each location plus an extra one to grab the distinct locations, but I thought the SSIS implementation in the video was much cleaner than the alternative.
Change the connection manager's connection string via an expression that uses a variable. By varying the variable, the destination file also changes.
The connection string expression is:
"C:\\Documents\\ABC\\Files\\" + @[User::data] + ".txt"
Vote for this if it helps you.

SQLPlus conditional execution with variable from query

I have a batch file which has many steps in it that will get executed one by one.
However, to make it more flexible, I want to include a condition check in SQL*Plus.
Something like: get the conditional variable's value from a query first and store it in, say, v_variable, then use it for some checks like
IF v_variable = 'Y' THEN
--DO SOME DDL
ELSE
--DO OTHER DDL
END IF
I have to repeat this block in many places in the batch file, and somehow I can't do it through PL/SQL.
I am trying to use the COLUMN command in SQL*Plus, but somehow the variable value doesn't get saved.
COLUMN VARIABLE1 NEW_VALUE V_VARIABLE1
SELECT PARAM_VAL AS VARIABLE1 FROM TABLE_T WHERE PARAM_TYPE = 'XYZ';
-- This query will only return one record.
DEFINE V_VARIABLE1
Is that completely wrong? How do I check whether V_VARIABLE1 is getting the value from the query?
And even once I get that right, I am clueless about the IF-ELSE part. Can anybody help? I am interested in a solution that works in SQL*Plus.
SQL*Plus doesn't support flow control natively, so anything you do here falls somewhere between "workaround" and "hack". Some possible options are:
Use a PL/SQL block and run your DDL as dynamic SQL via execute immediate or dbms_utility.exec_ddl_statement. This gives full access to PL/SQL features, so it is the most flexible in terms of flow control and statement building, but it is harder to manage if you're deploying something large (see the sketch at the end of this answer).
Write a script file per if/else branch, get its name with something like the column/query trick you provided in your post, and run it with something like @&scriptname.
Use substitution variables that, when used properly, will comment out some parts of your script. You can use Tanel Poder's snapper utility script as an example; notice the &_IF_ substitution variables there.
You can embed a child script into the parent script's PL/SQL block, like this:
--
21:21:23 SQL> ho cat run.sql
begin
  case '&1.' when 'A' then
    @script_a
  when 'B' then
    @script_b
  else
    null;
  end case;
end;
/
21:21:26 SQL> ho cat script_a.sql
dbms_output.put_line('this is a');
21:21:30 SQL> ho cat script_b.sql
dbms_output.put_line('this is b');
21:21:34 SQL> @run
Enter value for 1: A
this is a
PL/SQL procedure successfully completed.
Elapsed: 00:00:00.02
21:21:37 SQL> @run B
this is b
PL/SQL procedure successfully completed.
Elapsed: 00:00:00.03
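For the first option above, a minimal sketch of an anonymous PL/SQL block that branches on the value from TABLE_T (as in the question) and runs DDL dynamically; the index and table names used in the DDL are made up:

declare
  v_flag table_t.param_val%type;
begin
  -- same lookup as in the question
  select param_val
    into v_flag
    from table_t
   where param_type = 'XYZ';

  if v_flag = 'Y' then
    execute immediate 'create index ix_demo on some_table (some_col)';
  else
    execute immediate 'drop index ix_demo';
  end if;
end;
/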
I don't have any experience with SQL*Plus. I am going to assume that you can redirect the output of your SELECT command to a temp file. Assuming you only selected one column in one record, you can then do something like this:
FOR /F "tokens=*" %%A IN ('FINDSTR /V "(" YourTempFile.txt') DO SET YourVariable=%%A
"tokens=*" is optional. It will remove leading spaces and save the rest of the string including spaces. May not be relevant depending on your data.
You may not need the FINDSTR /V "(" part (and the trailing ') either. I am assuming the output of SQL*Plus will be similar to SQL Server's, and this would exclude the rows-processed count that gets reported. Anything is possible. If you can't make this work, post the contents of the temp file and we can make the necessary modifications.

Script output to file when using SQL-Developer

I have a SELECT query producing a big output, and I want to execute it in SQL Developer and get all the results into a file.
SQL Developer does not allow a result bigger than 5,000 lines, and I have 100,000 lines to fetch...
I know I could use SQL*Plus, but let's assume I want to do this in SQL Developer.
Instead of using Run Script (F5), use Run Statement (Ctrl+Enter). Run Statement fetches 50 records at a time and displays them as you scroll through the results...but you can save the entire output to a file by right-clicking over the results and selecting Export Data -> csv/html/etc.
I'm a newbie SQLDeveloper user, so if there is a better way please let me know.
This question is really old, but posting this so it might help someone with a similar issue.
You can store your query in a query.sql file and run it as a script. Here is a sample query.sql:
spool "C:\path\query_result.txt";
select * from my_table;
spool off;
In Oracle SQL Developer you can just run this script like the following, and you should get the result in your query_result.txt file:
@"C:\Path\to\script.sql"
Yes, you can increase the limit by changing the setting: Tools -> Preferences -> Database -> Worksheet -> "Max rows to print in a script" (set it to whatever you need).
Mike G's answer will work if you only want the output of a single statement.
However, if you want the output of a whole SQL script with several statements, SQL*Plus reports, and some other output formats, you can use the spool command the same way it is used in SQL*Plus.
