How to execute a script file in Netezza?

I have a .sql file which I need to run from SSIS. But before running it in SSIS, I need to run it outside of SSIS. Please let me know how to execute this file.
Also, how can I check whether the variable declarations in the external script work?

I'm not much of an SSIS expert, but running .sql scripts is easily done with nzsql.exe.
Most of our automated jobs run by generating large .sql files with Informatica PowerCenter (a tool much like SSIS) and then executing these with nzsql.
All looping and branching is done while generating the file, before nzsql is started, since you cannot use such constructs outside a stored procedure in Netezza. In that respect it is very different from SQL Server.
I hope you can follow me?
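As a rough sketch (host, database, credentials and file names are placeholders), running a script file looks like this:

nzsql -host nzhost -d mydb -u myuser -pw mypassword -f /tmp/myscript.sql

nzsql is derived from psql, so variables can usually be passed on the command line with -v and referenced inside the script as :varname; worth verifying on your version before relying on it:

nzsql -host nzhost -d mydb -u myuser -pw mypassword -v TGT_DATE="'2015-01-01'" -f /tmp/myscript.sql
-- inside myscript.sql, the variable is substituted before execution
SELECT * FROM sales WHERE sale_date = :TGT_DATE;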

Related

Snowflake Loading Flat Files

My company is looking to possibly migrate to Snowflake from SQL Server. From what I've read in the Snowflake documentation, flat files (CSV) can be uploaded into a stage and then loaded into a physical table with COPY INTO.
example: put file://c:\temp\employees0*.csv @sf_tuts.public.%emp_basic;
My question is: can this be automated via a job or script within Snowflake? This includes the COPY INTO command.
Yes, there are several ways to automate jobs in Snowflake, as others have already commented. Putting your code in a stored procedure and calling it via a scheduled task is one option.
There is also a command line interface for Snowflake called SnowSQL.
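A minimal sketch of both approaches (warehouse, schedule and connection names are assumptions; note that PUT is a client-side command, so the upload itself has to run from SnowSQL or another client, while the COPY INTO can be scheduled inside Snowflake as a task):

-- Run from SnowSQL, e.g.: snowsql -c my_connection -f load_emp_basic.sql
PUT file://c:\temp\employees0*.csv @sf_tuts.public.%emp_basic;
COPY INTO sf_tuts.public.emp_basic
  FROM @sf_tuts.public.%emp_basic
  FILE_FORMAT = (TYPE = CSV);

-- Or schedule just the load inside Snowflake with a task (names are placeholders)
CREATE OR REPLACE TASK load_emp_basic
  WAREHOUSE = my_wh
  SCHEDULE = 'USING CRON 0 2 * * * UTC'
AS
  COPY INTO sf_tuts.public.emp_basic
    FROM @sf_tuts.public.%emp_basic
    FILE_FORMAT = (TYPE = CSV);
ALTER TASK load_emp_basic RESUME;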

execute an .rss script on several servers using SSIS, storing results in a table

I found a wonderful script that collects all the (shared) datasources used on a reportserver:
LINK
I simply love this script.
However, I am looking for a way to execute this script on several report servers and add the results to a centralised table. That way my colleagues and I would be able to see pretty quickly which datasources are used.
I could place this script on each report server, collect the CSVs on a central server and then use SSIS to insert them into an MSSQL table. That way I would have a nice central overview of all the used datasources.
However, I would prefer to have the script in one location and then execute that script on a list of servers.
Something like:
Loop through table with servers
execute script (see link)
insert resulting csv into central table (preferably skip this step, have script insert data in table directly)
next server
Any suggestions as to what the best approach would be? Should it be a web service task? A script task?
Something else completely?
The level of scripting in the mentioned script is right at the edge of what I understand, so if someone knows how to adapt the script so that I could use it as input for a data flow in SSIS, I would be very happy.
Thanks for thinking with me,
Henro
This script is called using a utility called rs.exe, so you would use an Execute Process Task to call it. To avoid writing to a file, you could modify the script and have it insert the results into a table. The package could be set up as follows:
Create a Foreach Loop which iterates over a list or ADO.NET recordset of your servers
Put the server name in a variable
Create a variable for the arguments of the Execute Process Task, referencing the server variable from step 2
Add an Execute Process Task which uses the above arguments variable and calls rs.exe (a rough sketch of the argument expression follows below)
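For illustration, the arguments variable from step 3 could be built with an SSIS expression along these lines (the script path and the variable name User::ServerName are assumptions; rs.exe takes the input script with -i and the report server URL with -s):

"-i C:\\Scripts\\GetDataSources.rss -s http://" + @[User::ServerName] + "/ReportServer"

The Execute Process Task's Executable property then points at rs.exe and its Arguments property is mapped to this variable.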

Inline loading of .dat files into a SQL Server database

Does anyone know how to perform inline loading into SQL Server? I have the DsGen software provided by the TPC council; it generates data files with the extension .dat. Is there any mechanism to load these files directly into the SQL database (during the generation period)? I have done that using the import/export wizard, but that is not inline loading.
There are several ways to accomplish your task; here are two of them.
You may use the BULK INSERT command to import them. Basically, what the import wizard does is about the same, but it lets you select the various options using a nifty GUI.
You can save the DTS package when running through the wizard, create a SQL Server Agent job and execute this job using the sp_start_job stored procedure.
I like BULK INSERT as it is easier to implement. Just play around with the options until you get what you want.
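A minimal BULK INSERT sketch (table name and file path are placeholders; TPC generators typically emit pipe-delimited files, so adjust the terminators to match what DsGen actually produces):

BULK INSERT dbo.lineitem
FROM 'C:\dsgen\output\lineitem.dat'
WITH (
    FIELDTERMINATOR = '|',   -- assumed delimiter; check a sample row first
    ROWTERMINATOR   = '\n',
    TABLOCK                  -- table lock speeds up large bulk loads
);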

SSMS Query to Text File

I have a complicated query that marshals data into a temporary table, which I then marshal into a further output temporary table before finally selecting from it to display on screen. This gets saved out from grid view to text, and I get the file I need for processing off site.
What I want to do is have this query be runnable and create that file on the local disk without any need for the operator to change the "Results to" option, or fiddle with anything.
What command or functionality might be available to me to do this?
I cannot install any stored procedures or similar on the server involved.
Since you can't do anything on the server, I would suggest writing an SSIS package. Create a data flow, and in your source object put your query. Your destination object will then point to the file you want. You have a fair number of options for output.
The SSIS package can then be run by:
A SQL Agent job (assuming you are allowed even that)
A non-SQL job running a .bat file with a DTEXEC command
The DTEXECUI GUI
Also, you can store your SSIS package in the instance or on any file share you choose.
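For the .bat / DTEXEC route the call is a one-liner; both forms below use placeholder names (package file, folder and server are assumptions):

dtexec /F "C:\Packages\ExportQueryToFile.dtsx"
dtexec /SQL "\ExportQueryToFile" /SERVER "MyServer"

The first runs a package stored on the file system, the second a package stored in the MSDB package store on the instance.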

SQL Server, execute batch T-SQL script on multiple databases

Our SQL Server 2000 instance hosts several databases which are all similar, one for each of our clients. When the time comes to update them all, we use Red Gate SQL Compare to generate a migration script between the development database and a copy of the current state of the client databases.
SQL Compare generates a script which is transactional: if one step fails, the script rolls back everything. But currently our system uses a method that splits the script on batch separators (the GO statement) and then runs each command separately, which ruins all the transactional behaviour. The GO statement is not supported when querying the database programmatically (in classic ASP).
I want to know how I could run that script (keeping the transactions) on all those databases (around 250), programmatically or manually in a tool. In Query Analyzer, we need to select each DB and press Run, which takes quite a long time for the number of DBs we have.
If you can use SSMS from SQL 2005 or 2008, then I'd recommend the free SSMS Tools Pack.
I use the external sqlcmd command line tool; I have the same situation on the server where I work.
I have the script in a *.sql file and the list of databases in a second file. I have a small *.bat script which iterates through all the databases and executes the script using the sqlcmd command.
In more detail, I have something like this:
DB.ini file with all the databases on which I want to deploy my script
sql/ directory where I store all scripts
runIt.bat - script which deploys the scripts
The command line looks more or less like this:
sqlcmd -S <ComputerName>\<InstanceName> -i <MyScript.sql> -d <database_name> -E
In SQL Server 2000 the equivalent was the osql utility.
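A minimal runIt.bat sketch along those lines, assuming DB.ini holds one database name per line (server name and file paths are placeholders):

@echo off
rem Run the migration script against every database listed in DB.ini
for /f "delims=" %%d in (DB.ini) do (
    echo Deploying to %%d
    sqlcmd -S MyServer\MyInstance -d %%d -i sql\migration.sql -E -b
    if errorlevel 1 echo FAILED on %%d
)

The -b switch makes sqlcmd exit with an error code when a batch fails, so the loop can report which databases need attention.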
UPDATE
Red Gate now has a tool called SQL Multi Script, which basically does exactly what you want. It supports SQL 2000 to 2008 R2 and runs queries on multiple databases in parallel, which improves performance.
7 years later, I had hit the same issue so many times that I made a tool for it and published the project:
TAKODEPLOY
Here are some features:
Get all databases from a single instance and apply a name filter, or just use a single direct connection.
Mix database sources as much as you want, for example two direct connections and one full instance, with or without a filter.
Script editor (Avalon Text, the same one MonoDevelop uses)
Scripts are parsed and errors are detected before executing.
Scripts are split on GO statements.
Save your deployment to a file.
Get a list of all databases before deploying.
See in real time what is happening (PRINT statements are recommended here!).
Automatic rollback on the individual database if any error occurs.
Transparent updates via Squirrel.
You can get it at: https://github.com/andreujuanc/TakoDeploy
Not sure if this will work, but try replacing the GO statements with semicolons and running the entire script as one batch.
If I recall correctly, you can also create a script in SQL Compare to change everything back to the state it started in. You might want to generate both.
When I did this sort of deployment (it's been a while), I first deployed to a staging server that was set up exactly like prod, to make sure the scripts would work on prod before I started. If anything failed, it was usually because of the order the scripts were run in (you can't set a foreign key to a table that doesn't exist yet, for instance). I also scripted all table changes first, then all view changes, then all UDF changes, then all stored proc changes. This cut down greatly on the failures due to objects not yet existing, but I still usually had a few that needed to be adjusted.
