I have an SSIS project with multiple packages. I would like to create a "Master" package which would run the individual packages in a sequence. The first package contains a Data Flow task which imports data from Excel files, so my Run64BitRuntime setting is set to "false". The following package that needs to be run contains a Fuzzy Lookup, which requires that the Run64BitRuntime setting is set to "true".
Is there a way that I can change this project property setting through a Script Task, so that I can fully automate this process?
Deploy the packages to an SSIS Catalog (it can be on the same instance of SQL Server).
The SSISDB database has several stored procedures. Some of them are listed below.
[SSISDB].[catalog].[set_execution_parameter_value]
[SSISDB].catalog.start_execution
[SSISDB].catalog.create_execution
Every SP has its own purpose; see here:
http://technet.microsoft.com/en-us/library/ff878034.aspx
See the syntax of create_execution:
create_execution [ @folder_name = ] folder_name
    , [ @project_name = ] project_name
    , [ @package_name = ] package_name
  [ , [ @reference_id = ] reference_id ]
  [ , [ @use32bitruntime = ] use32bitruntime ]
    , [ @execution_id = ] execution_id OUTPUT
The parameter @use32bitruntime lets you switch between 32-bit and 64-bit execution for each package.
With the above set of SPs you can have great control over package execution.
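For example, a master controller (a SQL Agent job step or a plain T-SQL script) could call the catalog like this; note that the folder, project and package names below are placeholders, not taken from your project:

DECLARE @execution_id bigint;

-- Package 1: Excel import, forced to the 32-bit runtime (@use32bitruntime = 1).
EXEC [SSISDB].[catalog].[create_execution]
      @folder_name = N'MyFolder'
    , @project_name = N'MyProject'
    , @package_name = N'ImportExcel.dtsx'
    , @use32bitruntime = 1
    , @execution_id = @execution_id OUTPUT;

-- Run synchronously so the next package starts only after this one finishes.
EXEC [SSISDB].[catalog].[set_execution_parameter_value]
      @execution_id
    , @object_type = 50
    , @parameter_name = N'SYNCHRONIZED'
    , @parameter_value = 1;

EXEC [SSISDB].[catalog].[start_execution] @execution_id;

-- Package 2: Fuzzy Lookup, left on the 64-bit runtime (@use32bitruntime = 0).
EXEC [SSISDB].[catalog].[create_execution]
      @folder_name = N'MyFolder'
    , @project_name = N'MyProject'
    , @package_name = N'FuzzyLookup.dtsx'
    , @use32bitruntime = 0
    , @execution_id = @execution_id OUTPUT;

EXEC [SSISDB].[catalog].[set_execution_parameter_value]
      @execution_id
    , @object_type = 50
    , @parameter_name = N'SYNCHRONIZED'
    , @parameter_value = 1;

EXEC [SSISDB].[catalog].[start_execution] @execution_id;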
For monitoring and logging purposes, we have several tasks to which we wish to add a QUERY_TAG.
AFAIK, QUERY_TAG only works at the session level. Is there any way to add a QUERY_TAG to Snowflake tasks?
Session parameters can be set for Tasks within the CREATE TASK statement, and the QUERY_TAG is no exception.
An example:
CREATE OR REPLACE TASK TASK_TEST_QUERY_TAG
WAREHOUSE = MY_WH
SCHEDULE = '1 MINUTE'
QUERY_TAG = 'My Test Query Tag'
AS
[...]
;
Check the CREATE TASK syntax:
CREATE [ OR REPLACE ] TASK [ IF NOT EXISTS ] <name>
WAREHOUSE = <string>
[ SCHEDULE = '{ <num> MINUTE | USING CRON <expr> <time_zone> }' ]
[ <session_parameter> = <value> [ , <session_parameter> = <value> ... ] ]
[ USER_TASK_TIMEOUT_MS = <num> ]
[ COPY GRANTS ]
[ COMMENT = '<string_literal>' ]
[ AFTER <string> ]
[ WHEN <boolean_expr> ]
AS
<sql>
Reference: https://docs.snowflake.com/en/sql-reference/sql/create-task.html#syntax
Snowflake tasks consist of a single SQL statement, so ALTER SESSION plus your command would exceed that.
You could create a simple stored procedure that sets the query tag and runs your command, then have your task call that.
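A minimal sketch of that idea, assuming a JavaScript stored procedure; the warehouse name and the INSERT target are made-up placeholders:

-- Caller's rights procedure, since ALTER SESSION is not allowed in owner's rights procedures.
CREATE OR REPLACE PROCEDURE RUN_WITH_QUERY_TAG()
RETURNS STRING
LANGUAGE JAVASCRIPT
EXECUTE AS CALLER
AS
$$
  // Set the session-level query tag, then run the actual statement.
  snowflake.execute({sqlText: "ALTER SESSION SET QUERY_TAG = 'My Test Query Tag'"});
  snowflake.execute({sqlText: "INSERT INTO MY_TABLE SELECT CURRENT_TIMESTAMP()"});  // MY_TABLE is a placeholder
  return 'done';
$$;

-- The task then just calls the procedure.
CREATE OR REPLACE TASK TASK_TEST_QUERY_TAG
  WAREHOUSE = MY_WH
  SCHEDULE = '1 MINUTE'
AS
  CALL RUN_WITH_QUERY_TAG();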
Has anyone played with the Confluent MSSQL CDC Connector (https://docs.confluent.io/current/connect/kafka-connect-cdc-mssql/index.html)?
I tried setting up this connector, downloading the JAR and setting up the config files as mentioned in the docs. Running it does not throw any error, but it is NOT able to fetch any changes from the SQL Server. Below is my config:
{
"name" : "mssql_cdc_test",
"connector.class" : "io.confluent.connect.cdc.mssql.MsSqlSourceConnector",
"tasks.max" : "1",
"initial.database" : "DBASandbox",
"username" : "xxx",
"password" : "xxx",
"server.name" : "rptdevdb01111.homeaway.live",
"server.port" : "1433",
"change.tracking.tables" : "dbo.emp"
}
This is the message I am getting in the logs (at INFO level) :
INFO Source task WorkerSourceTask{id=mssql_cdc_test-0} finished initialization and start (org.apache.kafka.connect.runtime.WorkerSourceTask:143)
Strangely, even if I change server.name to some junk value, it doesn't complain and there are no errors. So it is probably NOT even trying to hit my SQL Server.
I did also enable change tracking on the database as well as on the specified table:
ALTER DATABASE DBASandbox
SET CHANGE_TRACKING = ON
(CHANGE_RETENTION = 2 DAYS, AUTO_CLEANUP = ON)
ALTER DATABASE DBASandbox
SET ALLOW_SNAPSHOT_ISOLATION ON
ALTER TABLE dbo.emp
ENABLE CHANGE_TRACKING
WITH (TRACK_COLUMNS_UPDATED = ON)
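For reference, a basic sanity check on the SQL Server side (using the built-in CHANGETABLE function against the dbo.emp table above) would be something like:

-- If change tracking is working, this returns one row per change made to dbo.emp since version 0.
SELECT *
FROM CHANGETABLE(CHANGES dbo.emp, 0) AS CT;

-- Current change tracking version for the database.
SELECT CHANGE_TRACKING_CURRENT_VERSION();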
Not sure what's wrong or how to debug it further. Any clue or insight would be helpful.
I want to see the definition of a table in SQL Server.
Running this query from SQLPro for MSSQL is OK
SELECT TOP 100 * FROM dbo.[ATRESMEDIA Resource Time Registr_];
but when I run this one
exec sp_columns dbo.[ATRESMEDIA Resource Time Registr_];
I got this error:
Msg 102, Level 15, State 1.
Incorrect syntax near '.'. (Line 3)
Don't use the schema prefix dbo.
exec sp_columns [ATRESMEDIA Resource Time Registr_];
Why? Because the following are the parameters accepted by the sp_columns stored proc:
sp_columns [ @table_name = ] object
  [ , [ @table_owner = ] owner ]
  [ , [ @table_qualifier = ] qualifier ]
  [ , [ @column_name = ] column ]
  [ , [ @ODBCVer = ] ODBCVer ]
source: msdn
Update:
Martin's explanation, as given in a comment:
Strings in SQL Server are delimited by single quotes - as a parameter to a stored proc in very limited circumstances it will allow you to skip the quotes but the dot breaks that. exec sp_columns 'dbo.[ATRESMEDIA Resource Time Registr_]'; wouldn't give the syntax error - but that wouldn't be what the proc expects anyway as the schema would need to be the second param
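Putting that together, a sketch of calling sp_columns with the schema passed as the second parameter (the table name is the one from the question):

EXEC sp_columns
      @table_name = N'ATRESMEDIA Resource Time Registr_'
    , @table_owner = N'dbo';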
Select the table name in the query window and press the following key combination:
Alt+F1 (or Alt+Fn+F1 on some keyboards) will bring up the table definition.
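For reference, Alt+F1 in SSMS maps to sp_help by default, so the query-window equivalent would be something along the lines of:

-- Equivalent of highlighting the name and pressing Alt+F1 in SSMS.
EXEC sp_help 'dbo.[ATRESMEDIA Resource Time Registr_]';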
I have read several articles about Environment variables but I can't find how to apply their usage in my case. I am developing SSIS packages on my local machine. Once they are finished, I plan to deploy them to staging and production servers. My SSIS project consists of several packages, most of which connect to 2 databases (but each server has its own copy of the db) and a few Excel files.
So, I want to deploy my packages to 3 different servers. Based on the server, the connection strings would be different. Since this is still the development phase, I will have to redeploy most packages from time to time. What would be the best practice to achieve this?
Creating your folder
In the Integration Services Catalog, under SSISDB, right click and create a folder giving it a name but do not click OK. Instead, click Script, New Query Editor Window. This gives a query like
DECLARE @folder_id bigint
EXEC [SSISDB].[catalog].[create_folder]
    @folder_name = N'MyNewFolder'
  , @folder_id = @folder_id OUTPUT
SELECT
    @folder_id
EXEC [SSISDB].[catalog].[set_folder_description]
    @folder_name = N'MyNewFolder'
  , @folder_description = N''
Run that, but then save it so you can create the same folder on Server 2 and Server 3. This will be a theme, by the way.
Creating your environment
Refresh the dropdown under the SSISDB and find your newly created folder. Expand it and under Environments, right click and Create New Environment. Give it a name and description but DO NOT CLICK OK. Instead, click Script, New Query Editor Window.
We now have this code
EXEC [SSISDB].[catalog].[create_environment]
    @environment_name = N'DatabaseConnections'
  , @environment_description = N''
  , @folder_name = N'MyNewFolder'
Run that and save it for deployment to Server 2 and 3.
Adding values to an Environment
Refresh the Environments tree and under the Properties window for the newly created Environment, click to the Variables tab and Add your entries for your Connection strings or whatever. This is where you really, really do not want to click OK. Instead, click Script, New Query Editor Window.
DECLARE @var sql_variant = N'ITooAmAConnectionString'
EXEC [SSISDB].[catalog].[create_environment_variable]
    @variable_name = N'CRMDB'
  , @sensitive = False
  , @description = N''
  , @environment_name = N'DatabaseConnections'
  , @folder_name = N'MyNewFolder'
  , @value = @var
  , @data_type = N'String'
GO
DECLARE @var sql_variant = N'IAmAConnectionString'
EXEC [SSISDB].[catalog].[create_environment_variable]
    @variable_name = N'SalesDB'
  , @sensitive = False
  , @description = N''
  , @environment_name = N'DatabaseConnections'
  , @folder_name = N'MyNewFolder'
  , @value = @var
  , @data_type = N'String'
GO
Run that query and then save it. Now when you go to deploy to Server 2 and 3, you'll simply change the value of @var
Configuration
To this point, we have simply positioned ourselves for success in having a consistent set of Folder, Environment and Variable(s) for our packages. Now we need to actually use them against a set of packages. This assumes that your packages have been deployed to the folder between the above step and now.
Right click on the package/project to be configured. You most likely want the Project.
Click on the References tab. Add... and use DatabaseConnections, or whatever you've called yours
Click back to Parameters. Click to Connection Managers tab. Find a Connection Manager and in the Connection String, click the Ellipses and change it to "Use Environment Variable" and find your value
DO NOT CLICK OK! Script -> New Query Editor Window
At this point, you'll have a script that adds a reference to environment variable (so you can use it) and then overlays the stored package value with the one from the Environment.
DECLARE @reference_id bigint
EXEC [SSISDB].[catalog].[create_environment_reference]
    @environment_name = N'DatabaseConnections'
  , @reference_id = @reference_id OUTPUT
  , @project_name = N'HandlingPasswords'
  , @folder_name = N'MyNewFolder'
  , @reference_type = R
SELECT
    @reference_id
GO
EXEC [SSISDB].[catalog].[set_object_parameter_value]
    @object_type = 30
  , @parameter_name = N'CM.tempdb.ConnectionString'
  , @object_name = N'ClassicApproach.dtsx'
  , @folder_name = N'MyNewFolder'
  , @project_name = N'HandlingPasswords'
  , @value_type = R
  , @parameter_value = N'SalesDB'
GO
This script should be saved and used for Server 2 & 3.
Job
All of that makes it so you will have the configurations available to you. When you schedule the package execution from a job, you will end up with a job step like the following
EXEC msdb.dbo.sp_add_jobstep
    @job_name = N'Demo job'
  , @step_name = N'SSIS job step'
  , @subsystem = N'SSIS'
  , @command = N'/ISSERVER "\"\SSISDB\MyNewFolder\HandlingPasswords\ClassicApproach.dtsx\"" /SERVER "\".\dev2014\"" /ENVREFERENCE 1 /Par "\"$ServerOption::LOGGING_LEVEL(Int16)\"";1 /Par "\"$ServerOption::SYNCHRONIZED(Boolean)\"";True /CALLERINFO SQLAGENT /REPORTING E'
The Command is obviously the important piece.
We are running the package ClassicApproach
Run this on the current server with an instance of Dev2014
Use Environment reference 1
We use the standard logging level.
This is a Synchronous call meaning that the Agent will wait until the package completes before going to the next step
Environment Reference
You'll notice that all of the above nicely used specified text strings instead of random integer values, except for our Environment Reference. That's because you can have the same textual name for an environment in multiple folders, similar to how you could deploy the same project to multiple folders; but for whatever reason, the SSIS devs chose to provide fully qualified paths to a package while we use "random" integer values for environment references. To determine your environment ID, you can either run the following query
SELECT
ER.reference_id AS ReferenceId
, E.name AS EnvironmentName
, F.name AS FolderName
, P.name AS ProjectName
FROM
SSISDB.catalog.environments AS E
INNER JOIN
SSISDB.catalog.folders AS F
ON F.folder_id = E.folder_id
INNER JOIN
SSISDB.catalog.projects AS P
ON P.folder_id = F.folder_id
INNER JOIN
SSISDB.catalog.environment_references AS ER
ON ER.project_id = P.project_id
ORDER BY
ER.reference_id;
Or explore the Integration Services Catalog under Folder/Environments and double click the desired Environment. In the resulting Environment Properties window, the Name and Identifier will be greyed out and it is the Identifier property value that you need to use in your SQL Agent's job step command for the /ENVREFERENCE value.
Wrapup
If you're careful and save everything the wizard does for you, you have only one thing that must be changed when you migrate changes throughout your environments. This will lead to clean, smooth, repeatable migration processes and you wondering why you'd ever want to go back to XML files or any other configuration approach.
I have two databases: dbOne (version 10.50.1600, located on the office server) and dbTwo (version 10.0.1600, located on my local server).
I want to copy dbOne's tables with their data to dbTwo.
Is there any way or script to do it? I don't want to upgrade my local server version!
"Import and Export Data" tool provided by SQL Server is a good tool to transfer data between two different servers.
How about generating the database scripts as in the following articles:
http://www.codeproject.com/Articles/598148/Generate-insert-statements-from
and
http://msdn.microsoft.com/en-us/library/ms186472(v=sql.105).aspx
It's possible to transfer data from one server to another using a SQL linked server query, if both are on the same network. Below are the steps.
Copying table structures
Generate a script of all tables from the server1 database using the Generate Scripts utility, then execute it in the server2 database.
Copying table data
sp_addlinkedserver [ @server= ] 'server' [ , [ @srvproduct= ] 'product_name' ]
  [ , [ @provider= ] 'provider_name' ]
  [ , [ @datasrc= ] 'data_source' ]
  [ , [ @location= ] 'location' ]
  [ , [ @provstr= ] 'provider_string' ]
  [ , [ @catalog= ] 'catalog' ]
Insert into databaseserver2.db1.dbo.table1(columnList)
select columnList
from databaseserver1.db1.dbo.table1
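As a rough sketch of those steps end to end, with placeholder server, login and database names (you may also need sp_addlinkedsrvlogin, depending on your security setup):

-- Register server1 as a linked server on server2 (names are placeholders).
EXEC sp_addlinkedserver
      @server = N'databaseserver1'
    , @srvproduct = N''
    , @provider = N'SQLNCLI'
    , @datasrc = N'databaseserver1';

-- Map a login for the linked server (adjust to your security setup).
EXEC sp_addlinkedsrvlogin
      @rmtsrvname = N'databaseserver1'
    , @useself = N'False'
    , @locallogin = NULL
    , @rmtuser = N'remote_user'
    , @rmtpassword = N'remote_password';

-- Copy the data across using four-part names, run from server2.
INSERT INTO db1.dbo.table1 (columnList)
SELECT columnList
FROM databaseserver1.db1.dbo.table1;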
Here are the general steps you need to take in order for this to work:
Migrating tables
Create scripts for tables in db1. Just right click the table and go to “Script table as -> Create to”
Re-order the scripts so that tables that don’t depend on any other tables are executed first
Execute scripts on db2
Migrating data
The most convenient way is to use SQL Server Import/Export wizard