I'm trying to pass a variable value from a SQL Server Agent job to an SSIS package, but the variable contains an apostrophe, which causes the SQL Server Agent job to fail.
e.g. In SQL Server Agent, at Job Step Properties, I'm entering the following details:
Property Path: \Package.Variables[User::VariableName].Properties[Value]
Property Value: Michael O'Callaghan
Any idea how to resolve this issue?
If the package is deployed to SSISDB and executed from there, use the SSISDB stored procedures to set the value and escape the quote as you would in T-SQL. The SQL Agent job step can then use a T-SQL script instead. The example below uses the set_execution_parameter_value stored procedure to set this value and will still result in "Michael O'Callaghan" being passed in.
DECLARE @execution_id BIGINT
EXEC [SSISDB].[catalog].[create_execution] @package_name=N'Package.dtsx', @execution_id=@execution_id OUTPUT,
@folder_name=N'Project Folder', @project_name=N'Project', @use32bitruntime=False, @reference_id=Null
DECLARE @var0 SQL_VARIANT = N'Michael O''Callaghan'
EXEC [SSISDB].[catalog].[set_execution_parameter_value] @execution_id, @object_type=30, @parameter_name=N'Name', @parameter_value=@var0
DECLARE @var1 SMALLINT = 1
EXEC [SSISDB].[catalog].[set_execution_parameter_value] @execution_id, @object_type=50, @parameter_name=N'LOGGING_LEVEL', @parameter_value=@var1
EXEC [SSISDB].[catalog].[start_execution] @execution_id
Escape it. Just use a doubled apostrophe: '' (not a quotation mark ", but two apostrophes).
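As a quick sanity check, a doubled apostrophe inside a T-SQL string literal collapses back to a single apostrophe, so the value arrives intact:

```sql
-- A doubled apostrophe inside a string literal is the escape for one apostrophe:
SELECT 'Michael O''Callaghan';
-- returns: Michael O'Callaghan
```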
Try the standard way of maintaining a configuration file (if you are using 2008 or earlier) and pass the variable values through the file.
An alternative way to handle this, and frankly I think the best way, is to use Environment Variables. To my knowledge, this was introduced when Microsoft rolled out the project deployment model with SQL Server 2012 as a replacement for the package deployment model. The package deployment model required that package parameters be specified in a separate XML file deployed to the server. With the project deployment model, Microsoft created a user-friendly interface in SQL Server to manage this, and the XML file has been removed.
In short, Environment Variables allow developers to link package parameters (but not package variables, as those are internal to the package itself) to SQL Server and thus expose them on the server. This makes managing identical package parameters that exist across packages (e.g., connection managers, network folder locations in FQDN format, etc.) incredibly easy. The idea here is that if packages need to be pointed to a new server or a new network folder, developers can simply change a single value in SQL Server, which then propagates out to all packages without the need to open, change, and re-deploy each package.
For detailed steps on how to do this, see the following references:
Microsoft: This is a bit dry, but is comprehensive and from the horse's mouth.
Deploy Packages with SSIS
Lesson 1: Preparing to Create the Deployment Bundle
SQL Chick: More intuitive and provides screenshots, which I found helpful.
Parameterizing Connections and Values at Runtime Using SSIS Environment Variables
Thanks for all your suggestions, but unfortunately they didn't work; however, I built a clever workaround for this.
SQL Server Agent wraps a variable value in single quotes. For example, if you specify Jon Doe in SQL Server Agent, the agent wraps it as 'Jon Doe' and passes it to the SSIS package. So if you enter a value containing an apostrophe, e.g. John O'Doe, the wrapped value breaks the SQL Server Agent job and the SSIS package never executes. You therefore need to pass your variable value as John O''Doe; the agent then wraps it as 'John O''''Doe', so you need to include the following logic in your SSIS package:
Declare @TempVar nvarchar(50)
SET @TempVar = REPLACE(?, '''''', CHAR(39))
The above code creates a variable to store the parameter value and collapses the escaped quotes back to a single quote. CHAR(39) is the ASCII code for a single quote.
This would then cause the variable value to look like John O'Doe.
Hope this helps.
The reason I wanted to pass the variable value from the agent is that I had to change the value very often, and changing it inside the SSIS package would mean redeploying it every time. So this way is faster.
Related
How do I execute a stored procedure, created in a Microsoft SQL Server database, from my application built on the Node.js Express framework, which uses the mssql package to connect to the DB and execute DB-related tasks?
Please be clear on how to pass the parameters and how the stored procedure name is referenced in the application through the mssql package.
I am quite new to the technology so any help would be greatly appreciated. Thank you.
There are many ways of doing this and many packages that can help. I would recommend the Knex.js package. Once you've set that up and made a connection, you can then use the knex.raw function to execute arbitrary SQL and have it returned as a knex object. I'm not sure of the specific SQL syntax for MSSQL, but it should be very similar to Postgres where you would do something like:
knex.raw('select * from my_func(?, ?)', [valOne, valTwo]);
In the above example I am running a select query against a stored procedure called my_func. I am then passing in a question mark for each parameter, and matching those up with an array after the string. This will result in the following SQL being executed:
select * from my_func(valOne, valTwo);
This includes escaping values to help defend against things such as SQL injection.
Your execution syntax may be slightly different in MSSQL, but you can still use knex.raw and the question mark + array syntax to inject values into a prepared statement like this.
Earlier we were on SSIS 2008, with 3 projects under a solution: Project A, Project B, and Project C. I have a master package in Project B from which I call packages in Project A and Project C via Execute Package Tasks using external references to the file system, and everything worked well up to that point. We recently moved from SSIS 2008 to SSIS 2012 and used the project deployment model to deploy packages to SSISDB instead of the file system (package deployment model), so I replaced the Execute Package Tasks in my master package with Execute SQL Tasks to call the packages from Projects A and C.
I created a connection to SSISDB, used it in the Execute SQL Task, and placed the T-SQL code below in it; this also worked fine.
Declare @execution_id bigint
EXEC [SSISDB].[catalog].[create_execution]
@package_name=N'Child_Package.dtsx',
@execution_id=@execution_id OUTPUT,
@folder_name=N'Test',
@project_name=N'ProjectA',
@use32bitruntime=False, @reference_id=Null
Select @execution_id
DECLARE @var0 smallint = 1
EXEC [SSISDB].[catalog].[set_execution_parameter_value]
@execution_id,
@object_type=50,
@parameter_name=N'LOGGING_LEVEL',
@parameter_value=@var0
EXEC [SSISDB].[catalog].[start_execution] @execution_id
GO
I have 2 questions here:
1) If I deploy my packages from the development to the production server, will this T-SQL code inside the Execute SQL Task work there without any issues? (I read in one post that it can have issues when deployed to a different environment.) I am not hard-coding any environment parameter value inside my T-SQL code; will this work in another environment as well?
2) I am trying to implement rollback transaction support in my master package, so if any child package fails, everything needs to be rolled back. I have changed the package's TransactionOption property to Required and left all other tasks (the Execute SQL Tasks) as Supported (the default), but I end up with an error like:
"[Execute SQL Task] Error: Executing the query "Declare @execution_id bigint
EXEC [SSISDB].[catalo..." failed with the following error: "Cannot use SAVE TRANSACTION within a distributed transaction.". Possible failure reasons: Problems with the query, "ResultSet" property not set correctly, parameters not set correctly, or connection not established correctly"
The package works fine if the package's TransactionOption property is Supported; the problem comes only when I set it to Required.
Any help is much appreciated.
I am trying to call a Microsoft SQL Server Stored Procedure that delivers data in table like format (rows / columns) in Oracle BI Publisher 11g (11.1.1.7).
Selecting procedure call as a data source for the data model does not work because BIP expects it to behave like a PL/SQL call to an Oracle database instead.
Oracle developers claim this is not supported by the software.
Is there any way around this restriction?
Although not supported out-of-the-box by BI Publisher 11g, there is a workaround to the problem. It involves tricking the software into thinking it is making a standard PL/SQL call when in reality it is executing a stored procedure on the SQL Server data source.
1) Make sure you have the Microsoft SQL Server JDBC driver installed on the WebLogic server running the BIP instance.
It can be downloaded from MSDN here: http://msdn.microsoft.com/en-us/sqlserver/aa937724.aspx - depending on your JRE version you'll want to use one or the other jar file:
For JRE 1.6 and above, use sqljdbc4.jar. For 1.5 and below, use sqljdbc.jar.
You should place this in your $MIDDLEWARE_HOME\user_projects\domains\$your_domain_here$\lib\ folder and remember to restart the WebLogic server afterwards.
2) Inside BI Publisher administration, create a new JDBC datasource.
Our example works with following properties:
Driver Type: Microsoft SQL Server 2008 Database
Driver Class: com.microsoft.sqlserver.jdbc.SQLServerDriver
Connection String: jdbc:weblogic:sqlserver://[hostname]:[port];databaseName=[database name]
Fill in the username/password and test the connection (if the driver is installed correctly, this should work just fine).
3) Create a new datamodel.
Choose SQL query as your dataset. Here, add in these properties:
Data Source: your JDBC data source
Type of SQL: Non-standard SQL
Row Tag Name: (choose one yourself) - for now just write test.
4) Under SQL query, we now need to convince BIP that it is calling an Oracle SP when in fact it is calling an existing stored procedure on your MS SQL datasource.
This part is assuming your stored procedure delivers N amount of rows and column labels over.
Here is how we solved it for our SP that is called nrdart_get_custody_holding_headers_sp '2014-11-25' where the parameter is a date supplied by the user.
declare @var1 datetime
declare @sql varchar(255)
set @var1 = '2014-11-25'
set @sql = 'nrdart_get_custody_holding_headers_sp ' + '''' + cast(@var1 as varchar) + ''''
exec (@sql)
Here, we are declaring some SQL Server datatypes, and setting them as our date parameter and as our procedure call name using some creative use of the cast function and escape characters, before finally calling exec on the stored procedure.
The parameter var1 will also work if you use a standard BIP parameter instead of the hard-coded example above, i.e. :userDate, where :userDate refers to an existing parameter called userDate in the data model.
Don't worry if you don't see row/column labels after clicking OK. Instead, click on "view data" and there you go. Rows and columns with data from your SP on Microsoft SQL Server. Now proceed to save this as sample data and design report layout as you would normally do. For non-date parameters you might need to play around a little bit with datatypes, but I don't see why you shouldn't get it to work with integers or varchars as well.
My company considers database scripts we write part of our intellectual property.
With new releases, we deliver a 2-part setup for our users:
a desktop application
an executable that wraps up the complexities of initializing/updating a database (RedGate SQL Packager).
I know I can encrypt a stored procedure on a database once the script is present, but is there any way to insert it in an encrypted form? I don't want plain-text to be able to be intercepted across the "wire" (or more accurately, between the SQL script executable and the server).
I'm not really tied to the tool we're using - I just want to know if it's possible without having to resort to something hokey.
Try using the EncryptByPassPhrase and DecryptByPassPhrase functions.
Use EncryptByPassPhrase to encrypt all your DDL statements and then DecryptByPassPhrase on the server to decrypt and execute them.
declare @encrypt varbinary(200)
select @encrypt = EncryptByPassPhrase('key', 'your script goes here' )
select @encrypt
select convert(varchar(100),DecryptByPassPhrase('key', @encrypt ))
Create a procedure that would look like this
CREATE PROCEDURE DBO.ExecuteDDL
(
@script varbinary(max)
)
AS
BEGIN
DECLARE @SQL nvarchar(max)
SET @SQL = (select convert(varchar(max),DecryptByPassPhrase('key', @script )))
EXECUTE sp_executesql @SQL
END
Once this is in place, you can publish scripts to your server by encrypting the DDL with EncryptByPassPhrase and passing the resulting varbinary to dbo.ExecuteDDL.
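As a minimal sketch of that publish step (assuming the passphrase 'key' from the example above and a made-up piece of DDL):

```sql
-- Encrypt the script with the shared passphrase on the client side...
DECLARE @payload varbinary(max)
SELECT @payload = EncryptByPassPhrase('key', N'CREATE TABLE dbo.Demo (Id int)')
-- ...then send only the ciphertext over the wire to the wrapper procedure
EXEC dbo.ExecuteDDL @script = @payload
```

Note that EncryptByPassPhrase has a cleartext size limit (8,000 bytes), so longer scripts would need to be split into chunks.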
This isn't plain-text and last I checked, it still works:
declare @_ as varbinary(max)
set @_ =0x0D000A005000520049004E0054002000270054006800690073002000620069006E00610072007900200073007400720069006E0067002000770069006C006C002000650078006500630075007400650020002200530045004C0045004300540020002A002000460052004F004D0020005300590053002E004F0042004A00450043005400530022003A0027000D000A00530045004C0045004300540020002A002000460052004F004D0020005300590053002E004F0042004A0045004300540053000D000A00
exec (@_)
Technically, it's not encryption, but it's not plaintext either, and it can serve as the basis for some mild encryption pretty easily.
There's little you can do to reliably prevent the code in the database from being read by anyone who really wants to. The WITH ENCRYPTION option is really just obfuscation, and many simple scripts can recover the plain text; and when the database is upgraded, the profiler will ultimately always be able to catch the ALTER PROCEDURE statement with the full text. Network tracers can be evaded by using an encrypted connection to the server.
The real problem comes from the fact that the database is installed in a server that your users own and fully control (correct me if that's not the case). No matter what you do, they'll have full access to the whole database, it's schema, and internal programming inside sprocs/functions.
The closest I can think of to preventing that is to switch to CLR stored procedures, which are installed by copying a DLL to the server and registering it within SQL Server. They pose other problems, as they are totally different to program and may not be the best tool for what you would normally use a sproc for. Also, since they are made of standard .NET code, they can be trivially decompiled.
The only way I can think of to fully protect the database structure and code would be to put it on a server of yours that you expose to your customers through, say, a web service or a handful of sprocs as wrappers, so no one can peek inside.
I'm trying to understand how I can use an alias to reference another database in the same instance, without having to use a hardcoded name.
The scenario is as below:
I have a data DB which stores data and an audit DB which keeps all changes made. For various reasons, I want to keep the audit data in a separate database, not least because it can get quite large, and for reporting purposes.
In the data DB, I don't want to reference the audit DB by a hardcoded name but by an alias, so that in different environments I don't have to change the name and the various SPs that reference it.
for example:
mydevdata
mydevaudit
If an SP exists in mydevdata which calls mydevaudit, I don't want to change the SP when I go to test, where the DBs may be called mytestdata and mytestaudit. Again, for various reasons, the database names can change, more to do with spaces and instances etc.
So if I had a procedure in mydevdata:
proc A
begin
insert into mydevaudit.table.abc(somecol)
select 1
end
when I go to test, I don't want to have to change the procedure to reference another name (assume for the sake of argument that happened).
Instead I am looking to do something like:
proc A
begin
insert into AUDITEBALIAS.table.abc(somecol)
select 1
end
I am interested in finding out how I could do something like that, and the pros and cons.
Also, dynamic SQL is not an option.
Thanks in advance for your help.
You may be able to use synonyms:
CREATE SYNONYM WholeTableAliasWithDBetc FOR TheDB.dbo.TheTable
This means all object references in the local DB are local to that DB, except for synonyms that hide the other database from you.
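For instance, using the audit table from the question (and assuming it lives in schema dbo), the synonym hides the database name, and only the synonym definition changes per environment:

```sql
-- Created once per environment, pointing at that environment's audit DB:
CREATE SYNONYM dbo.AuditAbc FOR mydevaudit.dbo.abc;

-- The procedure body then never mentions the audit database by name:
INSERT INTO dbo.AuditAbc (somecol)
SELECT 1;
```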
You can also use stored procedures in the audit DB. There is a third, little-used form of EXEC where you can parametrise the stored proc name:
DECLARE @module_name_var varchar(100)
SET @module_name_var = 'mydevaudit.dbo.AuditProc'
-- SET @module_name_var = 'whatever.dbo.AuditProc'
EXEC @module_name_var @p1, @p2, ...
Obviously you can change @module_name_var to use whatever DB you like.
I've just posted this to How to create Sql Synonym or "Alias" for Database Name? which is a workaround for the same situation:
There is a way to simulate this using a linked server. This assumes you have two SQL servers with the same set of databases one for development/test and one live.
Open SQL Server Management Studio on your development/test server
Right click Server Objects > Linked Servers
Select New Linked Server...
Select the General page
Specify alias name in Linked server field - this would normally be the name of your live server
Select SQL Native Client as the provider
Enter sql_server for Product Name
In Data Source specify the name of the development server
Add Security and Server Options to taste
Click OK
The above is for SQL Server 2005 but should be similar for 2008
Once you've done that you can write SQL like this:
SELECT * FROM liveservername.databasename.dbo.tablename
Now, when your scripts are run on the development server with the linked server pointing back to itself, they will work correctly, pulling data from the development server; and when the exact same scripts are run on the live server, they will work normally as well.