Store SSIS package in named database?

I have a server hosting a number of different databases, all with the same structure, and a library of SSIS packages, some of which are common across the client databases and others are client-specific.
I'm aware that you can store packages in MSDB, but this is a shared store across the whole SQL instance - is it possible to attach a package to a specific database?

Actually, when you store packages in msdb, they are stored in a specific instance's msdb. Run SELECT * FROM dbo.sysdtspackages90 (2005) or SELECT * FROM dbo.sysssispackages (2008) across all your instances and you'll determine which one is currently hosting your packages. If you are using folders, I have a fancier version of these queries available.
What I believe you are observing is an artifact of the tools. There is only one instance of the SQL Server Integration Services service per machine. This service doesn't stop you from storing packages in a specific instance; it just makes it a little more complex to do so. Or, as I see it, by ditching the GUI (SSMS) you free yourself from the fetters of non-automated administration.
How do you push packages into the other named instances? You can either edit the service's configuration file (MsDtsSrvr.ini.xml) and then reconnect to Integration Services in SSMS, or use a command-line or query approach to managing your packages. I used the SSISDeploymentManifest in my previous shops with success to handle deployments. There is a GUI associated with the .ssisDeploymentManifest file and you can use that to handle your deploys, or you're welcome to work with the .NET object model to handle deployments. I was happy with my PowerShell SSIS deployment and maintenance, but your mileage may vary.
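If you go the configuration-file route, MsDtsSrvr.ini.xml can expose each named instance as its own top-level folder. A minimal sketch, using the instance names from the examples below (the display names are entirely up to you):
<?xml version="1.0" encoding="utf-8"?>
<DtsServiceConfiguration xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <StopExecutingPackagesOnShutdown>true</StopExecutingPackagesOnShutdown>
  <TopLevelFolders>
    <!-- One SqlServerFolder per instance you want visible in SSMS -->
    <Folder xsi:type="SqlServerFolder">
      <Name>MSDB (Dev2008)</Name>
      <ServerName>.\Dev2008</ServerName>
    </Folder>
    <Folder xsi:type="SqlServerFolder">
      <Name>MSDB (Test2008)</Name>
      <ServerName>.\Test2008</ServerName>
    </Folder>
  </TopLevelFolders>
</DtsServiceConfiguration>
Restart the SQL Server Integration Services service after editing the file for the change to take effect.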
Finally, to give concrete examples of a command-line deployment, the following would take a package named MyPackage.dtsx sitting in the root of the C: drive and deploy it to two named instances on the current machine, into the root of MSDB.
dtutil.exe /file "C:\MyPackage.dtsx" /DestServer .\Dev2008 /COPY SQL;"MyPackage"
dtutil.exe /file "C:\MyPackage.dtsx" /DestServer .\Test2008 /COPY SQL;"MyPackage"
I have an earlier version of my PowerShell script that generates calls to dtutil instead of using the object library directly.
Let me know if you have further questions.

Related

When to use SSIS Environments

OK, so I understand what they are: collections of parameter assignments that you can tell a package to use upon execution.
What I'm trying to understand is why or if I need to use them.
I've migrated a bunch of old SSIS packages using the package deployment model to some new servers using the project deployment model. I have one project containing about 35 packages. I've created parameters on all my packages, and have a couple of project-level parameters for things like server name.
I'm developing on my PC and the packages will have their parameters set to my dev environment settings by default unless I change them.
I'm deploying my packages to 3 servers (Test, UAT, Prod). I deploy them from Visual Studio.
Each server runs an identically scripted SQL job to execute the package.
So now, I need to set my parameters for each environment/server.
Do I need to set up environments, or why can't I just right-click and configure my project in the Integration Services catalog on each server, and set the parameters there for the project and each package?
If I create environments, I still need to enter all the parameter values for each server/environment, but then I need to set up the reference between the project and the environment, and set each SQL job to use the relevant environment when executing the job.
Are environments only useful if you have one server, one package catalogue, and one set of SQL Jobs, and you're just using different databases for each environment so you need the environments to toggle between each?
Aren't they overkill if you have your environments on different servers, or am I missing something?
A use case for Environments
As the DBA, I have access to the user names and passwords for systems. I can perform a one-time task of setting up environments that define all the things.
I can then empower the developers to create/deploy and configure their own SSIS projects without having to get involved or giving them a post-it with the associated credentials.
Environment vs. project configuration - I view this as automating whatever makes the most sense. I used to be a big advocate of Environments, but I found migrating them between servers difficult (before I found the magic script), and the fact that you can only have one associated with a project was a limitation I didn't like. At this point, I generally configure a project directly with SSMS and save the script off so I can replace values as I migrate up the environments.
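A minimal sketch of what that saved-off script can look like, assuming a hypothetical folder Finance, project SalesETL, and project parameter ServerName:
EXEC SSISDB.catalog.set_object_parameter_value
    @object_type = 20,              -- 20 = project-level parameter, 30 = package-level
    @folder_name = N'Finance',
    @project_name = N'SalesETL',
    @parameter_name = N'ServerName',
    @parameter_value = N'UATSQL01'; -- swap this literal per server as you migrate up
Save one copy per environment, changing only the values, and you have a repeatable configuration step with no environment references at all.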

How to build CI/CD for MS SQL Server?

I'm trying to build CI/CD for my Microsoft SQL Server database projects. It will work with Azure DevOps pipelines.
I have all databases in Visual Studio database projects with Git as source control. My objective is to have something I can use to release databases via DevOps pipelines to the different environments:
DEV
UAT
PROD
I was thinking of using DBGhost: http://www.innovartis.co.uk/ but I can't find updated information about this tool (only very old info), and there is very little information on the internet about it and how to use it (is it still in use?).
I would like to use a mix of DBGhost and DevOps: DBGhost for source scripting, building, comparing, synchronizing, creating delta scripts, and upgrading, and DevOps to make the releases (which would call the builds created by DBGhost).
If you have any ideas using this or other methods, I would be grateful, because currently all releases are manual, which is not very advisable.
We have this configured in our environment using just DevOps. Our database is in a Visual Studio database project. The MSBuild task builds the project and generates a DACPAC file as an artifact, and the Release uses the "SQL Server Database Deploy" task to deploy this to the database. The deploy task needs to run under an account with enough privileges to create the database, logins, etc., but it takes care of performing the schema compare, generating the delta scripts, and executing them.
If your deploy is going to make changes that could result in data loss, such as removing columns, you will need to include the additional argument /p:BlockOnPossibleDataLoss=false in the deploy task. This flag is not recommended unless you know there will be changes that cause data loss; without the flag, any deploy that would result in data loss will fail.
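Under the covers that deploy task drives SqlPackage, so the equivalent command line is a useful mental model (the server, database, and file names here are hypothetical):
SqlPackage.exe /Action:Publish /SourceFile:"MyDb.dacpac" /TargetServerName:"UATSQL01" /TargetDatabaseName:"MyDb" /p:BlockOnPossibleDataLoss=false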

SSISDB v MSDB Deployment for Execute Task

I had some old SQL Server 2012 solution files from my last data warehouse implementation, & decided to try to make them work in SQL Server 2019. The whole deployment thing was not working, so I upgraded all of the packages & then made a new 2019 solution and started adding in all of the existing packages.
The thing is, I was bred on making DWs in Cognos tools, so I was getting to grips with the MS way of doing things at the time, & package-based deployment with Configurations was the original setting. I don't know whether they have been imported into a package or project deployment model in the new solution, but I have deployed them to an IS Catalog SSISDB.
I never really got the whole deployment thing, in the sense that you create an SSISDB to deploy to (it seems, from right-clicking in the solution file), but then when you place an 'Execute Package Task' in your package, you have to select the package either from a local file or from the package store in MSDB... Why do you not execute the package from the SSISDB? That means that I now have to copy all of those packages one by one into the MSDB package store & have a maintenance plan to deploy all package modifications to SSISDB & then also remember to do the upload to MSDB too!?
Could anybody please confirm that I have this understanding correct, & why on earth would we want to do this?
Thank you for any help
A lot to unpack here...
SSISDB
The SSISDB is a bespoke database for managing Project Deployment model packages. Among the many benefits are: versioned deployments, native package execution, a unified logging approach, and a simplified and secure approach for configuration.
The SSISDB stores a project (the deployable unit, which has a .ispac extension). A project comprises the packages, project parameters, project-level connection managers (if any), and a metadata file. MSDB stores individual packages.
The mechanism for deploying in the package deployment model is dtutil.exe. The mechanism for deploying in the project deployment model is ISDeploymentWizard.exe. Visual Studio will offer to deploy a project deployment model to the SSISDB, but under the covers, the process is going to be ISDeploymentWizard.
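For scripted deployments, ISDeploymentWizard also runs silently from the command line; a sketch with hypothetical paths and catalog folder names:
ISDeploymentWizard.exe /Silent /SourcePath:"C:\Deploy\SalesETL.ispac" /DestinationServer:"localhost" /DestinationPath:"/SSISDB/Finance/SalesETL"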
I don't understand your deploy-to-msdb-then-maintenance-plan-to-SSISDB flow. That's not a thing I have encountered in 15 years of working with SSIS and 8 years with the Project Deployment model. You just deploy the project to the SSISDB.
Execute package task
The Execute Package task is a mechanism for one package to run another. In the Package Deployment model, you must specify where to find the package, either through a file connection manager or a database (going by memory here). When you launch it, you can specify whether it runs in process (wait for it to complete) or out of process (fire and forget).
In the Project Deployment model, you have the additional option of a project reference package. When you use that, you don't specify where the package is, because it's right there in the deployable quantum of our .ispac file.
If you think about the Package Deployment model, I could have 10 packages all focused on a Sales function in a Visual Studio project. They are only "together" because I have them that way. There's no enforced/trust relationship between them once Visual Studio is closed. I could deploy 3 packages to the file system, 3 to the SSIS Package Store (also the file system but a predefined location) and 4 to the msdb. Or maybe just create a custom folder per package and deploy all to the file system. The point is, package1 cannot assume that package2 is in a relative location to it.
The Project deployment model does ensure that relationship exists outside of the confines of an SSIS project. This empowers you to design packages that take parameters when they run or use a shared resource, like a connection manager or a project scoped property (parameter).
You could have a Package Deployment model package that expected a run-time variable to be passed in to override a design-time variable, but the Execute Package Task didn't allow you that level of granularity.
But I want to execute a package that is in a different project and uses the Project deployment model
In this scenario, you're not reaching for the Execute Package Task. Instead, you're going to need an OLE/ADO/I-guess-ODBC-would-work-but-would-not-recommend Connection manager to your SSISDB and then you're going to fire off the correct TSQL statements.
catalog.create_execution
catalog.set_execution_parameter_value
catalog.start_execution
You'll likely want at least one execution parameter in there, SYNCHRONIZED, set to true, if you want to wait on the child package to finish. Otherwise, you won't know if or when it finished. And maybe that's OK for your work.
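A minimal T-SQL sketch of that pattern, assuming a hypothetical folder Finance, project SalesETL, and child package ChildPackage.dtsx:
DECLARE @execution_id BIGINT;

-- Stage an execution of the child package
EXEC SSISDB.catalog.create_execution
    @folder_name = N'Finance',
    @project_name = N'SalesETL',
    @package_name = N'ChildPackage.dtsx',
    @execution_id = @execution_id OUTPUT;

-- SYNCHRONIZED = 1 makes start_execution block until the package completes
EXEC SSISDB.catalog.set_execution_parameter_value
    @execution_id = @execution_id,
    @object_type = 50,              -- 50 = system/execution parameter
    @parameter_name = N'SYNCHRONIZED',
    @parameter_value = 1;

EXEC SSISDB.catalog.start_execution @execution_id = @execution_id;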

SSIS operational configuration of server instance, database, and schema?

In order to enable operational management of data integration processes developed in SSIS, I am seeking to be able to externally configure:
server (data source)
database (catalog)
schema
From what I have seen, all of these are typically hardcoded into SSIS packages through the Connection Manager and in SQL statements. This hardcoding prevents the DBA from allocating resources differently and, if there is ever a change, requires every package to be modified if package deployment is being used.
It appears that project deployment would reduce this somewhat, but not eliminate it.
Target environment is SQL Server 2016 and VS 2017.
How can the server, database, and schema be externalized from the package?
SSIS has a robust facility for configuring packages per environment. You can configure any property in the package externally. This can be done in SQL Agent and even from the command line at runtime. Configurations can be stored in config files, system environment variables, a SQL table, etc. However, the modern way of configuring packages is through the project deployment model.
Here is the gist of how it works:
Add a parameter at the package or project level
Reference that parameter in an expression which configures the property you want to set, e.g. the server name or initial catalog
Deploy the project to an instance of SSIS
In SSIS, add an environment and configure its variables; these can even be passwords, which are stored securely
Add a reference to that environment from the project, and finally reference which environment you want to use at runtime (a T-SQL sketch of these last two steps follows)
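Here is that sketch against the catalog, with hypothetical names throughout (folder Finance, project SalesETL, environment UAT, parameter/variable ServerName):
-- Create the environment and a variable inside it
EXEC SSISDB.catalog.create_environment
    @folder_name = N'Finance', @environment_name = N'UAT';

EXEC SSISDB.catalog.create_environment_variable
    @folder_name = N'Finance', @environment_name = N'UAT',
    @variable_name = N'ServerName', @data_type = N'String',
    @sensitive = 0, @value = N'UATSQL01', @description = N'';

-- Reference the environment from the project ('R' = relative, same folder)
DECLARE @reference_id BIGINT;
EXEC SSISDB.catalog.create_environment_reference
    @folder_name = N'Finance', @project_name = N'SalesETL',
    @environment_name = N'UAT', @reference_type = 'R',
    @reference_id = @reference_id OUTPUT;

-- Bind the project parameter to the environment variable ('R' = referenced)
EXEC SSISDB.catalog.set_object_parameter_value
    @object_type = 20, @folder_name = N'Finance',
    @project_name = N'SalesETL', @parameter_name = N'ServerName',
    @parameter_value = N'ServerName', @value_type = 'R';
At runtime (in the SQL Agent job step or via catalog.create_execution), you then pick which referenced environment to use.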
The first link below shows a dialog for configuring connection managers with parameters. Please note that the package will store the default values, but when you create an environment as noted above, you can easily override them at runtime.
As for configuring a schema, this is possible as well by using parameters, but you would need to use expressions for your SQL queries and for setting the destination; a small example follows the links below. I would avoid making schemas variable across environments, though: it adds a lot of effort and complexity for very little flexibility in return. Please read up on these links, and good luck!
How to parameterize connection managers
All about parameters in SSIS
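To illustrate the schema point, a source query can be built with an SSIS expression against a hypothetical project parameter SchemaName:
"SELECT OrderID, Amount FROM [" + @[$Project::SchemaName] + "].[SalesOrders]"
Assign that expression to the source query property (e.g. the data flow's [OLE DB Source].[SqlCommand]) so the schema resolves per environment.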

SQL Server Data Tools 2012 - deployment package for multiple databases

In our company we have database solution that contains three SQL Server instances each with different databases. Each instance has some jobs and replication.
As of now, we maintain creation and update scripts manually and execute them with bat files.
Our deployment package contains scripts for all objects including jobs and replication.
We want to automate our process to make and test deployment packages after every SVN commit - continuous integration. We also have branches for every release; a release corresponds to a database version. Different clients have different releases/versions installed. We need to be able to create a deployment package for any branch.
Can we use SQL Server Data Tools 2012 for our needs? I have only seen tutorials for a single database and I don't know how to use it in a more complex environment.
Optionally, we could use Data Tools for maintaining the schema scripts and write the scripts for jobs/replication manually. But can we use the build process to combine it all into one package?
You should be able to use SSDT for this, by way of Publish Profiles. Create a publish profile for each instance and set up your CI jobs accordingly.
Standardizing your database names across instances (especially if they're all for the same product) would help.
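A minimal publish profile sketch (the database name and connection string are hypothetical; SSDT saves these as .publish.xml files alongside the project):
<?xml version="1.0" encoding="utf-8"?>
<Project ToolsVersion="4.0" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <PropertyGroup>
    <!-- One profile per instance/environment -->
    <TargetDatabaseName>ClientDb</TargetDatabaseName>
    <TargetConnectionString>Data Source=Instance1;Integrated Security=True</TargetConnectionString>
    <BlockOnPossibleDataLoss>True</BlockOnPossibleDataLoss>
  </PropertyGroup>
</Project>
Your CI job can then publish the built DACPAC with the matching profile, for example: SqlPackage.exe /Action:Publish /SourceFile:"ClientDb.dacpac" /Profile:"Instance1.publish.xml"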