I had some old SQL Server 2012 solution files from my last data warehouse implementation, and decided to try to make them work in SQL Server 2019. Deployment wasn't working at all, so I upgraded all of the packages, created a new 2019 solution, and added all of the existing packages back in.
The thing is, I was raised on building data warehouses in Cognos tools, so at the time I was still getting to grips with the Microsoft way of doing things, and package-based deployment with Configurations was the original setting. I don't know whether the packages have been imported into a package or a project deployment model in the new solution, but I have deployed them to an Integration Services catalog (SSISDB).
I never really understood the whole deployment thing: why do you create an SSISDB to deploy to (that seems to be what right-clicking in the solution offers), but then, when you place an Execute Package Task in your package, you have to select the package either from a local file or from the package store in MSDB? Why can't you execute the package from the SSISDB? That means I now have to copy all of those packages one by one into the MSDB package store, have a maintenance plan to deploy all package modifications to SSISDB, and then also remember to do the upload to MSDB too!?
Could anybody please confirm whether I have this understanding correct, and why on earth we would want to do this?
Thank you for any help
A lot to unpack here...
SSISDB
The SSISDB is a bespoke database for managing Project Deployment model packages. Among the many benefits are: versioned deployments, native package execution, a unified logging approach, and a simplified and secure approach for configuration.
The SSISDB stores a project (the deployable unit, which has a .ispac extension). A project is the packages, project parameters, project-level connection managers (if any), and a metadata file. MSDB, by contrast, stores individual packages.
The mechanism for deploying in the package deployment model is dtutil.exe. The mechanism for deploying a project deployment model is ISDeploymentWizard.exe. Visual Studio will offer to deploy a project deployment model to the SSISDB, but under the covers, the process doing the work is ISDeploymentWizard.exe.
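As a rough sketch, the same deployment Visual Studio performs can be scripted from the command line; the folder, server, and project names below are placeholders, not anything from your environment:

```shell
REM Deploy an .ispac to the SSIS catalog silently; paths and names are illustrative
ISDeploymentWizard.exe /Silent ^
  /SourcePath:"C:\Build\SalesETL.ispac" ^
  /DestinationServer:"localhost" ^
  /DestinationPath:"/SSISDB/Sales/SalesETL"
```

Scripting it this way is handy once you want the same deployment to run from a build server instead of a developer's machine.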
I don't understand your deploy-to-MSDB, then run a maintenance plan to deploy to SSISDB workflow. That's not a thing I have encountered in 15 years of working with SSIS and 8 years with the project deployment model. You just deploy the project to the SSISDB.
Execute package task
The Execute Package task is a mechanism for one package to run another. In the Package Deployment model, you must specify where to find the package either through a file connection manager or a database (going by memory here). When you launch it, you can specify whether it's in process (wait for it to complete) or out of process (fire and forget).
In the Project deployment model, you have an additional option of a project reference package. When you use that, you don't specify where the package is because it's right here, in the deployable quantum of our .ispac file.
If you think about the Package Deployment model, I could have 10 packages all focused on a Sales function in a Visual Studio project. They are only "together" because I have them that way. There's no enforced/trust relationship between them once Visual Studio is closed. I could deploy 3 packages to the file system, 3 to the SSIS Package Store (also the file system but a predefined location) and 4 to the msdb. Or maybe just create a custom folder per package and deploy all to the file system. The point is, package1 cannot assume that package2 is in a relative location to it.
The Project deployment model does ensure that relationship exists outside of the confines of an SSIS project. This empowers you to design packages that take parameters when they run or use a shared resource, like a connection manager or a project scoped property (parameter).
You could have a Package Deployment model package that expected a run-time variable to be passed in to override a design-time variable, but the Execute Package Task didn't allow you that level of granularity.
But I want to execute a package that is in a different project and uses the Project deployment model
In this scenario, you're not reaching for the Execute Package Task. Instead, you're going to need an OLE/ADO/I-guess-ODBC-would-work-but-would-not-recommend Connection manager to your SSISDB and then you're going to fire off the correct TSQL statements.
catalog.create_execution
catalog.set_execution_parameter_value
catalog.start_execution
You'll likely want at least one parameter in there with a SYNCHRONIZED setting if you want to wait on the child package to complete. Otherwise, you won't know if or when it finished. And maybe that's OK for your work.
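Putting those three procedures together, a minimal T-SQL sketch looks like the following; the folder, project, and package names are placeholders for whatever you deployed:

```sql
-- Run a package in the SSIS catalog and wait for it to finish
DECLARE @execution_id BIGINT;

EXEC SSISDB.catalog.create_execution
    @folder_name  = N'MyFolder',      -- placeholder catalog folder
    @project_name = N'MyProject',     -- placeholder project
    @package_name = N'Child.dtsx',    -- placeholder package
    @execution_id = @execution_id OUTPUT;

-- SYNCHRONIZED = 1 makes start_execution block until the package completes
EXEC SSISDB.catalog.set_execution_parameter_value
    @execution_id,
    @object_type     = 50,            -- 50 = system parameter
    @parameter_name  = N'SYNCHRONIZED',
    @parameter_value = 1;

EXEC SSISDB.catalog.start_execution @execution_id;
```

With SYNCHRONIZED set, a failure in the child raises an error you can catch in the calling package; without it, start_execution returns immediately.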
Related
I'm trying to build CI/CD for my Microsoft SQL Server database projects, working with Azure DevOps pipelines.
I have all databases in Visual Studio database projects with Git as source control. My objective is to be able to release databases through DevOps pipelines to the different environments:
DEV
UAT
PROD
I was thinking of using DBGhost: http://www.innovartis.co.uk/ but I can't find up-to-date information about this tool (only very old material), and there is very little on the internet about it and how to use it (is it still in use?).
I would like to use a mix of DBGhost and DevOps: DBGhost for source scripting, building, comparing, synchronizing, creating delta scripts, and upgrading, and DevOps to make the releases (calling the builds created by DBGhost).
If you have any ideas using this or other methods, I would be grateful, because currently all releases are manual, which is not very advisable.
We have this configured in our environment using just DevOps. Our database is in a Visual Studio database project. The MSBuild task builds the project and generates a DACPAC file as an artifact, and the release uses the "SQL Server Database Deploy" task to deploy it to the database. The deploy task needs to run under an account with enough privileges to create the database, logins, etc., but it takes care of performing the schema compare, generating the delta scripts, and executing them.

If your deploy is going to make changes that could result in data loss, such as removing columns, you will need to include the additional argument /p:BlockOnPossibleDataLoss=false in the deploy task. This flag is not recommended unless you know there will be changes that cause data loss; without it, any deploy that would result in data loss will fail.
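Under the covers that deploy task drives SqlPackage, so as a sketch you can reproduce the same publish from a command line; the server, database, and file names here are placeholders:

```shell
REM Publish a DACPAC to a target database, allowing potentially lossy changes
SqlPackage.exe /Action:Publish ^
  /SourceFile:"MyDatabase.dacpac" ^
  /TargetServerName:"uat-sql01" ^
  /TargetDatabaseName:"MyDatabase" ^
  /p:BlockOnPossibleDataLoss=false
```

Running it by hand like this is a useful way to debug a failing pipeline deploy, since you see the full error output directly.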
I am implementing CI/CD for a SQL Server database through Redgate software and TeamCity. I managed to build and push the NuGet database package to Octopus, and I can see the NuGet package in the Library section of Octopus. But I am facing issues deploying that package to SQL Server: I can't find the built-in step template "Deploy a NuGet package" in the Octopus process section. I have also tried the "Deploy a package" step template, but it didn't work. I am following this guide:
https://documentation.red-gate.com/sr1/worked-examples/deploying-a-database-package-using-octopus-deploy-step-templates
Any help will be highly appreciated.
Good question. To use Redgate's tooling with Octopus Deploy you will need to install the step templates they provide. I recommend "Create Database Release" and "Deploy Database Release". When you are browsing the step templates you might notice one that deploys directly from a package. The state-based functionality of SQL Change Automation works by comparing the state of the database stored in the NuGet package with the destination database. Each time it runs, it creates a new set of delta scripts to apply. Because of that, the recommended process is:
Download the database package onto the jump box.
Create the delta script by comparing the package on the jump box with the database on SQL Server.
Review the delta script (can be skipped in dev and test).
Run the script on SQL Server using the tentacle on the jump box.
Let's go ahead and walk through each one. The download-a-package step is very straightforward; there are no custom settings aside from picking the package name.
The "Redgate - Create Database Release" step is a little more interesting. This is the step that generates the actual delta script that will be run on the database. What trips up most people is the Export Path: it is where the delta script will be exported to, and it needs to be a directory outside of the Octopus Deploy Tentacle folder. This is because the "Redgate - Deploy from Database Release" step needs access to that path, and the Tentacle folder will be different for each step.
What I like to do is use a project variable.
The full value of the variable is:
C:\RedGate\#{Octopus.Project.Name}\#{Octopus.Release.Number}\Database\Export
The next step is approving the database release. I recommend creating a custom team to be responsible for this. My preference is to skip this step in Dev and QA.
The create database release step makes use of the artifact functionality built into Octopus Deploy. This allows the approver to download the files and review them.
The final step is deploying the database release. This step takes the delta script in the export data path and runs it on the target server. This is why I recommend putting the export path in a variable.
Some other general items to help you get going. First, don't install Tentacles directly onto SQL Server instances. In production, the typical SQL Server setup is a cluster, or multiple nodes with Always On high availability; access to SQL Server is handled via a virtual IP.
If you were to install Tentacles on both nodes, Octopus Deploy would attempt to run the change script on both nodes at the same time (by default). That will cause a lot of drama. I recommend using a jump box, because you will need something to sit between Octopus Deploy and SQL Server. When you get comfortable with that, I'd recommend using workers (but that is a bit of scope creep, so I won't cover it here).
If you would like to know more on how to wire this up, check out the blog post I wrote (and copied from for this answer) here.
I also have written an entire series on database deployments with Octopus Deploy, which you can find here.
Finally, our documentation covers jump boxes and permissions you will need for the user doing the database deployments.
Hope that helps!
I am currently working on a continuous integration project to automatically build and deploy database changes to a target environment.
We are using Perforce (P4) for the source code repository, Nexus for the artefact repository, and MS SQL Server 2008.
We are not using Redgate for the database repository.
Check-in process
- Developers manually extract database objects (e.g. tables, stored procedures, functions) using Management Studio and check them in to the Perforce source repository.
Requirement:
As part of the CI process, when developers check in their code to the source repository, the build process should be triggered, create artefacts from the checked-in code, and copy them to the artefact repository.
The deployment process should be triggered automatically when it finds any new artefacts, and deploy them to the target environment.
I would highly appreciate it if someone could help me understand:
build and deployment steps
requirement of manifest file
if it is possible to extract incremental changes
Get SSDT in Visual Studio (the Express edition works if you don't have licenses).
This will mean your developers check in CREATE statements and you deploy incremental changes. It is pretty simple to set up: just have a build step call sqlpackage.exe to deploy or to generate scripts.
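A minimal sketch of that build step, with placeholder file, server, and database names: instead of deploying directly, you can have SqlPackage emit the incremental upgrade script for review.

```shell
REM The build produces MyDb.dacpac; generate the delta script against the target
SqlPackage.exe /Action:Script ^
  /SourceFile:"MyDb.dacpac" ^
  /TargetServerName:"target-sql" ^
  /TargetDatabaseName:"MyDb" ^
  /OutputPath:"upgrade.sql"
```

Switching /Action:Script to /Action:Publish applies the changes directly instead of writing them to a file.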
I have a huge number of SSIS 2012 packages from different SSIS projects, and often the package names in the different projects are identical.
I use SSIS logging from all the packages to one single table in a log database, and I would like to keep it that way, so that I only have one database to search for all the SSIS logging.
When I am using SSIS logging in the packages, is it possible to identify the project name as well, so I can see which SSIS packages and projects are affected?
BR
Carsten
It sounds like you are using package deployment, in which case there is no SSIS catalog and no project (per se), hence no further information about your package. Logging that writes to sysssislog was designed before the concept of project deployment, so that's why that piece of information is missing. Likewise, the use of MSDB predates project deployment, so it has no project information either.
So there's no simple solution. You could convert to project deployment and take advantage of all of the built-in logging and reporting there (which you already said you don't want). Or you could modify all the packages to log their package ID and project name into an additional table.
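A hedged sketch of that second option, with made-up table and variable names: create a small companion table in the log database, then add an Execute SQL Task to each package that records the package ID alongside a project name you keep in a package variable.

```sql
-- Hypothetical companion table in the log database
CREATE TABLE dbo.PackageProjectMap (
    PackageID   UNIQUEIDENTIFIER NOT NULL,  -- maps to System::PackageID
    ProjectName NVARCHAR(128)    NOT NULL,  -- from a variable like User::ProjectName
    LoggedAt    DATETIME2        NOT NULL DEFAULT SYSUTCDATETIME()
);

-- Statement for the Execute SQL Task; map the ? placeholders to
-- System::PackageID and your User::ProjectName variable
INSERT INTO dbo.PackageProjectMap (PackageID, ProjectName)
VALUES (?, ?);
```

Joining this table to sysssislog on the package GUID then lets you filter the shared log by project.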
I have a server hosting a number of different databases, all with the same structure, and a library of SSIS packages, some of which are common across the client databases and others are client-specific.
I'm aware that you can store packages in MSDB, but this is a shared store across the whole SQL instance - is it possible to attach a package to a specific database?
Actually, when you store packages in the msdb, they are stored in a specific instance's msdb. Run either SELECT * FROM dbo.sysdtspackages90 (2005) or SELECT * FROM dbo.sysssispackages (2008) across all your instances and you'll determine which one is currently hosting your packages. If you are using folders, I have fancier versions of these queries available.
What I believe you are observing is an artifact of the tools. There is only one instance of the SQL Server Integration Services service. This service doesn't stop you from storing packages in a specific instance; it just makes it a little more complex to do so. Or as I see it, by ditching the GUI (SSMS) you free yourself from the fetters of non-automated administration.
How do you push packages into the other named instances? You can either edit the service's .ini file as described in the link above and then reconnect to Integration Services in SSMS, or use a command-line or query approach to managing your packages. We used the SSISDeploymentManifest in my previous shops with success to handle deployments. There is a GUI associated with the .SSISDeploymentManifest file, and you can use that to handle your deploys, or you're welcome to work with the .NET object model instead. I was happy with my PowerShell SSIS deployment and maintenance, but your mileage may vary.
Finally, to give concrete examples of a command-line deployment, the following would deploy a package named MyPackage.dtsx sitting in the root of C: to two named instances on the current machine, into the root of MSDB:
dtutil.exe /file "C:\MyPackage.dtsx" /DestServer .\Dev2008 /COPY SQL;"MyPackage"
dtutil.exe /file "C:\MyPackage.dtsx" /DestServer .\Test2008 /COPY SQL;"MyPackage"
I have an earlier version of my PowerShell script for generating calls to dtexec instead of using the object library directly.
Let me know if you have further questions.