Deploying a database package to SQL Server through Octopus & TeamCity

I am implementing CI/CD for a SQL Server database through Redgate software and TeamCity. I managed to build and push the NuGet database package to Octopus, and I can see the NuGet package in the Library section of Octopus. But I am facing issues in deploying that package to SQL Server. I can't find the built-in step template "Deploy a NuGet package" in the Octopus process section. I have also tried the "Deploy a package" step template, but it didn't work. I am following this guide:
https://documentation.red-gate.com/sr1/worked-examples/deploying-a-database-package-using-octopus-deploy-step-templates
Any help will be highly appreciated.

Good question. To use Redgate's tooling with Octopus Deploy you will need to install the step templates they provide. I recommend "Redgate - Create Database Release" and "Redgate - Deploy from Database Release". While browsing the step templates you might notice one that deploys directly from a package. The state-based functionality for SQL Change Automation works by comparing the state of the database stored in the NuGet package with the destination database. Each time it runs, it creates a new set of delta scripts to apply. Because of that, the recommended process is:
Download the database package onto the jump box.
Create the delta script by comparing the package on the jump box with the database on SQL Server.
Review the delta script (can be skipped in dev and test).
Run the script on SQL Server using the tentacle on the jump box.
Let's go ahead and walk through each one. The "Download a package" step is very straightforward: no custom settings aside from picking the package name.
The "Redgate - Create Database Release" step is a little more interesting. This is the step which generates the actual delta script that will be run on the database. What trips up most people is the Export Path. The export path is where the delta script will be exported to. It needs to be a directory outside of the Octopus Deploy Tentacle folder, because the "Redgate - Deploy from Database Release" step needs access to that path, and each step runs in its own Tentacle working folder.
What I like to do is use a project variable.
The full value of the variable is:
C:\RedGate\#{Octopus.Project.Name}\#{Octopus.Release.Number}\Database\Export
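If you later need that same path in a custom script step, it is available through the variable dictionary Octopus exposes to PowerShell. A minimal sketch, assuming the project variable above was named DatabaseExportPath (the variable name is hypothetical):

    # Read the shared export path; "DatabaseExportPath" is a made-up name,
    # use whatever you called the project variable above.
    $exportPath = $OctopusParameters["DatabaseExportPath"]

    # Make sure the folder exists before the Redgate step writes the delta script there.
    if (-not (Test-Path $exportPath)) {
        New-Item -ItemType Directory -Path $exportPath | Out-Null
    }

    Write-Host "Delta scripts will be written to $exportPath"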
The next step is approving the database release. I recommend creating a custom team to be responsible for this. My preference is to skip this step in Dev and QA.
The create database release step makes use of the artifact functionality built into Octopus Deploy. This allows the approver to download the files and review them.
The final step is deploying the database release. This step takes the delta script in the export path and runs it on the target server, which is why I recommend putting the export path in a variable.
Some other general items to help you get going. First, don't install Tentacles directly onto the SQL Server instances. In production, the typical SQL Server setup is a cluster, or multiple nodes with Always On high availability. Access to SQL Server is handled via a virtual IP.
If you were to install tentacles on both nodes, Octopus Deploy would attempt to run the change script on both nodes at the same time (by default). That will cause a lot of drama. I recommend using a jump box because you will need something to sit between Octopus Deploy and SQL Server. When you get comfortable with that I'd recommend using workers (but that is a bit of scope creep, so I won't cover that).
If you would like to know more on how to wire this up, check out the blog post I wrote (and copied from for this answer) here.
I also have written an entire series on database deployments with Octopus Deploy, which you can find here.
Finally, our documentation covers jump boxes and permissions you will need for the user doing the database deployments.
Hope that helps!

Related

How to build CI/CD for MS SQL Server?

I'm trying to build a CI/CD for my Microsoft SQL Server database projects. It will work with Microsoft DevOps pipelines.
I have all databases in Visual Studio database projects with Git as source control. My objective is to have something I can use to release databases with DevOps pipelines to the different environments:
DEV
UAT
PROD
I was thinking of using DBGhost: http://www.innovartis.co.uk/ but I can't find updated information about this tool (only very old info), and there is very little on the internet about it and how to use it (is it still in use?).
I would like to use a mix of DBGhost and DevOps: DBGhost for source scripting, building, comparing, synchronizing, creating delta scripts, and upgrading, and DevOps to make the releases (which would call the builds created by DBGhost).
If you have any ideas using this or other methods, I would be grateful, because currently all releases are manual, which is not very advisable.
We have this configured in our environment using just DevOps. Our database is in a Visual Studio database project. The MSBuild task builds the project and generates a DACPAC file as an artifact, and the Release uses the "SQL Server Database Deploy" task to deploy this to the database. The deploy task needs to use an account with enough privileges to create the database, logins, etc., but it takes care of performing the schema compare, generating the delta scripts, and executing them. If your deploy is going to make changes that could result in data loss, such as removing columns, you will need to include the additional argument /p:BlockOnPossibleDataLoss=false in the deploy task. This flag is not recommended unless you know there will be changes that cause data loss; without the flag, any deploy that would result in data loss will fail.
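Under the hood that task is wrapping SqlPackage.exe, so it can help to know the equivalent call. A minimal sketch from PowerShell, where the install path, file, and server names are all hypothetical:

    # Hypothetical paths and names; adjust to your build output and target.
    # Only pass BlockOnPossibleDataLoss=false when you know the deploy
    # includes changes (like dropped columns) that will lose data.
    & "C:\Program Files\Microsoft SQL Server\150\DAC\bin\SqlPackage.exe" `
        /Action:Publish `
        /SourceFile:"MyDatabase.dacpac" `
        /TargetServerName:"SQLPROD01" `
        /TargetDatabaseName:"MyDatabase" `
        /p:BlockOnPossibleDataLoss=false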

SSISDB v MSDB Deployment for Execute Task

I had some old SQL Server 2012 solution files from my last data warehouse implementation, and decided to try and make them work in SQL 2019. The whole deployment thing was not working, so I upgraded all of the packages and then made a new 2019 solution and started adding in all of the existing packages.
The thing is, I was bred on making DWs in Cognos tools, so I was getting to grips with the MS way of doing things at the time, and package-based deployment with Configurations was the original setting. I don't know whether they have been imported into a package or project deployment model in the new solution, but I have deployed them to an Integration Services catalog, SSISDB.
I never really understood the whole deployment thing properly: why do you create an SSISDB to deploy to (it seems, from right-clicking in the solution file), but then when you place an "Execute Package Task" in your package, you have to select the package either from a local file or from the package store in MSDB? Why do you not execute the package from the SSISDB? That means I now have to copy all of those packages one by one into the MSDB package store, have a maintenance plan to deploy all package modifications to SSISDB, and then also remember to do the upload to MSDB too!?
Could anybody please confirm that I have this understanding correct, and why on earth would we want to do this?
Thank you for any help
A lot to unpack here...
SSISDB
The SSISDB is a bespoke database for managing Project Deployment model packages. Among the many benefits are: versioned deployments, native package execution, a unified logging approach, and a simplified and secure approach for configuration.
The SSISDB stores a project (the deployable unit has an .ispac extension). A project comprises the packages, project parameters, project-level connection managers (if any), and a metadata file. MSDB stores individual packages.
The mechanism for deploying a package deployment model is dtutil.exe. The mechanism for deploying a project deployment model is ISDeploymentWizard.exe. Visual Studio will offer to deploy a project deployment model to the SSISDB, but under the covers the process is going to be ISDeploymentWizard.
I don't understand your flow of deploying to msdb and running a maintenance plan to deploy to SSISDB. That's not a thing I have encountered in 15 years of working with SSIS and 8 years with the Project Deployment Model. You just deploy the project to the SSISDB.
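For what it's worth, that wizard can also be scripted for hands-off deploys. A rough sketch from PowerShell, with invented folder, path, and server names (verify the switches against your SSIS version):

    # Invented names; ISDeploymentWizard ships with the SQL Server client tools.
    & "ISDeploymentWizard.exe" `
        /Silent `
        /SourcePath:"C:\Drop\Sales.ispac" `
        /DestinationServer:"SQLDEV01" `
        /DestinationPath:"/SSISDB/Sales/Sales"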
Execute package task
The Execute Package task is a mechanism for one package to run another. In the Package Deployment model, you must specify where to find the package either through a file connection manager or a database (going by memory here). When you launch it, you can specify whether it's in process (wait for it to complete) or out of process (fire and forget).
In the Project deployment model, you have an additional option of a project reference package. When you use that, you don't specify where the package is because it's right here, in the deployable quantum of our .ispac file.
If you think about the Package Deployment model, I could have 10 packages all focused on a Sales function in a Visual Studio project. They are only "together" because I have them that way. There's no enforced/trust relationship between them once Visual Studio is closed. I could deploy 3 packages to the file system, 3 to the SSIS Package Store (also the file system but a predefined location) and 4 to the msdb. Or maybe just create a custom folder per package and deploy all to the file system. The point is, package1 cannot assume that package2 is in a relative location to it.
The Project deployment model does ensure that relationship exists outside of the confines of an SSIS project. This empowers you to design packages that take parameters when they run or use a shared resource, like a connection manager or a project scoped property (parameter).
You could have a Package Deployment model package that expected a run-time variable to be passed in to override a design-time variable, but the Execute Package Task didn't allow you that level of granularity.
But I want to execute a package that is in a different project and uses the Project deployment model
In this scenario, you're not reaching for the Execute Package Task. Instead, you're going to need an OLE/ADO/I-guess-ODBC-would-work-but-would-not-recommend Connection manager to your SSISDB and then you're going to fire off the correct TSQL statements.
catalog.create_execution
catalog.set_execution_parameter_value
catalog.start_execution
You'll likely want at least one parameter in there with a SYNCHRONIZED setting if you want to wait on the child package to run. Otherwise, you won't know if/when it finished. And maybe that's OK for your work.
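Here is a rough sketch of that batch with SYNCHRONIZED set so the caller waits. Folder, project, package, and server names are placeholders; it is wrapped in Invoke-Sqlcmd (SqlServer PowerShell module) so you can test it outside a package, but the same T-SQL would go in your Execute SQL Task:

    $runChild = @"
    DECLARE @execution_id BIGINT;

    -- Stage an execution of the child package (placeholder names).
    EXEC SSISDB.catalog.create_execution
        @folder_name     = N'Sales',
        @project_name    = N'Sales',
        @package_name    = N'ChildPackage.dtsx',
        @use32bitruntime = 0,
        @execution_id    = @execution_id OUTPUT;

    -- SYNCHRONIZED = 1 makes start_execution block until the child finishes.
    EXEC SSISDB.catalog.set_execution_parameter_value
        @execution_id,
        @object_type     = 50,   -- 50 = system parameter
        @parameter_name  = N'SYNCHRONIZED',
        @parameter_value = 1;

    EXEC SSISDB.catalog.start_execution @execution_id;
    "@

    Invoke-Sqlcmd -ServerInstance "SQLDEV01" -Database "SSISDB" -Query $runChild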

SSIS 2012 Workflow Best Practices

It is not clear to me how I should use the new features of SSIS in SQL Server 2012/2014 in an enterprise environment. Specifically, I am referring to the project deployment model, project parameters, environments, etc. We use a three-tier environment workflow: developing in development, testing and staging in QA, and production in production. The developers only have access to the development environment. The DBAs migrate code to the other environments. All source is kept in TFS.
What is the intended workflow using these new features? If a developer develops the project/package, does the developer deploy the project to the SSISDB or does the developer stop after checking in the source? Where does the DBA come into the picture? Which environment contains SSISDB? How does the project/package get deployed to the other environments?
There seem to be many "how-tos" published on the Internet, but I am struggling to find one that deals with business workflow best practices. Can anyone suggest a link to an article on this subject?
Thanks.
What is the intended workflow using these new features?
It is up to the enterprise to determine how they will use them.
If a developer develops the project/package, does the developer deploy the project to the SSISDB or does the developer stop after checking in the source?
Where does the DBA come into the picture? Which environment contains SSISDB? How does the project/package get deployed to the other environments?
It really does depend. I advocate that developers have sysadmin rights in the development tier of servers. If they break it, they fix it (or if they've really pooched it, we re-image the server). In that scenario, they develop the implementation process and use deployments to Development to simulate the actions the DBAs will take when deploying to all the other pre-production and production environments. This generally satisfies your favorite regulatory standard (SOX/SAS 70/HIPAA/PCI/etc.) as those creating the work are not the same ones that install it.
What is the deliverable unit of work for SSIS packages using the project deployment model? It is an .ispac file: a self-contained zip file with a manifest, project-level parameters, project-level connection managers, and the SSIS packages.
How you generate that is up to you. Maybe you check the ispac in and that is what is deployed to your environments. Maybe the DBAs open the solution from source control and build their own ispac. Maybe you have Continuous Integration, CI, running and you click a button and some automated process generates and deploys the ispac.
That's 1/3 of the equation. From the SSISDB side, you likely want to create an Environment and populate it with variable values. Things like Connection Strings and file paths and user names & passwords. When you start creating those things, CLICK THE CREATE SCRIPT TO NEW WINDOW button! Otherwise, you're going to have to re-enter all that data when you lift to a new environment. I would expect your developers to check those scripts into source control. For passwords, blank out the value and make notes in your deployment checklist that they need to fix that before mashing F5.
You also need SQL Scripts to create the structure (folder) within the SSISDB for the project to be deployed into. Once deployed, you'll want to apply the Environment values, created in the preceding step, to the newly deployed project. Save those out as well.
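As a sketch of what those saved scripts tend to look like, with every folder, environment, variable, and server name below invented (and again using Invoke-Sqlcmd just to make it runnable):

    $configureSsisdb = @"
    -- Folder the project will be deployed into.
    EXEC SSISDB.catalog.create_folder @folder_name = N'Sales';

    -- Environment holding the per-tier values.
    EXEC SSISDB.catalog.create_environment
        @folder_name      = N'Sales',
        @environment_name = N'QA';

    EXEC SSISDB.catalog.create_environment_variable
        @folder_name      = N'Sales',
        @environment_name = N'QA',
        @variable_name    = N'WarehouseConnectionString',
        @data_type        = N'String',
        @sensitive        = 0,
        @value            = N'Data Source=SQLQA01;Initial Catalog=DW;Integrated Security=SSPI;',
        @description      = N'QA warehouse connection';

    -- After the project is deployed: reference the environment and bind the value.
    DECLARE @reference_id BIGINT;
    EXEC SSISDB.catalog.create_environment_reference
        @environment_name = N'QA',
        @reference_id     = @reference_id OUTPUT,
        @project_name     = N'Sales',
        @folder_name      = N'Sales',
        @reference_type   = 'R';  -- R = environment lives in the project's own folder

    EXEC SSISDB.catalog.set_object_parameter_value
        @object_type     = 20,    -- 20 = project-level parameter
        @folder_name     = N'Sales',
        @project_name    = N'Sales',
        @parameter_name  = N'WarehouseConnectionString',
        @parameter_value = N'WarehouseConnectionString',  -- name of the environment variable
        @value_type      = 'R';   -- R = resolve from the referenced environment
    "@

    Invoke-Sqlcmd -ServerInstance "SQLQA01" -Database "SSISDB" -Query $configureSsisdb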
I would have each environment contain an SSISDB. I don't want a missed configuration allowing a process in the production tier to reach across to the development tier and pull data. I've seen that; it's not pretty. When code is deployed to the QA/Stage tier, we find out quickly whether we missed a connection string somewhere, because the dev servers reject the connection from QA. This means our SQL instances don't all run under the same service account. Each tier gets its own account: domain\SQLServer_DEV, domain\SQLServer_QA, domain\SQLServer_PROD. Do what you can to prevent yourself from having a bad day. If you go with a single/shared SSISDB across all your tiers, it can work, but you're going to have to invest a lot more energy ensuring that packages always run with the correct configuration environment applied lest bad things happen.

TeamCity : How to define build and deployment steps for database objects

I am currently working on a continuous integration project to auto build and deploy database changes to target environment.
We are using Perforce P4 for source code repository, Nexus for artefacts repository and MS SQL 2008.
We are not using Redgate for the database repository.
Check-in process
- Developers manually extract database objects (e.g. tables, stored procedures, functions) using Management Studio and check them in to the Perforce source repository.
Requirement:
As part of the CI process, when developers check-in their code to the source repository, the build process should get triggered and create artefacts of checked-in code and get copied to the artefacts repository.
The deployment process should get automatically triggered when it finds any new artefacts and deploy the artefact to the target environment.
I would highly appreciate it if someone could help me understand:
build and deployment steps
requirement of manifest file
if it is possible to extract incremental changes
Get SSDT in Visual Studio (the Express edition works if you don't have licenses).
This will mean your developers check in CREATE statements and you deploy incremental changes. It is pretty simple to set up: just have a build step call sqlpackage.exe to deploy or generate scripts.

Deploying MSSQL change scripts

So I am in the throes of developing our Continuous Integration practices. We are a .Net/MSSQL shop. We will all soon be on VS2012. We have settled on CruiseControl.Net for our CI server, using msbuild to compile our projects. We use SVN (possibly switching to Git later, but that's another discussion) for source control. I'm leaning towards using InstallShield to deploy code packages (usually web apps and/or batch executables) to our QA and production servers. (CCNet would build these MSIs as part of our CI.) We are also starting to include unit testing in our projects, and will use NUnit integrated with CCNet to run them automatically upon check-in.
So far this works for our standard web app/exe development. Where it does not fit in (yet) is with our MSSQL change management, or lack thereof. It's been pretty cowboy how we've done this. Some folks have used Migrator.Net. Others just do a SQL Compare with Redgate and generate a script. Still others have hand-written SQL scripts. It may or may not be in SVN. "Source control" at the db level is basically "we have backups of our databases." Boo, hiss. Needless to say, if we want some consistency with our CI and with our deployments, we need to settle on something. So far I am leaning towards using VS SQL projects to handle the change management and deployment.
Note: we (developers) are not supposed to push changes. Sys admins do that. So we can't run anything to deploy code or sql.
So, 2 problems to solve (I think):
What "technique" to use so that our CI server blows away a CI version of the database so that unit tests can be tested against it. I've settled that VS2012 SQL projects can do that. CCNet can run msbuild against the db project, which recreates the database. This is fairly easy.
How to generate change scripts for our QA and prod environments? This one I'm stuck on.
VS can do a schema compare and then generate the sql script -- but it is dependent on sqlcmd. So our sys admins would have to run sqlcmd from the command prompt to deploy it... probably not ideal. Right?
I could run msbuild again to deploy... but I don't want the database re-created, I just want changes deployed.
So what are the options here? I need something self-contained for the admins to run -- and check-in to SVN. Should I make another msi for database deployments? Can CCNet/msbuild make some other kind of "deployment package" for database changes (not re-creation) where the sys admins can double-click and go?
How do you all handle this?
Thanks
Tom
Check out the SQL Server Data Tools package from the Microsoft site.
This will register a new SQL Server 2012 Database type project to contain the definition for all of your database structures. Upon build, this will generate a create script that you can use to deploy your database.
Then, for upgrading your database, use the SQLPACKAGE.EXE tool with the create script and the target database server name to generate an Update.sql script.
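A sketch of that call from PowerShell, with invented file and server names; /Action:Script diffs the build output against the target database and writes the changes to Update.sql instead of applying them:

    # Invented names; point SourceFile at the dacpac your database project builds.
    & "SqlPackage.exe" `
        /Action:Script `
        /SourceFile:"MyDatabase.dacpac" `
        /TargetServerName:"PRODSQL01" `
        /TargetDatabaseName:"MyDatabase" `
        /OutputPath:"Update.sql"

The sys admins can then review Update.sql and run it themselves, which fits the constraint that developers don't push changes.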
Update: Also, on the issue of how you're running unit tests, you could create supplemental methods that invoke the create scripts by launching a process and passing the path to the output create.sql script, then have your tests "tear down" the database using the same method but with a DROP DATABASE statement.
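For the tear-down half, a minimal sketch (the database name is hypothetical); flipping to SINGLE_USER first kicks out any connections the tests left open:

    $teardown = @"
    -- Force out lingering test connections, then drop the test database.
    ALTER DATABASE [UnitTestDb] SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
    DROP DATABASE [UnitTestDb];
    "@

    Invoke-Sqlcmd -ServerInstance "localhost" -Database "master" -Query $teardown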
