I'm working on developing YAML build and deploy pipelines for DACPACs built from Visual Studio database projects, and one of the requirements is version numbers. In my research, I've found two ways we might want to implement it:
Extended properties via the sp_addextendedproperty and sp_updateextendedproperty stored procedures.
sysdac_instances via setting the DacVersion when the .dacpac is built and using the RegisterDataTierApplication flag when the dacpac is deployed.
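For reference, here are minimal sketches of the two approaches as I understand them (the property name and version values are placeholders I made up):

```sql
-- Option 1: store the version as an extended property on the database itself.
-- First deployment adds it, later deployments update it.
EXEC sys.sp_addextendedproperty
    @name  = N'DatabaseVersion',   -- property name is your choice
    @value = N'1.4.2';

EXEC sys.sp_updateextendedproperty
    @name  = N'DatabaseVersion',
    @value = N'1.4.3';

-- Read it back (class = 0 means database-level properties):
SELECT value
FROM sys.extended_properties
WHERE class = 0 AND name = N'DatabaseVersion';

-- Option 2: deploy with SqlPackage using /p:RegisterDataTierApplication=True,
-- then read the registered version from msdb:
SELECT instance_name, type_version
FROM msdb.dbo.sysdac_instances;
```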
Is one of these methods preferred over the other? Am I trying to compare apples to oranges? Is there a third way to store database versions that I haven't found? Is there a DevOps or DBA best practice that I don't know and should be following?
OK, so I understand what they are: collections of parameter assignments that you can tell a package to use upon execution.
What I'm trying to understand is why or if I need to use them.
I've migrated a bunch of old SSIS packages using the package deployment model to some new servers using the project deployment model. I have one project containing about 35 packages. I've created parameters on all my packages, and have a couple of project-level parameters for stuff like server name etc.
I'm developing on my PC and the packages will have their parameters set to my dev environment settings by default unless I change them.
I'm deploying my packages to 3 servers (Test, UAT, Prod). I deploy them from Visual Studio.
Each server runs an identically scripted SQL job to execute the package.
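For concreteness, the identically scripted job step on each server is essentially this (folder/project/package names are placeholders):

```sql
DECLARE @execution_id BIGINT;

-- Create an execution for the deployed package...
EXEC SSISDB.catalog.create_execution
    @folder_name      = N'MyFolder',
    @project_name     = N'MyProject',
    @package_name     = N'MyPackage.dtsx',
    @use32bitruntime  = 0,
    @execution_id     = @execution_id OUTPUT;

-- ...and start it.
EXEC SSISDB.catalog.start_execution @execution_id;
```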
So now, I need to set my parameters for each environment/server.
Do I need to set up environments, or why can't I just right-click > Configure my project in the SSIS Integration Services Catalog on each server, and set the parameters there for the project and each package?
If I create environments, I still need to enter all the parameter values for each server/environment, but then I need to set up the reference between the project and the environment, and set each SQL job to use the relevant environment when executing the job.
Are environments only useful if you have one server, one package catalogue, and one set of SQL Jobs, and you're just using different databases for each environment so you need the environments to toggle between each?
Aren't they overkill if you have your environments on different servers, or am I missing something?
A use case for Environments
As the DBA, I have access to the user names and passwords for systems. I can perform a one-time task of setting up environments that define all the things.
I can then empower the developers to create/deploy and configure their own SSIS projects without having to get involved or giving them a post-it with the associated credentials.
Environment vs. project configuration: I view this as automating whatever makes the most sense. I used to be a big advocate of Environments, but I found migrating them between servers difficult (before I found the magic script), and the fact that you can only have one associated with a project was a limitation I didn't like. At this point, I generally configure a project directly with SSMS and save the script off so I can replace values as I migrate up the environments.
Hi,
We are currently working on a .NET Core project that will use multiple databases with the same structure.
In short, this is a multi-tenant project and each tenant will use the same web application (multiple instances behind a load balancer), BUT each tenant will have its own database.
We are searching for the best solution to ease our deployment process.
When the application (and DB) is updated, we need to update the database structure on all SQL Servers and all databases (one SQL Server can contain x databases).
FYI, application and SQL server are hosted on AWS, our CI/CD is Azure DevOps.
And last (but not least) limitation: we are working in VS Code only (macOS & Linux laptops).
So, we looked for some solutions :
Using database projects (.sqlproj) + DACPAC generation deployed via DevOps, but .sqlproj is not supported in VS Code
Using EF migrations: doesn't work with multiple databases and dynamic connection strings
Using SQL scripts: too complicated to maintain by hand a SQL script that handles every possible case
So could someone give us some advice to solve this problem?
The general solution here is to generate SQL Scripts for each deployment, and integrate those into your CI/CD process.
You could use EF Migrations to generate a SQL script that is then tested, committed to your repo as a first-class asset, and deployed by your CI/CD pipeline. Or you could use SSDT to manage the schema and generate change scripts. But those aren't the only reasonable ways.
If you are modifying the schema by hand without using SSDT, you would normally just use a tool to generate the change script. And go from there.
There are many tools (including SSDT) that help you diff a development environment against a target production schema and generate the change scripts, e.g. Redgate ReadyRoll.
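As a hedged sketch of either route, both EF Core and SqlPackage can emit the change script from the command line, which slots straight into a pipeline (paths, server, and database names below are placeholders):

```shell
# EF Core: generate an idempotent script covering all migrations,
# runnable against any tenant database regardless of its current state.
dotnet ef migrations script --idempotent --output ./deploy/migrate.sql

# SSDT/SqlPackage: diff a built dacpac against one target database
# and emit the change script for review before deployment.
sqlpackage /Action:Script \
    /SourceFile:./bin/Release/MyDb.dacpac \
    /TargetServerName:tenant-sql.example.com \
    /TargetDatabaseName:Tenant001 \
    /OutputPath:./deploy/Tenant001-upgrade.sql
```

For the multi-tenant case, the pipeline would loop the SqlPackage step over the list of tenant databases.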
Note that if you intend to perform online schema updates you need to review the change scripts manually for offline DDL operations, and to ensure that your code/database changes have the correct forward and backward compatibility to support a rollout while the application is online.
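For example, an online-safe change splits into an "expand" step that old application versions tolerate, and a "contract" step deferred until every instance is upgraded (table and column names below are placeholders):

```sql
-- Expand: add the new column as nullable so the old application
-- version keeps working while the rollout is in progress.
ALTER TABLE dbo.Orders ADD Region NVARCHAR(50) NULL;

-- Contract: only after all application instances are upgraded,
-- backfill the data and tighten the constraint.
UPDATE dbo.Orders SET Region = N'Unknown' WHERE Region IS NULL;
ALTER TABLE dbo.Orders ALTER COLUMN Region NVARCHAR(50) NOT NULL;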
And preparing, reviewing, testing, and editing the database change scripts is not something that every dev on the team needs to do. So you can always consider jumping onto a Windows VM for that task.
I am using Redgate DLM Automation for database CI in a SQL Server and Visual Studio Team Services environment. I can easily deploy to multiple databases in a single environment, but apparently DLM Automation does not support multiple environments out of the box. Redgate support suggested using VSTS post-scripts in PowerShell, sqlcmd, or something called "account_y" (I'm not sure what this refers to) to potentially add multiple environments.
Has anyone tried using DLM Automation for multiple environments? I have explored the PowerShell CmdLets, looked at SQL Compare options and filters, thought about using VSTS's Tokenizer for script alterations, but am still struggling with how to put all of this together to deploy to more than one environment.
Any experience or guidance would be greatly appreciated.
Thank you!
You definitely can deploy to multiple environments, however the issue of needing different user accounts for different environments is not a trivial problem to solve. Ultimately whatever you source control will be deployed to each environment, so if you need different user accounts then you will need to take care of it yourself by using some sort of post-deployment script.
I would suggest not source controlling user accounts and then adding a custom step after deployment to add the users - either at the command line using sqlcmd or with the equivalent PowerShell cmdlets.
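For example, a post-deployment script run through sqlcmd might look like this (the login name is a placeholder passed in as a sqlcmd variable, e.g. `sqlcmd -i post_deploy.sql -v AppLogin="app_test"`):

```sql
-- Create the environment-specific user only if it doesn't already exist,
-- so the same script is safe to rerun in every environment.
IF NOT EXISTS (SELECT 1 FROM sys.database_principals WHERE name = N'$(AppLogin)')
BEGIN
    CREATE USER [$(AppLogin)] FOR LOGIN [$(AppLogin)];
    ALTER ROLE db_datareader ADD MEMBER [$(AppLogin)];
END
```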
There are some blog posts that go into detail regarding this problem and their answers are probably more detailed than anything I can provide. I'd suggest that you have a read of them.
https://www.red-gate.com/blog/building/source-controlling-database-permissions
http://workingwithdevs.com/source-controlling-database-users/
I hope this helps.
I'm working with the Redgate SQL Comparison SDK at the moment and have got it set up to nicely diff two databases.
What I would like to do now is be able to diff a .sqlproj from source control against a destination database.
I have tried pulling the SQL files using the TFS/VSTS SDKs, but to no avail.
Is there any way to either build a sqlproj from source control into a dacpac and then pull this in as a source database, or to directly pull the sqlproj in as a source?
Edit:
My ultimate goal with this is to be able to compare the version of the database that is in source control with the database running across many different environments, and create deployment scripts for the diffs.
I have another couple of Redgate tools that accomplish this (SQL Compare & SQL Source Control), but these can only be installed on one (maybe two max?) devices. The difficulty I have is that, because I'm using Amazon RDS (where the endpoints are unreachable outside the VPC), I cannot connect one central install of these tools to all of my environments, and I can't buy an additional license for every environment. So I was trying to use the Comparison SDK to "roll my own" middle ground.
Many Thanks,
I also work at Redgate; please do email me via dlm@red-gate.com if you want to go into more detail on your specific questions and I'll set up a call for us.
In general, the process Redgate recommends for what you're doing is to keep the canonical schema that you want all the databases to have in version control. You could get that schema in either by having each developer use the SQL Source Control product to bring their changes in from SSMS as they develop them, or by using the SQL Compare product to put a version in at the end of a sprint.
You can then use our DLM Automation tools in conjunction with a CI server to automate creating difference reports and sync scripts for your target servers. DLM Automation is a set of PowerShell commandlets and plugins for common CI servers like TeamCity, Jenkins, VSTS, TFS etc. You could also use the SQL Compare Pro command line.
If your whole team have our SQL Toolbelt product then you're licensed to install the DLM Automation tools as many times as you like on build/release agents, so you don't need additional licences per environment.
Are you doing this in the context of an automated build/CI system? You mention VSTS, so the way this normally works is that it will have already pulled the files from source control. Once the files are in the build agent's working folder, you should be able to point the SDK (or the SQL Compare command line) at them. Bear in mind that a .sqlproj isn't an officially supported data source for Redgate tools, although it will work in many instances.
It would be good if you could edit your question and give some background on the higher level problem you're trying to solve just in case we (I work for Redgate) can recommend a more suited set of tools or techniques.
Can you create a generalized deployment script from a SQL Server database project in VS 2015 that doesn't require a schema compare/publish against a specific target database?
Some background:
We are using Sql Server Database projects to manage our database schema. Primarily we are using the projects to generate dacpacs that get pushed out to our development environments. They also get used for brand new installations of our product. Recently we have developed an add-on to our product and have created a new db project for it, referencing our core project. For new installations of our product where clients want the add-on, our new project will be deployed.
The problem we are having is that we need to be able to generate a "generic" upgrade script. Most of our existing installations were not generated via these projects and all contain many "custom" stored procedures/etc specific to that client's installation. I am looking for a way to generate a script that would do an "If Not Exists/Create + Alter" without needing to specify the target database.
Our add-on project only contains stored procedures and a couple tables, all of which will be new to any client opting for this add-on. I need to avoid dropping items not in the project while being able to deploy all of our new "stuff". I've found the option to Include Composite Objects which I can uncheck so that the deployment is specific to our add-on, but publishing still requires me to specify a target database so that a schema compare can be performed and I get scripts that are specific to that particular database. I've played with pretty much every option and cannot find a solution.
Bottom Line: Is there a way for me to generate a generic script that I can give to my deployment team whenever the add-on is requested on an existing install without needing to do a schema compare or publish for each database directly from the project?
Right now I am maintaining a separate set of .sql files in our (non-DB) project following the if not exists/create+alter paradigm that match the items in the db project. These get concatenated during the build of our add-on so that we can give our deployment team a script to run. This is proving to be cumbersome and we'd like to be able to make use of the database projects for this, if at all possible.
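For illustration, each of those hand-maintained files follows this pattern (the procedure name is a placeholder): create a stub if the object is missing, then ALTER, so the script is rerunnable against any existing database.

```sql
IF OBJECT_ID(N'dbo.MyAddOnProc', N'P') IS NULL
    EXEC (N'CREATE PROCEDURE dbo.MyAddOnProc AS RETURN 0;');
GO
ALTER PROCEDURE dbo.MyAddOnProc
AS
BEGIN
    SELECT 1; -- real body goes here
END
GO
```

(On SQL Server 2016 SP1 and later, CREATE OR ALTER collapses this into a single statement.)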
The best solution is to give the dacpacs to your installers. They run SqlPackage (maybe through a batch file or PowerShell), pointing it at the server/DB to update. It would then generate the script or update directly. It sounds like they already have access to the servers, so they should be able to do this. SqlPackage should also be installed on the servers, or it can be run locally by the installer as long as they can see the target DB. This might help: schottsql.wordpress.com/2012/11/08/ssdt-publishing-your-project
There are a couple of examples of using PowerShell to do this, but it depends on how much you need to control DB names or server names. A simple batch file where you edit/replace the server/DB names might suffice. I definitely recommend a publish profile, and if this is hitting customer databases they could have modified, setting the "do not drop if not in project" options is almost essential. As long as your customers haven't made wholesale changes to core objects, you should be good to go.
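A hedged sketch of that batch file, with drop-protection on so client-specific objects survive (file names and the publish profile are placeholders):

```shell
REM Installer edits these two values per client install.
SET SERVER=CLIENTSQL01
SET DB=ClientDb

SqlPackage /Action:Publish ^
  /SourceFile:AddOn.dacpac ^
  /Profile:AddOn.publish.xml ^
  /TargetServerName:%SERVER% /TargetDatabaseName:%DB% ^
  /p:DropObjectsNotInSource=False ^
  /p:BlockOnPossibleDataLoss=True
```

Swap /Action:Publish for /Action:Script with an /OutputPath if the deployment team needs to review the generated script before running it.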