Is there a first-class citizen* way to back up the DB before deploying the dacpac in an Azure DevOps pipeline (YAML, not classic GUI)?
If there isn't an "easy button", how do I do this?
Reference: WinRM SQL Server DB Deployment task
*When I say first-class citizen, I mean not rolling my own custom solution but using a generic solution provided by the MSFT eco-system.
How are you creating your dacpac? There is a sqlpackage option to back up the database before deployment, and there are numerous ways of specifying it. You can specify it on the command line with /p:BackupDatabaseBeforeChanges=True, or you can specify it in your publish profile in your database project: right-click the project => Publish => Advanced => "Backup database before deployment", then save the profile in the project.
If you're using the 'SQL Server database deploy' task in Azure, you can put /p:BackupDatabaseBeforeChanges=True in the additional arguments box.
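For reference, a minimal sketch of the command-line form in PowerShell (the file, server, and database names below are placeholders, not values from your setup):

sqlpackage /Action:Publish `
    /SourceFile:"MyDb.dacpac" `
    /TargetServerName:"myserver" `
    /TargetDatabaseName:"MyDb" `
    /p:BackupDatabaseBeforeChanges=True

With that property set, sqlpackage backs up the target database before applying any schema changes.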
Source control side
add a SQL script to the DB project, db-backup.sql
set its build action to None
backup script (this calls a dbo.DatabaseBackup procedure such as Ola Hallengren's, which must already exist on the target server):
EXECUTE dbo.DatabaseBackup @Databases = '__DbName__'
    ,@Directory = '__DbBackupLocation__'
Build side
add that file to the ADO artifact
steps:
- publish: $(System.DefaultWorkingDirectory)/db-backup.sql
  artifact: DB Artifact
reference: Publish artifacts
Library side
in a variable group, add variables for DbName & DbBackupLocation
reference: Create a variable group
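If you would rather script this step, the variable group can also be created with the Azure DevOps CLI; a rough sketch, assuming the azure-devops CLI extension is installed and az devops defaults (organization/project) are configured, with placeholder names and values:

az pipelines variable-group create `
    --name db-deploy-vars `
    --variables DbName=MyDb DbBackupLocation=E:\Backups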
Deployment side
assumption: you're using a deployment job, so the artifact(s) are automatically downloaded; reference: Artifacts in release and deployment jobs
import the variable group; reference: Use a variable group
call the replace tokens task from qetza/vsts-replacetokens-task
steps:
- task: replacetokens@3
  displayName: DB Untoken
  inputs:
    rootDirectory: $(Pipeline.Workspace)\DB Artifact\
    targetFiles: '*.sql'
    tokenPrefix: __
    tokenSuffix: __
use SqlDacpacDeploymentOnMachineGroup to execute db-backup.sql
steps:
- task: SqlDacpacDeploymentOnMachineGroup@0
  displayName: DB Backup
  inputs:
    taskType: sqlQuery
    sqlFile: $(Pipeline.Workspace)\DB Artifact\db-backup.sql
    serverName: localhost
    databaseName: master
    authScheme: windowsAuthentication
    additionalArgumentsSql: -Verbose -Querytimeout 0
Related
I am having a VERY difficult time publishing a pre-existing SQL Server project to a Docker hosted instance of SQL Server.
What I am attempting to do is make a clean pipeline for a Docker hosted instance to use in testing a SQL Server project, which of course starts with doing it first by hand to understand all the steps involved. The SQL Server project itself has been around for many years, and has no problems deploying to SQL Server instances hosted on Windows boxes.
As near as I can tell, the issue comes while SSDT is generating the SQL Server deployment script itself. In a normal deployment to a Windows hosted SQL Server, the generated script starts out with some :setvar commands, including:
:setvar DefaultDataPath "C:\Program Files\Microsoft SQL Server\MSSQL11.MSSQLSERVER\MSSQL\DATA\"
:setvar DefaultLogPath "C:\Program Files\Microsoft SQL Server\MSSQL11.MSSQLSERVER\MSSQL\DATA\"
However, when publishing to a Docker hosted instance of SQL Server using the same deployment process, the generated SQL script has:
:setvar DefaultDataPath ""
:setvar DefaultLogPath ""
The first thing this deployment does is alter the database by adding an additional data file, e.g.:
ALTER DATABASE [$(DatabaseName)]
ADD FILE (NAME = [ARCHIVE_274A259D], FILENAME = N'$(DefaultDataPath)$(DefaultFilePrefix)_ARCHIVE_274A259D.mdf') TO FILEGROUP [ARCHIVE];
The Docker based deployment then craps itself because the file path is (obviously) invalid.
In researching this problem, I've seen MANY solutions which hand-edit the generated deployment SQL script, and manually set the "proper" values for DefaultDataPath and DefaultLogPath ... and even one solution that ran the generated Sql through some sort of post-processor to make that same edit in a programmatic way with string replacement. This does work, but is less than optimal (especially in an automated build/test/deploy pipeline).
I've checked in the Docker instance itself, and its mssql.conf file does have defaults defined:
$ cat /var/opt/mssql/mssql.conf
[sqlagent]
enabled = false
[filelocation]
defaultdatadir = /var/opt/mssql/data/
defaultlogdir = /var/opt/mssql/log/
Can anybody shed light on why these are not being picked up by the SSDT process of generating the deploy script?
I spent a few days trying various workarounds to the problem ...
Defined the DATA and LOG directories in the Docker "run" command, but this had no effect on the generated SQL deploy script, e.g.: -e 'MSSQL_DATA_DIR=/var/opt/mssql/data/' -e 'MSSQL_LOG_DIR=/var/opt/mssql/log/'
Configured the SQL project with SQLCMD variables. This method could not override DefaultDataPath or DefaultLogPath; I could add new variables, but those would not affect the file path of the ALTER DATABASE command above.
Tried a pre-deployment script specifically tailored to override the values of DefaultDataPath and DefaultLogPath. While this technically CAN override the default values, the pre-deployment script is included in the generated deployment script after the ALTER DATABASE commands that add data files, so it would effectively work for the rest of the script, just not for the specific portion that was throwing the error on initial deployment of the database.
At this point I feel there is either a SQL Server configuration option that I am simply unaware of, or possibly a flaw in SSDT which is preventing it from gathering the default path values from the Docker SQL Server instance. Any ideas?
In Azure DevOps release pipeline, How do I get our Script.PreDeployment.sql and Script.PostDeployment.sql files to execute during our SQL Server database deploy task?
In our release pipeline we have a Database Deployment phase, which has a SQL Server database deploy task. The task publishes our DACPAC file just fine. However I cannot figure out how to get the pre and post deployment scripts to execute.
Both of the scripts are included in the project, and both have the appropriate build action set to PreDeploy and PostDeploy. Yet the logs of the dacpac deployment give no indication that the files were run - I have a bunch of PRINT statements in there.
I am also working on post-deployment scripts using the SSDT approach, and for deployment I am using the Azure SQL DacpacTask in my Azure Pipeline. You just need to create the post-deployment script in the project and save it; after you run the Azure build, it is included in the pipeline artifact and executed automatically by the release task above. It first runs the database deployment and after that the post-deployment script. It works for me.
You can make use of the Command line task to run those pre- and post-deployment SQL scripts using the SQLCMD tool.
The arguments to its execution script are:
-S {database-server-name}.database.windows.net
-U {username}@{database-server-name}
-P {password}
-d {database-name}
-i {SQL file}
If you store the pre/post-deployment scripts in the artifact, you can specify -i as, for example, $(System.DefaultWorkingDirectory)/drop/Script.PreDeployment.sql.
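Putting those together, a sketch of the Command line task's script (the server, user, and database names are placeholders, and $(SqlPassword) is assumed to be a secret pipeline variable):

sqlcmd -S myserver.database.windows.net -U myuser@myserver -P "$(SqlPassword)" -d MyDatabase -i "$(System.DefaultWorkingDirectory)/drop/Script.PreDeployment.sql"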
Once the post-deployment script is added to the project, it will be integrated into the dacpac.
The .sqlproj should have a PostDeploy item group like:
<ItemGroup>
  <PostDeploy Include="PostDeploymentScript path" />
</ItemGroup>
Sometimes this entry is not added to the .sqlproj and the post-deployment script does not run. It should be added automatically when the script's build action is set to PostDeploy, but verify it is there before publishing.
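One quick way to verify from PowerShell (the project file name is a placeholder):

# Print any PostDeploy entries in the project file
Select-String -Path .\MyDatabase.sqlproj -Pattern '<PostDeploy'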
Goal
Clone a SQL database to a different remote SQL Server using a PowerShell script
What Works
Using SSMS to import the BACPAC file into different servers (remote & local) works without (reported) warnings or errors.
What Doesn't
Importing the BACPAC into a remote SQL Server results in the following error using sqlpackage.exe & PowerShell dbatools:
Warning SQL72038: The object [XXX] already exists in database with a different definition and will not be altered.
Error SQL72014: .Net SqlClient Data Provider: Msg 15023, Level 16, State 1, Line 1 User, group, or role 'XXX' already exists in the current database.
Error SQL72045: Script execution error. The executed script:
CREATE USER [XXX] FOR LOGIN [XXX];
I also tried using PS dbatools DACPAC approach: https://dbatools.io/clone/
The error message with different settings changed to:
Initializing deployment (Start)
The object [XXX] already exists in database with a different definition and will not be altered.
Initializing deployment (Complete)
Analyzing deployment plan (Start)
Analyzing deployment plan (Complete)
Reporting and scripting deployment plan (Start)
Reporting and scripting deployment plan (Complete)
Updating database (Start)
Creating NEW_DATABASE...
The database settings cannot be modified. You must be a SysAdmin to apply these settings.
Creating [XXX]...
.Net SqlClient Data Provider: Msg 15023, Level 16, State 1, Line 1 User, group, or role 'XXX' already
exists in the current database.
Script execution error. The executed script:
CREATE USER [XXX] WITHOUT LOGIN;
An error occurred while the batch was being executed.
Updating database (Failed)
The next step was then to disable users and add back the required users & roles via a script. Using the following link as a reference resulted in a database that I was unable to drop with our existing administrator login & password:
DacPac exclude users and logins on export or import
To fix this, we had to change our administrator password in RDS
AWS RDS SQL Server unable to drop database
Notes
I can't remove user XXX because it's mapped to different databases
SQL Server Management Studio v17.9.1
PowerShell dbatools v1.0.30
Questions
Is there a way to find out what SSMS is executing so that I can replicate it via a script?
What are the options to work around this issue?
This is a very late answer, but I will share my input here so that someone in the future may find it helpful.
This is what I did, with some exceptions: exclude the object types Users;Permissions;RoleMembership;Logins via a sqlpackage command-line property to deploy the database successfully.
The trade-off is that you will not be able to deploy your users and permissions over an already existing set. On a fresh install, you can remove this exclude property to deploy the entire set without errors.
Here is the command-line parameter to use in your sqlpackage.exe command:
/P:ExcludeObjectTypes=Users;Permissions;RoleMembership;Logins;
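For context, a sketch of a full publish call with this property (the source file and target names are placeholders; the argument is quoted so the shell does not treat the semicolons as command separators):

sqlpackage /Action:Publish `
    /SourceFile:"MyDb.dacpac" `
    /TargetServerName:"myserver" `
    /TargetDatabaseName:"MyDb" `
    "/p:ExcludeObjectTypes=Users;Permissions;RoleMembership;Logins"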
Ref: MS DOCS -> https://learn.microsoft.com/en-us/sql/tools/sqlpackage/sqlpackage-publish?view=sql-server-ver15
My DBA and I are trying to work out how to effectively use Microsoft's Database projects and the Dacpacs they generate to simplify our production deployment system.
Ideally, I would be able to build and/or publish the .sqlproj, generating a .dacpac file, which can then be uploaded to the production server and used to upgrade the database from whatever version it was to the latest version. This is similar to how we're doing website deployments, where I publish to a package, and then that package is uploaded to the server and imported into IIS.
However, we can't work out how to make this work. The DBA has already created the database and added it to our Availability Groups. And every time we try to apply the Dacpac, it tries to adjust settings which it can't because of the AGs.
Nothing I've been able to do has managed to create a .dacpac file which doesn't try to impose settings on the database. The closest option I've found will exclude them when publishing, but as best as I can tell you can't publish to an inaccessible database, and only the DBA has access to the production server.
Can I actually use dacpacs this way?
There are two parts to this. Firstly, how do you stop deploying settings you don't want to deploy - can you give an example of one of the settings that doesn't apply?
For the second part where you do not have access to the SQL Server there are a few different ways to handle this:
Use an offline copy to generate the deploy script
Get the DBA to generate the deploy script
Get the DBA to deploy using the dacpac
Get read only access to the database
Option 1: "Use an offline copy to generate the deploy script"
You need to compare the dacpac to something and if you do not have a TDS connection (default instance default port tcp:1433) then you can use a version of the database that matches production either through:
Use log shipping to restore a copy of production somewhere you can access it
Get a development db and production in sync, then every release goes to the dev and prod databases, ensuring that they stay in sync
The log shipped copy is the easiest; if it is on a development server you can normally have server permissions to give you access, or you can create the correct permissions at the database level but not at the production server level.
If the data is sensitive then the log shipped copy might not be appropriate, so you could try to keep a development and production database in sync, but this is difficult and requires that the DBA be "well trained" into not running anything that isn't first run against the dev database as well.
Once you have access to a database that has exactly the same schema as the production database, you can use sqlpackage.exe /Action:Script to generate a deploy script; in fact, because it isn't the production database, you can generate the script as part of your CI process :).
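A sketch of that call (the dacpac path and target names are placeholders):

sqlpackage /Action:Script `
    /SourceFile:"MyDb.dacpac" `
    /TargetServerName:"logship-server" `
    /TargetDatabaseName:"MyDb" `
    /OutputPath:"deploy.sql"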
Option 2: "Get the DBA to generate the deploy script"
This is to get the DBA to copy the dacpac to the production server and to use sqlpackage.exe, which will be in the "Program Files (x86)\Microsoft Sql Server\Version\DAC\bin" folder, to compare the dacpac to the database and generate a script that he can review before deploying.
Option 3: "Get the DBA to deploy using the dacpac"
This is similar to option 2, but instead of generating a script that he deploys in SSMS, he just uses sqlpackage.exe /Action:Publish to deploy the changes directly.
Option 4: "Get read only access to the database"
This is actually my preferred option, as it means that you always build scripts against what is guaranteed to be the state of production (as it is production). In your case you would need the tcp port opened between your machine (or ideally your build machine) and the SQL Server, and then you will need these permissions:
https://the.agilesql.club/Blogs/Ed-Elliott/What-Permissions-Do-I-Need-To-Generate-A-Deploy-Script-With-SSDT
As I said, option 4 is always my preferred, but I understand that it isn't always possible.
Options 2 + 3 are fraught with worry, as you will be running scripts that haven't been tested anywhere; with options 1 and 4 you can generate the scripts and then deploy them to a test / QA database, as long as those databases have the same schema as production. The scripts can also go through a code review process.
If you do option 2 / 3 then I would create a batch file or PowerShell script that drives sqlpackage.exe; if they deploy from a different server that doesn't have sqlpackage.exe, you can copy the DAC folder to that machine and run sqlpackage from there - you do not have to actually install it (you may also need to copy in the Microsoft.SqlServer.TransactSql.ScriptDom.dll from the "Program Files (x86)\Microsoft Sql Server\Version\SDK\Assemblies" folder).
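A minimal sketch of such a driver script in PowerShell (every path and name below is a placeholder):

# Run sqlpackage from a copied DAC folder - no install required
& "C:\DAC\sqlpackage.exe" /Action:Publish `
    /SourceFile:"C:\deploy\MyDb.dacpac" `
    /TargetServerName:"prod-server" `
    /TargetDatabaseName:"MyDb"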
I hope this helps, if you have any more questions feel free to post here or ping me :)
ed
I want to automate (ideally from the command prompt in a batch file) the generation of the schema of my SQL Server 2008 R2 database.
In SSMS, I can right-click the DB, choose "Tasks", "Generate Scripts", and then follow the wizard to generate a schema script. Is there a command-line version of this process that I can use?
Microsoft released a new tool a few weeks ago called mssql-scripter that's the command line version of the "Generate Scripts" wizard in SSMS. It's a Python-based, open source command line tool and you can find the official announcement here. Essentially, the scripter allows you to generate a T-SQL script for your database/database object as a .sql file. You can generate the file and then execute it. This might be a nice solution for you to generate the schema of your db (schema is the default option). Here's a quick usage example to get you started:
$ pip install mssql-scripter
# script the database schema, piped to a file.
$ mssql-scripter -S localhost -d AdventureWorks -U sa > ./adventureworks.sql
More usage examples are on our GitHub page here: https://github.com/Microsoft/sql-xplat-cli/blob/dev/doc/usage_guide.md
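Since the output is plain T-SQL, you can then execute it elsewhere; a sketch using sqlcmd (server and database names are placeholders):

sqlcmd -S otherserver -d NewDatabase -i ./adventureworks.sql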
From this answer, there appear to be tools called SMOScript and ScriptDB that can do that.
If you find a way without third party tools please share :)