Azure DevOps - how to execute Pre and Post Deployment SQL scripts

In an Azure DevOps release pipeline, how do I get our Script.PreDeployment.sql and Script.PostDeployment.sql files to execute during our SQL Server database deploy task?
In our release pipeline we have a Database Deployment phase, which has a SQL Server database deploy task. The task publishes our DACPAC file just fine. However, I cannot figure out how to get the pre- and post-deployment scripts to execute.
Both scripts are included in the project, and both have the appropriate Build Action set (PreDeploy and PostDeploy). Yet the logs of the DACPAC deployment give no indication that the files were run - I have a bunch of PRINT statements in there.

I am also working on post-deployment scripts using the SSDT approach, and for deployment I am using the Azure SQL DacpacTask in my Azure Pipeline. You just need to create the post-deployment script (as shown in the image) and save it. After you run the Azure build, the script is added to the pipeline artifact and is executed automatically by the task above in the release pipeline: first the database deployment is executed, and after that the post-deployment script runs. It works for me.

You can make use of the Command line task to run those pre- and post-deployment SQL scripts using the SQLCMD tool.
The arguments for its execution are:
-S {database-server-name}.database.windows.net
-U {username}@{database-server-name}
-P {password}
-d {database-name}
-i {SQL file}
If you store the pre/post-deployment scripts in the build artifact, you can specify -i as e.g. $(System.DefaultWorkingDirectory)/drop/Script.PreDeployment.sql.
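For example, the Command line task's script for the pre-deployment step could look like the following (the server, user, and database names are placeholders, and $(SqlPassword) is assumed to be a secret pipeline variable):
sqlcmd -S mydbserver.database.windows.net -U myadmin@mydbserver -P "$(SqlPassword)" -d MyDatabase -i "$(System.DefaultWorkingDirectory)/drop/Script.PreDeployment.sql"
Run one such task before the SQL deploy task for the pre-deployment script, and another after it for the post-deployment script.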

Once the post-deployment script is added to the project, it is integrated into the DACPAC.
The .sqlproj should contain a PostDeploy ItemGroup like:
<ItemGroup>
  <PostDeploy Include="PostDeploymentScript path" />
</ItemGroup>
Sometimes this entry is not added to the .sqlproj, and then the post-deployment script does not run.
It should be added by default when the post-deployment script is added; just verify it before you publish.

Related

SQL Server Project Publish to a Docker Hosted Instance - SSDT and DefaultDataPath

I am having a VERY difficult time publishing a pre-existing SQL Server project to a Docker hosted instance of SQL Server.
What I am attempting to do is make a clean pipeline for a Docker hosted instance to use in testing a SQL Server project, which of course starts with doing it first by hand to understand all the steps involved. The SQL Server project itself has been around for many years, and has no problems deploying to SQL Server instances hosted on Windows boxes.
As near as I can tell, the issue comes while SSDT is generating the SQL Server deployment script itself. In a normal deployment to a Windows hosted SQL Server, the generated script starts out with some :setvar commands, including:
:setvar DefaultDataPath "C:\Program Files\Microsoft SQL Server\MSSQL11.MSSQLSERVER\MSSQL\DATA\"
:setvar DefaultLogPath "C:\Program Files\Microsoft SQL Server\MSSQL11.MSSQLSERVER\MSSQL\DATA\"
However, when publishing to a Docker hosted instance of SQL Server using the same deployment process, the SQL script has:
:setvar DefaultDataPath ""
:setvar DefaultLogPath ""
The first thing this deployment does is alter the database by adding an additional data file, e.g.:
ALTER DATABASE [$(DatabaseName)]
ADD FILE (NAME = [ARCHIVE_274A259D], FILENAME = N'$(DefaultDataPath)$(DefaultFilePrefix)_ARCHIVE_274A259D.mdf') TO FILEGROUP [ARCHIVE];
The Docker based deployment then craps itself because the file path is (obviously) invalid.
In researching this problem, I've seen MANY solutions which hand-edit the generated deployment SQL script, and manually set the "proper" values for DefaultDataPath and DefaultLogPath ... and even one solution that ran the generated Sql through some sort of post-processor to make that same edit in a programmatic way with string replacement. This does work, but is less than optimal (especially in an automated build/test/deploy pipeline).
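(For reference, that post-processing amounts to a string replacement along these lines, where deploy.sql stands in for the generated deployment script and the paths are the container defaults from mssql.conf shown below.)
# Patch the empty :setvar defaults in the generated deploy script
sed -i 's|:setvar DefaultDataPath ""|:setvar DefaultDataPath "/var/opt/mssql/data/"|' deploy.sql
sed -i 's|:setvar DefaultLogPath ""|:setvar DefaultLogPath "/var/opt/mssql/log/"|' deploy.sql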
I've checked in the Docker instance itself, and its mssql.conf file does have defaults defined:
$ cat /var/opt/mssql/mssql.conf
[sqlagent]
enabled = false
[filelocation]
defaultdatadir = /var/opt/mssql/data/
defaultlogdir = /var/opt/mssql/log/
Can anybody shed light on why these are not being picked up by the SSDT process of generating the deploy script?
I spent a few days trying various workarounds to the problem ...
Defined the DATA and LOG directories in the Docker "run" command, but this had no effect on the generated SQL deploy script, e.g.: -e 'MSSQL_DATA_DIR=/var/opt/mssql/data/' -e 'MSSQL_LOG_DIR=/var/opt/mssql/log/'
Configured the SQL project with SQLCMD variables. This method could not override DefaultDataPath or DefaultLogPath; I could add new variables, but those would not affect the file path of the ALTER DATABASE command above.
Tried a Pre-Deployment script specifically tailored to override the values of DefaultDataPath and DefaultLogPath. While this technically CAN override the default values, the Pre-Deployment script is included in the generated Sql deployment script after the ALTER DATABASE commands to add data files. It would effectively work for the rest of the script, just not the specific portion that was throwing the error on initial deployment of the database.
At this point I feel there is either a SQL Server configuration option that I am simply unaware of, or possibly a flaw in SSDT which is preventing it from gathering the default path values from the Docker SQL Server instance. Any ideas?

Bitbucket and Database Development

I have a Windows server with MS SQL Server running on it.
On the SQL Server developers have created stored procedures, views, tables, triggers.
On the Windows server developers created shell scripts.
I would like to start versioning the code described above in a BitBucket repository. I have a repository created in BitBucket.
1. How should the branches be organized in this repository? e.g. "SQL Server\Database\...", "Windows Server\shell_script\..."
2. Can I connect BitBucket to SQL Server and Windows Server and specify which code needs to be versioned?
Are both options 1 and 2 above possible?
I just need to version control the changes to the code and have the ability to mark under which project the code change was made.
I am new to BitBucket. I am using the web front end of it. I do not know how to configure command line access, so please try not to reference Bitbucket commands. Sorry if I sound confusing.
Please help.
I know this is an old question but anyway, in principle I'd recommend:
Put all the server shell scripts into one place and make that a git repo linked to your Bitbucket repo.
Add a server shell script to export what you want version controlled from the SQL db (see the sketch at the end of this answer).
The export from the SQL db should be to text files so they are easily 'diffable'.
You might as well make the export go to a sub-directory within the shell scripts repo so that everything is in one place and can't get out of sync.
So you only have one branch, not a separate one for server shell scripts and the db.
Make sure people run the export script and then commit everything when they make a change.
You ideally have a test server, which means you'd want a way to push changes from the repo into the SQL db. I presume you can do this with a script by deleting the existing setup and re-creating it from the text files.
So basically, you can't connect an SQL db to bitbucket directly. You need scripts to read and write to the db from a repo.
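As a rough sketch of the export script mentioned above, one option is the mssql-scripter command line tool (the server, database, credentials, and paths are all placeholders):
# Script out the database schema to a diffable text file inside the repo, then commit it
mssql-scripter -S sqlserver01 -d MyDatabase -U scriptuser -P "$DB_PASSWORD" -f ./db/schema.sql
git add db/schema.sql
git commit -m "Update versioned database schema"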

SQLPackage Post Deployment Script not running

I'm using Visual Studio 2013 with TFS, over SQL-Server 2012. Specifically I'm using SQL-Server Data Tools within VS for our SQL development, rather than within SQL-Server Management Studio.
In short, the Post-Deployment SQL script appears not to be executed by SQLPackage.
The specifics are: I have a VS SQL Database Project 'ADMIR', which has been under development over the last few weeks, gradually becoming more sophisticated both in terms of SQL and its deployment. The database includes some dynamically generated tables, and therefore I've introduced SQL Command Variables to reference two databases (one is the actual ADMIR database, the other a DB on the same Server), so I've added Database References to dacpacs for these two DBs.
The ADMIR Project will Build fine.
There is also a Post-Deployment SQL script, and prior to the introduction of SQL Command Variables, this would also run fine and initialize various table data.
However, my understanding is that SQL Command Variables won't work with the VS/TFS Publish action, in that the Post-Deployment script is no longer being executed.
This is ok, as ultimately I want to script/automate the Publish/Deploy of the ADMIR Project and so I've moved to using SQLPackage to perform the Publish.
Again, this appears to work fine - My SQL Package batch command will execute and log errors/output (as applicable) and Publish my Build dacpac to the target Server successfully. However, it seems to completely ignore the Post-Deployment script. In addition, the output reports that the SQL Command Variables are not set.
These are the various scripts and configuration settings:
SQLPackage batch command:
::Publish the ADMIR database using the ADMIR.DEV.Publish.xml Profile.
"C:\Program Files (x86)\Microsoft SQL Server\120\DAC\bin\SqlPackage.exe"
/Action:Publish
/Profile:"C:\Source Control\ADMIR\ReleaseControl\ADMIR.DEV.publish.xml"
/SourceFile:"C:\Source Control\ADMIR\Build\ADMIR.dacpac"
2> "C:\Source Control\ADMIR\ReleaseControl\Publish_Error.txt"
> "C:\Source Control\ADMIR\ReleaseControl\Publish_Output.txt"
ADMIR.DEV.publish.xml Publishing Profile:
<?xml version="1.0" encoding="utf-8"?>
<Project ToolsVersion="12.0" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <PropertyGroup>
    <IncludeCompositeObjects>True</IncludeCompositeObjects>
    <TargetDatabaseName>ADMIR</TargetDatabaseName>
    <DeployScriptFileName>ADMIR.sql</DeployScriptFileName>
    <TargetConnectionString>Data Source=FOOBAR;Integrated Security=True;Pooling=False</TargetConnectionString>
    <ProfileVersionNumber>1</ProfileVersionNumber>
  </PropertyGroup>
  <ItemGroup>
    <SqlCmdVariable Include="ADMIRDatabaseName">
      <Value>ADMIR</Value>
    </SqlCmdVariable>
    <SqlCmdVariable Include="XYZDatabaseName">
      <Value>XYZDEV</Value>
    </SqlCmdVariable>
  </ItemGroup>
</Project>
Script.PostDeploy.sql (first few lines):
/* ADMIR Post-Deployment Script Template */
PRINT N'Post Deploy Script started:'+CAST(GETDATE() AS NVARCHAR(20))+N'.'
PRINT N'SQLCMD Variable ADMIRDatabaseName: ' + N'$(ADMIRDatabaseName)' + N'.'
PRINT N'SQLCMD Variable XYZDatabaseName: ' + N'$(XYZDatabaseName)' + N'.'
/* Purge existing data as we'll repopulate with what is required for a fresh deploy */
PRINT N'Purging existing table data...'
DELETE FROM [ADMIR].[Engine].[ProcessingStatus]
DELETE FROM [ADMIR].[Engine].[DataField]
DELETE FROM [ADMIR].[Engine].[DataKeyField]
...
...
When running a Build, I can see that the ADMIR_Create.sql script does contain all the Post-Deploy statements, i.e. I can see the above lines of code in this file.
The ADMIR.sqlproj file contains the Post-Deploy scripts and the SQL Command Variables:
<ItemGroup>
  ...
  <PostDeploy Include="ReleaseControl\Script.PostDeploy.sql" />
</ItemGroup>
...
<ItemGroup>
  <SqlCmdVariable Include="ADMIRDatabaseName">
    <DefaultValue>ADMIR</DefaultValue>
    <Value>$(SqlCmdVar__7)</Value>
  </SqlCmdVariable>
  <SqlCmdVariable Include="XYZDatabaseName">
    <DefaultValue>XYZDEV</DefaultValue>
    <Value>$(SqlCmdVar__6)</Value>
  </SqlCmdVariable>
</ItemGroup>
Publish log, Publish_Output.txt:
Publishing to database 'ADMIR' on server 'FOOBAR'.
Initializing deployment (Start)
*** The following SqlCmd variables are not defined in the target scripts: ADMIRDatabaseName XYZDatabaseName.
Initializing deployment (Complete)
Analyzing deployment plan (Start)
Analyzing deployment plan (Complete)
Updating database (Start)
Update complete.
Updating database (Complete)
Successfully published database.
So:
Why is the Post-Deployment script not being executed? I'm not seeing any logging information relating to it.
Why does the SQLPackage output log state that the SQL Command Variables are not defined, when they are in the Publish profile?
Possibly related to 2: is the .sqlproj file correct in having the values of the SQL Command Variables as variables themselves, e.g. $(SqlCmdVar__7)? Where are __6 and __7 coming from?
Thanks for any assistance!
I found that a new .sqlproj file created in Visual Studio 2017 did not include "default" Predeployment.sql or PostDeployment.sql files, so I added one manually.
Publishing did not execute the PostDeployment.sql because the file's Build Action was set to None. Changing the Build Action to PostDeploy fixed the issue.
As per my comment, the issue logged in MSDN Connect appears to be the cause, i.e. an SQLCMD variable with the same name as the dacpac reference for the database results in the script not being executed. When I remove this reference, and change the related SQLCMD variables to literal values for the DB name, the project builds and the post-deploy script is executed as required.
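As an aside, SQLCMD variables can also be passed directly on the SqlPackage command line with the /Variables: (short form /v:) switch, rather than relying only on the publish profile. A minimal sketch using this project's names (the values are illustrative):
::Pass the SQLCMD variables explicitly alongside the profile
"C:\Program Files (x86)\Microsoft SQL Server\120\DAC\bin\SqlPackage.exe" ^
/Action:Publish ^
/Profile:"C:\Source Control\ADMIR\ReleaseControl\ADMIR.DEV.publish.xml" ^
/SourceFile:"C:\Source Control\ADMIR\Build\ADMIR.dacpac" ^
/v:ADMIRDatabaseName=ADMIR /v:XYZDatabaseName=XYZDEV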

How to launch jenkins jobs after Post deployment in TFS

I have automated the build and deploy process in TFS by referring to http://www.codeproject.com/Articles/790206/Deploying-Web-Applications-using-Team-Foundation.
After deployment I am validating the deployed application by running Selenium scripts via a batch file, specifying its path in "post-test script path". It executes the batch file and runs the automated tests.
Now I want to publish these Selenium results, so I have created Jenkins jobs with email configured. How do I execute these jobs post-deployment? I tried providing the Jenkins job trigger email in "post-test script path", but it actually expects a file path and throws an error. So how do I execute Jenkins jobs post-deployment?
Also, I am trying to achieve a completely automated process, where TFS automatically builds, deploys, and runs some Selenium tests. If anybody has a better process, please let me know. Thanks.
You can use the Command Line task in the new TFS 2015/VSTS build system to easily execute selenium tests.
You can then easily configure and pass variables.
I would also recommend that you move to using release management tools for deployment. While plain CD makes sense for development, it is often not viable for a release pipeline, where you need more metadata and approvals.
You can do this with the release management tools that come with TFS 2013+.
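If the goal is to kick off the existing Jenkins job once deployment finishes, note that Jenkins can trigger builds remotely over HTTP, so the post-test batch file could call the job directly instead of using a trigger email. A minimal sketch with curl, or an equivalent HTTP client (the server URL, job name, and credentials are placeholders):
curl -X POST "http://jenkins.example.com/job/selenium-results/build" --user automation:APITOKEN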

Deploying Dacpacs to an Availability Group in a locked-down production

My DBA and I are trying to work out how to effectively use Microsoft's Database projects and the Dacpacs they generate to simplify our production deployment system.
Ideally, I would be able to build and/or publish the .sqlproj, generating a .dacpac file, which can then be uploaded to the production server and used to upgrade the database from whatever version it was to the latest version. This is similar to how we're doing website deployments, where I publish to a package, and then that package is uploaded to the server and imported into IIS.
However, we can't work out how to make this work. The DBA has already created the database and added it to our Availability Groups. And every time we try to apply the Dacpac, it tries to adjust settings which it can't because of the AGs.
Nothing I've been able to do has managed to create a .dacpac file which doesn't try to impose settings on the database. The closest option I've found will exclude them when publishing, but as best as I can tell you can't publish to an inaccessible database, and only the DBA has access to the production server.
Can I actually use dacpacs this way?
There are two parts to this, firstly how do you stop deploying settings you don't want to deploy - can you give an example of one of the settings that doesn't apply?
For the second part where you do not have access to the SQL Server there are a few different ways to handle this:
Use an offline copy to generate the deploy script
Get the DBA to generate the deploy script
Get the DBA to deploy using the dacpac
Get read only access to the database
Option 1: "Use an offline copy to generate the deploy script"
You need to compare the dacpac to something and if you do not have a TDS connection (default instance default port tcp:1433) then you can use a version of the database that matches production either through:
Use log shipping to restore a copy of production somewhere you can access it
Get a development db and production in sync, then every release goes to the dev and prod databases, ensuring that they stay in sync
The log shipped copy is the easiest; if it is on a development server you can normally have server permissions that give you access, or you can create the correct permissions at the database level but not at the production server level.
If the data is sensitive then the log shipped copy might not be appropriate, so you could try to keep a development and production database in sync, but this is difficult and requires that the DBA be "well trained" into not running anything that isn't first run against the dev database as well.
Once you have access to a database that has exactly the same schema as the production database you can use sqlpackage.exe /action:script to generate a deploy script, in fact because it isn't the production database you can generate the script as part of your CI process :).
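A sketch of that step (the dacpac name, server, and output path are placeholders):
::Generate a reviewable deploy script by comparing the dacpac to the log-shipped copy
sqlpackage.exe /Action:Script /SourceFile:MyDatabase.dacpac /TargetServerName:logship-server /TargetDatabaseName:MyDatabase /OutputPath:deploy.sql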
Option 2: "Get the DBA to generate the deploy script"
This is to get the DBA to copy the dacpac to the production server and use the sqlpackage.exe that is in the "Program Files (x86)\Microsoft Sql Server\Version\DAC\bin" folder to compare the dacpac to the database and generate a script that he can review before deploying.
Option 3: "Get the DBA to deploy using the dacpac"
This is similar to option 2, but instead of generating a script that he deploys in SSMS, he just uses sqlpackage.exe /Action:Publish to deploy the changes directly.
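For example (the server and dacpac names are placeholders):
::Deploy the dacpac directly, letting SqlPackage generate and run the change script
sqlpackage.exe /Action:Publish /SourceFile:MyDatabase.dacpac /TargetServerName:prod-server /TargetDatabaseName:MyDatabase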
Option 4: "Get read only access to the database"
This is actually my preferred option, as it means that you always build scripts against what is guaranteed to be the state of production (as it is production). In your case you would need to get the tcp port opened between your machine (or ideally your build machine) and the SQL Server, and then you will need these permissions:
https://the.agilesql.club/Blogs/Ed-Elliott/What-Permissions-Do-I-Need-To-Generate-A-Deploy-Script-With-SSDT
As I said, option 4 is always my preference, but I understand that it isn't always possible.
Options 2 and 3 are fraught with worry, as you will be running scripts that haven't been tested anywhere; with options 4 and 1 you can generate the scripts and then deploy to a test/QA database, as long as they themselves have the same schema as production. The scripts can also go through a code review process.
If you do option 2/3 then I would create a batch file or powershell script that drives sqlpackage.exe, and if they deploy from a different server that doesn't have sqlpackage.exe then you can copy the DAC folder to that machine and run sqlpackage from there; you do not have to actually install it (you may also need to copy in the Microsoft.SqlServer.TransactSql.ScriptDom.dll from the "Program Files (x86)\Microsoft Sql Server\Version\SDK\Assemblies" folder).
I hope this helps, if you have any more questions feel free to post here or ping me :)
ed
