I have been working with SSIS for years now, and one thing I have never fully understood is SSIS project/build versioning. This is not to be confused with the internal SSISDB versioning of a project; I mean it more in the sense of release build versioning.
I'll try to explain and I'm probably going to make a bad job of it.
We have a number of SSIS projects with multiple packages each, and we store/execute them from the catalog/SSISDB. We have a different catalog/SSISDB for each region (DEV, TEST, UAT, PROD), as one would expect.
And as in a normal SDLC, we promote the SSIS projects through the regions all the way up to PROD. Say we originally deploy to DEV from Visual Studio. Then we deploy from DEV to TEST, where we want the exact same project 'release' deployed, executed in TEST only with a different configuration/environment. So here we use SSMS to deploy to TEST from DEV.
I suppose it's like a Java project where you build an artifact - a WAR or EAR or whatever - with a certain version number and promote that very same binary through the regions.
And as in a Java project, if you make a change to one of the project's components/sources - in this case, to one of the SSIS packages/dtsx files - it changes the make-up and version of the overall SSIS project. I'd like this version, this release if you like, to be identifiable at that level.
I want to know that in Visual Studio I built version 1.5.1 of my SSIS project and then deployed it to DEV, and I want it to show there as version 1.5.1. And when I promote it up to TEST, I want to be able to verify that this is indeed version 1.5.1. And so on.
And when I make a new build in Visual Studio I want to be able to increment that version number. And so on.
So here is my question. Is there something within SSIS/VS that supports this kind of thing?
Or would I have to work at the ISPAC file level? Say I make my build, grab the resulting ISPAC file, rename it manually or so, compute its hash or whatever, store that ISPAC somewhere like Artifactory or source control, and promote it through the regions from that ISPAC file?
Or is there something within the SSIS project properties I can use for this?
Sorry if this is a bit long-winded.
Thx
Calle
P.S. I had looked at this problem in the past with no solution, so we used a 'workaround'.
We use version control for the project files, and we fudged something where SVN $Id commit-level string replacements were injected into the description property of each dtsx file. So after a deploy/promotion we could look at a package/dtsx property in the SSISDB/catalog and confirm 'ah yes, here is our new version after getting promoted'. It was not perfect, since it wasn't working at the project level, but it was something.
Now we are using git, and conceptually this property-replacement workaround no longer works. So I am revisiting the release/build version problem.
Each time you save in Visual Studio, two Package properties are updated: VersionBuild and VersionGUID. The former is a monotonically incrementing number and the latter is a GUID.
You can manually set VersionMajor and VersionMinor; once saved, hit F7 (View Code) and confirm you see something like this fragment. I set Major to 11 and Minor to 2, and you can see I've saved this package 8 times (VersionBuild of 9):
<?xml version="1.0"?>
<DTS:Executable xmlns:DTS="www.microsoft.com/SqlServer/Dts"
  DTS:refId="Package"
  DTS:CreationDate="7/3/2019 1:34:05 PM"
  DTS:CreationName="Microsoft.Package"
  DTS:CreatorComputerName="ERECH"
  DTS:CreatorName="HOME\bfellows"
  DTS:DTSID="{D5D7C0A7-5986-46DF-9609-501402E9E344}"
  DTS:ExecutableType="Microsoft.Package"
  DTS:LastModifiedProductVersion="14.0.3002.92"
  DTS:LocaleID="1033"
  DTS:ObjectName="Package1"
  DTS:PackageType="5"
  DTS:ProtectionLevel="0"
  DTS:VersionBuild="9"
  DTS:VersionGUID="{219699AF-3EC9-4CCA-AE51-61B813D190BE}"
  DTS:VersionMajor="11"
  DTS:VersionMinor="2">
Deployment to the SSISDB is at the project (.ispac) level, so versioning of that binary takes place, and you can easily see it in the project properties.
The major/minor/build values are not visible in the default tooling, but you can tease that data out of the SSISDB with a query:
SELECT
    F.name AS FolderName
,   PR.name AS ProjectName
,   PR.last_deployed_time
,   PR.object_version_lsn
,   P.name AS PackageName
,   P.version_major
,   P.version_minor
,   P.version_build
,   P.package_guid
,   P.version_guid
FROM
    SSISDB.catalog.folders AS F
    INNER JOIN SSISDB.catalog.projects AS PR
        ON PR.folder_id = F.folder_id
    INNER JOIN SSISDB.catalog.packages AS P
        ON P.project_id = PR.project_id;
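As for verifying that the very same build landed in the next region (the original question), one option is to fingerprint the deployed .ispac binary itself. Below is a minimal sketch, assuming SQL Server 2016+ (earlier versions cap HASHBYTES input at 8000 bytes) and hypothetical folder/project names; run it against each region's SSISDB and compare the hashes:

-- Capture the deployed project stream (the .ispac binary) and hash it.
DECLARE @project TABLE (project_stream VARBINARY(MAX));

INSERT INTO @project
EXEC SSISDB.catalog.get_project
    @folder_name  = N'MyFolder',    -- hypothetical folder/project names
    @project_name = N'MyProject';

SELECT HASHBYTES('SHA2_256', project_stream) AS project_hash
FROM @project;

The same stream can also be fed to SSISDB.catalog.deploy_project (via its @project_stream parameter) to promote a byte-for-byte identical artifact to the next region.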
When I have a SQL Server Database project (.sqlproj) in a Visual Studio solution, I notice that that particular project always gets built every time I build the solution, even though there is no change in the project files.
This is inconsistent with a normal C# project (.csproj), which only gets built when a file changes.
I have tried cranking up the msbuild output verbosity, and it seems to always build that project for no particular reason.
Is there a reason why a .sqlproj is always built? Is there a way to make it build only when a file changes?
MSBuild supports incremental builds. The main idea is the following:
MSBuild attempts to find a 1-to-1 mapping between the values of a target's Inputs and Outputs attributes. If a 1-to-1 mapping exists, MSBuild compares the time stamp of every input item to the time stamp of its corresponding output item. Output files that have no 1-to-1 mapping are compared to all input files. An item is considered up-to-date if its output file is the same age or newer than its input file or files.
I don't know which targets are inside the *.sqlproj file, so you need to determine whether it uses that mechanism at all, and if it does, examine which inputs and outputs are involved and work out for yourself what is going on with them under the hood.
I've read about the use of Catalogs in 2012/14 SSIS as a replacement for Configurations in 2008. With that replacement, I haven't seen how people handle the scenario of a configuration that is used by all packages on the server, such as a server connection or path location. In this scenario, all packages point to one configuration, and should something about that value change, all packages are updated. Is this possible with catalogs? It seems each project has its own configuration, and if that is the case, every time a server-wide config/parameter changes, it needs to change in each project.
In the SSISDB, a project lives under a folder. A folder may also contain an SSIS Environment.
When you right click on a project (or package) and select Configure, this is where you would apply configurations, much as you did in 2008. You can use an SSIS Environment that exists in the same folder as the projects, or you can reference one in a different folder. That is the approach I use and suggest to people.
In my Integration Services Catalog, I have a folder called "Configurations" (because it sorts higher than Settings). Within that, I create one Environment called "General". Many people like to make environments called Dev, Test, Prod but unless you have 1 SSIS server handling all of those, I find the complexity of getting my deployment scripts nice and generic to be much too painful.
I then deploy my projects to sanely named folders so the Sales folder contains projects like SalesLoadRaw, SalesLoadStaging, SalesLoadDW.
If I have created a new project, then I need to add a reference to the Configurations.General collection and then associate the project item with the Environment item. For connection strings, you do not need to define a Variable to accept the string. You can directly assign to the properties of a connection manager (either project or package scoped).
The great thing about Configurations is that once you've assigned them, they persist through redeploys of the project.
The biggest thing that tends to bite people in the buttocks is that when you create an Environment and add entries to it, DO NOT CLICK OK. Instead, click the Script button and script those entries to a new window. Otherwise, you have to recreate all those entries for your dev/test/load/stage/production environments. I find it far cleaner to script once and then modify the values (SLSDEV to SLSPROD) than to try to create them all by hand.
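For reference, the scripted output boils down to a handful of calls to the catalog stored procedures. Here is a minimal sketch using the Configurations/General layout described above; the variable, project, and connection manager names are made up for illustration:

-- Create the shared environment and one variable in it.
EXEC SSISDB.catalog.create_environment
    @folder_name      = N'Configurations',
    @environment_name = N'General';

EXEC SSISDB.catalog.create_environment_variable
    @folder_name      = N'Configurations',
    @environment_name = N'General',
    @variable_name    = N'SalesConnectionString',   -- hypothetical
    @data_type        = N'String',
    @sensitive        = 0,
    @value            = N'Data Source=SLSDEV;Initial Catalog=Sales;Integrated Security=SSPI;',
    @description      = N'Shared Sales DB connection';

-- Reference the environment from a project in another folder
-- ('A' = absolute reference, i.e. the environment lives elsewhere).
DECLARE @ref BIGINT;
EXEC SSISDB.catalog.create_environment_reference
    @folder_name             = N'Sales',
    @project_name            = N'SalesLoadRaw',
    @environment_name        = N'General',
    @environment_folder_name = N'Configurations',
    @reference_type          = 'A',
    @reference_id            = @ref OUTPUT;

-- Bind a connection manager property straight to the variable
-- ('R' = referenced value; object_type 20 = project-level object).
EXEC SSISDB.catalog.set_object_parameter_value
    @object_type     = 20,
    @folder_name     = N'Sales',
    @project_name    = N'SalesLoadRaw',
    @parameter_name  = N'CM.SalesDB.ConnectionString',  -- hypothetical CM name
    @parameter_value = N'SalesConnectionString',
    @value_type      = 'R';

Re-running a variant of this with region-specific values (SLSDEV vs. SLSPROD) is exactly the 'script once, then modify the values' workflow described above.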
Developers have expressed a desire to deploy a single database object from a SQL Server 2008 project, such as a stored procedure, without having to use the build/deploy or schema comparison functions.
To enable this, developers have created their database object scripts including 'if exists.. drop' checks at the top of the script and have included the grant statements for the objects in their scripts.
This results in build errors that then prevent the build/deploy or schema compare functions from operating. So then, developers mark the object as "not in build" but then the object can't be deployed at all using build/deploy or schema compare.
Has anyone found a way of quickly deploying a single database object from visual studio that does not involve schema compare or build/deploy which does not remove the object from the normal build process? Manual copy/paste is not an option but scripting/macros which effectively do the same would be viable.
SQL Server Data Tools (SSDT) now provides this functionality by way of comparison. Individual differences identified in the comparison may be deployed. We have found that during development, publishing tends to result in overlaying the simultaneous changes that other developers are making on the shared development database server. The comparison tool has been working fairly well, except for a very annoying crash issue that occurs when the comparison window is closed. We are using 32bit Vista with 3GB of RAM and VS 2010. This issue may not occur in other configurations.
First I'd like to comment that you seem to be fighting the intended paradigm with regard to database projects.
If your project is deployed somewhere [1],
then there should be a corresponding branch / label [2] in your source repository.
If you change a single object [delta] from the above baseline [2], and build the new version [3],
then when you deploy [3] to [1], the deployment script should find only one difference, and the net effect of the change [delta] is all that will be applied.
So, in theory there's no reason not to just simply build and deploy.
TIP: When taking on a new paradigm, it pays to embrace it fully; partial adoption tends to cause its own set of problems.
However, that said, I may be able to help.
We have a similar need because actual deployment is out of our control. We control only part of the database, and have to provide our changes to another team for review.
We have to provide individual 'self-contained' scripts for each object, with if exists...drop at the top and grant permissions at the bottom.
However, we want the other benefits of working with the database project, and then simply copy out the individual script files when we "deploy".
The solution we came up with was to place the extra "bits" in a comment block. Disabled, as the script sits in the project (so the build succeeds):

/*SINGLE_OBJECT_DEPLOYMENT
if exists (...)
DROP ...
--*/

Enabled, for hand deployment (after the search-and-replace described below):

--/*SINGLE_OBJECT_DEPLOYMENT
if exists (...)
DROP ...
--*/
Note that a simple search and replace of /*SINGLE_OBJECT_DEPLOYMENT with --/*SINGLE_OBJECT_DEPLOYMENT enables the commented-out code, so this can easily be put into a macro or other semi-automated process.
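To make the pattern concrete, here is a sketch of what a complete 'self-contained' object script might look like in its disabled (in-project) state; the procedure, table, and role names are hypothetical, and the placement of the GO separators may need tweaking for your project type's build parser:

/*SINGLE_OBJECT_DEPLOYMENT
IF EXISTS (SELECT * FROM sys.procedures
           WHERE object_id = OBJECT_ID(N'dbo.GetCustomer'))
    DROP PROCEDURE dbo.GetCustomer;
--*/
GO

-- The object definition itself stays active so the project still builds it.
CREATE PROCEDURE dbo.GetCustomer
    @CustomerId INT
AS
BEGIN
    SET NOCOUNT ON;
    SELECT CustomerId, CustomerName
    FROM dbo.Customer
    WHERE CustomerId = @CustomerId;
END
GO

/*SINGLE_OBJECT_DEPLOYMENT
GRANT EXECUTE ON dbo.GetCustomer TO SalesAppRole;
--*/

After the search-and-replace, the DROP and GRANT become live and the script can be run standalone in Management Studio.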
Let's assume that I'm making some sort of nontrivial change to my database that requires "custom" work to upgrade from version A to B. For example, converting user ID columns from the UUID data type to the Windows domain username.
How can I make this automatically deployable? That is, I want to allow developers to right-click the project, click on "Deploy" and have this logic executed if they are using a database old enough.
I do not see any place for such logic in database projects - there does not appear to be any provision for such "upgrade scripts". Is this really not possible? To clarify, the logic obviously cannot be generated automatically, but I want it to be executed automatically, as needed.
The first logical obstacle would, of course, be that the deployment utility would not know whether any such logic needs to be executed - I'd assume I could provide the logic for this as well (e.g. check a versions table, and if the latest version is < 5.0, execute this upgrade, afterwards adding a new version row).
Is this possible? Can I have fully automated deployment with complex custom change scripts? Without me having to stick all of my custom change logic into the (soon to be) huge pre- or post-deployment scripts, of course...
You can indeed check the installed version, if you register your database as a data-tier application during deployment. You can do this by including the following in your publish profile:
<RegisterDataTierApplication>True</RegisterDataTierApplication>
This option registers the schema and its version number in the msdb database during deployment. Be sure to change the dacpac version number between releases! We use msbuild to create dacpacs; an example of setting the dacpac version via an MSBuild property:
DacVersion=$(ProjectReleaseNumber).$(ProjectBuildNumber).$(ProjectRevisionNumber)
Having done this, you can build version-aware pre-deployment scripts.
-- Get installed version, e.g. 2.3.12309.0
DECLARE @InstalledVersion NVARCHAR(64) = (
    SELECT type_version
    FROM msdb.dbo.sysdac_instances
    WHERE instance_name = DB_NAME()
);

-- Get the major part of the version number, e.g. 2
DECLARE @InstalledVersionMajor TINYINT = CONVERT(TINYINT, SUBSTRING(@InstalledVersion, 0, PATINDEX('%.%', @InstalledVersion)));

IF (@InstalledVersionMajor < 5)
BEGIN
    PRINT 'Do some nontrivial incremental change that only needs to be applied on versions before 5';
END;
Checking the version number that you are currently deploying is a little more cumbersome, but it can also be done. Check out Jamie Thomson's excellent blog post on this technique: Editing sqlcmdvariable nodes in SSDT Publish Profile files using msbuild.
Honestly, the best option for this is to use the concept of database migrations, which came from the Ruby world, if I'm not mistaken. I have used a framework called Migrator.Net in my applications, but there are a bunch of really good ones (with varying levels of activity) that basically do the same thing. A quick Google search turns up quite a few.
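For a sense of what these frameworks do under the hood, here is a minimal hand-rolled version of the idea in T-SQL; the tracking table and migration number are made up, and a real framework like Migrator.Net handles this bookkeeping (plus ordering and rollback) for you:

-- One-time setup: a table recording which migrations have been applied.
IF OBJECT_ID(N'dbo.SchemaVersions') IS NULL
    CREATE TABLE dbo.SchemaVersions (
        VersionId INT      NOT NULL PRIMARY KEY,
        AppliedAt DATETIME NOT NULL DEFAULT GETDATE()
    );
GO

-- Migration 42: the nontrivial UUID-to-username conversion from the
-- question; it runs exactly once per database, however often you deploy.
IF NOT EXISTS (SELECT 1 FROM dbo.SchemaVersions WHERE VersionId = 42)
BEGIN
    BEGIN TRANSACTION;

    PRINT 'Applying migration 42: convert user ID columns to domain usernames';
    -- ALTER TABLE / UPDATE statements go here.

    INSERT dbo.SchemaVersions (VersionId) VALUES (42);
    COMMIT TRANSACTION;
END;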
Can anyone provide some real examples of how best to keep script files for views, stored procedures and functions in an SVN (or other) repository?
Obviously one solution is to have the script files for all the different components in one or more directories somewhere and simply use TortoiseSVN or the like to keep them in SVN. Then, whenever a change is to be made, I load the script up in Management Studio, etc. I don't really want this.
What I'd really prefer is some kind of batch script that I can run periodically (nightly?) that would export all the stored procedures / views etc that had changed in a given timeframe and then commit them to SVN.
Ideas?
Sounds to me like you're not wanting to use revision control properly.
Obviously one solution is to have the script files for all the different components in one or more directories somewhere and simply use TortoiseSVN or the like to keep them in SVN
This is what should be done. You would have your local copy you are working on (Developing new, Tweaking old, etc) and as single components/procedures/etc get finished, you would commit them individually until you have to start the process over.
Committing half-done code just because it's been 'X' time since it was last committed is sloppy and guaranteed to cause anyone else using the repository grief.
I find it best to treat Stored Procedures just like any other compilable code: Code lives in the repository, you check it out to make changes and load it in your development tool to compile or deploy the code.
You can create a batch file and schedule it to:
- delete the contents of your scripts directory
- use something like ExportSQLScript to export all objects to script/scripts
- svn commit
Please note: although you'll have the objects under source control, you'll not have the data or its progression (is that a renamed field, or one new field and one deleted?).
This approach is fine for maintaining change history. But, of course, you should never be automatically committing to the "production build" (unless you like broken builds).
Although you didn't ask for it: this approach also won't produce a set of scripts that will upgrade a current DB. You'll only have initial creation scripts. Recording data progression and creating upgrade scripts is beyond basic source control systems.
I'd recommend Redgate SQL Compare for this - it allows you to compare database versions and generate change scripts - it's also fairly easily scriptable.
Based on your expanded question, you really want to use DDL triggers. Check out this article that details how to create a changelog system for your database.
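The gist of the DDL-trigger approach is a database-scoped trigger that writes every schema change into a log table. A minimal sketch, with hypothetical table and trigger names:

-- A log table for captured DDL events.
CREATE TABLE dbo.DDLChangeLog (
    LogId       INT IDENTITY(1, 1) PRIMARY KEY,
    EventType   NVARCHAR(128),
    ObjectName  NVARCHAR(256),
    TsqlCommand NVARCHAR(MAX),
    LoginName   NVARCHAR(256),
    EventDate   DATETIME NOT NULL DEFAULT GETDATE()
);
GO

-- Fires on procedure/view/function changes and records who ran what, when.
CREATE TRIGGER trg_LogDdlChanges ON DATABASE
FOR CREATE_PROCEDURE, ALTER_PROCEDURE, DROP_PROCEDURE,
    CREATE_VIEW, ALTER_VIEW, DROP_VIEW,
    CREATE_FUNCTION, ALTER_FUNCTION, DROP_FUNCTION
AS
BEGIN
    SET NOCOUNT ON;
    DECLARE @e XML = EVENTDATA();   -- XML document describing the DDL event
    INSERT dbo.DDLChangeLog (EventType, ObjectName, TsqlCommand, LoginName)
    VALUES (
        @e.value('(/EVENT_INSTANCE/EventType)[1]', 'NVARCHAR(128)'),
        @e.value('(/EVENT_INSTANCE/ObjectName)[1]', 'NVARCHAR(256)'),
        @e.value('(/EVENT_INSTANCE/TSQLCommand/CommandText)[1]', 'NVARCHAR(MAX)'),
        ORIGINAL_LOGIN()
    );
END;

That gives exactly the per-procedure history described in the clarification below (who changed PROCESS_INVOICE, when, and with what statement), independent of what does or doesn't make it into SVN.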
I'm not sure of your price range; however, DB Ghost could be an option for you.
I don't work for this company (or own the product), but while researching the same issue this product looked quite promising.
I should've been a little more descriptive. The database in question is for an internal ERP system, and thus we don't have many versions of our database, just Production/Testing/Development. When we've done a change request - some fancy new feature or something - we simply execute a script or series of scripts to update the procedures in question on the Testing database; if that is all good, then we do the same to Production.
So I'm not really after a full schema script per se, just something that can keep track of the various edits to the stored procedures over time. For example, PROCESS_INVOICE does stuff. It gets updated in some minor way in March. Some time later, say in May, it is discovered that in a rare case customers get double-invoiced (or some other crazy corner case). I'd like to be able to see what has happened to this procedure over time. The way the development environment is currently set up here, I don't have that, which I'm trying to change.
I can recommend DBPro, which is part of Visual Studio Team Edition. I have been using it for a few months to store all parts of the database in Team Foundation Server, as well as for deployment, database compares, etc.
Of course, as someone else mentioned, it does depend on your environment and price range.
I wrote a utility for dumping all of the relevant parts of my db into a directory structure that I use SVN on. I never got around to trying to incorporate it into the Manager but, if you're interested, it's here: http://www.reluctantdba.com/dbas-and-programmers/sqltools/svnforsql2005.aspx
It's free and, since I regularly run it, you know any bugs get fixed quickly.
You can always try integrating SourceSafe with SQL Server. Here's a quick start: link. To work with it, you've got to have Management Studio Developer Edition.