Persist Version Number across Build Steps and Build Configurations - versioning

I'm using TeamCity 6.5.4 and I need to have 3 build configurations for the same deployment package. I'd like to persist the version number across all three build configurations and be able to use that number to version the assembly, tag vcs, version the nuspec file, etc.
Here are the configurations and desired version numbers:
Configuration | Version
-------------------|---------
CI/Nightly Build | 1.1.*
Minor Release | 1.*.0
Major Release | *.0.0
It seems that TeamCity uses a separate build incrementer for each configuration. This means every time we have a major or minor release, I'd have to manually update the persisted values (1) in all of the subsequent configurations. I'm a programmer and I'm lazy. I want a single button to do everything for me.
I've seen examples of persisting the build number through build steps of a configuration with dependent snapshots, but that only works in the same configuration.
The Autoincrementer plugin bumps up the number every time you reference the ID. This is fine for the changing numbers (*), but not so good for referencing the persisted values (1).
Is there a way for TeamCity, either natively or via plugin, to allow me to read and write that version to a file or variable that can be persisted across build configurations?

You can reference the build number of the dependent (artifact/snapshot) configuration using dep.btx.build.number, where btx is the bt ID of that configuration. Once you have the build number, pass it to the script running in your configuration, parse it in the script, and send service messages from the script to TeamCity to set the build number the way you want. Do this parsing and number-setting as the first step in your script / the first of the build steps.
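As a minimal sketch, assuming an MSBuild-based build step and that the dependency's build number is passed in as a property (the property name and target name here are made up; ##teamcity[buildNumber ...] is the standard service message format):
<!-- invoked with /p:DependencyBuildNumber=%dep.btx.build.number% from the TeamCity runner -->
<Target Name="SetBuildNumberFromDependency">
  <!-- echoing this service message makes TeamCity adopt the passed-in number -->
  <Message Text="##teamcity[buildNumber '$(DependencyBuildNumber)']" Importance="High" />
</Target>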

Thanks for the suggestions. I opted to write a set of custom targets to use with my MSBuild script, which maintains assembly metadata in a remote XML "manifest" file. When a new TeamCity project is created, my build script calls an Init target which creates a new manifest file from an unpopulated template.
<Copy SourceFiles="@(ManifestTemplate)" DestinationFiles="@(ManifestTemplate->'$(ManifestFile)')" Condition="!Exists('$(ManifestFile)')" />
I'm using the MSBuild Extension Pack to read attributes like version information from the manifest file.
<MSBuild.ExtensionPack.Xml.XmlFile TaskAction="ReadElementText" File="$(ManifestFile)" XPath="/Package/Version/Major">
<Output PropertyName="PackageVersionMajor" TaskParameter="Value"/>
</MSBuild.ExtensionPack.Xml.XmlFile>
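For context, a rough sketch of the manifest file implied by those XPaths (only the /Package/Version elements come from the snippets above; everything else is an assumption):
<Package>
  <Version>
    <Major>1</Major>
    <Minor>1</Minor>
    <Build>42</Build>
  </Version>
  <!-- other centralized metadata (copyright, branding, etc.) can live here too -->
</Package>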
I have my TeamCity build configurations separated into CI, Test, Minor Release, and Major Release, with different events triggering each. On the corresponding target in my project build script, I add the custom target to the DependsOnTargets attribute so it updates the appropriate version number and saves it to the manifest file.
<Target Name="Test" DependsOnTargets="IntializeBuildProject;Build-UpdateVersion-Build">
<MSBuild Projects="$(SolutionFile)" Targets="Rebuild" Properties="Configuration=$(Configuration)" />
<TeamCitySetBuildNumber BuildNumber="$(PackageVersion)" />
The code in the custom target to handle the version update:
<MSBuild.ExtensionPack.Science.Maths TaskAction="Add" Numbers="$(PackageVersionBuild);1">
<Output PropertyName="PackageVersionBuild" TaskParameter="Result"/>
</MSBuild.ExtensionPack.Science.Maths>
<MSBuild.ExtensionPack.Xml.XmlFile TaskAction="UpdateElement" File="$(ManifestFile)" XPath="/Package/Version/Build" InnerText="$(PackageVersionBuild)"/>
This file handles persistence of the version and other metadata thus ignoring the TeamCity build number. Since the XML metadata file is centralized, I can use the values to populate my Nuspec, AssemblyInfo, and WiX Installer metadata as well as pass the version and other pertinent information back to TeamCity through service messages.
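As one hedged example of reusing those values, a version file could be generated from the manifest-derived properties and compiled into each project; the file name and property names below are assumptions based on the snippets above, and WriteLinesToFile is a stock MSBuild task:
<!-- writes a single assembly-level attribute built from the manifest version values -->
<WriteLinesToFile File="Properties\VersionInfo.cs"
                  Overwrite="true"
                  Lines="[assembly: System.Reflection.AssemblyVersion(&quot;$(PackageVersionMajor).$(PackageVersionMinor).$(PackageVersionBuild).0&quot;)]" />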
I added a simple MVC web interface to allow my team to edit the file contents remotely if package details change. Now we have one single place to update things like Copyright information and any other metadata for a given build project. I can also give non-dev folks access to the MVC site to update branding information without allowing them access to my TeamCity build configurations.
With the exception of the service messages used to relay version to TeamCity, there's very little here that's coupled with TeamCity. I like having the functionality in custom targets and build scripts removed from TeamCity on the off chance we move to another build management solution. For that reason, I don't envision taking time to build a TeamCity plugin, but there could be a blog series coming soon.
I'll be happy to provide more code and further explanation to anyone interested.

Yes, you can easily create a plugin to do this. You can take my auto-increment build number (across configurations) plugin and modify it to fit your needs. The build number is saved in a text file that is configurable from the admin screen in TeamCity.
http://github.com/ornatwork/tc_plugins/tree/master/unique
You can hit me up for input on how to change it if you need.

Related

Managing different publish profiles for each developer in SSDT

In our current dev workflow there is a main database, DbMain. A process takes the latest version of the project, automatically deploys it there and then triggers the unit tests. As we would like to always have a working version of the project in source control, each developer should be sure that they check in working code and that all tests will pass.
For this purpose we decided to create individual databases for each developer, following the naming convention DbMain_XX (where XX are the developer's initials). So, before checking in, every developer is supposed to publish all the changes to that database manually and run the unit tests. It is useful to set up a publish config for this purpose that is a copy of the main publish config, with the only difference being the database name.
That would mean we end up with a lot of different publish profiles in the solution, which is quite a mess.
If we don't add these profiles to source control, the .sqlproj file would still reference them, so the project would reference files that don't exist.
So the actual question: can I have a single publish profile for all developers where the database name is changed using a variable, for example DbName_$(dev_initials)? Or can each developer have their own publish configs only locally without breaking the project?
UPDATE:
Following up on Peter Schott's comments:
I can create a local publish profile, but if I don't add it to source control there will still be an entry in the .sqlproj file while the file itself is unavailable.
Running tests locally has at least two disadvantages. The first is that everybody would have to install SQL Server locally; we mainly work via virtual machines and disk space is quite limited there. The other is that developers will inevitably forget or simply not run the tests manually every time. Sometimes they will push changes to the repo without building and/or running the tests. We would like to avoid such situations and "catch" a failed build as soon as possible.
Another approach that was mentioned is to have one common build database, and in my case we have one (DbMain). All developers can use it for their needs, but we will definitely hit the situation where two developers publish at the same time, and that can cause a lot of confusion when figuring out what really went wrong.
A common approach to this kind of thing - not only for SSDT publish profiles but for config files in general - is to commit a generic version of the file with a name something like DbMain.publish.xml.template, and provide instructions for developers to rename the file to DbMain.publish.xml (or whatever) and .gitignore this local copy. That lets each developer make whatever changes they want while inheriting the common settings from the .template version of the file.
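For example, a committed DbMain.publish.xml.template might look something like this (property names follow the usual SSDT publish profile format; server and database values are placeholders):
<?xml version="1.0" encoding="utf-8"?>
<Project ToolsVersion="4.0" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <PropertyGroup>
    <!-- each developer renames the file to DbMain.publish.xml and changes this to DbMain_XX -->
    <TargetDatabaseName>DbMain</TargetDatabaseName>
    <TargetConnectionString>Data Source=(local);Integrated Security=True</TargetConnectionString>
  </PropertyGroup>
</Project>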
Publish profiles don't need to be added to the .sqlproj to be used at deploy time; this is merely a convenience in Visual Studio to make them easier to find and edit, so you don't need to worry about broken references.
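For instance, a local profile that isn't referenced by the .sqlproj can still be handed to SqlPackage.exe at deploy time (the paths here are placeholders):
SqlPackage.exe /Action:Publish /SourceFile:bin\Release\DbMain.dacpac /Profile:DbMain.publish.xml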
You are right in wanting to avoid multiple developers publishing to a common "build" database; this is a recipe for frustration.
Really, you want the "build" database to be published to as part of your CI process, meaning after the developers have pushed their changes.

Qooxdoo build-all deployment

I'm new to qooxdoo. I'd like to use it for an embedded web interface for an application I'm developing right now.
To keep building my application as easy as possible, I'd like to stay away from running the Python build scripts after every change if possible. Because the website will only be used once in a while by a single user, load times etc. are also not a big concern for me.
I've read about the "build-all" target but could not find a detailed description on how to activate it with the current release. Can someone explain how I can get a complete desktop build of qooxdoo?
You don't have to run generate.py every time you change the code, only every time you reference a new class. During development it's usually relatively infrequent that you have to re-run the generator, compared to how often you will do the edit/save/alt-tab/refresh/test cycle.
But you can do what you're asking during development by using the "source-all" target, e.g.:
./generate.py source-all
When loading an app from a file:// url this is fine because file:// URLs are very fast, but you can optimise this manually by modifying your config.json to incorporate specific sets of classes.
To do this, in your application's config.json, add (or edit) a job called "source" and add:
"jobs": {
"source": {
"include": [ "qx.ui.*" ]
}
This will cause all of the qx.ui.* classes to be included into the ./generate.py source build of your application; obviously you can fine tune this further.
When it comes to deploying your application, use ./generate.py build because this will produce a minimised, optimised version (with debug code removed etc) which uses only those classes that are required.
If you are still looking for a build version of Qooxdoo, here is my qxSimple project. It includes some examples.
http://adeliz.github.io/qxsimple/
You can also generate your own build version by following these steps:
Download the latest qooxdoo release
Go into the framework folder
Edit the config.json file
Uncomment the //build-all line
Run generate.py build-all

Twist to the standard “SQL database change workflow best practices”

Background
ASP.NET/C# Web App
MS SQL
Environments
Production
UAT
Test
Dev
We create patch scripts (XML and SQL) that are source controlled in Mercurial. We have a command-line utility that installs patches to the DB (utility.exe install -patch) from a Release folder that the build packages. Patches have metadata that helps determine when a patch should run, and we log installed patches in a table in the target DB. All of this was covered in the 3 year old question:
SQL Server database change workflow best practices
Our Problem/Twist
I think this works well for tables, views, functions and stored procedures. We struggle with application configuration data. Here are some touch points on application configurations.
New client. A BA performs a system study and fit analysis. Out of this comes a configuration Word document describing what application configurations need to be set up. Note some of these may also come in phases over time. We need to get these new configurations into the system for the developer and client UAT.
Developer works on feature request or bug fix. A new configuration change comes out of that change. The configuration needs to make it into the system for testing and promotion to UAT and up.
QA finds that the developer missed an associated configuration change. That configuration needs to make it into the system for promotion to UAT and up.
Build goes to UAT. The client performs acceptance testing but finds they really want to change another unassociated configuration and have it promoted with the changes. In other words, they found they want to change a business process via a configuration. The configuration needs to make it into the system for promotion to PRD.
As the client operates in PRD they may tweak application settings. These configurations need to make it into the system for future development and testing.
The general issue is making sure we account for all the configurations and don't accidentally miss any during promotions, which causes grief.
Our Attempts At A Process
a. We have had a member of the QA team write patches (XML and SQL) and check those in. This requires a build to make sure those get into the package. This approach really only took care of item 1 above, and we fell apart on the other items. The nice thing is that for the items that made it into the patches, it was just an install with the utility.
b. A developer threw together a Config page in the application. All the configurations could be uploaded and downloaded via an XML document, but it requires the app to be running. For item 1, a member of the QA team would manually set up configurations in the application and then download the Config.xml file. This XML file would be used to upload configurations in other environments. We would use a text diff tool to look at differences between config.xml files from different environments. This addressed item 1 and the other items, but it had problems: not all configurations made it into the XML document (which just needs to be fixed by the developer), some of the configurations didn't have a UI in the application so you still had to go to the database manually for some, comparing the XML documents with a text diff was difficult at times (mostly due to sorting, but I'm sure there are other issues), the XML was not very human readable, and finally the XML document did not allow for deleting existing incorrect or outdated configs.
c. Recently we went with option (b), but over time, for a new client, we just started tracking configs and promoting them manually by hand (UI and DB) through the promotions. Needless to say, lots of human errors.
So we have been looking at solutions. Eventually it would be great to get as much automation in as possible. I'm looking at going with the scripting approach, focusing on process and documentation, and using Redgate Data Compare in addition to what we had been doing with comparing config.xml files. With Redgate we have to create views, though, and there is no way to create update scripts from that approach except to update the scripts manually; it does at least allow a comparison without the app running. I'm also looking at pulling the configs out of our normal patches and making them a system independent of the build (utility.exe -patch -config). When I say focus on process, it will be things like: if we compare and find a config change, whether reported by the client or not, we still script it; it just means we have to have a process in place to quickly revalidate the config install before promoting to the next level. As for documentation, I'm looking at making the original QA document a living document instead of just an upfront document. The goal is to enhance clarity and reduce missing configurations during promotion. Unfortunately it doesn't improve speed of delivery.
Does anyone have any recommendations or best practices to pass along? Thanks.
Can I ask exactly what you mean by application configuration? I'm interpreting that as both:
Config files in the web application
Static reference data inside the database
Full disclosure: I work for Red Gate. You might be interested in taking a look at Deployment Manager; it's a deployment tool that deploys applications, databases and configuration. It's free for up to 5 projects and target servers.
The approach it uses is to package application code and the database state into packages. These packages can be deployed into dev, test, staging and production environments. The same package is deployed to each environment.
Any application configuration that needs to change between environments is handled in one of the ways below:
Variable substitution in web.config. The tool allows you to specify override values for variables in these files, and set these per environment/server
Substituting the web.config file per environment.
Custom powershell scripts that are run pre/post deploy. You could use these to execute custom SQL based on the environment or server.
Static data within the database, using SQL Source Control's static data feature. I've written a blog post about how to supply different sets of static data to different environments/customers.
This allows you to source control the application configurations and deploy them to different environments.

VS2010 Different publishing locations based on configuration

I'm trying to divide my solution by three configurations:
Development
Testing
Release
All of the above will have different publishing locations, so users can work with the release version, do their tests in testing, and see what is new in the development release. All three versions will be built with different name postfixes and icons and installed on each user's workstation.
For now I get:
"Unable to install this application because an application with the same identity is already installed. To install this application, either modify the manifest version for this application or uninstall the preexisting application."
I can't even install this more than once on one workstation.
So what can I do to achieve this?
You cannot install the same application multiple times unless you change the deployment identity. The easiest way to do this is by changing the assembly name. This article explains how.
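A minimal sketch of that approach in the .csproj, assuming configurations named Testing and Development (the MyApp names are placeholders):
<!-- give each configuration its own assembly name so the deployment identities differ -->
<PropertyGroup Condition=" '$(Configuration)' == 'Testing' ">
  <AssemblyName>MyApp.Testing</AssemblyName>
</PropertyGroup>
<PropertyGroup Condition=" '$(Configuration)' == 'Development' ">
  <AssemblyName>MyApp.Development</AssemblyName>
</PropertyGroup>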
As time has passed, I can now see that the solution was quite close; it just required me to be able to specify my requirements first.
So, now I can tell that it mostly depends on the number of such configurations:
If that number is limited and low, i.e. live/test/dev, you can have each as a separate project in the solution, like AppLive, AppTest, AppDev. This requires refactoring to move everything that is common into separate projects, but it makes code and releases clearer and easier to manage.
If those configurations are unlimited, or the number is high, the way to go is to load configurations from a file and pick one from the pool based on custom logic.
Currently I'm using a mix of both, as I want to be able to release test versions earlier than live versions. My application is also used by multiple branches, and each of them has some unique styling, logos and such, so this is applied from an embedded XML file, and the proper set is identified based on Active Directory entries.

Setting up an Ant script to upload files

I've come across only a few examples of how to do this and they didn't work for me, mainly since I've only used an Ant script to auto-build jar files through Jenkins. Now, though, I need to build those files in Jenkins and then upload them to a 3rd party file site like SourceForge. This is both to save hard drive space on the server, since I don't own it, and to allow external downloads. Any help is welcome, but no comments on the fact that I don't know too much about Ant scripts.
Also, something related but a bit separate: the jar file I'm building depends on another jar file with its own version. I also want to make a new folder each time it uploads with a different dependency version. This way the users that download this file can easily understand which main jar version it goes with, while allowing me to upload 20+ sub-builds.
There are several ways to upload files, so there are several kinds of Ant tasks to do the job.
For instance, if you want to upload to SourceForge, you can use the Ant scp task. It also seems possible to upload there via FTP, in which case there is the ftp task.
Maybe you will find some other service which requires you to upload via HTTP: ant-contrib has the post task.
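As a rough sketch, an scp-based upload target might look like this (the host, paths and key file are placeholders, and the scp task needs the optional JSch library on Ant's classpath):
<target name="upload">
  <!-- copy the built jar to the remote release directory over SSH -->
  <scp file="build/myapp.jar"
       todir="user@frs.sourceforge.net:/home/frs/project/myproject/"
       keyfile="${user.home}/.ssh/id_rsa"
       trust="true"/>
</target>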
I used to do publications as part of my Ant build logic, creating a special "publish" target that issued the scp or ftp command. Now I'm more inclined to leverage one of the "publish over" plugins for Jenkins.
The main reason for this shift is the management of access credentials. Using the ANT based approach I was forced to run my build on a Jenkins slave that was pre-configured with the correct SSH key to talk to the remote server. The Jenkins plugin manages private keys centrally and ensures all slaves are properly configured.
Finally, if your build has dependencies on 3rd party jars, use a dependency manager like Ivy to download them and include them in your project. It then becomes trivial to include their upload as part of your publish step.
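A minimal ivy.xml sketch declaring such a dependency (the organisation, module and revision values are placeholders):
<ivy-module version="2.0">
  <info organisation="com.example" module="myapp"/>
  <dependencies>
    <!-- the third-party jar the built jar depends on -->
    <dependency org="com.example" name="dependency-lib" rev="1.2.3"/>
  </dependencies>
</ivy-module>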
