I would like to disable/enable Build Controller (or Build Agents) from a bat file. I want to do this so we can schedule builds every night, but then disable them during code-freeze. "TFPT builddefinition /enabled:false" is close... but that is only for cloning build defs. If not, is there a way to disable checkins from a bat file? Then I would edit my Build Def and uncheck the box for "Build even if nothing has changed since the previous build".
Thanks
You can make a REST call to the private, undocumented TFS API, but you should know what you are doing.
Or you can use a scheduled task to control the agent service on your build server.
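For example, a pair of small scripts run by Windows Task Scheduler could simply stop and start the build service around your code-freeze window; a minimal sketch (the service name varies by TFS/agent version, so treat it as an assumption and check the real name with Get-Service on your build server):
# Sketch: disable builds by stopping the build service; the service name below
# is an assumption - verify it on your build server with Get-Service
$serviceName = "TFSBuildServiceHost.2015"
Stop-Service -Name $serviceName
Set-Service -Name $serviceName -StartupType Disabled

# ...and a second scheduled task re-enables builds after the freeze:
# Set-Service -Name $serviceName -StartupType Automatic
# Start-Service -Name $serviceName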
But there are better ways to control your sources and releases.
It seems like the real problem is your TFS project setup. For example, use "GitHub Flow" with pull requests, so no one can change master without an approved PR.
The developers can keep working, and you don't need to plan "code freezes", remove permissions, or anything like that.
I also wouldn't stop the deployments for the dev and test systems.
If you want to prevent anybody from creating a release to a particular set of environments (Staging and Prod), set approvers to control the release process.
From the GitHub Flow guide:
"GitHub Flow is a lightweight, branch-based workflow that supports teams and projects where deployments are made regularly."
https://guides.github.com/introduction/flow/
What is the best way to dynamically provide configuration to a Vespa application?
It seems that the only method that is talked about is baking configuration values into the application package, but is there any way to provide configuration values outside of that? I.e. are there CLI tools to update individual configuration values at runtime?
Are there any recommendations or best practices for managing configuration across different environments (i.e. production vs development)? At Oath/VMG, is configuration checked into source control or managed outside of that?
Typically all configuration changes are made by deploying an updated application package. As you suggest, this is usually done by a CI/CD setup which builds and deploys the application package from a git repository whenever that changes.
This way it is easy to ensure changes have been reviewed (before merge), track all changes that have been made and roll them back if necessary. It is also easy to verify that the same changes which have been deployed and tested (preferably by automated tests) in a development / test environment are the ones that are deployed to production - because the same application package is deployed through each of those environments in order.
It is however also possible to update files in a deployed application package and create a new session from this, which may be useful if your application package has some huge resources. See https://docs.vespa.ai/documentation/cloudconfig/deploy-rest-api-v2.html#use-case-modify
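For completeness, a rough sketch of what that modify flow can look like against the deploy REST API described in the link above (the config server address, tenant, application path and response field names below are assumptions - check the linked docs for your version):
# Sketch of the "modify" use case from the deploy REST API v2 docs linked above.
# Config server URL, tenant and application path here are assumptions.
$cfg = "http://localhost:19071"
$app = "$cfg/application/v2/tenant/default/application/default/environment/prod/region/default/instance/default"

# Create a new session based on the currently active application package
$session = Invoke-RestMethod -Method Post -Uri "$cfg/application/v2/tenant/default/session?from=$app"
$id = $session.'session-id'

# Overwrite a single file in the new session, then prepare and activate it
Invoke-RestMethod -Method Put -Uri "$cfg/application/v2/tenant/default/session/$id/content/services.xml" -InFile ".\services.xml"
Invoke-RestMethod -Method Put -Uri "$cfg/application/v2/tenant/default/session/$id/prepared"
Invoke-RestMethod -Method Put -Uri "$cfg/application/v2/tenant/default/session/$id/active"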
In our current dev workflow there is a main database, DbMain. A process takes the latest version of the project, automatically deploys it there, and then triggers the unit tests. As we would like to always have a working version of the project in source control, each developer should make sure that the code they check in builds and passes all the tests.
For this purpose we decided to create an individual database for each developer, with the naming convention DbMain_XX (where XX are the developer's initials). Before checking in, every developer is supposed to publish all the changes to that database manually and run the unit tests. For this it is handy to set up a publish profile that is a copy of the main publish profile, differing only in the database name.
But that means we will end up with a lot of different publish profiles in the solution, which is quite a mess.
If we don't add these profiles to source control, the .sqlproj file will still reference them, so the project will reference files that don't exist.
So, the actual question: can I have a single publish profile for all developers where the database name is changed using variables, for example DbName_$(dev_initials)? Or can each developer have their own publish profile only locally, without breaking the project?
UPDATE:
In response to Peter Schott's comments:
I can create a local publish profile, but if I don't add it to source control there will still be an entry in the .sqlproj file while the file itself is unavailable.
Running tests locally has at least two disadvantages. The first is that everybody would have to install SQL Server locally; we mainly work on virtual machines and disk space is quite limited there. The other is that developers will inevitably forget, or simply not run, the tests manually every time. Sometimes they will push changes to the repo without building and/or running the tests. We would like to avoid such situations and catch a failed build as soon as possible.
Another approach that was mentioned is to have one common build database, and in my case we have one (DbMain). All developers can use it, but we will inevitably hit the situation where two developers publish at the same time, and that can cause a lot of confusion when figuring out what really went wrong.
A common approach to this kind of thing - not only for SSDT publish profiles but for config files in general - is to commit a generic version of the file with a name like DbMain.publish.xml.template, and instruct each developer to copy it to DbMain.publish.xml (or whatever) and .gitignore this local copy. Developers can then make whatever changes they want while inheriting the common settings from the .template version of the file.
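In practice that is just a copy step plus a .gitignore entry, roughly along these lines (the file names simply follow the pattern above):
# One-time setup per developer: copy the committed template to a local,
# untracked profile and set your own target database name (e.g. DbMain_XX) in it
Copy-Item .\DbMain.publish.xml.template .\DbMain.publish.xml

# ...and in .gitignore, so the personalised copy never gets committed:
#   DbMain.publish.xml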
Publish profiles don't need to be added to the .sqlproj to be used at deploy time; this is merely a convenience in Visual Studio to make them easier to find and edit, so you don't need to worry about broken references.
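That also means each developer can publish from the command line with a single shared profile and just override the database name; a sketch, assuming a dacpac built by the project (the paths and the environment variable here are illustrative):
# Publish the compiled dacpac using the shared profile, overriding only the
# target database name per developer (DEV_INITIALS is an illustrative variable)
$initials = $env:DEV_INITIALS
& SqlPackage.exe /Action:Publish `
    /SourceFile:".\bin\Debug\DbMain.dacpac" `
    /Profile:".\DbMain.publish.xml" `
    /TargetDatabaseName:"DbMain_$initials"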
You are right in wanting to avoid multiple developers publishing to a common "build" database; this is a recipe for frustration.
Really, you want the "build" database to be published to as part of your CI process, meaning after the developers have pushed their changes.
I'm thinking about a good deployment strategy for Magento. I have already managed to deploy code with Git from my local installation to my staging server (the jump to live is then not a problem).
Now I'm thinking about how to deploy backend changes like the following:
I'm adding a new attribute set and I want it to be available on my staging server and later the live server. Since these settings are in the database, I could just do a mysqldump and restore this dump on my staging/live systems.
But I can't do this, since the database has more data like orders, articles (with current stock availability) and a lot more stuff which I don't want to deploy from my testing system.
How are others handling this deployment "problem"?
After some testing, I chose the extension Mageploy, which is easy to install via modman (I prefer modgit, which relies on the same data for installation) and already captures a lot of important backend settings.
If you need more, you can extend it to cover additional backend settings yourself (and then contribute back to the Git project; pull requests are considered quickly).
I'm looking at implementing TeamCity and Octopus Deploy for CI and deployment on demand. However, database deployment is going to be tricky, as many are old .NET applications with messy databases.
Redgate seems to have a nice plug-in for TeamCity, but the price will probably be a stumbling block.
What do you use? I'm happy to execute scripts, but it's the comparison aspect (i.e. what has changed) I'm struggling with.
We utilize a free tool called RoundhousE for handling database changes with our project, and it was rather easy to use it with Octopus Deploy.
We created a new project in our solution called DatabaseMigration, included the RoundhousE exe and a folder where we keep the db change scripts for RoundhousE, and then took advantage of how Octopus can call PowerShell scripts before, during, and after deployment (PreDeploy.ps1, Deploy.ps1, and PostDeploy.ps1 respectively). We added a Deploy.ps1 to the project as well, with the following in it:
# Paths to the RoundhousE executable, the change scripts and its output folder
$roundhouse_exe_path = ".\rh.exe"
$scripts_dir = ".\Databases\DatabaseName"
$roundhouse_output_dir = ".\output"

# Use the Octopus variables when running inside a deployment,
# otherwise fall back to local defaults
if ($OctopusParameters) {
    $env = $OctopusParameters["RoundhousE.ENV"]
    $db_server = $OctopusParameters["SqlServerInstance"]
    $db_name = $OctopusParameters["DatabaseName"]
} else {
    $env = "LOCAL"
    $db_server = ".\SqlExpress"
    $db_name = "DatabaseName"
}

# Run RoundhousE against the target database with the environment-specific scripts
&$roundhouse_exe_path -s $db_server -d $db_name -f $scripts_dir --env $env --silent -o $roundhouse_output_dir
In there you can see where we check for any Octopus variables (parameters) that are passed in when Octopus runs the deploy script; otherwise we fall back to some default values, and then we simply call the RoundhousE executable.
Then you just need to have that project as part of what gets packaged for Octopus, and then add a step in Octopus to deploy that package and it will execute that as part of each deployment.
We've looked at the RedGate solution and pretty much reached the same conclusion you have; unfortunately it's the cost that is putting us off that route.
The only things I can think of are to generate version-controlled DB migration scripts based upon your existing database, and then execute these as part of your build process. If you're looking at .NET projects in future (that don't use a CMS), you could potentially consider using Entity Framework Code First Migrations.
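As a very rough sketch of the "versioned scripts run in order" idea (the server, database and folder names here are assumptions, not an existing setup):
# Run numbered migration scripts in order against the target database.
# Server/database names and the .\migrations folder are illustrative.
$server = "BUILDSQL01"
$database = "MyAppDb"
Get-ChildItem .\migrations -Filter *.sql | Sort-Object Name | ForEach-Object {
    & sqlcmd.exe -S $server -d $database -b -i $_.FullName
    if ($LASTEXITCODE -ne 0) { throw "Migration failed: $($_.Name)" }
}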
I remember looking into this a while back, and for me it seems that there's a whole lot of trust you'd have to put into this sort of process, as auto-deploying to a Development or Testing server isn't so bad, as the data is probably replaceable... But the idea of auto-updating a UAT or Production server might send the willies up the backs of an Operations team, who might be responsible for the database, or at least restoring it if it wasn't quite right.
Having said that, I do think it's the way to go, as it's far too easy to be scared of database deployment scripts, and that's when things get forgotten or missed.
I seem to remember looking at using Red Gate's SQL Compare and SQL Data Compare tools, as (I think) there was a command-line way into it, which would work well with scripted deployment processes, like Team City, CruiseControl.Net, etc.
The risk and complexity comes in more when using relational databases. In a NoSQL database where everything is a "document", I guess continuous deployment is not such a concern. Some objects will have the "old" data structure until they are updated via the newly released code. In this situation your code would potentially need to support different data structures. Missing properties, or properties with a different type, should probably be covered in a well written, defensively coded application anyway.
I can see the risk in running scripts against the production database, however the point of CI and Continuous Delivery is that these scripts will be run and tested in other environments first to iron out any "gotchas" :-)
This doesn't reduce the amount of finger crossing and wincing when you actually push the button to deploy though!
Database deployment automation is a real challenge, especially when trying to follow the build-once-deploy-many approach used for application code.
With build once, deploy many, you compile the code, create binaries and then copy them across the environments. From the database point of view, the equivalent is to generate the scripts once and execute them in all environments. This approach doesn't handle merges from different branches, out-of-process changes (a critical fix in production), etc.
What I know works for database deployment automation (disclaimer - I work at DBmaestro), based on what I hear from my customers, is the build and deploy on demand approach. With this method you build the database delta script as part of the deploy (execute) process. Using baseline-aware analysis, the solution knows whether to generate the deploy script for the change, protect the target and not revert it, or pause and allow you to merge changes and resolve the conflict.
Consider a simple solution we have tried successfully, described in this thread - How to continuously delivery SQL-based app?
Disclaimer - I work at CloudMunch
We are using Octopus Deploy and database projects in our Visual Studio solution.
The build agent creates a NuGet package using OctoPack, with a dacpac file and publish profiles inside, and pushes it to the NuGet server.
Then the release process uses the SqlPackage.exe utility to generate the update script for the release environment and adds it as an artifact to the release.
The previously created script is executed in the next step with the SQLCMD.exe utility.
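As a rough sketch of those two steps (the file names and variable values below are illustrative, not the exact scripts from our process):
# Environment-specific values (illustrative; in Octopus these come from variables)
$environment = "Staging"
$dbServer = "SQLSERVER01"
$dbName = "MyAppDb"

# Step 1: generate (but do not run) the upgrade script for this environment,
# using the publish profile shipped inside the package
& SqlPackage.exe /Action:Script `
    /SourceFile:".\Database.dacpac" `
    /Profile:".\$environment.publish.xml" `
    /OutputPath:".\upgrade_$environment.sql"

# Step 2 (after an optional manual approval step): execute the generated script
& SQLCMD.exe -S $dbServer -d $dbName -b -i ".\upgrade_$environment.sql"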
This separation of the create and execute steps gives us the possibility of a manual step in between, so that someone can verify the script before it is executed on the live environment; not to mention that a script saved as an artifact in the release can always be referred to at any later point.
If there is demand, I can provide more details and the step scripts.
Our team develops distributed winform apps. We use ClickOnce for deployment and are very pleased with it.
However, we've found the pain point with ClickOnce is in creating the deployments. We have the standard dev/test/production environments and need to be able to create deployments for each of these that install and update separately from one another. We also want control over which assemblies get deployed; just because an assembly was compiled doesn't mean we want it deployed.
The obvious first choice for creating deployments is Visual Studio. However, VS really doesn't address the issues stated. The next in line is the SDK tool, Mage. Mage works OK, but creating deployments is rather tedious and we don't want every developer to have our code-signing certificate and password.
What we ended up doing was rolling our own deployment app that uses the command line version of Mage to create the ClickOnce manifest files.
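For reference, the kind of Mage calls such a wrapper ends up issuing look roughly like this (application name, version, URLs and certificate details are placeholders):
# Application manifest, built only from the assemblies actually staged for deployment
& mage.exe -New Application -FromDirectory ".\publish\1.0.0.0" `
    -ToFile ".\publish\1.0.0.0\MyApp.exe.manifest" -Name "MyApp" -Version "1.0.0.0"

# Deployment manifest for one environment, pointing at that application manifest
& mage.exe -New Deployment -Install true -Publisher "MyCompany" `
    -ProviderUrl "http://deploy.example.com/test/MyApp.application" `
    -AppManifest ".\publish\1.0.0.0\MyApp.exe.manifest" -ToFile ".\MyApp.application"

# Sign both manifests - ideally only on the build machine, so developers
# never need the code-signing certificate or its password
$certPassword = Read-Host "Code-signing certificate password"
& mage.exe -Sign ".\publish\1.0.0.0\MyApp.exe.manifest" -CertFile ".\codesign.pfx" -Password $certPassword
& mage.exe -Sign ".\MyApp.application" -CertFile ".\codesign.pfx" -Password $certPassword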
I'm satisfied with our current solution, but it seems like there would be an industry-wide, accepted approach to this problem. Is there?
I would look at using MSBuild. It has built-in tasks for handling ClickOnce deployments. I included some references which will help you get started, if you want to go down this path. It is what I use and I have found it to fit my needs. With a good build process using MSBuild, you should be able to squash the pain points you have felt.
Here is a detailed post on how ClickOnce manifest generation works with MSBuild.
I've used nAnt to run the overall build strategy, but pass parameters into MSBuild to compile and create the deployment package.
Basically, nAnt calls into MSBuild for each environment you need to deploy to, and generates a separate deployment output for each. You end up with a folder and all ClickOnce files you need for every environment, which you can just copy out to the server.
This is how we handled multiple production environments as well -- we had separate instances of our application for the US, Canada, and Europe, so each build would end up creating nine deployments, three each for dev, qa, and prod.
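The per-environment MSBuild call is essentially just the Publish target with environment-specific properties passed in; a sketch of the idea (the project path, URLs and output folders are placeholders):
# One ClickOnce publish per environment; URLs and folders below are placeholders
foreach ($envName in @("dev", "qa", "prod")) {
    & msbuild.exe ".\MyApp\MyApp.csproj" /t:Publish `
        /p:Configuration=Release `
        /p:PublishUrl="\\deployserver\clickonce\$envName\" `
        /p:InstallUrl="http://apps.example.com/$envName/MyApp/" `
        /p:PublishDir=".\publish\$envName\"
}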