I need to create a dpkg out of a Play 2.0 Java project. It would run stand-alone with MongoDB (or some RDBMS). The package should be able to shut down the older version and ensure the new version starts up cleanly.
Any advice on how to create such a package? Any Play 2.0-related issues to take into account?
Edit: Looks like I will be using fpm
All these package-manager installs basically run scripts. So first I would write bash scripts for install, upgrade, and uninstall. You have to consider which parts can be replaced on a reinstall, and how to apply database migrations, copy files, and start/stop services. This is the hard part compared to creating the bundle itself. The only thing to consider for Play! is stopping and starting the server when something needs to be replaced.
This might inspire you for startup/shutdown scripts: https://gist.github.com/THemming/2173037
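If you go the fpm route, the packaging step itself can stay a one-liner; something along these lines (the package name, version, paths, and hook script names are placeholders, and the hook scripts are just the bash install/upgrade/remove scripts described above):

# Package the application directory into a .deb and wire in the lifecycle hook scripts.
# The hooks would typically stop the old Play process (e.g. via its RUNNING_PID file),
# run any migrations, and start the new version.
fpm -s dir -t deb -n myapp -v 1.0.0 --before-install preinst.sh --after-install postinst.sh --before-remove prerm.sh ./dist/=/opt/myapp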
Related
I'm looking at implementing TeamCity and Octopus Deploy for CI and deployment on demand. However, database deployment is going to be tricky, as many of the applications are old .NET applications with messy databases.
Red Gate seems to have a nice plug-in for TeamCity, but the price will probably be a stumbling block.
What do you use? I'm happy to execute scripts, but it's the comparison aspect (i.e. what has changed) that I'm struggling with.
We utilize a free tool called RoundhousE for handling database changes in our project, and it was rather easy to use it with Octopus Deploy.
We created a new project in our solution called DatabaseMigration, included the RoundhousE exe and a folder of database change scripts in it, and then took advantage of how Octopus can call PowerShell scripts before, during, and after deployment (PreDeploy.ps1, Deploy.ps1, and PostDeploy.ps1 respectively). We added a Deploy.ps1 to the project as well, with the following in it:
# Paths are relative to the extracted package contents.
$roundhouse_exe_path = ".\rh.exe"
$scripts_dir = ".\Databases\DatabaseName"
$roundhouse_output_dir = ".\output"

if ($OctopusParameters) {
    # Values supplied as Octopus variables for the target environment.
    $env = $OctopusParameters["RoundhousE.ENV"]
    $db_server = $OctopusParameters["SqlServerInstance"]
    $db_name = $OctopusParameters["DatabaseName"]
} else {
    # Defaults for running the script locally.
    $env = "LOCAL"
    $db_server = ".\SqlExpress"
    $db_name = "DatabaseName"
}

# Run RoundhousE against the target database with the change-script folder.
&$roundhouse_exe_path -s $db_server -d $db_name -f $scripts_dir --env $env --silent -o $roundhouse_output_dir
In there you can see where we check for any Octopus variables (parameters) that are passed in when Octopus runs the deploy script; otherwise we fall back to some default values, and then we simply call the RoundhousE executable.
Then you just need to include that project in what gets packaged for Octopus, and add a step in Octopus to deploy that package; the script will be executed as part of each deployment.
We've looked at the Red Gate solution and pretty much reached the same conclusion you have; unfortunately, it's the cost that is putting us off that route.
The only thing I can think of is to generate version-controlled DB migration scripts based upon your existing database, and then execute these as part of your build process. If you're looking at .NET projects in future (that don't use a CMS), you could potentially consider using Entity Framework Code First Migrations.
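For what it's worth, with Code First Migrations the usual flow is to scaffold a migration from model changes and then generate a SQL script that can be checked in and run by the build or deploy process; a rough sketch (the project and migration names are made up):

# In the Visual Studio Package Manager Console (EF 6.x):
Enable-Migrations -ProjectName MyApp.Data                    # one-time setup for the data project
Add-Migration AddCustomerEmailColumn                         # scaffold a migration from the model changes
Update-Database -Script -SourceMigration $InitialDatabase    # emit a SQL upgrade script instead of applying it directly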
I remember looking into this a while back, and for me it seems that there's a whole lot of trust you'd have to put into this sort of process. Auto-deploying to a Development or Testing server isn't so bad, as the data is probably replaceable... but the idea of auto-updating a UAT or Production server might send the willies up the backs of an Operations team, who might be responsible for the database, or at least for restoring it if it wasn't quite right.
Having said that, I do think it's the way to go, as it's far too easy to be scared of database deployment scripts, and that's when things get forgotten or missed.
I seem to remember looking at using Red Gate's SQL Compare and SQL Data Compare tools, as (I think) there was a command-line way into them, which would work well with scripted deployment processes like TeamCity, CruiseControl.NET, etc.
The risk and complexity come in more when using relational databases. In a NoSQL database where everything is a "document", I guess continuous deployment is not such a concern. Some objects will have the "old" data structure until they are updated by the newly released code. In this situation your code needs to be able to support different data structures; missing properties, or properties with a different type, should probably be handled by a well-written, defensively coded application anyway.
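As a trivial illustration of that defensive handling (the document shape and field name here are made up), the reading code simply tolerates documents written by the previous release:

# An "old" document that predates the emailVerified field.
$doc = '{ "name": "Alice" }' | ConvertFrom-Json

# Default missing properties instead of assuming every document has the new shape.
$emailVerified = if ($doc.PSObject.Properties['emailVerified']) { [bool]$doc.emailVerified } else { $false }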
I can see the risk in running scripts against the production database; however, the point of CI and Continuous Delivery is that these scripts will have been run and tested in other environments first to iron out any "gotchas" :-)
This doesn't reduce the amount of finger crossing and wincing when you actually push the button to deploy though!
Automating database deployment is a real challenge, especially when trying to follow the build-once-deploy-many approach that is used for native application code.
With build once, deploy many, you compile the code once, create the binaries, and then copy them across environments. From the database point of view, the equivalent is to generate the scripts once and execute them in all environments. This approach doesn't handle merges from different branches, out-of-process changes (a critical fix made directly in production), etc.
What I know works for database deployment automation (disclaimer: I work at DBmaestro), based on what I hear from my customers, is the build-and-deploy-on-demand approach. With this method you build the database delta script as part of the deploy (execute) process. Using baseline-aware analysis, the solution knows whether to generate the deploy script for the change, protect the target and not revert it, or pause and allow you to merge changes and resolve the conflict.
Consider a simple solution we have tried successfully, described in this thread: How to continuously delivery SQL-based app?
Disclaimer - I work at CloudMunch
We use Octopus Deploy and database projects in our Visual Studio solution.
The build agent creates a NuGet package using OctoPack, with a dacpac file and publish profiles inside, and pushes it to the NuGet server.
The release process then uses the SqlPackage.exe utility to generate the update script for the release environment and adds it to the release as an artifact.
The previously created script is executed in the next step with the SQLCMD.exe utility.
This separation of the create and execute steps gives us the possibility of having a manual step in between, so that someone verifies the script before it is executed on the live environment. Not to mention that a script saved as a release artifact can always be referred to at any later point.
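Roughly, the two steps boil down to the following (the server, database, and file names are illustrative):

# Step 1: generate, but do not run, the upgrade script from the dacpac, using the
# publish profile for the target environment.
SqlPackage.exe /Action:Script /SourceFile:".\MyDatabase.dacpac" /Profile:".\MyDatabase.Test.publish.xml" /OutputPath:".\MyDatabase.upgrade.sql"

# Step 2 (after the manual review step): apply the reviewed script.
sqlcmd -S SQLSERVER01 -d MyDatabase -i .\MyDatabase.upgrade.sql -b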
If there is demand, I can provide more details and the step scripts.
Does anyone have past experience with Gradle? I'm thinking of using it for continuous deployment... I'm considering either using my own scripts (Python) or Gradle.
Can anyone say from experience which way they would recommend going? Note that I already use Maven and I don't intend to move away from it for my dependency management and project management.
Thanks.
We have implemented Gradle-based deployment and environment management in a big governmental project (100+ servers). But we had to develop a custom set of plugins (which is actually a rather straightforward process in Gradle) to handle tasks like remote SSH command execution through a Groovy DSL, creation of application server domains/clusters (we are using WebLogic), and application/configuration deployment.
We are also thinking of integrating Gradle with Puppet for easier Linux administration.
If you are coming from the Java world, then using Gradle (which is Groovy-based) will be rather simple for you, because you can reuse your Java/Ant/Maven/Groovy knowledge to write scripts. Also, the ability to create DSLs in Groovy may allow you to build interesting abstractions. Gradle also has a very clean API which allows building nice dependencies between tasks. It integrates very well with the Maven infrastructure, and you can reuse all Ant tasks.
Yes, Gradle-based deployment is possible with the gradle-ssh-plugin.
Here is an article with a good usage example.
I've been using SlowCheetah to transform my app.config files and this part is all working fine. The correct transforms are applied to AppName.exe.config when I build the Windows client application.
The problem I have is that the Setup & Deployment (S&D) project always looks for app.config, which obviously does not contain the updated values.
How can I configure the S&D project to look for AppName.exe.config and package that instead?
After some lengthy research, it seems this is not possible when building an MSI using a VS 2010 Setup & Deployment project.
It may be possible with MSBuild, as it is more powerful and flexible, but I currently don't have the time to explore that avenue in detail.
So for the time being, as a temporary workaround, I have entered the production-environment values directly in app.config, since that is what the MSI will use.
My transforms are still in place for other environments, but since I don't deploy to those environments with the MSI, it doesn't really matter.
At some stage I will sort this all out with a build server and CI.
Hope this helps someone, someday.
We are developing some testing infrastructure and I have hit a coder's block (lack of sleep?)... This seems like it would be a solved problem, but I haven't found what I'm looking for via Google.
I would like to automatically push builds from our CI server (TeamCity) to a number of machines (growing, but currently 30). These are several WinForms apps and a number of DLLs. Once deployed, I would like to kick off tests (NUnit, for both unit and integration tests) and report all results (back to CI? or somewhere else? Not sure).
The target machines run a number of platforms (Win7, Vista, XP, Server 2k8, Server 2k3, Ubuntu, Fedora, SUSE, x64, x86, maybe Macs down the line).
This gets me partway there (the actual push), but I can't find existing solutions for 'push starting' the tests and reporting back. So far I am thinking of combining the link (or similar) with custom code running on each client machine that watches the deploy directory, runs the tests, and reports the results.
Does anyone know of existing solutions?
Links?
Done something similar and care to share?
Edit
If possible, we prefer .NET-based solutions, but it isn't strictly necessary. I would have tagged the question as such, but ran out of tags :)
You could use KwateeSDCM to both push and start on all the platforms you mention, including Mac. However, you'll have to do some coding to get reports out. I'm not familiar with TeamCity, but maybe you could push a script along with your application which could then transfer the test results via FTP to a server accessible by TeamCity.
Have a look at STAF (the Software Testing Automation Framework):
The Software Testing Automation Framework (STAF) is an open source, multi-platform, multi-language framework designed around the idea of reusable components, called services (such as process invocation, resource management, logging, and monitoring).
Which includes STAX:
STAX is an execution engine which can help you thoroughly automate the distribution, execution, and results analysis of your testcases.
And there's an article here:
http://agiletesting.blogspot.com/2004/12/stafstax-tutorial.html
Assuming you have the push part done already, and you don't mind using a TeamCity license, you can create a TeamCity Command Line Runner build configuration or NUnit test configuration that kicks off the tests on a properly configured agent. The build trigger for this test config would be successful completion of the application build.
So far I have ended up using a separate build step in TeamCity that executes a bat script, which in turn fires off tasks to the list of machines using PsExec. In my trial runs so far it is working OK, though I now need to parallelize the copying of build output...
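The bat script essentially just loops over the machine list; a PowerShell equivalent of the idea looks roughly like this (machine names, shares, and paths are placeholders):

# Placeholder machine list; in practice this comes from configuration.
$machines = @("TESTBOX01", "TESTBOX02", "TESTBOX03")

foreach ($m in $machines) {
    # Copy the latest build output to the target machine (assumes an accessible share).
    Copy-Item -Path ".\BuildOutput\*" -Destination "\\$m\deploy\MyApp" -Recurse -Force

    # Kick off the NUnit console runner remotely via PsExec and write an XML result file.
    psexec "\\$m" -w "C:\deploy\MyApp" nunit-console.exe MyApp.Tests.dll /xml=results.xml
}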
Thanks for the input to those who have provided it.
Our team develops distributed WinForms apps. We use ClickOnce for deployment and are very pleased with it.
However, we've found the pain point with ClickOnce is in creating the deployments. We have the standard dev/test/production environments and need to be able to create deployments for each of these that install and update separately from one another. We also want control over which assemblies get deployed. Just because an assembly was compiled doesn't mean we want it deployed.
The obvious first choice for creating deployments is Visual Studio. However, VS really doesn't address the issues stated. The next in line is the SDK tool, Mage. Mage works OK but creating deployments is rather tedious and we don't want every developer having our code signing certificate and password.
What we ended up doing was rolling our own deployment app that uses the command line version of Mage to create the ClickOnce manifest files.
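For reference, the kind of Mage calls our tool wraps look roughly like this (the file names, version, and certificate details are illustrative):

# Generate the application manifest from the build output.
mage -New Application -ToFile ".\publish\MyApp.exe.manifest" -FromDirectory ".\publish" -Version 1.2.3.4 -Name "MyApp"

# Generate the deployment manifest that points at the application manifest.
mage -New Deployment -ToFile ".\publish\MyApp.application" -AppManifest ".\publish\MyApp.exe.manifest" -Version 1.2.3.4 -Install true

# Sign both manifests with the code-signing certificate (password shown as a placeholder).
mage -Sign ".\publish\MyApp.exe.manifest" -CertFile ".\cert.pfx" -Password secret
mage -Sign ".\publish\MyApp.application" -CertFile ".\cert.pfx" -Password secret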
I'm satisfied with our current solution, but it seems like there would be an industry-wide, accepted approach to this problem. Is there?
I would look at using MSBuild. It has built-in tasks for handling ClickOnce deployments. I have included some references which will help you get started, if you want to go down this path. It is what I use, and I have found it to fit my needs. With a good MSBuild-based build process, you should be able to squash the pains you have felt.
Here is a detailed post on how ClickOnce manifest generation works with MSBuild.
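As a rough sketch of driving the built-in ClickOnce publish target from a script (the project path, version, and URLs are placeholders):

# Build and publish a ClickOnce deployment for one environment by overriding project properties.
msbuild .\MyApp\MyApp.csproj /target:Publish `
    /p:Configuration=Release `
    /p:ApplicationVersion=1.2.3.4 `
    /p:PublishUrl="\\deployserver\clickonce\MyApp\Test\" `
    /p:InstallUrl="http://deploy.example.com/MyApp/Test/" `
    /p:UpdateEnabled=true

# The publish output lands under .\MyApp\bin\Release\app.publish, ready to copy to the deployment server.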
I've used NAnt to run the overall build strategy, passing parameters into MSBuild to compile and create the deployment package.
Basically, NAnt calls into MSBuild for each environment you need to deploy to and generates a separate deployment output for each. You end up with a folder of all the ClickOnce files you need for every environment, which you can just copy out to the server.
This is how we handled multiple production environments as well: we had separate instances of our application for the US, Canada, and Europe, so each build would end up creating nine deployments, three each for dev, QA, and prod.