Spring Roo - Database Reverse Engineer freezes

We are new to Spring Roo but very familiar with RAD in PHP using Yii and Active Record.
I was able to run roo> database reverse engineer --schema to create models from an Oracle database for a proof of concept I am working on. The command line has been freezing since the third attempt to update the schema. The difference between the first two attempts and the third is that we used the --includeTables option without knowing that it would overwrite the entire dbre.xml (instead of making an incremental change). We have cleaned the cache and even reinstalled Roo, but the issue persists. Even creating a new project did not help. I can see the following in the Spring Roo log:
// Spring Roo 1.3.2.RELEASE [rev 8387857] log opened at 2016-04-13 19:39:41
database properties list
// [failed] database reverse engineer --schema pfadmin --package ~.domain
Any ideas or help are welcome.

Found the solution after half a day of investigation. Spring Roo performs an ANALYZE TABLE while reverse engineering the models. If your database is very large, the compute-statistics step will take a very long time.
My advice: export the database as DDL only (no data), create an empty development database from it, and run Spring Roo against that to get your models.
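For example, with Oracle Data Pump you can export metadata only and import it into an empty development instance; a rough sketch, where the connection strings, schema and file names are just illustrative:

# Hedged sketch: export the schema structure without any rows, then load it into an empty dev database
expdp system@prod SCHEMAS=PFADMIN CONTENT=METADATA_ONLY DUMPFILE=pfadmin_ddl.dmp LOGFILE=pfadmin_exp.log
impdp system@dev SCHEMAS=PFADMIN DUMPFILE=pfadmin_ddl.dmp LOGFILE=pfadmin_imp.log

Running database reverse engineer against that empty copy returns quickly, since computing statistics on empty tables is almost instantaneous.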

Related

OpenCart - Sensible workflow, database migration?

I'm working on an OpenCart project. (Note: this is my first time dealing with it.)
I want to somehow implement my usual workflow of:
working on localhost, experimenting, etc.,
deploying the changes to the production server (sometimes to a staging server before that),
applying the database changes.
Now, how should I achieve this?
What I have already done with Git is create an automated deployment flow, which consists of the following:
building a deployment version (checking out master/HEAD's upload/ directory and removing the upload/install directory),
copying the upload/ dir's contents to the target server, roughly as sketched below.
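In PowerShell terms, the build-and-copy steps look something like this (the paths, branch and archive names are just placeholders):

# Hedged sketch: export upload/ from master into a build folder and drop the installer
git archive -o build.zip master upload/
Expand-Archive -Path build.zip -DestinationPath build -Force
Remove-Item -Recurse -Force build\upload\install
# then copy the contents of build\upload\ to the target server's document root (FTP/rsync/scp)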
This works fine, but it won't solve the database migration issue.
I think it's not even as simple as updating certain tables in the target server's database from my local database, since, for example, the "settings" table contains data that's specific to the environment.
So I can't just overwrite the settings table with my local version.
It seems to me that the easiest (and ugliest) solution would be to develop on the prod server in parallel with the localhost changes. For example, if I install a module that causes changes in the database, I would need to replicate every step I took installing and configuring that module in the local environment. The same goes for every admin setup step I take (meta changes, etc.).
This sounds awfully painful to me, so I hope there's a better solution out there other than doing every database-related change twice...
Thanks in advance!

When using Continuous or Automated Deployment, how do you deploy databases?

I'm looking at implementing TeamCity and Octopus Deploy for CI and deployment on demand. However, database deployment is going to be tricky, as many of them are old .NET applications with messy databases.
Redgate seems to have a nice plug-in for TeamCity, but the price will probably be a stumbling block.
What do you use? I'm happy to execute scripts, but it's the comparison aspect (i.e. what has changed) I'm struggling with.
We utilize a free tool called RoundhousE for handling database changes with our project, and it was rather easy to use it with Octopus Deploy.
We created a new project in our solution called DatabaseMigration that contains the RoundhousE exe and a folder where we keep the database change scripts for RoundhousE. We then took advantage of the fact that Octopus can call PowerShell scripts before, during, and after deployment (PreDeploy.ps1, Deploy.ps1, and PostDeploy.ps1 respectively) and added a Deploy.ps1 to the project as well, with the following in it:
$roundhouse_exe_path = ".\rh.exe"
$scripts_dir = ".\Databases\DatabaseName"
$roundhouse_output_dir = ".\output"

# Use the Octopus variables when running under Octopus, otherwise fall back to local defaults
if ($OctopusParameters) {
    $env = $OctopusParameters["RoundhousE.ENV"]
    $db_server = $OctopusParameters["SqlServerInstance"]
    $db_name = $OctopusParameters["DatabaseName"]
} else {
    $env = "LOCAL"
    $db_server = ".\SqlExpress"
    $db_name = "DatabaseName"
}

# Run RoundhousE against the target server/database with the packaged change scripts
& $roundhouse_exe_path -s $db_server -d $db_name -f $scripts_dir --env $env --silent -o $roundhouse_output_dir
In there you can see where we check for any Octopus variables (parameters) that are passed in when Octopus runs the deploy script; otherwise we fall back to some default values, and then we simply call the RoundhousE executable.
Then you just need to include that project in what gets packaged for Octopus and add a step in Octopus to deploy that package; the script will then be executed as part of each deployment.
We've looked at the Redgate solution and pretty much reached the same conclusion you have; unfortunately, it's the cost that is putting us off that route.
The only thing I can think of is to generate version-controlled DB migration scripts based on your existing database and then execute these as part of your build process. If you're looking at .NET projects in the future (that don't use a CMS), you could potentially consider using Entity Framework Code First migrations.
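If you do go down that route, the general shape is something like this (EF6, run from the Package Manager Console; the migration name here is made up):

# Hedged sketch: scaffold a migration from the model changes, then generate a SQL script
# for review and deployment instead of updating the database directly
Add-Migration AddCustomerEmailColumn
Update-Database -Script -SourceMigration $InitialDatabase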
I remember looking into this a while back, and it seems to me that there's a whole lot of trust you'd have to put into this sort of process. Auto-deploying to a Development or Testing server isn't so bad, as the data is probably replaceable, but the idea of auto-updating a UAT or Production server might send the willies up the backs of an Operations team, who might be responsible for the database, or at least for restoring it if it wasn't quite right.
Having said that, I do think it's the way to go, as it's far too easy to be scared of database deployment scripts, and that's when things get forgotten or missed.
I seem to remember looking at using Red Gate's SQL Compare and SQL Data Compare tools, as (I think) there was a command-line way into them, which would work well with scripted deployment processes like TeamCity, CruiseControl.NET, etc.
The risk and complexity come in more when using relational databases. In a NoSQL database, where everything is a "document", I guess continuous deployment is not such a concern. Some objects will keep the "old" data structure until they are updated by the newly released code. In this situation your code would need to be able to support different data structures, potentially. Missing properties, or properties with a different type, should probably be handled by a well-written, defensively coded application anyway.
I can see the risk in running scripts against the production database, however the point of CI and Continuous Delivery is that these scripts will be run and tested in other environments first to iron out any "gotchas" :-)
This doesn't reduce the amount of finger crossing and wincing when you actually push the button to deploy though!
Database deployment automation is a real challenge, especially when trying to follow the build-once-deploy-many approach that is used for application code.
With build once, deploy many, you compile the code, create binaries, and then copy them across the environments. From the database point of view, the equivalent is to generate the scripts once and execute them in all environments. This approach doesn't handle merges from different branches, out-of-process changes (a critical fix in production), etc.
What I know works for database deployment automation (disclaimer: I work at DBmaestro), based on what I hear from my customers, is the build-and-deploy-on-demand approach. With this method you build the database delta script as part of the deploy (execute) process. Using baseline-aware analysis, the solution knows whether to generate the deploy script for the change, protect the target and not revert it, or pause and allow you to merge changes and resolve the conflict.
Consider a simple solution we have tried successfully, described in this thread: How to continuously delivery SQL-based app?
Disclaimer: I work at CloudMunch.
We use Octopus Deploy and database projects in our Visual Studio solution.
The build agent creates a NuGet package using OctoPack, with a dacpac file and publish profiles inside, and pushes it to the NuGet server.
Then the release process uses the SqlPackage.exe utility to generate the update script for the release environment and adds it as an artifact to the release.
The previously created script is executed in the next step with the SQLCMD.exe utility.
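Roughly, those two steps look like this (the server, database, profile and file names are illustrative):

# Hedged sketch: generate the upgrade script from the packaged dacpac for the target environment
& .\SqlPackage.exe /Action:Script /SourceFile:MyDb.dacpac /Profile:Production.publish.xml /OutputPath:upgrade.sql
# ...and, in a later step after manual review, apply it to the target database
sqlcmd -S "SQLPROD01" -d "MyDb" -b -i upgrade.sql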
This separation of the create and execute steps gives us the possibility of having a manual step in between, so that someone verifies the script before it is executed on the live environment; not to mention that a script saved as an artifact in the release can always be referred to at any later point.
If there is demand, I can provide more details and the step scripts.

Integration of work by different people in Moodle

We are developing a Moodle site. We are a group of 5 people and each one is working on a different module locally. Now we want to integrate everyone's work on one machine or server. Is there any way to version control it or integrate it, given that each person's database is different because of different data? Please provide a solution as early as possible.
It is not completely clear whether you are working separately on the content of the site or on the code for the new site, so I will attempt to answer both questions.
For content, the easiest way to integrate it all together into one site is to use the Moodle backup and restore mechanism ( http://docs.moodle.org/26/en/Course_backup ): back up each of the courses and then restore them onto the main site. If you have a lot of courses to transfer, it may make more sense to write some code to automate certain aspects of this, but that can be quite a bit of work, so usually it is easier to just do the backup and restore manually.
For code, the answer is Git. All the core Moodle code is version controlled via Git. Make sure that each developer is working with their own clone of your main Git repository (you can find the core Moodle repository at ...). Once they have committed their changes, they can be pushed to a central repository or pulled to your production site. Read more at http://docs.moodle.org/dev/Git_for_developers
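A minimal sketch of that workflow (the repository URL and module path are just placeholders):

# Hedged sketch: each developer works on their own clone of the shared repository
git clone https://git.example.org/our-moodle-site.git
# they commit their module work locally and push it to the central repository
git add local/mymodule
git commit -m "Add mymodule"
git push origin master
# the integration (or production) checkout then pulls the combined work
git pull origin master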
Note that if the code for each module has been written with the proper DB installation/upgrade code ( http://docs.moodle.org/dev/Upgrade_API ), then it should simply be possible to take the code from each of the developed modules, put it together into one codebase, and create a fully working fresh install. Once you have that, you should be able to use backup and restore to transfer any required courses from the development servers to the live server.

Auto-deploy Zend Framework 2 application + Database schema + Actual data

Background:
I am using GitHub to store a ZF2 application.
The database schema and the actual data stored inside it are not under version control. At the moment I am in development mode, so I have some database dump scripts that I load into the database when I need to. I also tweak entries in the database via phpMyAdmin when I need ongoing granular control for immediate testing purposes. I am also looking into using Doctrine ORM, so my schema will be part of my code via annotations, and that will be checked into GitHub. Doctrine ORM will generate the actual schema for me, although that is still a separate step in the deployment process. The actual data, however, will still be outside the application and outside the repository, and currently has to be dealt with separately; it is not automated.
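With Doctrine ORM in place, the schema and seed-data steps can at least be scripted; a rough sketch, assuming DoctrineORMModule's CLI and an exported seed-data dump (database name, credentials and paths are illustrative):

# Hedged sketch: rebuild the schema from the annotated entities, then load seed data
php vendor/bin/doctrine-module orm:schema-tool:update --force
mysql -u dev_user -p -e "source data/seed-data.sql" zf2_app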
Goal:
I want to be able to deploy the ZF2 application, the database schema, and the data onto Zend Server and have it "just work" in the most automated, least manual way possible.
Question:
What is a recommended, best-practice way to deploy every aspect of a ZF2 application in the most automated, least manual way possible and have it "just work"? Let's focus on Development and Testing mode here, as in Production it may be good to have separate deployment steps to protect against accidental overwrites of live data.
You can try Phing (http://www.phing.info/) for deploying your PHP application, adjusting directory permissions, running database migrations, running unit tests, etc. I have used Phing in a couple of my projects with great success.
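Usage from a deployment step is essentially this (the build file, targets and properties are whatever you define yourself):

# Hedged sketch: run Phing targets as part of the deployment; -D passes properties into build.xml
phing -f build.xml -Ddb.host=localhost -Ddb.name=zf2_app migrate
phing -f build.xml -Denv=testing deploy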

Incremental development with subsonic

I'm in the process of starting up a web site project. My plan is to roll out the site in a somewhat rudimentary form first and then add functionality along the way.
I'm using SubSonic 3 for my DAL, and I'm expecting the database to go through multiple versions as the site evolves. This means I'll need some kind of versioning and migration tooling. I understand that SubSonic has built-in migration capabilities, but I'm having difficulty grasping how to use these tools in my scenario.
First there's the SimpleRepository model, where SubSonic "automagically" handles the migrations as I develop my site. I can see how this works on my dev machine, but I'm not sure how to handle deployments with it.
Would Subsonic run the necessary migrations on my live site as the appropriate methods are called?
Is there some way I can force all necessary migrations on a site while taking the site offline, when using the SimpleRepository model? (Otherwise I would expect random users to experience severe performance hits as the migration routines kick in.)
Would I be better off using the ActiveRecord model, and then handling migrations with the Subsonic.Schema.Migrator? (I suspect so)
Do you know of any good resources explaining how to handle this situation with the migrator? (I read the doc, but I can't piece together how I would use this in practice)
Thanks for listening/replying.
Regards
Jesper Hauge
I would advise against ever running migrations against a live site. SubSonic's migrations are really there to make development simpler and should never be used against a live environment. To be honest even using SubSonic.Schema.Migrator you're still going to bump into the fact that refactoring databases is an incredibly hard problem. For example renaming a column in a table using management studio is trivial, but what happens in the background involves creating an entirely new table and migrating all the constraints, data etc. before renaming the new table.
The most effective way I've found for dealing with this is:
Script all database changes as you make them in your development environment (SQL Server Management Studio will do this for you) and add these scripts to your source control.
As part of deployment (obviously back up first), run the migration scripts and then deploy the updated application on success; a minimal sketch of such a step follows.
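For example (the server, database and folder names are illustrative):

# Hedged sketch: apply the version-controlled change scripts in order, stopping on the first failure
Get-ChildItem .\migrations\*.sql | Sort-Object Name | ForEach-Object {
    sqlcmd -S ".\SqlExpress" -d "MySiteDb" -b -i $_.FullName
    if ($LASTEXITCODE -ne 0) { throw "Migration failed: $($_.Name)" }
}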
Whether you use ActiveRecord or SimpleRepository is then down to whether you want the extra features/complexity of ActiveRecord.
Hope this helps
I would use ActiveRecord; it's easy to use, and for any changes you just re-run the TT files. Then you just build or publish your solution and you're done. SVN will keep multiple versions of the build stage, so if you make a mess of it you just drop back a revision.
